id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2306.00031 | Morphological Classification of Radio Galaxies using Semi-Supervised
Group Equivariant CNNs | Out of the estimated few trillion galaxies, only around a million have been
detected through radio frequencies, and only a tiny fraction, approximately a
thousand, have been manually classified. We have addressed this disparity
between labeled and unlabeled images of radio galaxies by employing a
semi-supervised learning approach to classify them into the known
Fanaroff-Riley Type I (FRI) and Type II (FRII) categories. A Group Equivariant
Convolutional Neural Network (G-CNN) was used as an encoder of the
state-of-the-art self-supervised methods SimCLR (A Simple Framework for
Contrastive Learning of Visual Representations) and BYOL (Bootstrap Your Own
Latent). The G-CNN preserves the equivariance for the Euclidean Group E(2),
enabling it to effectively learn the representation of globally oriented
feature maps. After representation learning, we trained a fully-connected
classifier and fine-tuned the trained encoder with labeled data. Our findings
demonstrate that our semi-supervised approach outperforms existing
state-of-the-art methods across several metrics, including cluster quality,
convergence rate, accuracy, precision, recall, and the F1-score. Moreover,
statistical significance testing via a t-test revealed that our method
surpasses the performance of a fully supervised G-CNN. This study emphasizes
the importance of semi-supervised learning in radio galaxy classification,
where labeled data are still scarce, but the prospects for discovery are
immense. | Mir Sazzat Hossain, Sugandha Roy, K. M. B. Asad, Arshad Momen, Amin Ahsan Ali, M Ashraful Amin, A. K. M. Mahbubur Rahman | 2023-05-31T06:50:32Z | http://arxiv.org/abs/2306.00031v1 | # Morphological Classification of Radio Galaxies using Semi-Supervised Group Equivariant CNNs
###### Abstract
Out of the estimated few trillion galaxies, only around a million have been detected through radio frequencies, and only a tiny fraction, approximately a thousand, have been manually classified. We have addressed this disparity between labeled and unlabeled images of radio galaxies by employing a semi-supervised learning approach to classify them into the known Fanaroff-Riley Type I (FRI) and Type II (FRII) categories. A Group Equivariant Convolutional Neural Network (G-CNN) was used as an encoder of the state-of-the-art self-supervised methods SimCLR (A Simple Framework for Contrastive Learning of Visual Representations) and BYOL (Bootstrap Your Own Latent). The G-CNN preserves the equivariance for the Euclidean Group E(2), enabling it to effectively learn the representation of globally oriented feature maps. After representation learning, we trained a fully-connected classifier and fine-tuned the trained encoder with labeled data. Our findings demonstrate that our semi-supervised approach outperforms existing state-of-the-art methods across several metrics, including cluster quality, convergence rate, accuracy, precision, recall, and the F1-score. Moreover, statistical significance testing via a t-test revealed that our method surpasses the performance of a fully supervised G-CNN. This study emphasizes the importance of semi-supervised learning in radio galaxy classification, where labeled data are still scarce, but the prospects for discovery are immense.
Radio Galaxy, Fanaroff-Riley, G-CNN, SimCLR, BYOL, Semi-supervised Learning
## I Introduction
Radio astronomy has given us unprecedented access to explore galaxies far beyond what is visible to the naked eye. The Karl G. Jansky Very Large Array (VLA), a U.S.-based radio telescope array, has imaged a significant number of radio galaxies that are exceptionally bright at radio frequencies. These galaxies display a variety of morphologies, such as Fanaroff-Riley (FR) Types I and II [1], head-tailed [2], ringlike [3] and x-shaped [4], among others. Classifying these morphologies is crucial for our understanding of the universe. However, manual classification has become nearly impossible because the number of detected galaxies is exploding with the improvement of telescope technologies. As such, the scientific community is exploring artificial intelligence, particularly machine learning techniques, to address this challenge.
In this work, we present a semi-supervised approach for classifying radio galaxies. Our proposed method has two steps: task-agnostic self-supervised learning and task-specific fine-tuning. Initially, we used the self-supervised techniques SimCLR (A Simple Framework for Contrastive Learning of Visual Representations) [5] and BYOL (Bootstrap Your Own Latent) [6] to learn representations from a large unlabeled dataset. We then fine-tuned these representations using a smaller labeled dataset, specifically for classifying the galaxies into either FRI or FRII categories. This semi-supervised approach uses losses that encourage the encoder to extract robust representations from large amounts of unlabeled data.
Galaxy images can have random orientations. In order to address this challenge, we have modified the encoders of the self-supervised models so that the network is equivariant to different isometries, such as translation, rotation, and mirror reflection. Specifically, we have used a D16 (Dihedral group with 16 rotations) equivariant CNN proposed by Scaife and Porter [7] as a feature extractor in SimCLR and BYOL, which ensures that the extracted features are equivariant to the isometries. After representation learning, we fine-tuned the D16 equivariant CNN with labeled data and trained fully connected layers to classify the FR-type radio galaxies.
Using a semi-supervised approach involving self-supervised learning followed by supervised fine-tuning, we devised a strategy that effectively utilizes a large number of unlabeled radio galaxy images and successfully tackles the random orientations of galaxies.
In summary, our paper presents the following novel contributions to the classification of radio galaxies:
* We have used Group Equivariant Convolutional Neural Network (G-CNN) as an encoder of SimCLR and BYOL for the first time.
* Our semi-supervised approach outperforms the state-of-the-art methods in FR classification.
* We have found evidence that representation learning from a large dataset of unlabeled data is beneficial.
* Our method can be successful in tasks where the labelled dataset is very small compared to the available unlabelled data.
This paper is structured to provide a clear and organized description of our work on radio galaxy classification. Section II gives a brief overview of radio galaxies and the need
for using machine learning in classifying them. Section III describes the relevant previous works of radio galaxy classification using machine learning. In Section IV, we discuss the relevant parts of the specific models used in our work. Section V presents our proposed model and its unique training strategy. Section VI covers the datasets used in our study. Our experimental setup and results are presented in Sections VII-A and VII-B, respectively. Finally, in Section VIII, we summarize our key findings and conclusions. We emphasize the superior performance of our radio galaxy classification strategy compared to other existing models.
## II Scientific background
The cosmos teems with a diverse array of galaxies, but due to technological limitations, astronomers have only been able to observe a fraction of them. These observations are made at various frequencies such as optical (around 400-800 THz), infrared (around 1-30 THz) and radio (a few MHz to a few GHz). Galaxies appear different at different frequencies. Radio galaxies, which emit a significant amount of radiation at radio wavelengths, have been categorized into various morphologies.
The Fanaroff-Riley (FR) categorization, based on the physical characteristics of a galaxy, is crucial for understanding the underlying science behind the formation and development of galaxies. Typical radio galaxies have a supermassive black hole in their core that actively accretes gas and stars along its equator and, as a result, ejects multiple jets through its poles. Consequently, radio images of a galaxy usually consist of a core and multiple lobes. FRI galaxies, as depicted in Fig. 1a, have the peak of their radio emission near the core, and the lobes have a darker edge than their core. On the other hand, FRII galaxies, as shown in Fig. 1b, have peak emissions at the edge of the lobes far from the core.
Despite many detected radio galaxies, a mere fraction, approximately one thousand, have been manually classified as either FRI or FRII [8]. However, with the imminent operation of the Square Kilometre Array (SKA), the most significant scientific facility ever constructed, and its pathfinder telescopes, sensitivity and survey speed will increase significantly. This will enable the imaging of tens of thousands of radio sources in a matter of hours, a task that would have taken months to accomplish only a few years ago [9]. The sheer volume of data generated by the SKA requires using artificial intelligence to supplement human analysis. Hence, the radio astronomy and computing communities are actively engaged in the application of machine-learning techniques to classify radio galaxies.
## III Related previous works
Images provided by the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey have been crucial for researchers in radio galaxy classification. This survey, conducted using the VLA radio telescope, offers some of the most detailed observations of the radio sky, predominantly at a frequency of 1.4 GHz. In the quest to classify these complex radio galaxy images, various machine-learning techniques have been adopted, such as a slightly modified AlexNet used by Aniyan and Thorat [10], or a CNN with three convolution layers as demonstrated by Alhassan et al. [11]. Wu et al. [12] employed Faster Region-based Convolutional Neural Networks (Faster R-CNN) to classify galaxies into six categories based on the number of components and peaks within a source. Tang et al. [13] adopted a 13-layer CNN and investigated the identification ability of classification networks on cross-survey data using Transfer Learning. Since deep networks provide better performance in classification, researchers are increasingly utilizing such networks to classify radio galaxies.
One of the main challenges in classifying radio galaxies using CNNs is the need for the network to be equivariant to different isometries such as translation, rotation, and mirror reflection. Since radio galaxies can have different orientations, a CNN needs to be able to classify them regardless of their position or symmetry. While CNNs are typically translation-invariant, they must also be able to handle other types of isometries. Previous methods have attempted to address this issue by augmenting the data with rotated images. However, this approach has proven ineffective as the CNN may learn multiple copies of the same kernel differently. Scaife and Porter [7] addressed this issue by using G-CNNs [14], which can preserve equivariance for all isometries, allowing for a more accurate classification of radio galaxies.
Another key challenge in classifying radio galaxies is leveraging the vast amount of unlabeled data. A significant portion of the available radio galaxy data remains unlabeled, making it difficult for supervised methods to fully utilize this potential wealth of information. Using this unlabeled data could increase the model's generalization ability, but the absence of labels makes it challenging to derive meaningful insights directly. Self-supervised learning techniques can, however, be used to extract useful information from these unlabeled data and apply it in a task-specific manner to improve the performance of our models. This approach effectively addresses the scarcity of labelled data in radio galaxy classification, and researchers have already begun to explore it. For example, Ma et al.
Fig. 1: Examples of Fanaroff-Riley galaxies. (a) FRI galaxy, characterized by peak radio emission near the core and darker edges of the lobes. (b) FRII galaxy, characterized by peak radio emission at the edge of the lobes far from the core.
[15] developed a CNN-based autoencoder named Morphological Classification of Radio Galaxy Network (MCRGNet), consisting of an encoder and a decoder block. First, they trained the MCRGNet autoencoder with unlabeled data and then fine-tuned the pre-trained encoder with labelled data to classify radio morphologies. This method performed significantly better than traditional supervised classifications [10, 11].
A later work by Ma et al. [16] used a VGG16 network as an autoencoder and achieved improved performance. However, more than a simple reconstruction error is needed to learn invariant representations under various noises and transformations. Slijepcevic et al. [17] used the semi-supervised learning technique FixMatch, which combines labelled and unlabeled data to improve model performance. This method uses a self-supervised approach to generate pseudo-labels for unlabeled data, which are then used to fine-tune the model with labelled data. This helps to extract useful information from large amounts of unlabeled data, resulting in improved model performance. However, the improvement is still relatively small compared to the baseline [7] with fewer labels [17].
## IV Related deep-learning architectures
In the previous section, we discussed various techniques used for radio galaxy classification, highlighting the challenges and limitations in the field. Motivated by these, our approach utilizes the G-CNN, and self-supervised learning techniques such as SimCLR [5] and BYOL [6]. These methods form the cornerstone of our proposed technique, which adopts the 'unsupervised pretrain, supervised fine-tune' paradigm. This section outlines the pertinent features of these deep learning architectures, setting the groundwork for the detailed discussion of our proposed method in Section V.
### _Group Equivariant CNN (G-CNN)_
G-CNN [14] is a specialized neural network architecture that possesses the capability of being equivariant to symmetries present in the input data, such as translations, rotations, and reflections. This feature ensures that the network produces the same output when the input data is transformed in a particular way in accordance with the symmetries of a group.
G-CNN employs the theory of group representations to establish the convolution operation. This is achieved by substituting the traditional convolution operation with a group convolution operation, mathematically represented as
\[(f*\phi)(g)=\sum_{h\in G}f(h)\phi(h^{-1}g), \tag{1}\]
where \(f\) and \(\phi\) are the input and kernel functions, respectively, \(G\) is the group of symmetries while \(g\) and \(h\) are its elements.
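To make the operation concrete, the following toy sketch evaluates Eq. (1) for the small cyclic group C4 (rotations by multiples of 90 degrees), with functions defined directly on the group. It illustrates the group convolution only; it is not the steerable D16 layer used later in this paper, and the example values of \(f\) and \(\phi\) are arbitrary.

```python
import numpy as np

# Toy evaluation of Eq. (1) for the cyclic group C4.
# f and phi are functions on the group, stored as arrays indexed by elements 0..3;
# the group operation is addition mod 4, so h^{-1} g corresponds to (g - h) mod 4.
def c4_group_conv(f, phi):
    n = 4
    out = np.zeros(n)
    for g in range(n):
        for h in range(n):
            out[g] += f[h] * phi[(g - h) % n]
    return out

f = np.array([1.0, 0.0, 2.0, 0.0])      # arbitrary example signal on C4
phi = np.array([0.5, 0.25, 0.0, 0.25])  # arbitrary example kernel on C4
print(c4_group_conv(f, phi))
```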
### _A Simple Framework for Contrastive Learning of Visual Representation (SimCLR)_
SimCLR [5] is a self-supervised learning method that aims to learn representations of the input data by maximizing the agreement between two different views of the same data. The method utilizes the concept of contrastive learning, which is based on the idea of comparing the similarity between two different data samples.
SimCLR uses two neural networks, an encoder and a projection head. The encoder maps the input data to a feature space, while the projection head maps these features to a lower-dimensional latent space in which the contrastive loss is computed. The two networks are trained together using a contrastive loss function, defined as the negative log-likelihood of the correct pair of views, mathematically
\[\mathcal{L}_{\text{SimCLR}}=-\log\frac{\exp(\text{sim}(z_{i},z_{j})/\tau)}{ \sum_{k=1}^{2n}\mathds{1}_{[k\neq i]}\exp(\text{sim}(z_{i},z_{k})/\tau)}, \tag{2}\]
where \(z_{i}\) and \(z_{j}\) are the features of the two views of the same data sample, \(z_{k}\) are the features of a different sample; the dataset has \(n\) number of samples in total. The indicator function \(\mathds{1}_{[k\neq i]}\) equals 1 when \(k\neq i\), ensuring the sum considers only negative pairs (different sample) and excludes the current positive pair (same data sample). \(\text{sim}(\cdot)\) is a similarity function, such as cosine similarity, and \(\tau\) is a temperature parameter that controls the concentration of the distribution. The objective of the loss function is to maximize the agreement between the two views of the same sample and minimize the agreement between different samples.
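For reference, a minimal PyTorch-style sketch of the loss in Eq. (2) is given below; the batch size, projection dimension, and temperature value are illustrative assumptions, not the exact training configuration used in this work.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z_i, z_j, tau=0.5):
    """Contrastive loss of Eq. (2) for N positive pairs (z_i[k], z_j[k])."""
    n = z_i.size(0)
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / tau                                   # cosine similarities / tau
    # mask self-similarities so they never act as negatives
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float('-inf'))
    # the positive for row k is the other view of the same image
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# usage with random projections standing in for g(f(x_i)) and g(f(x_j))
loss = nt_xent_loss(torch.randn(16, 128), torch.randn(16, 128))
```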
### _Bootstrap Your Own Latent (BYOL)_
BYOL [6] is a self-supervised learning method that learns representations of the input data by bootstrapping its own targets rather than contrasting against negative samples. The method simultaneously trains two neural networks, an online network and a target network. The online network, parametrized by \(\theta\), is trained to predict the target network's representation of the input data. The target network, parametrized by \(\xi\), provides the regression targets for the online network. The goal of BYOL is to learn similar representations between the two networks, which are, in turn, learned from the input data.
The training process of BYOL is guided by a regression loss, defined as the mean squared error (MSE) between the normalized feature representations of the online network and the target network. Mathematically, this loss function is represented as
\[\mathcal{L}_{\theta,\xi}\triangleq\left\|\overline{q}_{\theta}(z_{\theta})- \overline{z}_{\xi}^{\prime}\right\|_{2}^{2}=2-2\cdot\frac{\left\langle q_{ \theta}(z_{\theta}),z_{\xi}^{\prime}\right\rangle}{\left\|q_{\theta}(z_{\theta} )\right\|_{2}\cdot\left\|z_{\xi}^{\prime}\right\|_{2}}, \tag{3}\]
where \(q_{\theta}(z_{\theta})\) is the feature representation of the input data generated by the online network, and \(z_{\xi}^{\prime}\) is the feature representation of the input data generated by the target network. The goal of the loss function is to encourage the online network to learn representations similar to the representations of the target network.
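A minimal sketch of the loss in Eq. (3), written as the equivalent normalized mean squared error, is shown below. The stop-gradient on the target branch follows the original BYOL recipe and is an assumption about the implementation used here; in practice the loss is usually symmetrized by also swapping the two views.

```python
import torch
import torch.nn.functional as F

def byol_loss(q_theta, z_xi_prime):
    """Normalized MSE of Eq. (3): equals 2 - 2 * cosine similarity per sample."""
    q = F.normalize(q_theta, dim=-1)
    z = F.normalize(z_xi_prime.detach(), dim=-1)   # no gradient flows into the target network
    return (2 - 2 * (q * z).sum(dim=-1)).mean()

# usage with random tensors standing in for q_theta(z_theta) and z'_xi
loss = byol_loss(torch.randn(16, 256), torch.randn(16, 256))
```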
The main advantage of BYOL is that it does not require negative samples during training; it relies only on two augmented views of the data to learn the representations.
## V Our proposed method
In this paper, we propose a semi-supervised approach for radio galaxy classification. Our method consists of two main steps: task-agnostic self-supervised learning and task-specific fine-tuning. In the first step, we use the state-of-the-art self-supervised representation learning techniques SimCLR and BYOL to learn robust representations from a large amount of unlabeled data. In the second step, we fine-tune these representations using a small amount of labelled data for the specific task of radio galaxy classification. The use of a semi-supervised approach allows us to leverage a large amount of unlabeled data while still achieving high performance in task-specific classification. We have described these two steps in more detail and provided an in-depth analysis of our proposed method below.
### _Task-agnostic self-supervised learning_
In this step, we employ the BYOL and SimCLR methods to learn representations of the input data using an unlabelled dataset. Fig. 2 illustrates the SimCLR and BYOL architectures adopted for self-supervised representation learning from the unlabeled dataset.
SimCLR, depicted in Fig. 2(a), utilizes an encoder \(f(\cdot)\) that extracts features from different augmented views \(x_{i}\) and \(x_{j}\) of the same image \(x\). These augmented views generated from the same image are considered positive pairs of samples. Assuming a batch size of \(N\), the other \(2(N-1)\) augmented examples are treated as negative pairs of samples. The representations, \(h_{i}=f(x_{i})\) and \(h_{j}=f(x_{j})\), are then mapped to the latent space using a projection head \(g(\cdot)\), which is a Multi-Layer Perceptron (MLP) with one hidden layer. The contrastive loss (Eq. 2) is applied to the mapped representations \(z_{i}=g(h_{i})\) and \(z_{j}=g(h_{j})\).
BYOL, illustrated in Fig. 2(b), creates two augmented views, \(v\) and \(v^{\prime}\), of an image. The feature extractor of the online network outputs a representation \(y_{\theta}\triangleq f_{\theta}(v)\), and the projection head provides a projection \(z_{\theta}\triangleq g_{\theta}(y_{\theta})\) from the first augmented view \(v\). From the second augmented view \(v^{\prime}\), the target network produces \(y^{\prime}_{\xi}\triangleq f_{\xi}(v^{\prime})\) and \(z^{\prime}_{\xi}\triangleq g_{\xi}(y^{\prime}_{\xi})\). An additional MLP, serving as a predictor, is used in the online network. This MLP maps the online network's projection \(z_{\theta}\) to a space corresponding to the target network's output. Here, \(q_{\theta}(z_{\theta})\) seeks to approximate the output of the target network's projection, \(z^{\prime}_{\xi}\). Minimizing the MSE loss (Eq. 3) between the \(l_{2}\)-normalized \(q_{\theta}(z_{\theta})\) and \(z^{\prime}_{\xi}\) serves to enhance the alignment between \(q_{\theta}(z_{\theta})\) and \(z^{\prime}_{\xi}\).
In the case of SimCLR and BYOL, the authors used a combination of cropping, colour distortion, and Gaussian blur as data augmentation techniques. They found that this combination of augmentations improved the performance of these self-supervised models. However, the default augmentations used in SimCLR and BYOL were unsuitable for radio galaxy images because the galaxies occupy a very small portion of the images. Random cropping and resizing could result in many empty patches, negatively impacting the model's training.
Our contribution to this work is redesigning the data augmentation policy for radio galaxy images. We retained the colour distortion and Gaussian blur augmentations, set lower and upper bounds for the random aspect ratio of the crop to 0.8 and 1, respectively, and added an extra \(360^{\circ}\) random rotation to the augmentations. This helped to ensure that most of the cropped patches contained the galaxy while increasing the diversity of the training data. By redesigning the data augmentation strategy, we improved the quality of the learned representations and achieved better results.
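As an illustration, a possible torchvision-style composition of this augmentation policy is sketched below. Only the aspect-ratio bounds (0.8, 1.0) and the 360-degree rotation come from the text; the crop scale, colour-jitter strengths, and blur kernel are assumptions.

```python
from torchvision import transforms

# Hedged sketch of the modified SSL augmentation pipeline for 150x150 galaxy images.
ssl_augment = transforms.Compose([
    transforms.RandomRotation(degrees=360),
    transforms.RandomResizedCrop(size=150, scale=(0.8, 1.0), ratio=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    transforms.GaussianBlur(kernel_size=9, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
])
```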
In addition to redesigning the data augmentation strategy, we have also made another significant contribution to this work by replacing the ResNet-50 feature extractor of both BYOL and SimCLR with the E(2)-Equivariant Steerable G-CNN, specifically the D16 Steerable G-CNN. This network contains 2 group convolution layers with kernel size \(5\times 5\), each followed by a ReLU activation and max pooling. The output is then passed through a linear layer that maps the 20736-dimensional feature maps to a feature map of dimension 2048. This feature map is then used as input for the MLP projection head. This change allows the network to be equivariant to different isometries, enabling it to classify radio galaxies with different orientations, regardless of their position or symmetry.
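A hedged sketch of such a D16-equivariant feature extractor, built with the e2cnn library (one common implementation of E(2)-steerable CNNs), is given below. The channel widths and the toy input size are assumptions for illustration, so the flattened feature dimension will differ from the 20736 quoted above; in the actual model a linear layer maps the flattened features to a 2048-dimensional vector.

```python
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

# Two 5x5 group convolutions, each followed by ReLU and max pooling, as described above.
r2_act = gspaces.FlipRot2dOnR2(N=16)                    # dihedral group D16
in_type = enn.FieldType(r2_act, [r2_act.trivial_repr])  # single-channel input image
hid1 = enn.FieldType(r2_act, 6 * [r2_act.regular_repr])
hid2 = enn.FieldType(r2_act, 8 * [r2_act.regular_repr])

encoder = enn.SequentialModule(
    enn.R2Conv(in_type, hid1, kernel_size=5),
    enn.ReLU(hid1),
    enn.PointwiseMaxPool(hid1, kernel_size=2),
    enn.R2Conv(hid1, hid2, kernel_size=5),
    enn.ReLU(hid2),
    enn.PointwiseMaxPool(hid2, kernel_size=2),
)

x = torch.randn(2, 1, 64, 64)                           # toy input (the paper uses 150x150)
feat = encoder(enn.GeometricTensor(x, in_type)).tensor  # back to a plain tensor
print(feat.flatten(1).shape)                            # flattened equivariant features
```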
Fig. 2: Illustration of (a) SimCLR and (b) BYOL architectures used for self-supervised representation learning from an unlabeled dataset. The data augmentation strategy for radio galaxy images has been modified by setting bounds for aspect ratio, adding an extra random rotation, and replacing the ResNet-50 feature extractor with an E(2)-Equivariant Steerable G-CNN for improved performance on the downstream radio galaxy classification task.
### _Task-specific fine-tuning_
In order to optimize the model for the task of FR classification, we perform task-specific fine-tuning on our labelled dataset next. This step involves modifying the pre-trained encoder obtained through task-agnostic self-supervised learning to better suit our labelled data's characteristics.
As shown in Fig. 3, the final fully connected layer of the encoder is replaced with a new architecture that consists of three fully connected linear layers. The first linear layer transforms the 20736-dimensional feature maps into a 120-dimensional feature map, the second linear layer maps the 120-dimensional feature maps to an 84-dimensional feature map, and finally, the third linear layer reduces it to a 2-dimensional feature map. We applied ReLU activation functions after the first and second linear layers and a dropout layer after the second linear layer.
This modification allows the network to adapt to the specific task of FR classification and enhances its ability to recognize the distinctive characteristics of the labelled data. Fine-tuning uses a supervised learning approach and cross-entropy loss function on the labelled data. We aim to adapt the model better to recognize the unique characteristics of our labeled data.
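A minimal PyTorch sketch of this fine-tuning head, following the three linear layers described above, is given below; the dropout probability is an assumption, as it is not stated in the text.

```python
import torch.nn as nn

# Fine-tuning head: 20736 -> 120 -> 84 -> 2, ReLU after the first two linear layers,
# dropout after the second linear layer (dropout rate assumed).
classifier_head = nn.Sequential(
    nn.Linear(20736, 120),
    nn.ReLU(inplace=True),
    nn.Linear(120, 84),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
    nn.Linear(84, 2),
)
```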
## VI Datasets
Throughout this work, we utilized radio galaxy images from the Karl G. Jansky VLA as part of the FIRST survey. The survey covered a vast area of 10,000 square degrees of the sky, producing images with a resolution of 5 arcsec and a sensitivity of \(0.15\) mJy (milli-jansky; 1 Jy = \(10^{-26}\) W m\({}^{-2}\) Hz\({}^{-1}\)). Each pixel in the final images corresponds to an angular size of 1.8 arcsec. We used pre-processed FIRST survey images to perform representation learning and radio galaxy classification.
For representation learning, we used the Radio Galaxy Zoo (RGZ) dataset [18], pre-processed by Wu et al. [12]1. The original dataset consisted of 11,678 images2, of which we selected approximately 9,700 images where only one galaxy was visible. Each image originally had dimensions of \(132\times 132\) pixels, subsequently padded with zeros to \(150\times 150\) pixels. This unlabeled dataset will be referred to as Dataset-U in the following sections.
Footnote 1: [https://github.com/chenwuperth/rgz_rcnn](https://github.com/chenwuperth/rgz_rcnn)
Footnote 2: [https://cloudator.aarnet.edu.au/plus/s/agKNekOJK87hOh0](https://cloudator.aarnet.edu.au/plus/s/agKNekOJK87hOh0)
For the fine-tuning step with a labeled dataset, we have used FIRST images labeled by Miraghaei and Best [8] and pre-processed for FR classification by Porter [19]. The dataset, referred to as the MiraBest Batched Dataset, contains 1256 images, each of which is assigned a three-digit numerical identifier. The first digit of this identifier denotes the source class: 1 for FRI, 2 for FRII, and 3 for the hybrid sources. The second digit indicates the level of confidence in the classification: 0 for confidently classified and 1 for uncertainly classified. The third digit denotes the morphology of the source: 0 for standard morphology, 1 for double-double, 2 for wide-angled tail, 3 for diffuse source, and 4 for head-tail. The meaning of each digit is summarized in Table I.
In our work, we have only utilized confident sources from the MiraBest Batched Dataset, specifically those with numerical identifiers 100, 102, 104, 200, or 201. We named this labeled dataset Dataset-F, which was utilized for training and testing purposes. Like Dataset-U, each image in Dataset-F has a resolution of \(150\times 150\) pixels. The breakdown of source counts for FRI and FRII sources in the training and testing sets of Dataset-F can be found in Table II.
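For illustration, the selection of Dataset-F from the MiraBest identifiers could be written as the short sketch below; the `samples` container and the binary label convention (0 for FRI, 1 for FRII) are assumptions.

```python
# Selecting the confidently labelled FRI/FRII sources (Dataset-F) by identifier.
CONFIDENT_IDS = {100, 102, 104, 200, 201}

def build_dataset_f(samples):
    """samples: iterable of (image, identifier) pairs from the MiraBest dataset."""
    dataset_f = []
    for image, ident in samples:
        if ident in CONFIDENT_IDS:
            label = 0 if ident // 100 == 1 else 1   # first digit: 1 -> FRI, 2 -> FRII
            dataset_f.append((image, label))
    return dataset_f
```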
## VII Experimental analysis
This section presents our experimental setup and provides a detailed analysis of the results obtained through our semi-supervised approach for radio galaxy classification.
### _Experimental setup_
We utilized two self-supervised methods, SimCLR and BYOL, separately on the unlabeled dataset, Dataset-U, to extract the underlying representations of the radio galaxy images.
For the SimCLR model, we fixed the following hyper-parameters: (i) learning rate of 0.02, (ii) weight decay of
Fig. 3: Illustration of the fine-tuning step for FR classification. The pre-trained encoder from task-agnostic self-supervised learning is modified by replacing the last fully connected layer with three new fully connected layers, allowing the model to adapt to the specific task of FR classification.
\(10^{-6}\), (iii) batch size of 16, (iv) MLP hidden layer dimension of 2048, (v) MLP output layer dimension of 128, and (vi) number of epochs of 500. We used the Layer-wise Adaptive Rate Scaling (LARS) optimizer and a Cosine Annealing Warm Restarts scheduler with a minimum learning rate of 0.05.
For the BYOL model, we tuned the following hyperparameters: (i) learning rate of \(3\times 10^{-4}\), (ii) moving average decay of 0.99, (iii) batch size of 16, (iv) MLP hidden layer dimension of 4092, (v) MLP output layer dimension of 256, and (vi) number of epochs of 500. We used the Adam optimizer to train this model.
Once the representations were learned from the unlabeled dataset, we used the learned representations to fine-tune the G-CNN model using a supervised approach on the labelled dataset, Dataset-F. We tuned the following hyperparameters for this step: (i) learning rate of 0.0001, (ii) weight decay of \(10^{-6}\), (iii) batch size of 50, and (iv) number of epochs of 600. We used the Adam optimizer and a Reduce Learning Rate (LR) on Plateau strategy with a patience of 2 and a factor of 0.9.
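The fine-tuning optimisation setup (Adam with a Reduce-LR-on-Plateau schedule, patience 2, factor 0.9) can be sketched as below; the placeholder model and validation loss are illustrative stand-ins for the fine-tuned G-CNN and its actual validation pass.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder for the fine-tuned G-CNN classifier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-6)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.9, patience=2)

for epoch in range(3):                    # 600 epochs in the actual experiments
    val_loss = torch.rand(1).item()       # stand-in for the real validation loss
    scheduler.step(val_loss)
```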
The experiments in this paper were carried out on a high-performance computing environment equipped with a powerful Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz and 64 Gigabytes of RAM, paired with a cutting-edge GeForce RTX 2080 Ti GPU. The training process for each self-supervised model took approximately 17 hours to complete on this setup, while the downstream training required roughly 40 minutes to run.
### _Result analysis_
Several experiments were conducted to compare our method with a traditional supervised approach. Our model's effectiveness is demonstrated through various qualitative and quantitative metrics.
At first, we evaluated the quality of the clusters formed by learned representations with our semi-supervised approaches (BYOL and SimCLR) and compared the results with the supervised model. To do so, we passed the labeled dataset (Dataset-F) through the fine-tuned encoder for the semi-supervised approaches (BYOL, SimCLR). We extracted the representation from the first fully connected (FC) layer. We performed the same process for the supervised model. The output of the first FC layer is a 120-dimensional vector, which cannot be visualized directly. We therefore employed t-Distributed Stochastic Neighbor Embedding (t-SNE) to visualize the feature maps in a lower-dimensional space. The results of the t-SNE visualization, plotted in Fig. 4, indicate that the quality of the clusters formed by the representations learned from the semi-supervised models is significantly better than that of the supervised model, meaning the data points are more clearly clustered.
In order to further validate our observations, we used two widely accepted metrics to quantify the cluster quality: Silhouette Score and Davies Bouldin Score. The Silhouette Score measures the similarity of a sample to its own cluster compared to other clusters, with higher scores indicating better cluster formation. On the other hand, the Davies Bouldin Score measures the average similarity between each cluster and its closest cluster, with lower scores representing better cluster quality. The results of our evaluation showed that the fine-tuned encoders of semi-supervised BYOL and SimCLR had Silhouette Scores of 0.276 and 0.129, respectively. In contrast, the supervised model had a score of 0.058. The semi-supervised BYOL and SimCLR models also received Davies Bouldin Scores of 1.509 and 2.080, respectively, compared to 4.814 for the supervised model. These results demonstrate that our semi-supervised models have learned more effective data representations, resulting in significantly improved clustering performance compared to the supervised model.
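The cluster-quality evaluation can be reproduced with standard scikit-learn utilities, as sketched below; the random features and labels are placeholders for the actual 120-dimensional FC-layer representations, and the t-SNE perplexity is an assumption.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score, davies_bouldin_score

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 120))   # placeholder 120-d FC-layer representations
labels = rng.integers(0, 2, size=200)    # placeholder FRI/FRII labels

# 2-D embedding for visualization, plus the two cluster-quality scores
embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print("Silhouette:", silhouette_score(features, labels))
print("Davies-Bouldin:", davies_bouldin_score(features, labels))
```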
In order to evaluate the convergence of our models during the fine-tuning stage, we performed 5-fold cross-validation on the labelled dataset (Dataset-F) using the representations learned by the encoders of SimCLR and BYOL. We also compared the convergence of the loss function in the fine-tuning step with that of the supervised G-CNN approach. For each repetition, we used a different fold for validation and the remaining four for training. Fig. 5(a) and 5(b) show the convergence plots for the training and validation loss, respectively. The plots were generated by taking the mean and standard deviation of the loss across the five repeats. As seen from
Fig. 4: Visualization of the representations obtained from the first fully connected layer of the fine-tuned encoders of (a) BYOL, (b) SimCLR, and (c) the Supervised G-CNN Model on our labelled dataset using t-SNE. The blue points represent the FRI class, while the orange points represent the FRII class. Our models, as evidenced by the higher Silhouette scores and the lower Davies Bouldin scores, show improved clustering performance, demonstrating that the models learned more effective data representations than the supervised model.
the plots, our fine-tuning approach's training and validation losses converge very quickly compared to the supervised G-CNN approach, indicating that the representations learned in the first stage are adequate for the downstream task.
In order to further evaluate the performance of our semi-supervised models, we fine-tuned the encoders of SimCLR and BYOL using a labelled dataset (Dataset-F). We compared the results to a state-of-the-art supervised model, D16-Steerable G-CNN. We trained and tested all models using the same setup and repeated the process 15 times for each model. As shown in Table III, the results include the average precision, recall, and F1 scores for both FRI and FRII classes, along with the standard deviation. The best metrics are highlighted in bold, and the second-best is underlined.
The results demonstrate that our semi-supervised models outperform the supervised model. Both BYOL and SimCLR achieved higher accuracy than the D16-Steerable G-CNN. Semi-supervised BYOL achieved the highest accuracy of \(97.12\%\), followed by SimCLR at \(95.77\%\). In addition, both models exhibited superior performance in other classification metrics. For example, SimCLR achieved precision and recall of \(98\%\) for the FRI and FRII radio galaxies, respectively, and BYOL achieved an excellent F1-score of \(97\%\) for both classes. These results demonstrate the effectiveness of our proposed semi-supervised models in accurately classifying radio galaxies.
In addition to the results obtained from the classification metrics, we further evaluated the performance of our models using Receiver Operating Characteristic (ROC) curves and the corresponding Area Under the Curve (AUC) scores. The ROC curve is a graphical representation of the performance of a binary classifier system, while the AUC score provides a single numeric value to represent the classifier's performance. It ranges from 0 to 1, with a score of 1 indicating a perfect classifier and a score of 0.5 indicating a classifier that performs no better than random.
The evaluation results are depicted in Fig. 6. The AUC scores for semi-supervised BYOL and SimCLR's fine-tuned encoders were found to be 0.99 and 0.98, respectively, while the score for the Supervised model was 0.89. These high AUC scores for BYOL and SimCLR indicate their exceptional ability to distinguish between the two classes of radio galaxies accurately. The models can effectively differentiate between radio galaxies with a minimal rate of false positives, which implies that they can correctly identify most radio galaxies without misclassifying them.
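A minimal sketch of this ROC/AUC evaluation using scikit-learn is given below; the labels and scores are synthetic placeholders for the actual test-set predictions.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=100)                                     # placeholder labels
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.2, size=100), 0, 1)   # placeholder scores

fpr, tpr, _ = roc_curve(y_true, y_score)   # points of the ROC curve
print("AUC:", roc_auc_score(y_true, y_score))
```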
Fig. 5: Convergence plots for fine-tuning the encoders of SimCLR and BYOL, compared to training a supervised G-CNN on Dataset-F. The plots show the mean and standard deviation of (a) training loss and (b) validation loss across 5-fold cross-validation. The fast convergence of the fine-tuned encoders compared to the supervised G-CNN suggests the effectiveness of the learned representations.
Overall, these results support the effectiveness of our proposed semi-supervised method in classifying radio galaxies. The high AUC scores and the shape of the ROC curves indicate that our models can effectively differentiate between the two classes of radio galaxies. This makes them valuable for further analysis and studies in the field.
To further demonstrate the superiority of our proposed models over the supervised approach, we conducted a statistical t-test on the accuracy scores of 15 runs of each model. The paired t-test between the supervised G-CNN and semi-supervised SimCLR models yielded a t-value of approximately \(-1.89\) and a p-value of approximately \(0.08\). Similarly, the test between the supervised G-CNN and semi-supervised BYOL models yielded a t-value of approximately \(-3.47\) and a p-value of approximately \(0.0038\). These results indicate that our proposed semi-supervised method performs significantly better than the supervised G-CNN, with a high level of statistical significance.
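The paired t-test can be computed with SciPy as sketched below; the accuracy arrays are illustrative placeholders rather than the reported 15-run scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
acc_supervised = 0.948 + rng.normal(0, 0.01, size=15)   # placeholder accuracies (15 runs)
acc_semi       = 0.971 + rng.normal(0, 0.01, size=15)   # placeholder accuracies (15 runs)

t_value, p_value = stats.ttest_rel(acc_supervised, acc_semi)
print(t_value, p_value)
```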
## VIII Conclusion
This paper presents a novel approach for classifying radio galaxies with considerable effectiveness. Using a vast amount of unlabeled images during the pre-training stage allowed our models to learn robust and invariant features. The group-equivariant architecture is vital in enhancing the model's capacity to extract structural and symmetry information, which is crucial for accurate classification.
Fine-tuning the pre-trained encoder with a limited amount of labelled data significantly improved its performance compared to traditional supervised methods. Our quantitative analysis, using Silhouette Score and Davies Bouldin Score, showed that the fine-tuned encoders produced higher Silhouette scores and lower Davies Bouldin scores, indicating better cluster formation compared to the supervised model. The visualization of the representations, obtained from the first fully connected layer of the fine-tuned encoders, demonstrated a clear separation between FRI and FRII galaxies into different clusters. This highlights the effectiveness of our task-agnostic self-supervised learning approach in utilizing limited labelled data to learn effective data representations.
Our semi-supervised method significantly improved over traditional supervised methods, achieving a high accuracy of \(97.12\%\) using the fine-tuned BYOL encoder compared to only \(94.80\%\) for the supervised approach. Furthermore, our training and validation losses converged much faster than in the supervised approach, and the ROC curves and AUC score confirmed the ability of our models to distinguish between the two classes of radio galaxies effectively.
To further solidify our findings, a paired t-test on the accuracy score of 15 runs showed a statistically significant improvement in our proposed models over the supervised model. These results highlight the immense potential of semi-supervised learning in classifying radio galaxies and demonstrate the importance of utilizing vast amounts of high-quality images.
Our work aims to contribute to the classification of the many radio galaxies yet to be discovered, and we hope it encourages further research to improve classification accuracy. While our method shows promise in this context, further studies are required to determine its applicability to other classification tasks with limited labelled data.
With the arrival of upcoming radio telescopes such as SKA, we anticipate a significant increase in the number of high-quality images available for astronomical object classification. Despite its limitations, our method represents an attempt to provide a scalable and efficient solution to this challenge.
## IX Acknowledgment
This project has been jointly sponsored by Independent University, Bangladesh and the ICT Division of the Bangladesh Government.
|
2309.14819 | Discrepancy Matters: Learning from Inconsistent Decoder Features for
Consistent Semi-supervised Medical Image Segmentation | Semi-supervised learning (SSL) has been proven beneficial for mitigating the
issue of limited labeled data especially on the task of volumetric medical
image segmentation. Unlike previous SSL methods which focus on exploring highly
confident pseudo-labels or developing consistency regularization schemes, our
empirical findings suggest that inconsistent decoder features emerge naturally
when two decoders strive to generate consistent predictions. Based on the
observation, we first analyze the treasure of discrepancy in learning towards
consistency, under both pseudo-labeling and consistency regularization
settings, and subsequently propose a novel SSL method called LeFeD, which
learns the feature-level discrepancy obtained from two decoders, by feeding the
discrepancy as a feedback signal to the encoder. The core design of LeFeD is to
enlarge the difference by training differentiated decoders, and then learn from
the inconsistent information iteratively. We evaluate LeFeD against eight
state-of-the-art (SOTA) methods on three public datasets. Experiments show
LeFeD surpasses competitors without any bells and whistles such as uncertainty
estimation and strong constraints, as well as setting a new state-of-the-art
for semi-supervised medical image segmentation. Code is available at
https://github.com/maxwell0027/LeFeD | Qingjie Zeng, Yutong Xie, Zilin Lu, Mengkang Lu, Yong Xia | 2023-09-26T10:33:20Z | http://arxiv.org/abs/2309.14819v1 | # Discrepancy Matters: Learning from Inconsistent Decoder Features for Consistent Semi-supervised Medical Image Segmentation
###### Abstract
Semi-supervised learning (SSL) has been proven beneficial for mitigating the issue of limited labeled data especially on the task of volumetric medical image segmentation. Unlike previous SSL methods which focus on exploring highly confident pseudo-labels or developing consistency regularization schemes, our empirical findings suggest that inconsistent decoder features emerge naturally when two decoders strive to generate consistent predictions. Based on the observation, we first analyze the treasure of discrepancy in learning towards consistency, under both pseudo-labeling and consistency regularization settings, and subsequently propose a novel SSL method called LeFeD, which learns the feature-level discrepancy obtained from two decoders, by feeding the discrepancy as a feedback signal to the encoder. The core design of LeFeD is to enlarge the difference by training differentiated decoders, and then learn from the inconsistent information iteratively. We evaluate LeFeD against eight state-of-the-art (SOTA) methods on three public datasets. Experiments show LeFeD surpasses competitors without any bells and whistles such as uncertainty estimation and strong constraints, as well as setting a new state-of-the-art for semi-supervised medical image segmentation. Code is available at [https://github.com/maxwell0027/LeFeD](https://github.com/maxwell0027/LeFeD)
Semi-supervised learning, medical image segmentation, inconsistent feature learning
## 1 Introduction
Accurate segmentation of medical images is a crucial task in computer-aided diagnosis [1]. Deep learning models trained on large-scale datasets have recently shown promising performance on this task [2, 3]. However, collecting medical image datasets inevitably requires expertise for data annotation, which is time-consuming and labor-intensive, especially for volumetric data. Considering unlabeled data are relatively easier to collect from clinical sites, semi-supervised learning (SSL) [4, 5] has attracted increasing research attention due to its ability to improve model generalization by leveraging massive unlabeled data to augment limited labeled data.
According to the usage of unlabeled data, the paradigm of SSL can be approximately categorized into pseudo-labeling [6, 7, 8] and consistency regularization [9, 10]. The first category of SSL methods focuses on generating accurate pseudo-labels. For instance, model ensemble was employed in the teacher-student framework to enhance pseudo-label quality [11, 12], and various criteria were defined to select accurately pseudo-labeled data [13, 14]. The second category of SSL methods puts emphasis on designing regularization that enforces the model to give consistent outputs for an input and its realistically perturbed variants. The consistency regularization can be the constraints imposed at either the data-level [15, 16], task-level [17], or prediction-level [18]. Despite the differences between pseudo-labeling and consistency regularization, they share the same crux: learning invariant predictions by gradually learning from inconsistency. For example, [18] aligns the pseudo-label of
Fig. 1: A brief illustration of the significance of inconsistent predictions in semi-supervised learning (SSL). **Top**: cross-pseudo supervision (pseudo-labeling) and **Bottom**: consistent logical distribution (consistency regularization). Inconsistent regions are highlighted by red arrow. SSL can be concluded as learning to be consistent by learning from naturally generated inconsistency.
strongly-augmented branch to the weakly-augmented branch, and [19] keeps the logits distribution similar between predictions of CNN and Transformer.
To better realize this, we present a brief view for the workflow of pseudo-labeling and consistency regularization. As Fig. 1 shows, the SSL framework is composed of a single encoder and two decoders - a structure extensively employed in both pseudo-labeling [20; 21] and consistency regularization methods [22; 23]. Let us consider an instance where cross-pseudo supervision (a pseudo-labeling strategy displayed in the top of Fig. 1) is utilized. In this scenario, one decoder's pseudo-label is used to oversee the predictions of the other. It is in this context that inconsistent predictions become significant as they can provide complementary information. Similarly, if we maintain the logical distribution similar for learning from unlabeled data (for example, using KL divergence - a common consistency-based strategy exhibited in the bottom of Fig. 1) between both branches, inconsistent predictions retain a crucial function. This is because the gradient primarily originates from the losses computed within these areas. From these observations, it becomes evident that inconsistency plays a pivotal role in promoting consistency in learning. Although prior SSL methods have effectively leveraged unlabeled data from the perspective of consistent learning, they have overlooked the natural emergence of inconsistent information when decoders attempt to produce inherently consistent predictions. Moreover, they have failed to acknowledge the significance of discrepancies between those two decoders.
To this end, we propose a novel SSL method called **Learning** From the **Feature**-level **D**iscrepancy (LeFeD) from the perspective of learning inconsistent decoder features. Our hypothesis is that these discrepancies play a significant role in consistency learning, and properly harnessing this inconsistent information can enhance model performance. Our strategy distinguishes itself from existing methods on two fronts. Firstly, instead of primarily focusing on creating constraints to ensure prediction consistency, we place emphasis on feature discrepancy. Secondly, rather than striving to improve pseudo-label quality, we leverage the discrepancies to augment learning. In implementation, we first try to enlarge the discrepancy by training two differentiated decoders using distinct loss functions and deep supervision, and then iteratively learn from the inconsistency obtained at all scales. Our main contributions are three-fold.
* We propose a novel perspective for SSL, \(i.e.\), learning from the inconsistent features produced by two differentiated decoders.
* We observe the phenomenon that, when two decoders attempt to make consistent predictions, there always exists a discrepancy between two predictions, whose contribution to model performance has been verified empirically.
* We propose an accurate SSL method called LeFeD, which beats eight advanced SSL methods on three public medical image datasets, setting a new state of the art for semi-supervised medical image segmentation.
## 2 Related Works
### Medical Image Segmentation
It is well-acknowledged that delineating voxel-wise medical images is expensive and tedious, as well as requiring clinical expertise. Recently, deep learning methods present superior performance in automatic medical image segmentation, and a series of frameworks have emerged. For the CNN-based network, nnU-Net [24] extends U-Net [25] and validates the necessity of image pre-processing/post-processing and abundant data augmentation. With the rise of ViT [26], many attempts are made to combine the advantages of Transformer and CNN, including nnFormer [27], TransUNet [28], CoTr [29], Swin-Unet [30], \(etc\). These methods use convolutions and long-range self-attention to capture both local and global features, and present good results on numerous medical image segmentation tasks.
Meanwhile, there are different settings in medical image segmentation, containing self-supervised learning [31], few-shot learning [32], weakly-supervised [33] and semi-supervised learning [4; 5]. In this paper, we focus on the topic of semi-supervised learning, which aims to achieve supervised performance using only limited labeled data along with substantial unlabeled data, thereby mitigating the cost of data annotation.
### Semi-supervised Learning in Medical Image Analysis
Among existing SSL methods, pseudo-labeling and consistency regularization are the most popular. Pseudo-labeling intends to discover high-quality pseudo-labels for re-training. For instance, BoostMIS [7] takes into account the model performance in different training stages and uses an adaptive threshold to select pseudo-labels. UA-MT [10] treats the prediction made by the teacher branch as the pseudo-label and uses it to supervise the student branch. ACPL [6] introduces an anti-curriculum learning scheme, which trains models using hard samples first, and then easy samples. PEFAT [13] selects trustworthy pseudo-labels from the perspective of loss distribution. As for the consistency-based SSL methods, different types of constraints are commonly investigated. For example, I\({}^{2}\)CS [15] simulates the relation between labeled and unlabeled data by calculating attention maps. URPC [34] designs uncertainty rectified pyramid consistency for multi-scale features obtained from one decoder. DTC [17] discovers the consistency between segmentation and regression tasks. MC-Net+ [35] uses three decoders with different structures and enforces the outputs to be consistent.
By contrast, our LeFeD neither focuses on pseudo-label generation nor separately processing the regions with different confidence (or uncertainty) levels. It pays attention to the discrepancy produced in learning towards consistency.
### Inconsistent Learning in Medical Image Analysis
We also review the application of inconsistent learning in medical image processing. In adversarial training, VANTGAN [36] formulates visual attribution according to the discrepancy map between normal and abnormal counterparts to
learn abnormal-to-normal mapping. GAN-BiLSTM-CRF [37] incorporates a generator and a discriminator to address the issue of annotation inconsistency for those pseudo-labels generated by active learning. Specifically, in semi-supervised learning, CoraNet [38] treats the consistent and inconsistent predicted regions separately and designs an uncertainty estimation strategy to filter out the inconsistent pseudo-labels. NonAdjLoss [39] proposes a non-adjacency constraint to discover segmentation anomalies, which also directly removes inconsistent segmentation. All these methods do not realise the significance of inconsistent information in unlabeled data mining, and most of them tend to discard the inherent discrepancy.
By comparison, we analyze the importance of discrepancy in detail from the perspective both of pseudo-labeling and consistency regularization. To our knowledge, we are the first to advocate learning from inconsistency under the SSL setting.
## 3 Method
As delineated in Section 1, we posit that the observed discrepancies play a pivotal role, particularly in the context of achieving learning consistency. Such nuances, when harnessed appropriately, can be invaluable. As illustrated in Fig. 2, the foundational concept of LeFeD is anchored on two key steps: firstly, amplifying the variance between the feature maps generated by dual decoders at each hierarchical level, and subsequently, assimilating insights from this inconsistency in a recursive manner. To engender this diversified discrepancy, we approach it from two strategic dimensions. First, distinct loss functions are designated for each of the decoders. The cross-entropy (CE) loss is tailored to foster precise voxel-wise predictions, juxtaposed with the Dice loss which is oriented towards emphasizing region-wise predictions. Second, deep supervision [40] is employed to enlarge the difference. Furthermore, the discrepancy obtained from the last iteration is integrated into the encoder in the current iteration, serving as auxiliary information. This means that, within each iteration, every sample is processed multiple times with the discrepancy as additional input, except for the first pass.
### Training Differentiated Decoders
Drawing from intuitive understanding, the greater the variation between decoders--whether in terms of their architectural designs, training methodologies, or input types--the more unique their respective outputs tend to be. To effectively harness this inconsistency, we initially employ two decoders distinguished by their up-sampling techniques: one utilizes tri-linear interpolation, while the other employs transposed convolution. Furthermore, to augment the variance between the decoders, deep supervision is applied exclusively to one of them, ensuring that features across all scales manifest differently. In the context of the deeply supervised decoder, we use one convolution layer with the kernel size of 1\(\times\)1\(\times\)1 to adjust the output channel. Subsequently, the feature map
Figure 2: Illustration of our LeFeD model. The paired feature maps on the left were produced by the two decoders and were presented as examples to show the discrepancies. We can find decoder features vary significantly in all scales, and LeFeD is designed to learn from these discrepancies by feeding the discrepancies as supplementary information to the encoder.
undergoes up-sampling to align with the dimensions of the ground truth. Moreover, it's essential to note that while both decoders target identical outcomes on labeled data, their optimization processes are deliberately varied. Specifically, the CE loss is paired with one, while the Dice loss is reserved for the other, ensuring distinctive optimization trajectories for each. To this end, the objective for model training on labeled data can be formulated as
\[L_{sup}=L_{ds}+L_{dice}(f_{\theta_{A}}(x_{i}),y_{i})+L_{ce}(f_{\theta_{B}}(x_{i} ),y_{i}) \tag{1}\]
\[L_{ds}=\sum_{s=1}^{S}w_{s}L_{dice}(\zeta(f^{s}_{\theta_{A}}(x_{i})),y_{i}) \tag{2}\]
where \(f_{\theta_{A}}\) and \(f_{\theta_{B}}\) stand for the decoders that are optimized by Dice loss and CE loss, respectively. \(L_{ds}\) is the deep supervision loss, \(S\) is the number of scales and \(w_{s}\) is a constant to balance the weight of loss calculated from the \(s\)-th scale. \(\zeta\) is the combination of convolution and up-sampling. \(D_{L}=\{(x_{i},y_{i})\}_{i=1}^{N}\) is the labeled data set and \(N\) is the number of labeled images. In experiments, \(w_{s}\) are set to \(\{0.8,0.6,0.4,0.2,0.1\}\) for the deeply supervised loss calculated from low-level to high-level features.
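A hedged sketch of the deeply supervised Dice term in Eqs. (1)-(2) is given below; the soft Dice formulation, the foreground-channel selection, and the trilinear up-sampling of each scale are assumptions about details not fully specified in the text.

```python
import torch
import torch.nn.functional as F

W_S = [0.8, 0.6, 0.4, 0.2, 0.1]   # per-scale weights, low-level to high-level features

def soft_dice_loss(prob, target, eps=1e-5):
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def deep_supervision_loss(scale_logits, target):
    """scale_logits: per-scale decoder outputs after the 1x1x1 conv; target: (N, D, H, W)."""
    loss = 0.0
    for w, logits in zip(W_S, scale_logits):
        prob = torch.softmax(logits, dim=1)[:, 1:2]            # foreground probability map
        prob = F.interpolate(prob, size=target.shape[-3:],
                             mode='trilinear', align_corners=False).squeeze(1)
        loss = loss + w * soft_dice_loss(prob, target.float())
    return loss
```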
As for the unlabeled data, we do not put emphasis on finding truthworthy pseudo-labels or investigating regularized approaches, but simply employ Mean Squared Error (MSE) loss to make predictions consistent, which is written as
\[L_{unsup}=L_{mse}(f_{\theta_{A}}(x_{u}),f_{\theta_{B}}(x_{u})) \tag{3}\]
Fig. 3: 2D visualization results of different SSL methods when training with 10% labeled data. Top two rows are segmentation results on the pancreas dataset, medium two rows are results on the lung tumor dataset, and bottom two rows are results on the left atrium dataset.
where \(D_{U}=\{x_{u}\}_{u=1}^{M}\) is the unlabeled data set and \(M\) is the number of unlabeled images. The overall loss is the sum of the supervised and unsupervised losses, formulated as
\[L_{total}=L_{sup}+L_{unsup} \tag{4}\]
By training differentiated decoders, our model can capture more inconsistent information and thereby show better results.
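The unlabeled term and the overall objective (Eqs. 3-4) reduce to a few lines; whether the MSE is taken on raw logits or on softmax probabilities is not specified above, so the probability version used here is an assumption.

```python
import torch
import torch.nn.functional as F

def unsupervised_loss(out_A_u, out_B_u):
    # Eq. (3): consistency between the two decoders on unlabeled patches.
    # The MSE is taken here on softmax probabilities (an assumption).
    return F.mse_loss(torch.softmax(out_A_u, dim=1),
                      torch.softmax(out_B_u, dim=1))

# Eq. (4): the overall objective is simply the sum of the two terms,
#   L_total = supervised_loss(...) + unsupervised_loss(...).
```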
### Learning from Discrepancy
As illustrated in Fig. 1, we observe inherent inconsistencies in the features when two decoders strive to produce identical predictions. We argue that this observed discrepancy can serve as valuable supplementary information, enhancing the model's capacity to learn effectively from unlabeled data. Consequently, we propose to incorporate this discrepancy into the encoder. This strategy involves repeatedly utilizing this supplementary information for multiple iterations per sample, thereby enriching the learning process and potentially improving the model's performance. Concretely, the process can be formulated as:
\[Output^{t}=\begin{cases}E^{t}(x),&\text{if }t=1,\\ E^{t}(x+\lambda(f_{\theta_{A}}^{t-1}(x)-f_{\theta_{B}}^{t-1}(x))),&\text{if }t>1, \end{cases} \tag{5}\]
where \(Output^{t}\) is the feature embedding of \(t\)-th iteration. \(E\) symbolizes the encoder and \(\lambda\) serves as a hyper-parameter designed to regulate the influence of feature discrepancies (here some subscripts, \(e.g.\), \(i\), \(s\) and \(u\) are omitted for the sake of clarity and simplicity of explanation).
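A compact sketch of how Eq. (5) can be realized as an iterative forward pass is shown below. The model interface (`encoder`, `decoder_A`, `decoder_B`) and the use of the foreground-probability difference as the fed-back discrepancy are illustrative assumptions, not the authors' implementation.

```python
import torch

def forward_with_discrepancy(encoder, decoder_A, decoder_B, x, t_iters=3, lam=1e-3):
    """Eq. (5): re-encode each sample t_iters times, feeding back the decoder
    discrepancy of the previous iteration as auxiliary input."""
    out_A = out_B = None
    inp = x
    for t in range(t_iters):
        if t > 0:
            # Discrepancy of the previous iteration, scaled by lambda.  The
            # foreground-probability difference is used so that the feedback
            # has the same spatial shape as the image (an assumption).
            disc = torch.softmax(out_A, dim=1)[:, 1:2] - \
                   torch.softmax(out_B, dim=1)[:, 1:2]
            inp = x + lam * disc
        feats = encoder(inp)
        out_A, out_B = decoder_A(feats), decoder_B(feats)
    return out_A, out_B
```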
In summary, LeFeD is centered on the concept of discrepancy learning. This approach is characterized by the training of distinct decoders through the strategic employment of varied loss functions and deep supervision. The primary objective underpinning this strategy is to accentuate and leverage inconsistent information to a significant degree.
## 4 Experiments
### Datasets
We employed three publicly available datasets for the evaluation of our method, including the pancreas dataset [41], the lung tumor dataset [42], and the left atrium dataset [43]. The pancreas dataset consists of 82 contrast-enhanced abdomen CT scans, which are split into 62 scans for training and 20 scans for testing. The lung tumor dataset includes 63 cases, with a split of 50 for training and 13 for testing purposes. The left atrium dataset contains 100 gadolinium-enhanced MR images, out of which 80 serve as training images and the rest, 20, are for testing. For both the pancreas and left atrium datasets, we strictly adhered to a consistent pre-processing protocol. This involved center cropping with an enlargement margin of \(25\) voxels, respacing to attain an isotropic resolution of 1.0mm \(\times\) 1.0mm \(\times\) 1.0mm, and normalization to achieve zero mean with unit variance. To make fair comparisons with [11, 17, 35, 44, 45, 46], results were detailed with a label percentage of both 10% and 20%.
For the lung tumor dataset, an initial Hounsfield Units (HU) threshold, ranging from -500 to 275, was employed for voxel value clipping. Thereafter, the aforementioned pre-processing strategy was utilized. Given the heightened complexity associated with tumor segmentation compared to organ segmentation, results were presented with 20% and 30% labeled data.
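The pre-processing described above amounts to a few array operations. The sketch below is schematic (NumPy/SciPy, with assumed argument conventions) and is not the authors' actual pipeline.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess(volume, label, spacing, hu_clip=None, margin=25,
               target_spacing=(1.0, 1.0, 1.0)):
    """Schematic pre-processing: optional HU clipping (lung tumor set only),
    center crop around the labeled region with a 25-voxel margin, respacing
    to 1 mm isotropic resolution, and zero-mean / unit-variance normalization."""
    if hu_clip is not None:                       # e.g. (-500, 275) for lung CT
        volume = np.clip(volume, *hu_clip)

    # Crop around the foreground, enlarged by `margin` voxels on each side.
    coords = np.argwhere(label > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin, label.shape)
    sl = tuple(slice(a, b) for a, b in zip(lo, hi))
    volume, label = volume[sl], label[sl]

    # Respacing to isotropic 1.0 x 1.0 x 1.0 mm.
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    volume = zoom(volume, factors, order=1)
    label = zoom(label, factors, order=0)

    # Normalization to zero mean and unit variance.
    volume = (volume - volume.mean()) / (volume.std() + 1e-8)
    return volume, label
```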
### Implementation Details
Following previous works [11, 17, 35, 44, 45, 46], V-Net [47] was set as the baseline for easy implementation and comparison. For model training, we used an SGD optimizer and set the learning rate to 0.01, the weight decay to 0.0001 and the momentum to 0.9. The hyper-parameters, i.e., the number of iterations \(t\) and the weight \(\lambda\), were set to 3 and 1e-3, respectively. The input size was set to 96\(\times\)96\(\times\)96, 96\(\times\)96\(\times\)96 and 112\(\times\)112\(\times\)80 for the pancreas, lung tumor and left atrium datasets, respectively. The batch size was 4, containing 2 labeled and 2 unlabeled cubic patches. All experiments were implemented in PyTorch [48] with one NVIDIA GeForce RTX 3080 Ti GPU. The evaluation metrics Dice, Jaccard, Average Surface Distance (ASD) and 95% Hausdorff Distance (95HD) are reported.
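For reference, the stated optimization settings translate into roughly the following configuration; the `nn.Conv3d` stand-in replaces the two-decoder V-Net, which is omitted here.

```python
import torch
import torch.nn as nn

# Stand-in for the two-decoder V-Net backbone (not reproduced here).
model = nn.Conv3d(1, 2, kernel_size=3, padding=1)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)

T_ITERS = 3                  # discrepancy-learning iterations per sample
LAMBDA  = 1e-3               # weight of the fed-back discrepancy
PATCH   = (96, 96, 96)       # (112, 112, 80) for the left atrium dataset
BATCH   = 4                  # 2 labeled + 2 unlabeled cubic patches per step
```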
## 5 Results
We compared our LeFeD with eight SOTA and relevant SSL methods: (1) UA-MT [10] incorporates model ensemble and entropy estimation to enhance pseudo-label quality; (2) SASSNet [45] learns from unlabeled data with shape prior; (3) DTC [17] investigates consistency between segmentation and regression tasks; (4) ASE-Net [44] uses dynamic inference to better adapt model to unseen test data; (5) SS-Net [46] tries to smooth the decision boundary by imposing a strong constraint in the situation of injecting adversarial noise; (6) MC-Net+ [35] pays attention to the cycled pseudo-label generation and uses a sharpening function for entropy minimization; (7) FUSSNet [11] designs uncertainty estimation schemes to split certain and uncertain regions; (8) BCP [12] aligns the kernel distribution with bidirectional copy-paste augmentation.
### Comparisons on Pancreas Dataset
Table 1 presents the comparison with the other eight competitive methods. It is notable that LeFeD exhibits marked performance advantages over the second-best method, BCP [12]. Specifically, when operating with only 10% (6) of the data labeled, LeFeD outperforms BCP by margins of 1.68% and 2.27%, as measured by the Dice and Jaccard scores, respectively. Although a marginal performance gap of 0.22% is observed in the Dice score in favor of BCP when the labeled data is increased to 20%, LeFeD compensates with a noteworthy improvement of 1.43% on the 95HD score. Beyond that, when compared with MC-Net+ [35] that employs three distinct decoders for consistency learning, LeFeD demonstrates significant Dice performance gains of 5.51% and 3.32% under label percentages of 10% and 20%, respectively. These results robustly validate the superior performance and inherent simplicity of our LeFeD. Remarkably, LeFeD achieves this by capitalizing on inherent inconsistencies within feature learning and eliminating the need for additional uncertainty estimation
schemes or strict constraints. Moreover, the first two rows of Fig. 3 visually represent the segmentation outcomes of various SSL techniques. Within these illustrations, it is apparent that LeFeD consistently produces the most comprehensive and accurate segmentations, particularly in regions that are traditionally challenging for segmentation algorithms.
### Comparisons on Lung Tumor Dataset
We have further extended the applicability of LeFeD to encompass the task of lung tumor segmentation. As evidenced in Table 2, LeFeD consistently performs best across all evaluation metrics under varying proportions of labeled data. For instance, when compared to the second-best performing model, BCP [12], LeFeD exhibits performance gains of 2.11% and 0.76% in Dice and HD scores respectively with 20%(10) labeled data, and 1.17% and 0.66% with 30%(15) labeled data. Furthermore, we observe a noticeable performance degradation in the task of lung tumor segmentation when compared to pancreas segmentation and left atrium segmentation. This performance dip is primarily attributable to the complicated shapes and textures of lung tumors. Encouragingly, LeFeD demonstrates a lower rate of performance degradation relative to competing models. This suggests that LeFeD holds considerable potential for effectively addressing challenging tasks, even when constrained by limited labeled data. Additionally, the middle two rows of Fig. 3 visually present the outcomes of lung tumor segmentation by LeFeD. These results reveal that LeFeD can capture more complete tumor areas with
fewer false positives.
### Comparisons on Left Atrium Dataset
As shown in Table 3, LeFeD exhibits a compelling capability, nearly matching the fully supervised baseline even when operating with a limited labeled dataset, constituting merely 20% (16 samples) of the entire pool. Specifically, LeFeD achieves a Dice score of 91.44%, which is marginally below the fully supervised baseline of 91.47%, and a Jaccard score of 84.30% compared to the 84.36% attained under full supervision. Interestingly, LeFeD records an ASD score of 1.39, which even surpasses the theoretical upper bound of 1.51. This phenomenon is likely attributed to the distribution of inconsistent features, which mainly localize outside the central organs, thus empowering LeFeD to capture richer information on surface characteristics. Moreover, as the proportion of labeled data diminishes further to 10% (8 samples), LeFeD's performance gains become increasingly pronounced. For example, compared to FUSSNet [11], LeFeD outperforms it by 1.13% and 1.81% in terms of Dice and Jaccard scores, respectively. As for the visualization results, the last two rows of Fig. 3 show that LeFeD consistently generates masks that are remarkably close to the ground truth, outperforming its competitor models.
## 6 Discussion
### Ablation Study
To evaluate the effectiveness of each component, we conduct ablation studies on the pancreas dataset under a label percentage of 10%. As detailed in the first four rows of
Figure 4: Decoder features obtained from different up-sampling stages. High-level to low-level features are listed from Scale4 to Scale1.
Figure 5: Discussion of the way to encode discrepancy. Four encoding types are compared, including **A-B**, **B-A**, **|A-B|** and **Entropy**. **A-B** means the discrepancy is generated by subtracting the output of decoder B from that of decoder A. The symbol | - | represents taking the absolute value.
Table 4, we can observe that by solely leveraging Inconsistent Learning (IL), the performance of the LeFeD model surpasses that reported in [17, 45], which provides empirical validation of the efficacy of employing discrepancy learning as a strategy. Besides, when different loss functions (DL) are combined with deep supervision (DS) to train the decoders, consistent gains in performance are observed across evaluations. This trend suggests that highlighting inconsistent information proves advantageous when training with unlabeled data.
We also explore the effectiveness of using the decoder features across various scales. Our results in Fig. 4 show that features from scale 1 are directly aligned with the final prediction. This observation leads us to infer that the discrepancies discerned at scale 1 are potentially focused on challenging regions, such as object boundaries. Differently, features from scale 2 to scale 4 appear to contain an abundance of high-level semantic information, which somewhat complicates the interpretability of these discrepancies. For a deeper understanding, we conduct quantitative experiments to explore whether these inconsistencies have tangible effects on learning from unlabeled data. As depicted in the last four rows of Table 4, discrepancies obtained from all scales have a positive influence on model performance. We hypothesize that discrepancies derived from high-level features may be indicative of variations in the model's comprehension of image content. Therefore, when presented with identical input, the discrepancies discerned at each scale prove to be of significant consequence, particularly as the decoders endeavor to yield consistent predictions.
### Analysis of Hyper-parameters
In our experiments, \(t\) and \(\lambda\) serve as two critical hyper-parameters. Specifically, \(t\) denotes the number of iterations for each sample in discrepancy learning, while \(\lambda\) is a weighting factor to balance the influence of the discrepancy. As evidenced in Table 5, LeFeD exhibits optimal performance when setting \(t\) to 3. When \(t\) is set to a substantially larger value, the model may overfit as the discrepancy decreases gradually. Conversely, when \(t\) is set to a smaller value, the process of discrepancy learning is inadequate. From Table 6, we can observe that the most suitable value for \(\lambda\) is 1e-3. It is worth highlighting that \(\lambda\) has a greater impact on results than \(t\), which means that controlling \(\lambda\) is more important than controlling \(t\).
### Analysis of Discrepancy Encoding
In this part, we discuss different strategies for encoding discrepancy, including \(A-B\), \(B-A\), \(|A-B|\) and Entropy. \(A-B\) means the discrepancy is calculated by subtracting the output of decoder B from that of decoder A. The symbol \(|\cdot|\) means taking the absolute value. From Fig. 5, we can see \(A-B\) generally yields superior results in comparison to \(B-A\), despite the performance gap being relatively narrow (commonly around a 1% difference in Dice score). The main reason for the phenomenon lies in the distinct supervision levels of the two decoders: decoder A is subjected to deep supervision, theoretically endowing it with enhanced segmentation capabilities relative to decoder B. Consequently, the discrepancy computed through \(A-B\) is more likely to highlight the false negatives in the segmentation, whereas
Figure 6: Visualization of the last layer decoder feature maps in different training stages, _i.e._, from Epoch1 to Epoch60. “Dice” and “CE” represent the feature maps produced by decoders that are optimized by Dice loss and CE loss, respectively.
\(B-A\) is more inclined to highlight the false positives. Therefore, \(A-B\) emerges as the more favorable configuration for our purposes. \(|A-B|\) and Entropy lie between the cases of \(B-A\) and \(A-B\), so their effects are intermediate.
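The four encodings compared in Fig. 5 are simple element-wise operations on the two decoder outputs. In the sketch below the operands are assumed to be foreground probabilities, and the entropy variant is taken as the entropy of the averaged prediction, which is our assumption rather than a stated choice.

```python
import torch

def encode_discrepancy(prob_A, prob_B, mode="A-B", eps=1e-8):
    """Discrepancy encodings compared in Fig. 5.
    prob_A, prob_B: per-voxel foreground probabilities of decoders A and B."""
    if mode == "A-B":        # favored: tends to highlight false negatives
        return prob_A - prob_B
    if mode == "B-A":        # tends to highlight false positives
        return prob_B - prob_A
    if mode == "|A-B|":
        return (prob_A - prob_B).abs()
    if mode == "entropy":    # entropy of the averaged prediction (our reading)
        p = 0.5 * (prob_A + prob_B)
        return -(p * torch.log(p + eps) + (1 - p) * torch.log(1 - p + eps))
    raise ValueError(mode)
```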
### Visualization of Features in Different Training Stages
As displayed in Fig. 6, we visualize the feature maps of the last decoder layers when training with different loss functions and training epochs. The features vary significantly, especially in the early training stages (ranging from Epoch 1 to Epoch 10, approximately within the first 1500 iterations). It is clear that the Dice loss pays attention to region-wise predictions whereas the CE loss focuses on voxel-wise predictions, but the final training objective is the same, that is, making consistent predictions. Beyond that, the discrepancy acquired from the last layer typically lies around the boundary and ambiguous regions, which is beneficial to model performance if utilized. For example, according to Table 1, LeFeD surpasses SS-Net by 5.77% in Dice score when using 10% labels on the pancreas dataset.
### Model Size Comparison
We further conduct experiments to compare various models in terms of their parameters and multiply-accumulate operations (MACs). These results are reported using an input size of 96 \(\times\)96 \(\times\)96. It is noteworthy that, despite the observed increase in computational cost when compared to the baseline V-Net [47], there is a marked improvement in the results. For instance, when utilizing 10% of the labeled data on the left atrium dataset, we observe gains of 7.26% and 6.57% in the Dice and HD scores, respectively. Furthermore, in comparison to the MC-Net model [22], an SSL method incorporating two decoders, our LeFeD presents Dice score performance gains exceeding 3% when trained with 10% of the labels on three datasets, while keeping nearly the same computational cost. These observations validate both the efficiency and effectiveness of discrepancy learning.
## 7 Conclusion, Limitation and Future Work
In this paper, we first analyze the value of the discrepancy that arises when learning towards consistent predictions, and thereby propose a novel SSL method, LeFeD, which regards the discrepancy as a feedback signal and feeds it to the encoder. Different from prior works that emphasize filtering out inconsistent regions or exploring diverse regularization constraints, our approach is intuitive and effective. Specifically, it involves the training of differentiated decoders, followed by a learning process informed by the resultant discrepancy. Despite its apparent simplicity, LeFeD offers a new perspective on learning from unlabeled data. Compared to other SSL methods, LeFeD is more generic since there is no special design for learning lesion- or organ-specific information. Empirical results further substantiate the efficacy and superiority of LeFeD, as it consistently surpasses competing methods, particularly in scenarios where annotation resources are limited.
However, it is important to note potential challenges associated with the direct application of LeFeD in multi-center and multi-domain contexts. This is attributed to the assumption in SSL settings that the training and test data are sampled from a similar distribution. Looking forward, we plan to extend LeFeD to accommodate a broader range of datasets and tasks. This direction of development aims to enhance the reliability and applicability of LeFeD, making it a feasible option for clinical deployment.
|
2305.00575 | Dissipative Callan-Harvey mechanism in 2+1 D Dirac system: The fate of
edge states along a domain wall | The Callan-Harvey mechanism in 2+1 D Jackiw-Rebbi model is revisited. We
analyzed Callan-Harvey anomaly inflow in the massive Chern insulator (quantum
anomalous Hall system) subject to external electric field. In addition to the
conventional current flowing from the bulk to edge due to parity anomaly, we
considered the dissipation of the edge charge due to interaction with external
bosonic bath in 2+1 D and due to external bath of photons in 3+1 D. In the case
of 2+1 D bosonic bath, we found a new stationary state, which is defined by
the balance between Callan-Harvey current and the outgoing flow caused by the
dissipation processes. In the case of 3+1 D photon bath, we found a critical
electric field, below which this balance state can be achieved, but above which
there is no such balance. Furthermore, we estimated the photon-mediated
transition rate between 2+1 D bulk and 1+1 D topological edge state of the
order of one ns$^{-1}$ (nanosecond) at room temperature. | C. X. Zhang, M. Ulybyshev, C. Northe, E. M. Hankiewicz | 2023-04-30T21:08:50Z | http://arxiv.org/abs/2305.00575v1 | Dissipative Callan-Harvey mechanism in 2+1 D Dirac system: The fate of edge states along a domain wall
###### Abstract
The Callan-Harvey mechanism in 2+1 D Jackiw-Rebbi model is revisited. We analyzed Callan-Harvey anomaly inflow in the massive Chern insulator (quantum anomalous Hall system) subject to external electric field. In addition to the conventional current flowing from the bulk to edge due to parity anomaly, we considered the dissipation of the edge charge due to interaction with external bosonic bath in 2+1 D and due to external bath of photons in 3+1 D. In the case of 2+1 D bosonic bath, we found a new stationary state, which is defined by the balance between Callan-Harvey current and the outgoing flow caused by the dissipation processes. In the case of 3+1 D photon bath, we found a critical electric field, below which this balance state can be achieved, but above which there is no such balance. Furthermore, we estimated the photon-mediated transition rate between 2+1 D bulk and 1+1 D topological edge state of the order of one ns\({}^{-1}\) at room temperature.
## I Introduction
An anomaly in quantum field theory (or quantum anomaly) occurs when a symmetry of the classical action is broken by quantum effects. One of the most important quantum anomalies is the chiral anomaly, also known as the Adler-Bell-Jackiw anomaly [1; 2] or the axial anomaly. It refers to the breaking, by quantum fluctuations, of the conservation law of the axial vector current associated with chiral symmetry. In odd-dimensional space-time, the chiral anomaly does not exist, and is replaced by the so-called parity anomaly: if fermions are coupled to a gauge field, parity symmetry is lost after quantization. These quantum anomalies have evoked great research interest in elementary particle physics and in condensed matter physics. The chiral anomaly is important for understanding the pion decay into two photons (\(\pi\rightarrow\gamma\gamma\)) and also the chiral magnetic effect in Dirac materials [3]. In contrast, the parity anomaly is essential in the quantum anomalous Hall effect (QAHE) [4], which is defined as quantized Hall conductivity in the absence of a magnetic field [5]. In both scenarios, anomalies confirm the deviation from classical physics. Thus, their presence adds to the long list of successes of quantum theory.
The parity anomaly and the chiral anomaly exhibit a certain connection when one considers a finite-size fermionic system with boundaries [6]. We take a cylinder-shaped bulk system in 2+1 D with two 1+1 D edges [7] as an example, and consider two scenarios to review such a connection. The first scenario is the work done by one of the authors [8]. Due to the parity anomaly in the 2+1 D bulk, an out-of-surface magnetic field pumps the charge to the bulk states [9], but the total charge density \(n_{tot}=n_{bulk}+n_{edge}\) is constant and zero. It demonstrates the Callan-Harvey mechanism [6]: it is the edge states that compensate the charge deficit of the bulk under the magnetic field [8]. The second scenario is in the absence of the magnetic field, but in the presence of an electric field parallel to the edges, which induces a Hall current in the bulk, perpendicular to the edge, due to the parity anomaly. This bulk current pumps charge from one edge across the bulk to the other edge, and the charge accumulates at the edges, which changes the chemical potential difference between them. This is another example demonstrating the Callan-Harvey mechanism [6][10]: from the viewpoint of the bulk, the current "stops" at the edge, which breaks charge conservation. At the same time, the electric field generates charges at the edge, because of the 1+1 D chiral anomaly. One has to consider the two subsystems together; only then does the charge conservation law hold for the whole system. Importantly, this cancels the gauge anomaly [11] that would otherwise occur.
Now one may ask the following questions: What is the fate of this surplus charge at the edge? Will the charge accumulation be boundless? We know that such an edge mode propagates in a single direction, and is protected by topology. Back-scattering is forbidden, which makes the edge mode robust to impurities [12]. However, the accumulation cannot happen infinitely; when all the edge states are occupied, one expects relaxation to the bulk bands. In reality, however, the edge states and the bulk states interact with each other. One expects that if the edge chemical potential is higher than the energy gap of the bulk, say \(\mu>m_{0}v^{2}\), relaxation occurs: the electrons at the high-energy (occupied) edge states tend to relax into the low-energy (empty) bulk states, and dissipate energy to the environment. Such an interplay has been investigated in quantum Hall systems [13][14].
In the present work, we study the interplay between edge states and bulk states in QAHE systems by introducing
electron-photon interactions [19]. In addition to the edge-to-bulk relaxation process, there is another excitation process transferring the charge from an edge state to the bulk. Even before the edge chemical potential exceeds the gap energy, i.e. \(\mu<m_{0}v^{2}\), the edge state can be excited into bulk states by absorbing a photon from the thermal fluctuations. Such an excitation is the leading order contribution to the transition, pushing the electrons to leave the edge. Our present work will focus on such an excitation process and we will calculate its rate using the Lindblad formalism.
The paper is organized as follows. In Sec.2, the Jackiw-Rebbi model is introduced, and the Callan-Harvey mechanism is explained. We also introduce the setup of the paper and recapitulate the eigenstates (the wave functions) of the non-interacting 2+1 D Jackiw-Rebbi model. In Sec.3, we investigate a toy model of QED\({}_{3}\) with a planar photon and calculate the transition rate of the edge modes in the framework of the Lindblad approach. In Sec.4, the interaction with a real 3+1 D photon is studied. Sec.5 provides the conclusion and outlook.
## II Callan-Harvey mechanism
In this section, we introduce the Callan-Harvey mechanism [6] in the 2+1 D Jackiw-Rebbi model [20], and the eigenstates of the non-interacting theory, to lay the foundation for the next sections.
In order to explain the Callan-Harvey mechanism, we start with a quite general 2+1 D fermion (electron) \(\psi\) in the background of an Abelian gauge field \(A_{\mu}\) (\(\mu=0\sim 2\)) with the action
\[S_{1}=\int d^{3}z\bar{\psi}(\gamma^{0}iD_{0}+v\gamma^{j}iD_{j}-mv^{2})\psi \tag{1}\]
in which \(z^{\mu}=(t,x,y)\) with \(\mu\in\{0,1,2\}\), \(D_{\mu}=\partial_{\mu}-ieA_{\mu}\) and \(j\in\{1,2\}\). We assume \(\mu=0\) is for the time component and \(\mu=1,2\) or \(j\) for the spatial components. The Dirac matrices \(\gamma^{\mu}\)'s are given by \(\gamma^{0}=\sigma_{z}\), \(\gamma^{1}=i\sigma_{y}\) and \(\gamma^{2}=-i\sigma_{x}\); \(\bar{\psi}=\psi^{\dagger}\gamma_{0}\), \(v\) is the velocity of the fermions, and the mass term \(m=m(x)\) has the following domain wall structure
\[m(x)=\begin{cases}m_{0}&x>0\\ -M&x<0.\end{cases} \tag{2}\]
The mass parameters here \(m_{0}\) and \(M\) are positive. The action Eq.1 with the domain-wall mass is called Jackiw-Rebbi model [20]. In the original work of Jackiw and Rebbi, they considered the special case when \(m_{0}=M\).
Callan and Harvey considered the effective Chern-Simons action of such a fermion theory with a domain wall mass. The Chern-Simons action can be obtained by integrating out the fermions and its form is given by [6]
\[S_{CS}=\frac{e^{2}}{h}\int d^{3}x\,{\cal C}\,\epsilon^{\mu\nu\rho}A_{\mu} \partial_{\nu}A_{\rho} \tag{3}\]
with the Chern number \({\cal C}={\rm sgn}(m(x))/2\). This effective action varies by a boundary term under gauge transformations in the presence of the domain wall. This is an example of a gauge anomaly. Fortunately, a zero mode living on the domain wall was found to produce a chiral anomaly, which precisely cancels the aforementioned gauge anomaly. In this sense, the bulk and the boundary depend on each other. Later, in the 1990s, Chandrasekharan explicitly proved such a cancellation [10].
This anomaly cancellation can also be understood at the level of the fermionic theory, i.e. before integrating out the fermions to obtain the effective Chern-Simons theory, Eq.3. Consider a 2+1 D Dirac fermion with a constant mass term \(m\). The coupling of the fermion to the gauge field induces the parity anomaly in the electric current [15]
\[ej_{\mu}={\cal C}\frac{e^{2}}{h}\epsilon^{\mu\nu\rho}\partial_{\nu}A_{\rho}, \tag{4}\]
with \({\cal C}={\rm sgn}(m)/2\). If the mass term is given by Eq.2, then the Hall conductivity \(\sigma_{H}=sgn(m)e^{2}/2h\) changes its sign from the \(x>0\) region to the \(x<0\) region. Now we apply an electric field in the \(y\)-direction, which induces Hall bulk currents in the \(x\)-direction. Due to the sign change of the fermion mass, the Hall currents in the \(x<0\) and \(x>0\) regions flow in opposite directions (see Fig.1). This leads to charge accumulation at the edge region \(x\sim 0\). These currents are called Goldstone-Wilczek currents [10] or anomaly inflow [16; 17]. If one neglects the edge mode, the fermions seem to disappear at the boundary, which breaks charge conservation and leads to a gauge anomaly.
One can also take the viewpoint of the edge states. According to the so-called "bulk-edge correspondence", the fact that the difference in Chern number between the two sides of the domain wall \(\Delta{\cal C}={\cal C}(x>0)-{\cal C}(x<0)=\frac{1}{2}-(-\frac{1}{2})=1\), implies that there is one (massless) chiral mode along the edge. If one considers the interface between
a Chern insulator with \(\mathcal{C}=1\) and the vacuum (\(\mathcal{C}=0\)), then the difference in Chern number is still 1, which also implies one chiral edge mode. The main results of the two cases are essentially the same. The dispersion relations for both bulk and edge modes are shown in Fig.2a. The chiral mode is described by 1+1 D massless Dirac equation, and the chiral anomaly in 1+1 D tells us \(\partial_{\mu}j^{\mu}=\partial_{\mu}j^{\mu}_{5}=eE/h\). (The first equality holds because there is only one chiral mode, left-handed or right-handed.) If one looks at the edge theory itself, the charge conservation is broken: the charge number may increase with time [6]. Therefore, one has to consider the edge and the bulk theory as a whole, and then one will find that the total charge of the whole system is conserved: the bulk loses charge, and the edge (the domain wall) gains the same amount of charge in turn.
However, what is the fate of this extra charge? A similar charge pumping process was studied before in the context of the QAHE under out-of-plane magnetic fields [8; 18], but the relaxation or dissipation process was not taken into account. One expects some kinds of relaxation or transition process which transfers the surplus electrons at the edge to the bulk (See Fig.2a ). Such a relaxation process can be mediated by an electron-boson coupling, for example, via a photon or phonon [19]. In the present work, such electron-photon interactions are investigated. Since the speed of light \(c\) is much bigger than the Fermi velocity \(v\), i.e. \(c>>v\), the edge-state electron can be excited into bulk states via absorbing one photon, at leading order in perturbation theory (see the red arrow in Fig.2a). At next-to-leading order, the edge state electron can absorb one photon first and then emit another photon. Such a Compton scattering or Raman process may also transfer the fermion from an edge state to the bulk (see the green arrows in Fig.2a).
In the following, we construct the eigenstates (wave functions) of the non-interacting theory, i.e. \(e=0\) in Eq.1 and from now on we always assume \(0<m_{0}<<M\), i.e. the vacuum gap is much bigger than the massive Chern insulator gap. Equivalently, the theory can be described in terms of the Hamiltonian
\[H=-vi\partial_{x}\sigma_{x}-vi\partial_{y}\sigma_{y}+m(x)v^{2}\sigma_{z}. \tag{5}\]
The spinor \(\psi\) has two components, thus the equation \(H\psi=E\psi\) includes two coupled first-order differential equations.
Since there is no \(y\)-dependence in the Hamiltonian (5), the momentum along the \(y\)-direction is conserved, such that the partial derivative \(-i\partial_{y}\) can be replaced by a constant \(p_{2}\). Therefore, we assume \(\psi(t,x,y)=\Xi(x)e^{-iEt+ip_{2}y}\), and then the spinor \(\Xi=(\xi_{1},\xi_{2})^{T}\) satisfies the following equation for \(x>0\)
\[\begin{pmatrix}m_{0}v^{2}&-vi\partial_{x}-ivp_{2}\\ -vi\partial_{x}+ivp_{2}&-m_{0}v^{2}\end{pmatrix}\begin{pmatrix}\xi_{1}\\ \xi_{2}\end{pmatrix}=E\begin{pmatrix}\xi_{1}\\ \xi_{2}\end{pmatrix}. \tag{6}\]
In order to solve the above differential equations, we transform them into a second order differential equation for component \(\xi_{1}\):
\[\partial_{x}^{2}\xi_{1}+[(E/v)^{2}-(m_{0}v)^{2}-p_{2}^{2}]\xi_{1}=0. \tag{7}\]
Figure 1: The setup of the system. As explained in the main text of Sec.1, the fermion mass changes sign at \(x\,=\,0\), and therefore the Chern number is \(1/2\) in the \(x\,>\,0\) region and \(-1/2\) in the \(x\,<\,0\) region. In the presence of a uniform electric field in the y-direction (green arrows), the Hall currents (blue arrows) in the two regions flow in opposite directions. Therefore, there will be charge accumulation at the edge. The red arrows denote the edge current.
The other component \(\xi_{2}\) can be expressed by \(\xi_{1}\) as
\[\xi_{2}=\frac{(-iv\partial_{x}+ivp_{2})\xi_{1}}{E+m_{0}v^{2}}. \tag{8}\]
There is one edge state (bound state) localized around \(x=0\), which is given by
\[\Xi^{(e)}(x)=\sqrt{m_{0}v}\begin{pmatrix}1\\ i\end{pmatrix}e^{-m_{0}vx} \tag{9}\]
with the energy \(E=vp_{2}\). The bulk states (continuous states) are
\[\Xi_{p_{1},p_{2}}(x)=a\begin{pmatrix}1\\ \frac{vp_{1}+ivp_{2}}{E+m_{0}v^{2}}\end{pmatrix}e^{ip_{1}x}+b\begin{pmatrix}1 \\ \frac{-vp_{1}+ivp_{2}}{E+m_{0}v^{2}}\end{pmatrix}e^{-ip_{1}x} \tag{10}\]
The coefficients \(a\) and \(b\) are normalization constants, and can be found by the normalization condition
Figure 2: The energy spectra for electron systems and the edge-to-bulk transition processes. The blue and black lines describe the unoccupied and occupied states, respectively. Zigzag lines represent ingoing or outgoing photons. (a) A semi-infinite electron system with one edge or domain wall. There is only one chiral mode, whose occupied states are depicted by the straight blue line. The red arrow depicts the leading-order excitation process, which absorbs one photon. The green arrows are related to the second-order relaxation process, which includes an absorption of a low-energy photon (green zigzag line) and an emission of a high-energy photon (blue zigzag line). (b) The schematic diagram of transitions in a finite-sized system with two edges. The two straight blue lines represent the occupied edge states. The red and purple arrows denote the absorption and emission processes, while the red and purple zigzag lines denote the ingoing and outgoing photons, respectively.
\(\int\Xi_{p_{1},p_{2}}^{\dagger}(x)\Xi_{p_{1}^{\prime},p_{2}}(x)dx=\delta(p_{1}-p_{1}^{\prime})\). The result is given by
\[a=-\frac{E+m_{0}v^{2}-vp_{2}-ivp_{1}}{2\sqrt{2\pi E(E-vp_{2})}}\qquad\mbox{and} \qquad b=\frac{E+m_{0}v^{2}-vp_{2}+ivp_{1}}{2\sqrt{2\pi E(E-vp_{2})}} \tag{11}\]
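As a quick sanity check of Eqs. (5) and (9), one can verify numerically that the edge spinor is an eigenstate of the \(x>0\) Hamiltonian with energy \(E=vp_{2}\) and that it is normalized. The sketch below uses arbitrary units with \(\hbar=1\); the parameter values are illustrative only.

```python
import numpy as np

m0, v, p2 = 1.0, 1.0, 0.7                      # arbitrary units, hbar = 1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

xi = np.array([1.0, 1.0j])                     # spinor part of Eq. (9)

# H acting on xi * exp(-m0 v x) * exp(i p2 y):  d/dx -> -m0 v,  -i d/dy -> p2
H_xi = (-1j * v) * (-m0 * v) * (sx @ xi) + v * p2 * (sy @ xi) + m0 * v**2 * (sz @ xi)
print(np.allclose(H_xi, v * p2 * xi))          # True: the edge state has E = v p2

# Normalization of Eq. (9) over x > 0 (simple Riemann sum).
x = np.linspace(0.0, 20.0 / (m0 * v), 200001)
norm = np.sum(2.0 * m0 * v * np.exp(-2.0 * m0 * v * x)) * (x[1] - x[0])
print(norm)                                    # approximately 1
```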
## III Interaction with a planar photon (QED\({}_{3}\))
In this section, we consider a 2+1 D Dirac fermion interacting with a 2+1 D photon. It is a toy model for the interaction between the electrons in a two-dimensional plane and the photons. While electrons can be confined to the two-dimensional plane in the laboratory, photons only exist in three-dimensional space. 3+1 D photons will be considered in the next section. It is instructive, however, to start with QED\({}_{3}\), i.e. the case where both electrons and photons live in the two-dimensional plane. Furthermore, we take into account that the Fermi velocity for electrons in solids is much smaller than the speed of light i.e. \(v<<c\).
The action is given by
\[S_{2}=S_{1}+\frac{1}{2}\int d^{3}z\Big{(}{\dot{A_{0}}}^{2}-c^{2}\sum_{i=1,2}( \partial_{i}A_{0})^{2}\Big{)} \tag{12}\]
where \(S_{1}\) is given in Eq.1 and \(A_{0}\) is the temporal component of the photon field. The spatial components \(A_{1}\) and \(A_{2}\) are neglected in the following, because of the small Fermi velocity [21]. The interaction term \(e\psi^{\dagger}\psi A_{0}\) is responsible for the transitions from edge states to bulk states, and the coupling strength \(e\) is the electron charge in 2+1 D, which has the dimension 1/2 and scales as \(E^{1/2}\), where \(E\) is energy.
Since the speed of light is much bigger than the Fermi velocity, \(c>>v\), a transition process from an edge state to an upper-band bulk state with lower energy cannot happen at first order, i.e. a high-energy edge state cannot decay into a low-energy upper-band bulk state by emitting only one photon. On the contrary, an electron in an edge state can absorb one photon and be excited into a bulk state with a higher energy (see the red arrow in Fig. 2a). Due to the photon absorption, a finite (nonzero) temperature is necessary for such a process to occur. This is the main focus of this section. The leading order contribution to a real relaxation process (from a high energy initial state to a low energy final state) comes from the second order, which is similar to Compton scattering in quantum electrodynamics. It is depicted by the green arrows in Fig. 2a. The corresponding process can be described by the effective Hamiltonian \(H_{eff}=\lambda\psi^{\dagger}\psi A_{0}^{2}\), where \(\lambda=e^{2}/E^{*}\), and \(E^{*}\) is a characteristic energy related to the virtual intermediate state (shown by the green dashed line in Fig.2a). However, it is a high-order process suppressed by the higher power of the coupling constant.
The time evolution of the density matrix \(\rho\) is governed by the equation \(\dot{\rho}=-i[H_{I},\rho]\), where \(H_{I}=e\psi^{\dagger}\psi A_{0}\) is the interaction term of the Hamiltonian in the interaction picture. In Born approximation, the total density matrix \(\rho\) is assumed to be factorized into \(\rho=\rho_{S}\otimes\rho_{B}\), where \(\rho_{S}\) is the density matrix of the electron system and \(\rho_{B}\) is the density matrix of the bath or the bosons (photons). Tracing out the degree of freedom of the bath environment, the evolution of the electron system \(\rho_{S}=Tr_{B}(\rho)\) can be formulated by [22]
\[\dot{\rho}_{S}=-\int_{0}^{t}dsTr_{B}[H_{I}(t),[H_{I}(s),\rho(s)]] \tag{13}\] \[=-\int_{0}^{\infty}dsTr_{B}[H_{I}(t),[H_{I}(t-s),\rho_{S}(t) \otimes\rho_{B}]]. \tag{14}\]
in which the Markov approximation has been applied in the second equation. We only consider the first order contribution in perturbation theory, and assume that multi-particle excitations are suppressed. Taking the average value on the edge state \(|k_{e}>\) (the state means adding one edge state to the Fermi sea \(|k_{e}>\otimes|FS>\), but the Fermi sea \(|FS>\) will not be mentioned below for simplicity), one obtains the time evolution of the occupation probability of the state \(|k_{e}>\), which is given by \(<k_{e}|\dot{\rho}_{S}|k_{e}>=-I_{1}+I_{2}+h.c.\) with
\[I_{1}=\int_{0}^{\infty}dsTr_{B}<k_{e}|H_{I}(t)H_{I}(t-s)\rho_{S}(t)\otimes\rho _{B}|k_{e}>, \tag{15}\]
and
\[I_{2}=\int_{0}^{\infty}dsTr_{B}<k_{e}|H_{I}(t)\rho_{S}(t)\otimes\rho_{B}H_{I}( t-s)|k_{e}>. \tag{16}\]
\(I_{1}\) is the rate of the electron leaving the edge state \(|k_{e}>\) for bulk states, while \(I_{2}\) is the rate of the electron coming to the state \(|k_{e}>\). Both rates are related to the photon number distribution. The rate of the latter process (photon emission) is higher than that of the former one (photon absorption). Furthermore, at exactly zero temperature, the photon absorption process cannot happen at all, but the emission process can still happen.
In the present work, we consider nonzero temperature \(T\) only in the photon sector of the theory, such that the related thermal energy is much smaller than the bulk gap \(k_{B}T<<2m_{0}v^{2}\). Therefore, if the chemical potential of the edge state \(\mu<<m_{0}v^{2}\), the temperature is not large enough to efficiently supply a photon for the excitation of edge state into a bulk one. On the other hand, if edge's \(\mu>m_{0}v^{2}\) and we consider the edge state with momentum \(k_{e}\) such that \(v\sqrt{k_{e}^{2}+(m_{0}v)^{2}}-vk_{e}\sim k_{B}T\), the excitation process to the bulk can indeed happen, even at small temperature of the photon bath \(k_{B}T<<2m_{0}v^{2}\). We consider this process as a main contribution to the relaxation of the edge states, neglecting other possible processes. As was mentioned above, the "Compton-like" relaxation depicted by the green arrows in Fig. 2a is suppressed by the second power of the interaction constant. Furthermore, we neglect the backward relaxation \(I_{2}\).
To provide arguments in favor of this approximation, we consider a finite-width system, e.g. a ribbon, with two edges (domain-walls). Fig.2b shows the energy occupation state and transition processes for the two-edge system. There are now two edge states, denoted by the straight dashed lines and straight blue lines in Fig.2b. If the electric field is parallel to the edges, the Hall current is perpendicular to the edges and drives the charge from one edge to another. Therefore, the chemical potential of one edge will decrease (depletion process), and the chemical potential of the other edge will increase (accumulation process). Because the two edges are far from each other, direct transition from one edge to another is difficult, if not completely impossible. Direct calculation of the transition rate from edge to edge gives an estimate of the order of \(\exp(-m_{0}L_{x})\), with \(L_{x}\) the distance between the two edges, i.e. the width of the ribbon, and \(m_{0}\) is the fermion mass in the bulk. In contrast, the transition rate from edge to bulk is of the order of \(1/\sqrt{m_{0}L_{x}}\). If \(L_{x}\) is large enough (\(m_{0}L_{x}>>1\)), both rates are small, but the former is much smaller than the latter, thus we neglect the direct transitions from one edge to another. The \(I_{1}\) relaxation processes happen at both edges, leading to the appearance of holes in the lower band in the bulk. This opens the possibility of direct transitions from the upper to the lower band in the bulk via the photon emission depicted by the purple arrows in Fig. 2b. These processes are of the order of 1. This means that they are much faster than all edge-bulk transitions and they keep the upper band of the bulk almost empty, thus suppressing the inverse bulk-to-edge transitions denoted by \(I_{2}\). Thus we conclude that the time of the whole edge-to-bulk relaxation is determined by the comparatively slower process \(I_{1}\) shown by the red arrows in Fig.2b.
In order to further calculate \(I_{1}\), we neglect the off-diagonal elements of the density matrix and insert a complete set of states between the two \(H_{I}\) operators (many-particle excitations are neglected). Then the rate \(I_{1}\) can be reformulated into
\[I_{1} = \int_{0}^{\infty}dsTr_{B}<k_{e}|H_{I}(t)|{\bf p}>\frac{d^{2}p}{(2 \pi)^{2}}<{\bf p}|H_{I}(t-s)\rho_{S}(t)\otimes\rho_{B}|k_{e}>, \tag{17}\] \[= e^{2}\int\frac{d^{2}p}{(2\pi)^{2}}W_{k_{e}}({\bf p})\,r(k_{e},t),\]
where \(|k_{e}>\) is the edge state with momentum \(k_{e}\), \(|{\bf p}>\) is the bulk state with momentum \({\bf p}=(p_{1},p_{2})\), and the function \(r(k_{e},t)\) is defined by \(<k_{e}^{\prime}|\rho_{S}(t)|k_{e}>=r(k_{e},t)\delta(k_{e}^{\prime}-k_{e})\). The quantity
\[W_{k_{e}}({\bf p}) = \int_{0}^{+\infty}ds\int_{0}^{+\infty}dx\int_{0}^{+\infty}dx^{ \prime}\int\int dydy^{\prime}e^{iEs} \tag{18}\] \[f_{\bf p}(x)f_{\bf p}^{*}(x^{\prime})e^{i(p_{2}-k_{e})(y-y^{ \prime})}G(s,x-x^{\prime},y-y^{\prime})\]
where \(f_{\bf p}(x)=\Xi^{(e)\dagger}\Xi_{\bf p}(x)\) is the inner product of the spinors, and \(G(s,x-x^{\prime},y-y^{\prime})=Tr_{B}[\rho_{B}\phi_{-}(s,x-x^{\prime},y-y^{ \prime})\phi_{+}(0,0,0)]\) is the correlation function of the photon field. The field \(A_{0}\) is decomposed into \(A_{0}=\phi_{+}+\phi_{-}\), with \(\phi_{+}\) the positive frequency component including the annihilation operators and \(\phi_{-}\) the negative frequency component including the creation operators. The other combination \(Tr_{B}[\rho_{B}\phi_{+}\phi_{-}]\) is neglected by virtue of the rotating wave approximation [22].
The photon correlation function \(G\) can be calculated by mode expansion, and the result (in Gaussian units) is
\[G(t,x,y)=\int\frac{c^{2}d^{2}q}{(2\pi)^{2}2\omega_{q}}n_{B}(\omega_{q})e^{i \omega_{q}t-iq\cdot r}, \tag{19}\]
where \(n_{B}(\omega)=1/(e^{\beta\omega}-1)\) is the Bose-Einstein distribution function for the photon bath, \(\omega_{q}=c\sqrt{q_{1}^{2}+q_{2}^{2}}\), \(q=(q_{1},q_{2})\) and \(r=(x,y)\). Therefore, we found
\[W_{k_{e}}({\bf p})=\int\frac{d^{2}q}{(2\pi)^{2}\omega_{q}}|F_{{\bf p},q_{1}}|^{2} L_{y}\delta(p_{2}-k_{e}+q_{2})\delta(E(k_{e})-E_{\bf p}+\omega_{q})n_{B}( \omega_{q}), \tag{20}\]
where \(F_{{\bf p},q_{1}}=\int_{0}^{+\infty}f_{\bf p}(x)e^{iq_{1}x}dx\) and
\[|F_{{\bf p},q_{1}}|^{2}=\frac{v^{2}p_{1}q_{1}}{\pi E_{\bf p}(E_{\bf p}-vp_{2})} \Big{[}\frac{m_{0}v}{(q_{1}-p_{1})^{2}+(m_{0}v)^{2}}-\frac{m_{0}v}{(q_{1}+p_{1 })^{2}+(m_{0}v)^{2}}\Big{]}. \tag{21}\]
For simplicity, the function \(\frac{m_{0}v}{q^{2}+(m_{0}v)^{2}}\) is replaced by \(\pi\delta(q)\), and then the function \(W_{k_{e}}({\bf p})\) can be evaluated as
\[W_{k_{e}}({\bf p})=L_{y}\delta(E(k_{e})-E_{\bf p}+\omega_{p_{1},k_{e}-p_{2}}) \frac{v^{2}p_{1}^{2}n_{B}(\omega_{p_{1},k_{e}-p_{2}})}{\pi E_{\bf p}(E_{\bf p} -vp_{2})\omega_{p_{1},k_{e}-p_{2}}}. \tag{22}\]
Therefore, the transition rate of the edge state \(|k_{e}>\) to the bulk states is given by
\[\Gamma(k_{e})=e^{2}\int W_{k_{e}}({\bf p})\frac{d^{2}p}{(2\pi)^{2}}. \tag{23}\]
Its integrand includes the delta function \(\delta(E(k_{e})-E_{\bf p}+\omega_{p_{1},k_{e}-p_{2}})\), with \(E_{p_{1},p_{2}}=v\sqrt{p_{1}^{2}+p_{2}^{2}+(m_{0}v)^{2}}\) and \(\omega_{p_{1},p_{2}}=c\sqrt{p_{1}^{2}+p_{2}^{2}}\). If the Fermi velocity \(v\) is much smaller than the speed of light \(c\), i.e. \(v/c\sim 1/100\), then it is safe and convenient to replace \(E_{\bf p}\) in the integrand by \(E_{0,k_{e}}\). After integrations, we obtain the result of the transition rate per unit length \(\Gamma_{1}(k_{e})=\Gamma(k_{e})/L_{y}\) as follows
\[\Gamma_{1}(k_{e})=\frac{e^{2}v^{2}\Delta E}{\pi c^{2}E_{0,k_{e}}}n_{B}(\Delta E), \tag{24}\]
with \(\Delta E=E_{0,k_{e}}-E(k_{e})\). When \(k_{e}\rightarrow+\infty\), \(\Delta E\to 0^{+}\) and \(\Gamma_{1}(k_{e})\) goes to zero as \(\sim{\rm k_{B}}T/k_{e}\). The \(k_{e}\)-dependence of function \(\Gamma_{1}(k_{e})\) is shown by curve 1 in Fig.3(a).
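Eq. (24) is simple enough to evaluate directly. The sketch below works in dimensionless form (momenta in units of \(m_{0}v\), energies in units of \(m_{0}v^{2}\)) and drops the overall prefactor \(e^{2}v^{2}/(\pi c^{2})\); the temperature \(k_{B}T=0.25\,m_{0}v^{2}\) is an assumed value chosen only for illustration.

```python
import numpy as np

def gamma1_qed3(kappa, kBT=0.25):
    """k_e dependence of Eq. (24) in dimensionless form.
    kappa = k_e/(m0 v); energies (kBT, dE, E0) are in units of m0 v^2;
    the overall prefactor e^2 v^2 / (pi c^2) is dropped."""
    E0 = np.sqrt(kappa**2 + 1.0)       # E_{0,k_e} in units of m0 v^2
    dE = E0 - kappa                    # Delta E: energy of the absorbed photon
    nB = 1.0 / np.expm1(dE / kBT)      # Bose-Einstein occupation at Delta E
    return dE * nB / E0

kappa = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
print(gamma1_qed3(kappa))              # decays roughly as kBT/kappa at large kappa
```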
Now let us analyze the consequences of such an excitation process, and consider the evolution of the occupancy of the edge states. Suppose at time \(t=0\), the chemical potential of the whole system is at \(\mu=0\) (Fig.2a), and one turns on the electric field in the y-direction \(E_{y}\). On the one hand, because of the electric field, there is a constant rate of
Figure 3: (a) The black curve (marked by label 1) shows the momentum dependence of the excitation rate \(\Gamma_{1}(k_{e})/e^{2}\) of the edge state \(|k_{e}>\). It is computed according to Eq.24, with \(v/c=0.01\). The horizontal axis is \(k_{e}/(m_{0}v)\). The black curve (marked by label 2) is the distribution function \(R(k_{e})/R(0)\) in the saturation state, i.e. Eq.26, with the electric field \(E_{y}=2\times 10^{-5}em_{0}v\) related to the red dotted line in Fig.(b). (b) The saturation momentum \(K_{e}^{*}\) obtained from Eq. 25 as a function of electric field \(E_{y}\), in the step-function approximation for the edge-state occupancy.
electrons flowing toward the edge and accumulating there. On the other hand, the accumulated electrons at the edge are excited via thermal fluctuations and transferred to the bulk. If we assume that at a time \(t\) the edge states are occupied up to the momentum \(K_{e}(t)\), what is its behavior at the later times \(t\rightarrow\infty\)? Is it possible for the system to reach a saturation? A "saturation" means a balance between the inflow current towards the edge and the excitation process depleting the edge. The excitation rate from the edge to the bulk \(\int_{0}^{K_{e}}\Gamma_{1}(k_{e})dk_{e}\) is small in the beginning (for small \(t\)) because \(K_{e}\) is small i.e. \(K_{e}(t)\sim 0\). Therefore the accumulation process is stronger than the depletion, and \(K_{e}\) starts to increase. When \(K_{e}\) increases, the rate of depletion also increases. If the depletion rate coincides with the accumulation rate, the process reaches equilibrium, and \(K_{e}\) saturates. In order to calculate the saturation momentum \(K_{e}^{*}\), we equate the two rates (number of particles per unit time and per unit length)
\[\int_{0}^{K_{e}^{*}}\Gamma_{1}(k_{e})dk_{e}=\sigma_{H}E_{y}/e. \tag{25}\]
Finite values of \(K_{e}^{*}\) can be found, as a function of \(E_{y}\), which is shown in Fig.3 (b). Asymptotically, \(K_{e}^{*}(E_{y})\) scales as \(\sim\exp(E_{y}/em_{0}v)\), for large \(E_{y}\).
Above, we assumed the distribution function on the edge \(r(k_{e})\) to be a step function: \(r(k_{e})=1\) when \(k_{e}<K_{e}\) and \(r(k_{e})=0\) when \(k_{e}>K_{e}\). Such an assumption is simple, but not entirely realistic. To be more realistic, we lift this assumption and allow the distribution function \(r(k_{e},t)\) to take any value between zero and one. We are going to find such a distribution function at saturation (\(t\rightarrow+\infty\)). Suppose \(\Delta t\) is a very short time interval and \(k_{e}^{\prime}=k_{e}+E_{y}\Delta t\), and then we have \(r(k_{e}^{\prime},t+\Delta t)=r(k_{e},t)(1-\Gamma_{1}(k_{e})\Delta t)\). It means that the momentum of the edge-state fermions is changed by the electric field \(E_{y}\) during the time interval, and in the meantime, the fermions leave the edge (via the excitation process) at the rate \(\Gamma_{1}\). In the stationary state, \(r(k_{e},t+\Delta t)=r(k_{e},t)\), which does not depend on time, and can be denoted by the function \(R(k_{e})\). Therefore, we obtain \(R^{\prime}(k_{e})E_{y}=-\Gamma_{1}(k_{e})R(k_{e})\), from which we find the function \(R(k_{e})\) as the final distribution along the edge:
\[R(k_{e})/R(0)=\exp\Bigl{(}-\int_{0}^{k_{e}}\Gamma_{1}(k^{\prime})dk^{\prime}/ E_{y}\Bigr{)}, \tag{26}\]
which is shown by curve 2 in Fig.3 (a).
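Eqs. (25) and (26) can be evaluated with elementary numerics: the cumulative integral of \(\Gamma_{1}\) gives both the saturation momentum (by root finding) and the stationary edge distribution. The sketch below continues in the dimensionless units of the previous snippet; the value used for the inflow term \(\sigma_{H}E_{y}/e\) is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import brentq

def gamma1(kappa, kBT=0.25):
    # Dimensionless Eq. (24), as in the previous sketch.
    E0 = np.sqrt(kappa**2 + 1.0)
    dE = E0 - kappa
    return dE / (np.expm1(dE / kBT) * E0)

kappa = np.linspace(0.0, 200.0, 20001)
cum = cumulative_trapezoid(gamma1(kappa), kappa, initial=0.0)   # int_0^K Gamma_1 dk

# Eq. (25): saturation momentum in the step-function approximation.
# `inflow` plays the role of sigma_H E_y / e in the same dimensionless
# units; the value chosen here is purely illustrative.
inflow = 0.5 * cum[-1]
K_star = brentq(lambda K: np.interp(K, kappa, cum) - inflow, 0.0, kappa[-1])
print("saturation momentum K_e* =", K_star)

# Eq. (26): stationary edge distribution R(k_e)/R(0) for a given field E_y.
E_y = inflow                      # illustrative choice, same units as `inflow`
R = np.exp(-cum / E_y)
print(R[::5000])                  # monotonically decreasing occupation
```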
## IV Interaction with 3+1 D photons
In this section we consider a realistic model, where the 2+1 D electrons interact with 3+1 D photons. The corresponding action is given by
\[S_{3}=S_{1}+\frac{1}{2}\int d^{4}x\Bigl{(}{\dot{A_{0}}}^{2}-c^{2}\sum_{i=1}^{ 3}(\partial_{i}A_{0})^{2}\Bigr{)}, \tag{27}\]
where \(d^{4}x=dt\,dx\,dy\,dz\). The 2+1 D electron system is located on the \(z=0\) plane.
The derivation in the previous section about the evolution of the density matrix and the transition rate can be repeated straightforwardly. However, the photon correlation function \(G\) in Eq.19 has to be modified, because of the different dimensionality. For the 3+1 D photon, the corresponding correlation function \({\cal G}(s,x-x^{\prime},y-y^{\prime})\) is defined as
\[{\cal G}(s,x-x^{\prime},y-y^{\prime})=Tr_{B}[\rho_{B}\phi_{-}(s,x,y,0)\phi_{+ }(0,x^{\prime},y^{\prime},0)], \tag{28}\]
where the \(z\) component of the spatial coordinates is fixed to be 0, because the photons interact with the fermions only at the \(z=0\) plane. The result of \({\cal G}\) can be obtained by mode expansion
\[{\cal G}(t,x,y)=\int\frac{d^{3}q}{(2\pi)^{3}2\omega_{q}}n_{B}(\omega_{q})e^{i \omega_{q}t-iq_{1}x_{1}-iq_{2}x_{2}}, \tag{29}\]
where \(q=(q_{1},q_{2},q_{3})\), and \(\omega_{q}=c\sqrt{q_{1}^{2}+q_{2}^{2}+q_{3}^{2}}\). Corresponding to Eq.22, the function \(W\) for 3+1 D photon will be given by
\[W_{k_{e}}({\bf p})=L_{y}\int\frac{dq_{3}}{2\pi}\delta(E(k_{e})-E_{\bf p}+ \omega_{p_{1},k_{e}-p_{2},q_{3}})\frac{v^{2}p_{1}^{2}n_{B}(\omega_{p_{1},k_{e} -p_{2},q_{3}})}{\pi E_{\bf p}(E_{\bf p}-vp_{2})\omega_{p_{1},k_{e}-p_{2},q_{3}}}. \tag{30}\]
From the function \(W_{k_{e}}({\bf p})\), we obtained the transition rate of the edge state \(k_{e}\) to the bulk
\[\Gamma_{1}(k_{e})=(e^{2}/L_{y})\int W_{k_{e}}({\bf p})\,d^{2}p/(2\pi)^{2},\]
and its result is given by
\[\Gamma_{1}(k_{e})=\frac{4\alpha v^{2}\Delta E^{2}n_{B}(\Delta E)}{3c^{2}E_{0,k_ {e}}}. \tag{31}\]
with \(\alpha=e^{2}/c\) and \(\Delta E=E_{0,k_{e}}-vk_{e}\). Then as in the previous section, one can figure out the charge accumulation at the edge and the stationary distribution law of the edge electrons in momentum space, which is shown in Fig.4.
From Eq.31, one notices that when \(k_{e}\rightarrow+\infty\), \(\Gamma_{1}(k_{e})\) goes to zero as \(1/k_{e}^{2}\), implying that \(\int^{+\infty}\Gamma_{1}(k_{e})dk_{e}\) is a finite number. As in the previous section, the saturation momentum \(K_{e}^{*}\) can be specified by Eq. 25 under the assumption of a step-function edge-state distribution. However, if the electric field \(E_{y}\) is larger than the critical field \(E_{y}^{(c)}=e\int_{0}^{+\infty}\Gamma_{1}(k_{e})dk_{e}/\sigma_{H}\), then the saturation momentum \(K_{e}^{*}\) will be infinite. This means that all the edge states will be occupied if the electric field is strong enough. This "electron avalanche" phenomenon is due to our low-energy effective model, which is not regularized by the high-energy part of the dispersion relation that is always present in real materials. If \(K_{e}^{*}\) is very large, the higher-momentum part of the band structure should be taken into account, and the dispersion curve will bend, which prevents \(K_{e}^{*}\) from going to infinity.
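The convergence of \(\int\Gamma_{1}\,dk_{e}\), which is what makes a finite critical field possible in the 3+1 D case, can be checked directly from the dimensionless form of Eq. (31); the prefactor \(4\alpha v^{2}/(3c^{2})\) and the absolute units are dropped, and \(k_{B}T=0.25\,m_{0}v^{2}\) is again an assumed value.

```python
import numpy as np
from scipy.integrate import quad

def gamma1_3d(kappa, kBT=0.25):
    # Dimensionless k_e dependence of Eq. (31): Delta E^2 n_B(Delta E) / E_0,
    # with energies in units of m0 v^2 and the prefactor 4*alpha*v^2/(3 c^2) dropped.
    E0 = np.sqrt(kappa**2 + 1.0)
    dE = E0 - kappa
    return dE**2 / (np.expm1(dE / kBT) * E0)

# Gamma_1 decays as 1/kappa^2 at large kappa, so the integral over all edge
# momenta converges, and a finite critical field E_y^(c) exists.
total, err = quad(gamma1_3d, 0.0, np.inf, limit=200)
print(total, err)
```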
We will now discuss the realization of such an effect in the laboratory, and estimate the orders of magnitude of the physical quantities. In the last section, we considered half-infinite planar systems, and infinitely long ribbon-shaped systems. The former has only one boundary, while the latter has two boundaries. However, both of them are hard to realize in experiments. Instead of these infinite-sized systems, we consider a cylinder with finite length or an annulus as more realistic examples. Both of them are finite-sized and have one hole and two edges. If the magnetic field going through the hollow part of such a system varies with time, the electric field parallel to the edges appears automatically. Due to this electric field, the Hall current is perpendicular to the edges and drives the charge from one edge to another. Therefore, the chemical potential of one edge will decrease (depletion process), and the chemical potential of the other edge will increase (accumulation process), exactly as shown in Fig.2b.
Finally, we estimate the orders of magnitude of the main quantities, such as the transition rate \(\Gamma_{1}(k_{e})\) and the critical electric field \(E_{y}^{(c)}\). In Fig. 4 we show how the edge-to-bulk transition rate changes with the wave vector of the edge state \(k_{e}\). One can see that the transition rate is of the order of \(\rm ns^{-1}\) (nanosecond) and that it decreases as the wave vector \(k_{e}\) increases. For a massive Chern insulator with gap \(\Delta=0.1\rm eV\), at temperature given by \(\rm k_{B}T=\Delta/4\), i.e. \(T\sim 250\,\rm K\)
Figure 4: (a) The black curve (marked by label 1) is the excitation rate \(\Gamma_{1}(k_{e})\), as a function of momentum \(k_{e}\), Eq.31. ns is nanosecond. The parameters are given as follows: the bulk gap \(\Delta=2m_{0}v^{2}=0.1\rm eV\), Fermi velocity \(v=0.01c\) and the temperature \(\rm k_{B}T=\Delta/4\). The red curve (marked by label 2) is the occupation function \(R(k_{e})/R(0)\) in the saturation state, with \(E_{y}=0.75\,V/m\). It is obtained from Eq.26, with the \(\Gamma_{1}(k_{e})\) function given by Eq.31. (b) The saturation momentum \(K_{e}^{*}\) in the step-function assumption for the edge occupation, vs. electric field \(E_{y}\). One observes that there is a critical electric field, around \(1.5\,V/m\), above which the value of \(K_{e}^{*}\) diverges.
the rate at which edge-state electrons transition into the bulk (per unit length of the edge) is about \(3\times 10^{14}\,m^{-1}s^{-1}\). If the size of a sample is \(1\mu m\), the rate is \(3\times 10^{8}\,s^{-1}\). This means that the lifetime of an edge state will be about \(3\ ns\). The critical electric field is about \(1.5\) V/m, which does not depend on the size of the sample.
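The last two estimates are related by simple arithmetic, which the following lines reproduce; the numbers are the ones quoted above rather than an independent evaluation of Eq. (31).

```python
rate_per_length = 3e14          # m^-1 s^-1, edge-to-bulk rate per unit edge length
L_edge = 1e-6                   # 1 micrometer sample edge
rate = rate_per_length * L_edge # ~3e8 s^-1
print(rate, 1.0 / rate * 1e9)   # total rate and the ~3 ns edge-state lifetime
```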
## V Conclusion and Outlook
In the present work, we revisited the Callan-Harvey mechanism in the Jackiw-Rebbi model with a space-dependent domain wall mass. Due to the parity anomaly, the electric field, which is parallel to the domain wall (the edge), drives the electrons to the edge. As the electrons accumulate along the edge, they start to transfer into the bulk states via thermal fluctuations. We studied the time evolution of the surplus charge at the edge in the Lindblad formalism, and the transition rate from the edge to the bulk was calculated. In such a transition process, photon absorption is necessary. Therefore, at zero temperature, the transition process does not occur in our electron-photon interaction model, and the charge accumulation at the edge will be boundless. At finite (non-zero) temperature, we studied the stationary state at late times \(t\rightarrow+\infty\). In the planar photon (QED\({}_{3}\)) case, the stationary state can be obtained for an arbitrary electric field. In the 3+1 D photon case, there is a critical electric field strength \(E^{(c)}\), below which the stationary state exists, but above which the stationary state does not exist and the charge accumulation will be boundless.
Our present study investigated the effects of the electron-photon interaction on the edge states, and improved the physical picture of the Callan-Harvey mechanism with dissipation processes. It is not only of scholarly interest from the viewpoint of quantum field theory, but also has potential applications in condensed matter (optical relaxation in topological materials) and in engineering. For example, the optical processes depicted in Fig.2b might make such a system into a new light source: in the presence of an electric field, the system absorbs two low-energy photons from the thermal bath (the environment), and then emits one high-energy photon, with energy of the order of the band gap \(\Delta\). Furthermore, if one replaces the (low-energy) thermal photons (the red zigzag lines in Fig.2b) by incident photons with the same energy, then the incident photons trigger the relaxation (the purple zigzag line in Fig.2b), and vice versa.
There are also several directions for future work. In the present work, we considered a Dirac mass term that is constant within the bulk region. A natural generalization is to study a momentum-dependent mass, as in the Bernevig-Hughes-Zhang (BHZ) model. Besides, electron-phonon interactions should be taken into account in condensed matter systems, and the associated heat dissipation effect can be studied.
###### Acknowledgements.
We acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through SFB 1170, Project-ID 258499086, through the Wurzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter - ct.qmat (EXC2147, Project- ID 390858490) as well as by the ENB Graduate School on Topological Insulators. M.U. thanks the DFG for financial support under the project UL444/2-1. C.N. thanks the support by the Israel Science Foundation (grant No. 1417/21) and by the German Research Foundation through a German-Israeli Project Cooperation (DIP) grant "Holography and the Swampland" and by Carole and Marcus Weinstein through the BGU Presidential Faculty Recruitment Fund.
|
2306.17559 | Clifford Group and Unitary Designs under Symmetry | We have generalized the well-known statement that the Clifford group is a
unitary 3-design into symmetric cases by extending the notion of unitary
design. Concretely, we have proven that a symmetric Clifford group is a
symmetric unitary 3-design if and only if the symmetry constraint is described
by some Pauli subgroup. We have also found a complete and unique construction
method of symmetric Clifford groups with simple quantum gates for Pauli
symmetries. For the overall understanding, we have also considered physically
relevant U(1) and SU(2) symmetry constraints, which cannot be described by a
Pauli subgroup, and have proven that the symmetric Clifford group is a
symmetric unitary 1-design but not a 2-design under those symmetries. Our
findings are numerically verified by computing the frame potentials, which
measure the difference in randomness between the uniform ensemble on the
symmetric group of interest and the symmetric unitary group. This work will
open a new perspective into quantum information processing such as randomized
benchmarking, and give a deep understanding to many-body systems such as
monitored random circuits. | Yosuke Mitsuhashi, Nobuyuki Yoshioka | 2023-06-30T11:21:50Z | http://arxiv.org/abs/2306.17559v2 | # Clifford Group and Unitary Designs under Symmetry
###### Abstract
We have generalized the well-known statement that the Clifford group is a unitary 3-design into symmetric cases by extending the notion of unitary design. Concretely, we have proven that a symmetric Clifford group is a symmetric unitary 3-design if and only if the symmetry constraint is described by some Pauli subgroup. We have also found a complete and unique construction method of symmetric Clifford groups with simple quantum gates for Pauli symmetries. For the overall understanding, we have also considered physically relevant U(1) and SU(2) symmetry constraints, which cannot be described by a Pauli subgroup, and have proven that the symmetric Clifford group is a symmetric unitary 1-design but not a 2-design under those symmetries. Our findings are numerically verified by computing the frame potentials, which measure the difference in randomness between the uniform ensemble on the symmetric group of interest and the symmetric unitary group. This work will open a new perspective into quantum information processing such as randomized benchmarking, and give a deep understanding to many-body systems such as monitored random circuits.
_Introduction.--_ Randomness in quantum systems is a ubiquitous concept that underpins the core of quantum information processing and quantum many-body systems [1; 2]. Uniform randomness plays a central role not only in understanding fundamental phenomena such as thermalization [3; 4] and information scrambling [5; 6], but also in realizing efficient quantum communication [7; 8] and encryption [9; 10]. To utilize the beautiful and powerful properties of randomness, there have been significant advancements in engineering approximations of the Haar unitary ensemble, namely unitary designs. A unitary \(t\)-design is an ensemble of unitaries that mimics the Haar random unitaries up to the \(t\)-th moment, and it has proven useful in tasks such as data hiding [11], quantum state discrimination [12; 13], quantum advantage [14; 15; 16], and quantum gravity [2], to name a few.
One of the most prominent examples of unitary designs is the Clifford group. Initial interest in the Clifford group was primarily in the context of quantum computing, e.g., the classical simulability [17; 18]. However, currently it is known to be applicable to even wider fields such as the quantum state tomography [19] and hardware verification via randomized benchmarking [20; 21; 22; 23]. Although for general qudits, the Clifford group is only a unitary 1-design [24], it elevates to a unitary 2-design if the local Hilbert space dimension is prime [25; 11; 26]. Intriguingly, the multiqubit Clifford group singularly qualifies as a unitary 3-design [27; 28].
While the concurrent presence of classical simulability and pseudorandomness in the multiqubit Clifford group has motivated numerous applications in quantum science, we point out that existing studies have focused predominantly on the full ensemble of unitary designs; our comprehension of realistic scenarios with operational constraints/restrictions remains underdeveloped. One of the most outstanding questions pertains to the relationship with symmetry, an essential concept responsible for a wealth of phenomena in the natural sciences. To further explore the physics and quantum information processing under realistic constraints, it is an urgent task to establish how symmetry impacts the Clifford group.
In this work, we introduce the concept of a symmetric unitary \(t\)-design and prove that the multiqubit Clifford group under symmetry forms a symmetric unitary 3-design, if and only if the symmetry constraints are essentially characterized by some Pauli subgroup. We also propose a complete and unique method for constructing symmetric Clifford operators with elementary quantum gates that operate on a maximum of two qubits. We subsequently show that, other classes of symmetry stand in stark contrast, as the Clifford groups under these symmetries are merely symmetric unitary 1-designs. For comprehensive understanding, we have highlighted such a remarkable disparity through practical examples of U(1) and SU(2) symmetries. Finally, we provide numerical evidence for our findings by computing the frame potentials for the Clifford groups under two representative types of symmetries.
_Setup.--_ We first review the conventional Clifford group and unitary designs. The Clifford group on \(N\) qubits is defined as the normalizer of the Pauli group in the unitary group \(\mathcal{U}_{N}\), i.e., \(\mathcal{C}_{N}:=\{U\in\mathcal{U}_{N}|U\mathcal{P}_{N}U^{\dagger}=\mathcal{P}_{N}\}\), where \(\mathcal{P}_{N}:=\{\pm 1,\pm i\}\cdot\{\mathrm{I},\mathrm{X},\mathrm{Y}, \mathrm{Z}\}^{\otimes N}\) is the group generated by the Pauli operators \(\mathrm{I}\), \(\mathrm{X}\), \(\mathrm{Y}\) and \(\mathrm{Z}\) on each qubit. It is convenient to introduce the \(t\)-fold twirling channel to characterize the randomness of a subgroup \(\mathcal{X}\) of \(\mathcal{U}_{N}\) as
\[\Phi_{t,\mathcal{X}}(L):=\int_{U\in\mathcal{X}}U^{\otimes t}LU^{\dagger\otimes t }d\mu_{\mathcal{X}}(U), \tag{1}\]
where \(L\) is a linear operator acting on \(tN\) qubits and \(\mu_{\mathcal{X}}\) denotes the normalized Haar measure on \(\mathcal{X}\). We say that the subgroup \(\mathcal{X}\) is a unitary \(t\)-design if the following is satisfied:
\[\Phi_{t,\mathcal{X}}=\Phi_{t,\mathcal{U}_{N}}. \tag{2}\]
From Eq. (1), we see that unitary \(t\)-designs with larger \(t\) better approximate the Haar random unitaries, which can be regarded as a unitary \(\infty\)-design. In this regard, it is known that the Clifford group \(\mathcal{C}_{N}\) is a unitary \(3\)-design but not a \(4\)-design [27; 28]. Note that unitary \(t\)-designs are always \(t^{\prime}\)-designs if \(t>t^{\prime}\), but the contrary does not hold in general.
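For a finite ensemble, the integral in Eq. (1) reduces to a uniform average, so the twirling channel can be evaluated directly by summation. The following sketch (plain NumPy, written purely for illustration and not taken from the original work) implements this average and verifies, as a sanity check, the textbook fact that the single-qubit Pauli group is a unitary 1-design.

```python
import numpy as np

def twirl_finite(ensemble, L, t):
    """t-fold twirling channel of Eq. (1) for a finite ensemble with uniform weights.
    The test operator L must act on t copies of the system (dimension d**t)."""
    out = np.zeros(L.shape, dtype=complex)
    for U in ensemble:
        Ut = U
        for _ in range(t - 1):            # build U^{\otimes t}
            Ut = np.kron(Ut, U)
        out += Ut @ L @ Ut.conj().T
    return out / len(ensemble)

# Sanity check: the single-qubit Pauli group (modulo phase) is a unitary 1-design,
# so twirling any operator L gives tr(L)/2 * I.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
L_test = np.array([[0.3, 0.1 - 0.2j], [0.1 + 0.2j, 0.7]])
print(np.allclose(twirl_finite([I2, X, Y, Z], L_test, t=1), np.trace(L_test) / 2 * I2))  # True
```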
The symmetric Clifford groups and symmetric unitary designs are defined as symmetric generalizations of the conventional ones. In the following, we consider symmetry that can be represented by a subgroup \(\mathcal{G}\) of \(\mathcal{U}_{N}\). We define the \(\mathcal{G}\)-symmetric unitary group \(\mathcal{U}_{N,\mathcal{G}}\) on \(N\) qubits by \(\mathcal{U}_{N,\mathcal{G}}:=\{U\in\mathcal{U}_{N}|\forall G\in\mathcal{G},\ [U,G]=0\}\), and correspondingly the \(\mathcal{G}\)-symmetric Clifford group by \(\mathcal{C}_{N,\mathcal{G}}:=\mathcal{C}_{N}\cap\mathcal{U}_{N,\mathcal{G}}\). It is then natural to define a subgroup \(\mathcal{X}\) of \(\mathcal{U}_{N}\) to be a \(\mathcal{G}\)-symmetric unitary \(t\)-design if the \(t\)-fold twirling channel \(\Phi_{t,\mathcal{X}}\) satisfies
\[\Phi_{t,\mathcal{X}}=\Phi_{t,\mathcal{U}_{N,\mathcal{G}}}. \tag{3}\]
Note that these definitions include the conventional ones as the special case when the symmetry is trivial, i.e., \(\mathcal{G}=\{I\}\).
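To make the definitions concrete, the sketch below (illustrative code, not from the paper) enumerates the single-qubit Clifford group modulo global phase from the generators H and S, and then filters out the subgroup commuting with Z, i.e. the \(\mathcal{G}\)-symmetric Clifford group for \(\mathcal{G}=\{\mathrm{I},\mathrm{Z}\}\); the expected counts are 24 elements and, consistently with the counting of Theorem 2 below, 4 symmetric ones.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1.0, 1.0j])
Z = np.diag([1.0, -1.0]).astype(complex)

def canon(U, tol=1e-9):
    """Canonical representative of U modulo a global phase, used as a dictionary key."""
    idx = np.argmax(np.abs(U).ravel() > tol)       # first non-negligible entry
    phase = U.ravel()[idx] / abs(U.ravel()[idx])
    V = np.round(U / phase, 6)
    return tuple(np.concatenate([V.real.ravel(), V.imag.ravel()]))

def group_closure(generators, dim=2):
    """All group elements (modulo global phase) generated by the given gates."""
    elems = {canon(np.eye(dim, dtype=complex)): np.eye(dim, dtype=complex)}
    frontier = list(elems.values())
    while frontier:
        new = []
        for U in frontier:
            for g in generators:
                W = g @ U
                key = canon(W)
                if key not in elems:
                    elems[key] = W
                    new.append(W)
        frontier = new
    return list(elems.values())

clifford_1 = group_closure([H, S])                                  # 24 elements mod phase
z_symmetric = [U for U in clifford_1 if np.allclose(U @ Z, Z @ U)]  # commute with Z
print(len(clifford_1), len(z_symmetric))                            # expected: 24 and 4
```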
_Main result_.-- We prove that the \(\mathcal{G}\)-symmetric Clifford group \(\mathcal{C}_{N,\mathcal{G}}\) is a \(\mathcal{G}\)-symmetric unitary \(3\)-design if and only if the symmetry constraint by \(\mathcal{G}\) is essentially described by some Pauli subgroup. This can be rigorously stated as follows:
**Theorem 1**.: _(Randomness of Clifford group under symmetry) Let \(\mathcal{G}\) be a subgroup of \(\mathcal{U}_{N}\). Then, \(\mathcal{C}_{N,\mathcal{G}}\) is a \(\mathcal{G}\)-symmetric unitary \(3\)-design if and only if \(\mathcal{U}_{N,\mathcal{G}}=\mathcal{U}_{N,\mathcal{Q}}\) with some subgroup \(\mathcal{Q}\) of \(\mathcal{P}_{N}\)._
This theorem provides a guarantee that, under a Pauli symmetry, the symmetric Clifford group maintains its pseudo-randomness, which is applicable to various quantum information processing tasks [11; 12; 13; 14; 15; 16]. Moreover, it is remarkable that this theorem characterizes Pauli symmetries by stating that the symmetric Clifford group is _not_ a symmetric unitary \(3\)-design under a non-Pauli symmetry. We note that symmetry constraints can be captured by \(\mathcal{U}_{N,\mathcal{G}}\) without using \(\mathcal{G}\) itself, because when two subgroups \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) of \(\mathcal{U}_{N}\) satisfy \(\mathcal{U}_{N,\mathcal{G}}=\mathcal{U}_{N,\mathcal{G}^{\prime}}\), the \(\mathcal{G}\)- and \(\mathcal{G}^{\prime}\)-symmetric Clifford groups are identical to each other, and moreover the notions of \(\mathcal{G}\)- and \(\mathcal{G}^{\prime}\)-symmetric unitary designs are the same. We do not directly present the condition for \(\mathcal{G}\) itself, because there are cases when \(\mathcal{G}\neq\mathcal{G}^{\prime}\) but \(\mathcal{U}_{N,\mathcal{G}}=\mathcal{U}_{N,\mathcal{G}^{\prime}}\), for example when \(\mathcal{G}=\{\mathrm{I},\mathrm{Z}\}\) and \(\mathcal{G}^{\prime}=\{e^{i\theta\mathcal{Z}}|\theta\in\mathbb{R}\}\).
We illustrate how we can use this theorem to know whether symmetric Clifford groups are symmetric unitary \(3\)-designs by taking the following three physically relevant examples:
\[\mathcal{G} =\left\{\mathrm{I}^{\otimes N},\mathrm{Z}^{\otimes N}\right\}, \tag{4}\] \[\mathcal{G} =\left\{\left(e^{i\theta\mathrm{Z}}\right)^{\otimes N}\ |\ \theta\in\mathbb{R}\right\}, \tag{5}\] \[\mathcal{G} =\left\{\left(e^{i(\theta_{\mathrm{X}}\mathrm{X}+\theta_{\mathrm{Y}}\mathrm{Y}+\theta_{\mathrm{Z}}\mathrm{Z})}\right)^{\otimes N}\ |\ \theta_{\mathrm{X}},\theta_{\mathrm{Y}},\theta_{\mathrm{Z}}\in\mathbb{R}\right\}. \tag{6}\]
These groups are isomorphic to \(\mathbb{Z}_{2}\), \(\mathrm{U}(1)\) and \(\mathrm{SU}(2)\), respectively, and appear ubiquitously in quantum systems: first-principles descriptions of electronic structure, AMO physics, quantum spin systems, and lattice gauge theory, to name a few. When \(\mathcal{G}\) is given by Eq. (4), the \(\mathcal{G}\)-symmetric Clifford group is a \(\mathcal{G}\)-symmetric unitary \(3\)-design, because \(\mathcal{G}\) itself is a Pauli subgroup. In contrast, when \(\mathcal{G}\) is given by Eq. (5) or (6) with \(N\geq 2\), the \(\mathcal{G}\)-symmetric Clifford group is _not_ a \(\mathcal{G}\)-symmetric unitary \(3\)-design, because in these cases \(\mathcal{U}_{N,\mathcal{G}}\) cannot be expressed as \(\mathcal{U}_{N,\mathcal{Q}}\) with any Pauli subgroup \(\mathcal{Q}\) [29]. We note that when \(\mathcal{G}\) is given by Eq. (5) or (6) with \(N=1\), the \(\mathcal{G}\)-symmetric Clifford group is again a \(\mathcal{G}\)-symmetric unitary \(3\)-design. In that case, \(\mathcal{G}\) itself is not a Pauli subgroup, but \(\mathcal{U}_{N,\mathcal{G}}\) can be expressed as \(\mathcal{U}_{N,\mathcal{Q}}\) with the Pauli subgroup \(\mathcal{Q}=\{\mathrm{I},\mathrm{Z}\}\) or \(\mathcal{P}_{1}\), respectively.
We emphasize that we can completely characterize the randomness of the examples in terms of unitary designs, i.e., we can clarify the maximal \(t\) such that \(\mathcal{C}_{N,\mathcal{G}}\) is a \(\mathcal{G}\)-symmetric unitary \(t\)-design. In fact, as expected from the non-symmetric case, we can prove the no-go theorem for \(\mathcal{G}\)-symmetric unitary \(4\)-designs except for the most constrained case of \(\mathcal{U}_{N,\mathcal{G}}=\{e^{i\theta}I|\theta\in\mathbb{R}\}\) (see Supplementary Materials (SM) for details) [30]; generally we have \(t_{\max}=3\) for Pauli symmetry. On the contrary, when \(\mathcal{G}\) is given by Eq. (5) or (6) with \(N\geq 2\), we get \(t_{\max}=1\), which we will describe in more detail later. Note that the single-qubit case is special since we have \(t_{\max}=3,\infty\) for Eq. (5) and (6), respectively [31]. We finally remark that we cannot increase \(t_{\max}\) by considering a nonuniform mixture in the definition of unitary designs [30].
While we guide readers to the SM for details of the derivation, it is informative to provide a brief sketch of the proof [30]. In the proof of the "if" part, it is sufficient to show that \(\mathcal{C}_{N,\mathcal{Q}}\) is a \(\mathcal{Q}\)-symmetric unitary \(3\)-design for all Pauli subgroups \(\mathcal{Q}\). We explicitly construct a map \(\mathcal{D}\) with a certain class of symmetric Clifford operators and show that the twirling channels satisfy \(\Phi_{3,\mathcal{C}_{N,\mathcal{G}}}=\Phi_{3,\mathcal{U}_{N,\mathcal{G}}}=\mathcal{D}\) by considering the fixed points of \(\Phi_{3,\mathcal{C}_{N,\mathcal{G}}}\) and \(\Phi_{3,\mathcal{U}_{N,\mathcal{G}}}\). We emphasize that the nontrivial and technical contribution of Theorem 1 resides in the "only if" part. Namely, if \(\mathcal{C}_{N,\mathcal{G}}\) is a \(\mathcal{G}\)-symmetric unitary \(3\)-design, then there exists a Pauli subgroup \(\mathcal{Q}\) such that \(\mathcal{U}_{N,\mathcal{G}}=\mathcal{U}_{N,\mathcal{Q}}\). Concretely, we construct \(\mathcal{Q}\) as the group generated by the set \(\mathcal{Q}^{\prime}:=\{Q\in\{\mathrm{I},\mathrm{X},\mathrm{Y},\mathrm{Z}\}^{ \otimes N}|\exists G\in\mathcal{G}\ \mathrm{s.t.}\ \mathrm{tr}(GQ)\neq 0\}\). While the inclusion \(\mathcal{U}_{N,\mathcal{G}}\supset\mathcal{U}_{N,\mathcal{Q}}\) directly follows from \(\mathrm{span}(\mathcal{G})\subset\mathrm{span}(\mathcal{Q})\), the proof of the inverse inclusion \(\mathcal{U}_{N,\mathcal{G}}\subset\mathcal{U}_{N,\mathcal{Q}}\) requires some technical lemmas [30]. We consider the function \(U\mapsto UQU^{\dagger}\) from \(\mathcal{U}_{N,\mathcal{G}}\) to \(\mathcal{U}_{N}\) for an arbitrarily chosen \(Q\in\mathcal{Q}^{\prime}\) and show that it is a constant function.
_Construction of symmetric Clifford groups.--_ From the viewpoint of algorithms and experiments, it is crucial to give an explicit construction of the symmetric Clifford operators. In fact, for a Pauli symmetry, we show that the set of symmetric Clifford operators considered in the proof of Theorem 1 actually forms a complete and unique expression of the symmetric Clifford operators (see Fig. 1 (a)). We will discuss the case of non-Pauli symmetry later. This is a symmetric extension of the result in Refs. [32, 33], where it was shown that the standard Clifford operators can be uniquely decomposed in terms of elementary gate sets.
As a preparation for stating the theorem, it is crucial to mention that every Pauli subgroup naturally gives a decomposition into three parts. Concretely, we note that any Pauli subgroup \(\mathcal{Q}\) can be transformed into the form
\[\mathcal{R}:=\mathcal{P}_{0}\{\mathrm{I},\mathrm{X},\mathrm{Y},\mathrm{Z}\}^{\otimes N_{1}}\otimes\{\mathrm{I},\mathrm{Z}\}^{\otimes N_{2}}\otimes\{\mathrm{I}\}^{\otimes N_{3}} \tag{7}\]
by some Clifford conjugation action up to phase, i.e., \(\mathcal{P}_{0}W\mathcal{Q}W^{\dagger}=\mathcal{R}\) with some \(W\in\mathcal{C}_{N}\), where \(\mathcal{P}_{0}:=\{\pm 1,\pm i\}\). By denoting the subsystem of \(N_{k}\) qubits by \(\mathrm{A}_{k}\)\((k=1,2,3)\) and the set of indices representing the qubits in \(\mathrm{A}_{k}\) by \(\Gamma_{k}\), we derive the following theorem:
**Theorem 2**.: _(Complete and unique construction of Clifford group under Pauli symmetry) Let \(\mathcal{Q}\) be a subgroup of \(\mathcal{P}_{N}\). Then, there exists some \(W\in\mathcal{C}_{N}\) and \(\mathcal{R}\) in the form of Eq. (7) such that \(\mathcal{P}_{0}W\mathcal{Q}W^{\dagger}=\mathcal{R}\), and every \(\mathcal{Q}\)-symmetric Clifford operator \(U\) can be uniquely expressed as Fig. 1(a) as_
\[U= W^{\dagger}\left(\mathrm{T}\prod_{j\in\Gamma_{2}}\prod_{k\in \Gamma_{3}}\mathrm{C}(P_{j,k})_{j,k}\right)V\] \[\qquad\times\left(\prod_{j,k\in\Gamma_{2},j<k}\mathrm{C}\mathrm{ Z}_{j,k}^{\nu_{j,k}}\right)\left(\prod_{j\in\Gamma_{2}}\mathrm{S}_{j}^{\mu_{j}} \right)W \tag{8}\]
_with \(\mu_{j}\in\{0,1,2,3\}\), \(\nu_{j,k}\in\{0,1\}\), \(V\in\mathcal{C}_{N_{3}}\) and \(P_{j,k}\in\{\mathrm{I},\mathrm{X},\mathrm{Y},\mathrm{Z}\}\), where \(\mathrm{S}_{j}\) is the \(S\) gate on the \(j\)th qubit, \(\mathrm{C}\mathrm{Z}_{j,k}\) is the controlled-Z gate on the \(j\)th and \(k\)th qubits, \(V\) acts on the subsystem \(\mathrm{A}_{3}\), \(\mathrm{C}(P)_{j,k}\) is the controlled-\(P\) gate with the \(j\)th qubit as the control qubit and the \(k\)th qubit as the target qubit, and \(\mathrm{T}\prod\) denotes the ordered product, i.e., \(\mathrm{T}\prod_{j=1}^{n}O_{j}:=O_{n}\cdots O_{2}O_{1}\)._
The complete and unique expression in Eq. (8) gives an efficient way to generate all the elements of a \(\mathcal{Q}\)-symmetric Clifford group. In fact, it is much more efficient than selecting the symmetric elements from the entire Clifford group. Namely, the size of the quotient group of the symmetric Clifford group \(\mathcal{C}_{N,\mathcal{G}}\) divided by the freedom of phase \(\mathcal{U}_{0}:=\{e^{i\theta}|\theta\in\mathbb{R}\}\) is given by
\[|\mathcal{C}_{N,\mathcal{G}}/\mathcal{U}_{0}|= 4^{N_{2}}\cdot 2^{N_{2}(N_{2}-1)/2}\cdot|\mathcal{C}_{N_{3}}/ \mathcal{U}_{0}|\cdot 4^{N_{2}N_{3}}\] \[\sim 2^{2(N_{3}+N_{2}/2)^{2}+3(N_{3}+N_{2}/2)}, \tag{9}\]
and it is much smaller than the size \(|\mathcal{C}_{N}/\mathcal{U}_{0}|\sim 2^{2N^{2}+3N}\)[34] of the entire Clifford group. The reduction rate is exponential with \(N\) in a standard setup where \(N_{1}\) and \(N_{2}\) are \(\mathrm{O}(1)\), which highlights the significance of the explicit construction of symmetric Clifford operators. We can see that each qubit in \(\mathrm{A}_{1}\), \(\mathrm{A}_{2}\) and \(\mathrm{A}_{3}\) contributes as \(0\), \(1/2\) and \(1\) qubit in the estimation of the size \(|\mathcal{C}_{N,\mathcal{G}}/\mathcal{U}_{0}|\). The representation of \(|\mathcal{C}_{N,\mathcal{G}}/\mathcal{U}_{0}|\) is derived from the fact that there are \(4\), \(2\), \(|\mathcal{C}_{N_{3}}/\mathcal{U}_{0}|\) and \(4\) choices for each \(\mu_{j}\), \(\nu_{j,k}\), \(V\) and \(P_{j,k}\), respectively.
We illustrate the construction of Pauli subgroup-symmetric Clifford operators by taking the symmetry \(\mathcal{Q}=\{\mathrm{I}^{\otimes 4},\mathrm{X}^{\otimes 4},\mathrm{Y}^{\otimes 4}, \mathrm{Z}^{\otimes 4}\}\) on four qubits as an example. We know from Theorem 2 that every \(\mathcal{Q}\)-symmetric Clifford operator \(U\) can be uniquely expressed as Fig. 1 (b) by noting that \(W\mathcal{Q}W^{\dagger}=\{\mathrm{I}_{1},\mathrm{Z}_{1}\}\otimes\{\mathrm{I}_{2 },\mathrm{Z}_{2}\}\) with \(W=\mathrm{H}_{1}\mathrm{CNOT}_{42}\mathrm{CNOT}_{13}\mathrm{CNOT}_{34}\mathrm{CNOT }_{12}\). We can confirm that the symmetry constraint greatly reduces the size of the Clifford group by seeing that \(|\mathcal{C}_{4,\mathcal{Q}}/\mathcal{U}_{0}|\sim 10^{8}\) and \(|\mathcal{C}_{4}/\mathcal{U}_{0}|\sim 10^{13}\).
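To make the size comparison concrete, the following snippet (illustrative only, not part of the original work) evaluates Eq. (9) for the four-qubit example above. It uses the standard counting formula \(|\mathcal{C}_{n}/\mathcal{U}_{0}|=2^{n^{2}+2n}\prod_{j=1}^{n}(4^{j}-1)\) for the \(n\)-qubit Clifford group modulo global phase (24 elements for one qubit, 11520 for two), which reproduces the asymptotics \(\sim 2^{2n^{2}+3n}\) quoted in the text.

```python
from math import prod

def clifford_order_mod_phase(n):
    """|C_n / U_0| = 2**(n*n + 2*n) * prod_{j=1..n} (4**j - 1), Clifford group modulo phase."""
    return 2**(n * n + 2 * n) * prod(4**j - 1 for j in range(1, n + 1))

def symmetric_clifford_order(n2, n3):
    """Eq. (9): |C_{N,Q} / U_0| for a Pauli symmetry with block sizes (N1, N2, N3)."""
    return 4**n2 * 2**(n2 * (n2 - 1) // 2) * clifford_order_mod_phase(n3) * 4**(n2 * n3)

# Example from the text: Q = {I, X, Y, Z tensor-powers} on N = 4 qubits,
# which is Clifford-equivalent to the block sizes N1 = 0, N2 = 2, N3 = 2.
print(f"{symmetric_clifford_order(2, 2):.2e}")   # ~9.4e+07, i.e. ~10^8
print(f"{clifford_order_mod_phase(4):.2e}")      # ~1.2e+13, i.e. ~10^13
```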
For the proof of this theorem, it is sufficient to consider only the construction of \(\mathcal{C}_{N,\mathcal{R}}\), because the conjugation \(W\cdot W^{\dagger}\) gives a one-to-one correspondence from \(\mathcal{C}_{N,\mathcal{Q}}\) to \(\mathcal{C}_{N,\mathcal{R}}\). In the proof of the completeness, the key is to take the Heisenberg picture. For arbitrary \(U\in\mathcal{C}_{N,\mathcal{R}}\), we can inductively construct \(U_{j}\)\((j=1,2,...,N_{2})\) in the form of Eq. (8) such that \(\mathrm{Z}_{1},\mathrm{Z}_{2},...,\mathrm{Z}_{N_{1}+N_{2}},\mathrm{X}_{1},\mathrm{X}_{2},...,\mathrm{X}_{N_{1}+j}\) are invariant under the conjugation action of \(U_{j}^{\dagger}U\). This implies that \(U^{\prime}:=U_{N_{2}}^{\dagger}U\) is a Clifford operator acting only on the subsystem \(\mathrm{A}_{3}\), and thus \(U^{\prime}\) is in the form of Eq. (8). By noting that both \(U_{N_{2}}^{\dagger}\) and \(U^{\prime}\) can be written in the form of Eq. (8), we know that \(U=U_{N_{2}}U^{\prime}\) can also be written in the form of Eq. (8).
Figure 1: Circuit representation of \(\mathcal{Q}\)-symmetric Clifford operators presented in Theorem 2. (a) Complete and unique expression for \(\mathcal{Q}\)-symmetric Clifford operators in a general case. (b) Example for the symmetry \(\mathcal{Q}=\{\mathrm{I}^{\otimes 4},\mathrm{X}^{\otimes 4},\mathrm{Y}^{\otimes 4}, \mathrm{Z}^{\otimes 4}\}\). Every \(\mathcal{Q}\)-symmetric Clifford operator can be uniquely expressed with \(\mu_{j}\in\{0,1,2,3\}\), \(\nu\in\{0,1\}\), \(V\in\mathcal{C}_{2}\) and \(P_{j}\in\{\mathrm{I},\mathrm{X},\mathrm{Y},\mathrm{Z}\}^{\otimes 2}\).
We prove the uniqueness by contradiction: we take an arbitrary \(U\in\mathcal{C}_{N,\mathcal{G}}\), suppose that there are two different sets of \((\mu_{j},\nu_{j,k},V,P_{j,k})\) that realize the same \(U\), and show that they must in fact coincide with each other.
_Clifford groups under U(1) and SU(2) symmetries._--As prominent examples of non-Pauli symmetries, we clarify the property of the \(\mathcal{G}\)-symmetric Clifford group when \(\mathcal{G}\) is given by Eq. (5) or (6) on multiple qubits. Concretely, in these cases, the \(\mathcal{G}\)-symmetric Clifford group \(\mathcal{C}_{N,\mathcal{G}}\) is a \(\mathcal{G}\)-symmetric unitary \(1\)-design, but not a \(2\)-design. This property characterizes the randomness of the symmetric Clifford group under U(1) and SU(2) symmetries given by Eqs. (5) or (6). We rigorously present this statement as a theorem.
**Theorem 3**.: _(Randomness of Clifford groups under U(1) and SU(2) symmetries) Let \(N\geq 2\) and \(\mathcal{G}\) be given by Eq. (5) or (6). Then, \(\mathcal{C}_{N,\mathcal{G}}\) is a \(\mathcal{G}\)-symmetric unitary \(1\)-design but not a \(2\)-design._
Let us remark that, as for \(2\)-designs, we can actually show no-go theorems in a more general class of symmetries, which are described as a tensor product of representations of a nontrivial connected Lie subgroup of a unitary group. This type of symmetry represents the conservation of the total observables on the system. In the proof of \(1\)-designs and the disproof of \(2\)-designs, we use the proof idea of the "if" part and the "only if" part in the proof of Theorem 1, respectively.
By using the result and the proof idea of Theorem 2, we have also found a complete and unique expression for the \(\mathcal{G}\)-symmetric Clifford operators when the symmetry is given by Eq. (5) or (6). Concretely, in the case of Eq. (5), every \(\mathcal{G}\)-symmetric Clifford operator \(U\) is uniquely expressed as Fig. 2 (a) as
\[U=c\left(\prod_{1\leq j<k\leq N}\mathrm{C}\mathrm{Z}_{j,k}^{\nu_{j,k}}\right) \left(\prod_{j=1}^{N}\mathrm{S}_{j}^{\mu_{j}}\right)K_{\sigma} \tag{10}\]
with \(\mu_{j}\in\{0,1,2,3\}\), \(\nu_{j,k}\in\{0,1\}\), \(\sigma\in\mathfrak{S}_{N}\) and \(c\in\mathcal{U}_{0}\), where \(K_{\sigma}\) is the permutation operator that brings the \(j\)th qubit to the \(\sigma(j)\)th qubit. It follows that the size of the quotient group of the symmetric Clifford group \(\mathcal{C}_{N,\mathcal{G}}\) divided by the freedom of phase is
\[|\mathcal{C}_{N,\mathcal{G}}/\mathcal{U}_{0}|= 2^{N(N-1)/2}\cdot 4^{N}\cdot N!\] \[\sim 2^{2(N/2)^{2}+3(N/2)+(N+1/2)\log_{2}(N/e)}. \tag{11}\]
In the case of Eq. (6), the \(\mathcal{G}\)-symmetric Clifford operators are restricted to \(cK_{\sigma}\) with \(c\in\mathcal{U}_{0}\) and \(\sigma\in\mathfrak{S}_{N}\), and the size of \(\mathcal{C}_{N,\mathcal{G}}/\mathcal{U}_{0}\) is \(N!\). See the SM for details [30].
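As an elementary consistency check of Eq. (10), the sketch below (plain NumPy, for illustration only) samples the parameters \((\mu_{j},\nu_{j,k},\sigma)\), assembles the corresponding operator on a few qubits, and verifies that it commutes with the U(1) generator \(\sum_{j}\mathrm{Z}_{j}\); commuting with this generator is equivalent to commuting with every element \((e^{i\theta\mathrm{Z}})^{\otimes N}\) of the symmetry group.

```python
import numpy as np

N = 3
I2, Z, S = np.eye(2), np.diag([1.0, -1.0]), np.diag([1.0, 1.0j])

def op_on(single, j, n):
    """Embed a single-qubit operator acting on qubit j into an n-qubit operator."""
    out = np.array([[1.0 + 0.0j]])
    for k in range(n):
        out = np.kron(out, single if k == j else I2)
    return out

def cz(j, k, n):
    """Controlled-Z between qubits j and k (a diagonal operator)."""
    d = np.ones(2**n, dtype=complex)
    for b in range(2**n):
        if (b >> (n - 1 - j)) & 1 and (b >> (n - 1 - k)) & 1:
            d[b] = -1.0
    return np.diag(d)

def qubit_permutation(sigma, n):
    """Permutation operator K_sigma sending qubit j to qubit sigma[j]."""
    P = np.zeros((2**n, 2**n))
    for b in range(2**n):
        bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
        new_bits = [0] * n
        for j in range(n):
            new_bits[sigma[j]] = bits[j]
        b2 = sum(bit << (n - 1 - q) for q, bit in enumerate(new_bits))
        P[b2, b] = 1.0
    return P

rng = np.random.default_rng(1)
mu = rng.integers(0, 4, size=N)
nu = {(j, k): int(rng.integers(0, 2)) for j in range(N) for k in range(j + 1, N)}
sigma = rng.permutation(N)

U = qubit_permutation(sigma, N).astype(complex)           # K_sigma acts first
for j in range(N):                                        # then the S_j^{mu_j} layer
    U = np.linalg.matrix_power(op_on(S, j, N), int(mu[j])) @ U
for (j, k), v in nu.items():                              # then the CZ^{nu_{j,k}} layer
    if v:
        U = cz(j, k, N) @ U

Z_total = sum(op_on(Z, j, N) for j in range(N))           # U(1) generator
print(np.allclose(U @ Z_total, Z_total @ U))              # True: U is U(1)-symmetric
```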
_Numerical verification via frame potentials._-- We can give numerical evidence for Theorems 1 and 2 by computing the frame potentials, which are defined for a subgroup \(\mathcal{X}\) of \(\mathcal{U}_{N,\mathcal{G}}\) as [2]
\[F_{t}(\mathcal{X}):=\int_{U,U^{\prime}\in\mathcal{X}}|\mathrm{tr}(UU^{\prime \dagger})|^{2t}d\mu_{\mathcal{X}}(U)d\mu_{\mathcal{X}}(U^{\prime}), \tag{12}\]
where the integral is replaced by a summation if \(\mathcal{X}\) is a finite set up to phase.
Figure 3: Frame potentials \(F_{t}(\mathcal{X})\) of \(\mathcal{G}\)-symmetric groups \(\mathcal{X}=\mathcal{C}_{N,\mathcal{G}},\mathcal{U}_{N,\mathcal{G}}\) computed by numerically taking the average over randomly generated unitaries. Here we compare the results for the Pauli symmetry \(\mathcal{R}\) given by Eq. (7) with \(N_{1}=1\), \(N_{2}=2\), \(N_{3}=3\) and the U(1) symmetry given by Eq. (5) with \(N=4\). (a) Frame potentials \(F_{t}(\mathcal{X})\) for the case of the Pauli symmetry computed by taking the average over \(10^{6}\) samples. (b) Size scaling of the relative error of \(F_{t}(C_{N,\mathcal{G}})\) against the theoretical lower bound \(F_{t}(\mathcal{U}_{N,\mathcal{G}})\). Here, we independently generate \(M\) samples for \(U\) and \(U^{\prime}\) respectively, and compute the mean value of \(|\mathrm{tr}(UU^{\prime\dagger})|^{2t}\). As we increase the total data size \(M^{2}\), the errors become smaller for \(t\leq 3\), while they remain finite for \(t=4\). Panels (c) and (d) show the results for the case of U(1) symmetry on a \(4\)-qubit system. Here the \(\mathcal{G}\)-symmetric Clifford group is only a \(\mathcal{G}\)-symmetric unitary \(1\)-design, and not a \(2\)-design. The numerical simulation is performed using the library Qulacs [35].
Figure 2: Circuit representation of (a) U(1)-symmetric and (b) SU(2)-symmetric Clifford operator.
Similarly to the conventional case [2], we can measure the distance between the \(t\)-fold twirling channels \(\Phi_{t,\mathcal{X}}\) and \(\Phi_{t,\mathcal{U}_{\mathcal{N},\mathcal{G}}}\) by the frame potentials. It is therefore straightforward to show that \(F_{t}(\mathcal{X})\geq F_{t}(\mathcal{U}_{\mathcal{N},\mathcal{G}})\), where the equality holds if and only if \(\mathcal{X}\) is a \(\mathcal{G}\)-symmetric unitary \(t\)-design. Figure 3 (a)(b) clearly shows that the Clifford group \(\mathcal{C}_{N,\mathcal{G}}\) under Pauli symmetry is a \(\mathcal{G}\)-symmetric unitary 3-design but not a 4-design, and also that the elements are sampled uniformly from the entire group, thanks to the complete and unique construction provided in Theorem 2 (or Fig. 1). In sharp contrast, Fig. 3 (c)(d) shows that, when the Clifford group is constrained by \(\mathcal{G}=\{(e^{i\theta Z})^{\otimes N}|\theta\in\mathbb{R}\}\), which is isomorphic to U(1), the \(\mathcal{G}\)-symmetric Clifford group \(\mathcal{C}_{N,\mathcal{G}}\) is only a \(\mathcal{G}\)-symmetric unitary 1-design and not a 2-design. Here symmetric Clifford gates are sampled uniformly from the circuit presented in Fig. 2.
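The frame-potential comparison underlying Fig. 3 can be reproduced in spirit with a few lines of Monte-Carlo sampling. The sketch below (an illustration, independent of the Qulacs-based code used for the figure) estimates \(F_{t}\) for any ensemble supplied as a sampler of unitary matrices; for Haar-random unitaries on a dimension \(d\geq t\) the exact values are \(F_{t}=t!\), which serves as a check of the estimator.

```python
import numpy as np
from scipy.stats import unitary_group

def frame_potential(sampler, t, n_samples=2000):
    """Monte-Carlo estimate of Eq. (12): the mean of |tr(U V^dagger)|^{2t} over
    independently drawn pairs (U, V) from the ensemble."""
    vals = []
    for _ in range(n_samples):
        U, V = sampler(), sampler()
        vals.append(np.abs(np.trace(U @ V.conj().T)) ** (2 * t))
    return float(np.mean(vals))

dim = 4                                          # two qubits
haar = lambda: unitary_group.rvs(dim)            # Haar-random unitaries from SciPy
for t in (1, 2, 3):
    print(t, frame_potential(haar, t))           # rough estimates of t! = 1, 2, 6
```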
_Discussion.--_ In this Letter, we have generalized the unitary 3-design property of the multiqubit Clifford group to symmetric cases. We have rigorously shown that a symmetric Clifford group is a symmetric unitary 3-design if and only if the symmetry is given by some Pauli subgroup (Theorem 1), and have also provided a way to generate all the elements without redundancy (Theorem 2). Furthermore, we have proven that two physically important classes of symmetries, U(1) and SU(2), which cannot be reduced to Pauli subgroups, only yield symmetric unitary 1-designs. Finally, for numerical validation, we have computed the frame potentials by randomly sampling symmetric unitaries and confirmed that our findings indeed hold.
We envision two important future directions. First, it is theoretically crucial to reveal the requirement to achieve symmetric unitary design in the approximate sense. While it is known for the non-symmetric case that the approximate \(t\)-designs require only polynomial gate depth with respect to both the qubit count \(N\) and order \(t\)[36; 37], it is far from trivial whether this is also true for the symmetric case. Second, it is practically intriguing to develop a constructive way to generate symmetric Clifford circuits under general symmetry \(\mathcal{G}\) that cannot be described by some Pauli subgroup. While we provide such an example for both U(1) and SU(2) symmetry, it is important to construct circuits in an automated way for general situations, in particular when one is interested in performing Clifford gate simulation for the purpose of quantum many-body simulation [38].
_Acknowledgements.--_ The authors wish to thank Hidetaka Manabe and Takahiro Sagawa for insightful discussions. This work was supported by JST Grant Number JPMJPF2221. Y.M. is supported by World-leading Innovative Graduate Study Program for Materials Research, Information, and Technology (MERIT-WINGS) of the University of Tokyo. Y.M. is also supported by JSPS KAKENHI Grant No. JP23KJ0421. N.Y. wishes to thank JST PRESTO No. JPMJPR2119, and the support from IBM Quantum.
|
2309.07871 | Gradient Dynamics in Linear Quadratic Network Games with Time-Varying
Connectivity and Population Fluctuation | In this paper, we consider a learning problem among non-cooperative agents
interacting in a time-varying system. Specifically, we focus on repeated linear
quadratic network games, in which the network of interactions changes with time
and agents may not be present at each iteration. To get tractability, we assume
that at each iteration, the network of interactions is sampled from an
underlying random network model and agents participate at random with a given
probability. Under these assumptions, we consider a gradient-based learning
algorithm and establish almost sure convergence of the agents' strategies to
the Nash equilibrium of the game played over the expected network.
Additionally, we prove, in the large population regime, that the learned
strategy is an $\epsilon$-Nash equilibrium for each stage game with high
probability. We validate our results over an online market application. | Feras Al Taha, Kiran Rokade, Francesca Parise | 2023-09-14T17:19:13Z | http://arxiv.org/abs/2309.07871v2 | Gradient Dynamics in Linear Quadratic Network Games with Time-Varying Connectivity and Population Fluctuation
###### Abstract
In this paper, we consider a learning problem among non-cooperative agents interacting in a time-varying system. Specifically, we focus on repeated linear quadratic network games, in which the network of interactions changes with time and agents may not be present at each iteration. To get tractability, we assume that at each iteration, the network of interactions is sampled from an underlying random network model and agents participate at random with a given probability. Under these assumptions, we consider a gradient-based learning algorithm and establish almost sure convergence of the agents' strategies to the Nash equilibrium of the game played over the expected network. Additionally, we prove, in the large population regime, that the learned strategy is an \(\epsilon\)-Nash equilibrium for each stage game with high probability. We validate our results over an online market application.
## I Introduction
The prominence of big data analytics, Internet-of-Things, and machine learning has transformed large societal and technological systems. Of special interest are systems in which a large number of independent agents interacting over a network structure make strategic decisions, while affected by the actions of other agents. Examples include pricing in demand-side electricity markets, supply chain management, wireless networks, public goods provision, financial exchanges, and international trades, among others. In many settings, agents lack information or computational capabilities to determine a priori the best strategy to guide their decisions. Instead, agents need to learn their best action over time based on observations of their rewards and other agents actions, while adapting to changes in their environment.
Standard models of learning dynamics assume that the population of agents and their underlying network of interactions are static. However, in systems with large populations and high-volume interactions, this assumption is not realistic since individuals may enter and leave the system, leading to fluctuations in the population size, and participating agents might change their interconnections with other agents over time. It then becomes unclear whether players can reach a stable outcome in this nonstationary environment.
Accordingly, there is a need for a theory of multi-agent learning in dynamic network settings in which the composition of the agents and their interactions can change over time. In this paper, we start addressing these issues by focusing on gradient play dynamics in linear quadratic network games with time-varying connectivity and dynamic population.
### _Main contributions_
To obtain a tractable framework, we model the time-varying network of interactions and population of agents as stochastic network realizations from an underlying known distribution. Using stochastic approximation methods [1], we demonstrate that for linear quadratic network games, when all agents follow a projected gradient descent scheme, they almost surely converge to a Nash equilibrium of the game played over the expected network. Moreover, by using concentration inequalities, we show that with high probability, the learned strategy profile is an \(\epsilon\)-Nash equilibrium of the game played over any realized network, where \(\epsilon\) decreases as the population size increases.
### _Related works_
_Learning in time-varying settings with dynamic populations_ has been previously studied for games with a special structure such as congestion games, bandwidth allocation, markets, first-price auctions or public good games [2, 3, 4, 5]. The efficiency of outcomes in such games was investigated for low-adaptive-regret learning [2] and later generalized to low-approximate-regret learning [6], under the assumption that the population size is fixed. The setting with a changing number of agents was studied for congestion games in [7]. Similarly, the effect of changing populations has been studied in the context of truthful mechanism design [8, 9, 4]. None of these works cover the setting of network games considered in this paper.
In terms of learning dynamics to reach Nash equilibria in _static noncooperative games or multi-agent settings_, many schemes have been studied. These include fictitious play [10], parallel and distributed computation [11], projection-based algorithms (e.g., projected gradient dynamics) [12], and regret minimization (e.g., no-regret dynamics) [13]. Among these, a growing literature considered learning dynamics for _static aggregative and network games_ focusing, for example, on best response dynamics [14, 15], projection-based algorithms [16, 17], forward-backward operator splitting [18, 19], passivity-based schemes [20, 21], distributed asynchronous schemes [22, 23], and state-dependent costs [24], among others.
When considering non-stationary environments, the setting of _agents communicating according to a time-varying network_ is not novel in the control literature. Specifically, a large literature focused on cooperative problems such as consensus/gossip algorithms [25, 26] as well as distributed optimization [27] over time-varying networks, mainly based on an assumption of uniform connectivity over contiguous
intervals of time. Similar results have been derived for distributed Nash equilibrium seeking in noncooperative games [19, 28, 29, 30] and for averaging algorithms over randomized networks with gossip communication [31, 32]. One common property of these works is that despite the changing communication network, the underlying problem to solve (achieving consensus, optimizing some objective in a decentralized fashion, or reaching a Nash equilibrium) is time-invariant. In contrast, in this paper, agents take part in a different network game at every iteration. Thus, the Nash equilibrium that the players are trying to learn is itself time-varying since their payoff function depends on the time-varying network. The question at hand is then not solely about the convergence of agents' strategies, but also about the nature of the learned strategy and its relation to the time-varying stage games. We show that, in our setting, strategies converge to the equilibrium of the game played over the expected network and that this is approximately optimal for large populations. Finally, convergence of optimistic gradient descent in games with time-varying utility functions has recently been studied in [33]. Our model is different as it applies to network games in which payoff-variability is due to variability of interconnections. Moreover, we consider a setting in which the population itself may vary, with agents joining and leaving the system. Open multi-agent systems with random arrivals and departures of agents have been recently studied for cooperating homogeneous agents in consensus dynamics [34] or optimal resource allocation problems [35]. We instead focus on non-cooperating agents with network interactions.
### _Paper organization_
The rest of the paper is organized as follows. Section II presents the repeated network game setup and recaps known results on learning dynamics in static environments. Section III introduces the main results on the convergence of gradient dynamics and the suboptimality guarantees of the learned strategy for a time-varying network and fixed population. Section IV extends these results to dynamic populations. Section V demonstrates the convergence of the proposed projected gradient dynamics with numerical simulations and Section VI concludes the paper. Omitted proofs are given in the Appendix.
### _Notation_
We denote by \([v]_{j}\) the \(j\)th component of a vector \(v\in\mathbb{R}^{n}\) and by \(A_{ij}\) the \(ij\)th entry of a matrix \(A\in\mathbb{R}^{m\times n}\). We denote the Frobenius norm of a matrix by \(\|\cdot\|_{F}\) and the Euclidean norm of a vector as well as the matrix norm induced by the Euclidean norm by \(\|\cdot\|_{2}\). The symbol \(\mathbb{I}_{n}\) denotes the \(n\times n\) identity matrix. The operator \(\Pi_{\mathcal{X}}[\cdot]\) denotes the projection onto the set \(\mathcal{X}\). The symbol \(\otimes\) denotes the Kronecker product. Given \(n\) matrices \(A_{1},\ldots,A_{n}\) of appropriate dimensions, we let \(\text{diag}(A_{1},\ldots,A_{n})\) denote the matrix formed by stacking \(A_{1},\ldots,A_{n}\) block-diagonally. Given vectors \(x_{1},\ldots,x_{n}\), we let \([x_{i}]_{i\in\{1,\ldots,n\}}\) denote the vector formed by stacking \(x_{1},\ldots,x_{n}\) vertically. For a given square matrix \(A\), \(\lambda_{\max}(A)\) and \(\lambda_{\min}(A)\) denote the maximum and minimum eigenvalues of \(A\), respectively.
## II Recap on Linear Quadratic Games over Static Networks
We present a recap of known results for static games.
### _One shot game_
Consider a linear quadratic (LQ) game played by a population of agents indexed by \(i\in\mathcal{N}:=\{1,\ldots,N\}\) over a static network with no self-loops, represented by an adjacency matrix \(\tilde{A}\in[0,1]^{N\times N}\). Each agent \(i\in\mathcal{N}\) aims at selecting a strategy \(s_{i}\in\mathcal{S}_{i}\subseteq\mathbb{R}^{n}\) to minimize a cost function
\[J_{i}(s_{i},s_{-i},\tilde{A})=\frac{1}{2}s_{i}^{\top}Q_{i}s_{i}-s_{i}^{\top} \Big{(}\theta_{i}+\frac{\alpha}{N}\sum_{j=1}^{N}\tilde{A}_{ij}s_{j}\Big{)} \tag{1}\]
where \(s_{-i}:=[s_{j}]_{j\in\mathcal{N}\setminus\{i\}}\in\mathbb{R}^{(N-1)n}\) is the strategy profile of all other agents, \(\alpha\in\mathbb{R}\) models the strength of network effects, \(Q_{i}\in\mathbb{R}^{n\times n}\) and \(\theta_{i}\in\mathbb{R}^{n}\) are parameters modeling agent specific heterogeneity, and \(Q_{i}=Q_{i}^{\top}\succ 0\). Let \(\mathcal{S}:=\prod_{i\in\mathcal{N}}\mathcal{S}_{i}\) be the set of all strategy profiles of the game. Given a strategy profile \(s:=[s_{i}]_{i\in\mathcal{N}}\in\mathcal{S}\) and a network \(\tilde{A}\), the term \(\frac{1}{N}\sum_{j=1}^{N}\tilde{A}_{ij}s_{j}\) represents the local aggregate sensed by agent \(i\). We denote this LQ game by \(\mathcal{G}(Q,\theta,\alpha,\tilde{A})\) where \(\theta:=[\theta_{i}]_{i\in\mathcal{N}}\) and \(Q:=\text{diag}(Q_{1},\ldots,Q_{N})\).
**Definition 1** (\(\epsilon\)-Nash equilibrium): _Given \(\epsilon>0\), a strategy profile \(\bar{s}\in\mathbb{R}^{Nn}\) is an \(\epsilon\)-Nash equilibrium of the game \(\mathcal{G}(Q,\theta,\alpha,\tilde{A})\) if for all \(i\in\mathcal{N}\), we have \(\bar{s}_{i}\in\mathcal{S}_{i}\) and_
\[J_{i}(\bar{s}_{i},\bar{s}_{-i},\tilde{A})\leq J_{i}(s_{i},\bar{s}_{-i},\tilde{ A})+\epsilon\qquad\text{for all $s_{i}\in\mathcal{S}_{i}$}. \tag{2}\]
_If (2) holds for \(\epsilon=0\), then \(\bar{s}\) is a Nash equilibrium._
It is well known that the Nash equilibria of convex games can be characterized by using the game Jacobian
\[F(s,\tilde{A}):=[\nabla_{s_{i}}J_{i}(s_{i},s_{-i},\tilde{A})]_{i\in\mathcal{N}} \tag{3}\]
which is a map composed of each player's cost gradient with respect to their own strategy. For LQ games, this is
\[F(s,\tilde{A}) =\Big{[}Q_{i}s_{i}-\theta_{i}-\frac{\alpha}{N}\sum_{j=1}^{N} \tilde{A}_{ij}s_{j}\Big{]}_{i\in\mathcal{N}}\] \[=Qs-\theta-\frac{\alpha}{N}(\tilde{A}\otimes\mathbb{I}_{n})s.\]
A classic approach to characterizing Nash equilibria is to interpret them as solutions to variational inequalities (see [12, 36] for general games and [14] for network games). We summarize this relation in the following lemma, after introducing an assumption that guarantees existence and uniqueness of the Nash equilibrium.
**Assumption 1** (Equilibrium uniqueness): 1. _For each_ \(i\in\mathcal{N}\)_,_ \(\mathcal{S}_{i}\) _is nonempty, convex and compact. There exists a compact set_ \(\bar{\mathcal{S}}\subseteq\mathbb{R}^{n}\) _such that_ \(\mathcal{S}_{i}\subseteq\bar{\mathcal{S}}\) _for all_ \(i\)_, and_ \(s_{\max}:=\max_{s\in\mathcal{S}}\|s\|_{2}<\infty\)_._
2. _The following relation holds_ \[\lambda_{\min}(Q)-\frac{|\alpha|}{N}\|\tilde{A}\|_{2}>0.\] (4)
**Lemma II.1**: _(Variational inequality equivalence, [12, Proposition 1.4.2], [14, Proposition 1 and Proposition 2]) Suppose that Assumption 1-(i) holds. Then, the strategy profile \(\bar{s}\) is a Nash equilibrium of \(\mathcal{G}(Q,\theta,\alpha,\tilde{A})\) if and only if it solves the variational inequality \((s-\bar{s})^{\top}F(\bar{s},\tilde{A})\geq 0,\quad\forall s\in\mathcal{S}\). If, additionally, Assumption 1-(ii) holds, then, \(F\) is strongly monotone and the equilibrium is unique._
### _Learning dynamics in repeated games_
Consider now a repetition of the stage LQ game defined in Section II-A, played over the static network \(\tilde{A}\). Suppose that each agent \(i\) tries to learn the equilibrium iteratively by using the projected gradient dynamics
\[s^{k+1}=\Pi_{\mathcal{S}}[s^{k}-\tau F(s^{k},\tilde{A})], \tag{5}\]
where \(s^{k}\) is the strategy profile at iteration \(k\geq 0\), \(s^{0}\in\mathcal{S}\), and \(\tau>0\) is a fixed step size.
**Remark 1**: _To implement their learning dynamics_
\[s_{i}^{k+1}=\Pi_{\mathcal{S}_{i}}\left[s_{i}^{k}-\tau\left(Q_{i}s_{i}^{k}- \theta_{i}-\tfrac{\alpha}{N}\sum_{j=1}^{N}\tilde{A}_{ij}s_{j}^{k}\right) \right],\]
_player \(i\in\mathcal{N}\) only requires knowledge of their current strategy, their cost function parameters, and their current local aggregate. The player does not need to observe the network realization \(\tilde{A}\)._
The dynamics in (5) can be shown to converge under a mild assumption on the step size.
**Lemma II.2**: _(Gradient play over a static network, [12, Theorem 12.1.2]) Suppose that Assumption 1 holds. Let \(L:=\lambda_{\max}(Q)+\frac{|\alpha|}{N}\|\tilde{A}\|_{2}\) be the Lipschitz constant of \(F\) and \(\mu:=\lambda_{\min}(Q)-\frac{|\alpha|}{N}\|\tilde{A}\|_{2}\) be the strong monotonicity constant of \(F\). If the step size \(\tau\) satisfies \(0<\tau<2\mu/L^{2}\), then for any initial strategy profile \(s^{0}\in\mathcal{S}\), the gradient dynamics described in (5) converge to the unique Nash equilibrium of the game \(\mathcal{G}(Q,\theta,\alpha,\tilde{A})\)._
The Lipschitz and strong monotonicity constants used in Lemma II.2 are computed, e.g., in [14].
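For concreteness, a minimal simulation of the dynamics in (5) is sketched below (illustrative code with arbitrarily chosen problem data satisfying Assumption 1, not the code used elsewhere in the paper). Since the equilibrium happens to be interior for these parameter ranges, it can also be computed directly from the first-order condition \(F(\bar{s},\tilde{A})=0\), which provides a reference for checking convergence.

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha = 20, 0.5
A = rng.uniform(0, 1, (N, N)); np.fill_diagonal(A, 0)      # static network
q = rng.uniform(1.0, 2.0, N)                                # Q_i = q_i > 0 (scalar strategies)
theta = rng.uniform(0.5, 1.5, N)
s_lo, s_hi = 0.0, 10.0                                      # strategy sets S_i = [0, 10]

def F(s):                                                   # game Jacobian, Eq. (3)
    return q * s - theta - (alpha / N) * (A @ s)

L = q.max() + abs(alpha) / N * np.linalg.norm(A, 2)         # Lipschitz constant
mu = q.min() - abs(alpha) / N * np.linalg.norm(A, 2)        # strong monotonicity constant
tau = mu / L**2                                             # satisfies 0 < tau < 2*mu/L^2

s = np.zeros(N)
for _ in range(2000):
    s = np.clip(s - tau * F(s), s_lo, s_hi)                 # projected gradient step (5)

# Reference: unconstrained solution of F(s) = 0; for these parameter ranges it lies
# inside [0, 10]^N and therefore coincides with the Nash equilibrium.
s_star = np.linalg.solve(np.diag(q) - (alpha / N) * A, theta)
print(np.max(np.abs(s - s_star)))                           # close to zero
```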
## III Games over time-varying networks
The well-known results summarized in the previous section hold for a static network of interactions. In many realistic settings however, interactions among the agents may change across repetitions of the game and agents may not always be participating. In this section, we consider learning dynamics in time-varying networks. These results are then extended in Section IV to the case of dynamic populations.
More specifically, we here assume that the network of interactions is random, and at every repetition \(k=0,1,2,\dots\) of the stage game, a new independent network realization \(A^{k}\in[0,1]^{N\times N}\) is sampled as follows: for all \(i\in\mathcal{N}\), \(A^{k}_{ii}=0\) (no self-loop) and for \(i\neq j\), \(A^{k}_{ij}\) is sampled from a distribution with bounded support1 in \([0,1]\) and with mean \(\bar{A}_{ij}\in[0,1]\). In the case where \(A^{k}_{ij}\) is a Bernoulli random variable, \(\bar{A}_{ij}\) can be interpreted as the probability that player \(j\) contributes to the local aggregate of player \(i\). More generally, \(\bar{A}=\mathrm{E}[A^{k}]\) represents the expected network of interactions, where we set \(\bar{A}_{ii}=0\) for all \(i\in\mathcal{N}\).
Footnote 1: Restricting the support to the unit interval is without loss of generality. One can easily extend this paper’s results to any bounded convex support.
### _Motivating example_
We motivate this time-varying setting by presenting an example in the context of online markets.
**Example 1** (Dynamic pricing game): _Consider an online market with \(N\) sellers. Each merchant \(i\in\mathcal{N}\) sells a product \(i\) at a price \(s_{i}\in\mathbb{R}_{+}\) determined daily. On each day \(k=0,1,2,\dots\), a new set of \(M\) customers arrives. Each customer \(c\in\{1,\dots,M\}\) decides which products to purchase. The demand of customer \(c\) on day \(k\) for product \(i\) can be modeled as an affine demand function [37]_
\[d_{i}^{c,k}=\bar{d}_{i}-\eta\Big{(}s_{i}^{k}+\frac{\alpha}{N}\sum_{j=1}^{N}A_ {ij}^{c,k}s_{j}^{k}\Big{)} \tag{6}\]
_where \(s_{i}^{k}\) is the price of product \(i\) on day \(k\), \(\bar{d}_{i}\) is the maximum demand that a customer can have for product \(i\), \(\eta>0\) is the price sensitivity, \(\alpha\geq 0\) models the degree of influence that the price of other products has on the demand of product \(i\), and \(A_{ij}^{c,k}\sim\text{Ber}(\bar{A}_{ij})\) is a Bernoulli random variable that indicates whether customer \(c\) is interested in co-purchasing product \(i\) with product \(j\) on day \(k\). The mean \(\bar{A}_{ij}\in[0,1]\) can be interpreted as the likelihood of this event._
The total demand for product \(i\) can be obtained by aggregating the demands of all customers:
\[d_{i}^{k} =\sum_{c=1}^{M}d_{i}^{c,k}=\sum_{c=1}^{M}\Big{(}\bar{d}_{i}-\eta \Big{(}s_{i}^{k}+\frac{\alpha}{N}\sum_{j=1}^{N}A_{ij}^{c,k}s_{j}^{k}\Big{)} \Big{)}\] \[=M\bar{d}_{i}-\eta\Big{(}Ms_{i}^{k}+\frac{\alpha}{N}\sum_{j=1}^{N }\sum_{c=1}^{M}A_{ij}^{c,k}s_{j}^{k}\Big{)}\] \[=M\Big{(}\bar{d}_{i}-\eta\Big{(}s_{i}^{k}+\frac{\alpha}{N}\sum_{j =1}^{N}A_{ij}^{k}s_{j}^{k}\Big{)}\Big{)}\]
where \(A_{ij}^{k}:=(1/M)\sum_{c=1}^{M}A_{ij}^{c,k}\) is a random variable representing the average complementarity of products \(i\) and \(j\), with mean \(\mathrm{E}[A_{ij}^{k}]=\bar{A}_{ij}\). Given this demand, each merchant tries to maximize their profit, i.e., minimize the cost
\[J_{i}(s_{i}^{k},s_{-i}^{k}, \!A^{k})=-s_{i}^{k}d_{i}^{k}\] \[=-s_{i}^{k}M\Big{(}\bar{d}_{i}-\eta\Big{(}s_{i}^{k}+\frac{\alpha} {N}\sum_{j=1}^{N}A_{ij}^{k}s_{j}^{k}\Big{)}\Big{)}\] \[=M\Big{(}\eta(s_{i}^{k})^{2}-s_{i}^{k}\Big{(}\bar{d}_{i}-\eta \frac{\alpha}{N}\sum_{j=1}^{N}A_{ij}^{k}s_{j}^{k}\Big{)}\Big{)}. \tag{7}\]
This online market competition can be interpreted as a repeated LQ game on a dynamic network \(A^{k}\) that changes daily to capture different daily customer preferences. Sellers may learn how to price their products using a gradient descent scheme. Note that the gradient in this case is
\[\nabla_{s_{i}}J_{i}(s_{i}^{k},s_{-i}^{k},A^{k})=-d_{i}^{k}+\eta Ms_{i}^{k}, \tag{8}\]
hence, the only information seller \(i\) requires to compute their current gradient update is the total number of customers and the quantity of product \(i\) sold on the previous day.
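To emphasize how little information each seller needs, the following one-step update (an illustrative sketch; the numerical values are arbitrary) implements one projected gradient step based on Eq. (8), using only the observed total demand and the number of customers.

```python
import numpy as np

def price_update(price, demand, M, eta, step, price_cap=np.inf):
    """One projected gradient step for a single seller, using the gradient in Eq. (8)."""
    grad = -demand + eta * M * price          # gradient of the daily cost (7)
    return float(np.clip(price - step * grad, 0.0, price_cap))

# Example: the seller observes that 150 units were sold to M = 100 customers.
new_price = price_update(price=2.0, demand=150.0, M=100, eta=1.0, step=0.002)
print(new_price)   # 2.0 - 0.002 * (-150 + 100 * 1.0 * 2.0) = 1.9
```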
In the next section, we characterize the asymptotic behavior produced by gradient updates in time-varying settings, as motivated by the previous example, for general LQ games.
### _Learning dynamics_
In this nonstationary environment, the Nash equilibrium is time-varying. Despite this, we show that agents learn to play a suitable fixed strategy, provided that they use a time-varying, diminishing step size \(\tau^{k}\) to ensure convergence of their gradient-based learning dynamics
\[s^{k+1}=\Pi_{\mathcal{S}}[s^{k}-\tau^{k}F(s^{k},A^{k})] \tag{9}\]
where \(k\geq 0\), and \(s^{0}\in\mathcal{S}\). The diminishing step size is needed to compensate for the network variability.
**Assumption 2** (Step size): _For all \(k\), the step size \(\tau^{k}\) satisfies \(\tau^{k}>0\), \(\tau^{k}\to 0\), \(\sum_{k=1}^{\infty}\tau^{k}=\infty\) and \(\sum_{k=1}^{\infty}(\tau^{k})^{2}<\infty\)._
In the following, we show that the gradient dynamics (9) converge almost surely to the Nash equilibrium of the LQ game played over the expected network \(\bar{A}=\mathrm{E}[A^{k}]\). This result follows from rewriting the dynamics as a stochastic gradient descent scheme and utilizing stochastic approximation theory [1] to show convergence.
**Proposition 1**: _Suppose that Assumptions 1 (for \(\tilde{A}=\bar{A}\)) and 2 hold. Then, the gradient dynamics in (9) converge to the unique Nash equilibrium of the game played over the expected network \(\bar{A}\) almost surely, for any \(s^{0}\in\mathcal{S}\)._
If agents compute their gradient updates with noisy local aggregate data, convergence can still be achieved provided that the noise is additive, zero-mean, and has bounded variance (since it can then be absorbed in the perturbation vector \(w^{k}\) of the stochastic approximation scheme).
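A minimal end-to-end illustration of Proposition 1 is sketched below (illustrative code with arbitrary parameters satisfying Assumption 1 for the expected network, not the simulation of Section V): at every iteration a fresh Bernoulli network is drawn, agents take a projected gradient step with a diminishing step size, and the iterate is compared with the Nash equilibrium of the game played over the expected network.

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha = 30, 0.6
A_bar = rng.uniform(0.2, 0.8, (N, N)); np.fill_diagonal(A_bar, 0)   # expected network
q = rng.uniform(1.0, 2.0, N)                                         # Q_i = q_i (scalar case)
theta = rng.uniform(0.5, 1.5, N)

def F(s, A):                                     # game Jacobian for the realized network
    return q * s - theta - (alpha / N) * (A @ s)

# Nash equilibrium of the game played over the expected network (interior for this data).
s_bar = np.linalg.solve(np.diag(q) - (alpha / N) * A_bar, theta)

s = np.zeros(N)
for k in range(1, 20001):
    A_k = (rng.random((N, N)) < A_bar).astype(float)                 # A^k_ij ~ Ber(A_bar_ij)
    np.fill_diagonal(A_k, 0)
    tau_k = 1.0 / k                                                  # diminishing step size
    s = np.clip(s - tau_k * F(s, A_k), 0.0, 10.0)                    # projected step (9)

print(np.max(np.abs(s - s_bar)))   # small, and it keeps shrinking as the horizon grows
```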
The following corollary justifies why the Nash equilibrium of the game played over the expected network is a reasonable policy to learn, by showing that with high probability, it results in an approximate equilibrium for any iteration.
**Corollary 1**: _Suppose that Assumption 1 (for \(\tilde{A}=\bar{A}\)) holds. Fix any \(k\geq 0\) and any \(\delta>0\). Then, with probability at least \(1-\delta\), the Nash equilibrium of the LQ game \(\mathcal{G}(Q,\theta,\alpha,\bar{A})\) played over the expected network \(\bar{A}\) is an \(\epsilon_{N,\delta}\)-Nash equilibrium of the stage game \(\mathcal{G}(Q,\theta,\alpha,A^{k})\) played over the network \(A^{k}\) with_
\[\epsilon_{N,\delta}:=2|\alpha|s_{\max}^{2}\sqrt{\frac{n\ln\left(2nN/\delta \right)}{2N}}.\]
We note that \(\epsilon_{N,\delta}\) does not depend on the cost parameters \(Q_{i}\) and \(\theta_{i}\). Also, for any confidence \(\delta>0\), \(\epsilon_{N,\delta}\) decreases as the population size \(N\) grows. This corollary can be proven by using matrix concentration inequalities, similarly to the approach in [38, Lemma 14].
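For a sense of scale, the bound \(\epsilon_{N,\delta}\) of Corollary 1 is easy to tabulate; the snippet below (for illustration only, with arbitrary values of \(\alpha\) and \(s_{\max}\)) evaluates it for a few population sizes.

```python
import numpy as np

def eps_bound(N, delta, alpha, s_max, n=1):
    """epsilon_{N,delta} from Corollary 1."""
    return 2 * abs(alpha) * s_max**2 * np.sqrt(n * np.log(2 * n * N / delta) / (2 * N))

for N in (10, 100, 1000, 10000):
    print(N, round(eps_bound(N, delta=0.05, alpha=0.8, s_max=1.0), 4))
# The bound decays roughly like sqrt(log(N) / N) as the population grows.
```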
## IV Games over dynamic population
In this section, we generalize the previous setting by considering a dynamic population, where players randomly enter and leave the game from one repetition to the other. For example, in the online market setting introduced in Example 1, sellers might lack supply for their products or decide to not sell their items on certain days. This can be modeled in the following way. If a player does not participate in a particular iteration, then their strategy does not update in that iteration, resulting in dynamics where a subset of components are updated at a time, similarly to random coordinate descent dynamics.
Let \(P^{k}\in\{0,1\}^{N\times N}\) be a diagonal matrix such that the random variable \(P^{k}_{ii}\sim\mathrm{Ber}(\bar{P}_{ii})\) represents whether player \(i\) is participating (\(P^{k}_{ii}=1\)) in the game at iteration \(k\) or not (\(P^{k}_{ii}=0\)), with \(\bar{P}_{ii}>0\) for all \(i\in\mathcal{N}\). Then, the network realization over which the game is played at time \(k\) is \(A^{k}P^{k}\) (when \(P^{k}_{ii}=0\), the corresponding column in the adjacency matrix is set to zero such that player \(i\) does not affect any other players participating in the game).
Consider the following gradient dynamics for each player \(i\)
\[s_{i}^{k+1}\!=\!\begin{cases}\Pi_{\mathcal{S}_{i}}\Big{[}s_{i}^{k}-\frac{ \tau^{k}}{\bar{P}_{ii}}\nabla_{s_{i}}J_{i}(s_{i}^{k},s_{-i}^{k},A^{k}P^{k}) \Big{]}&\text{if }P^{k}_{ii}=1,\\ s_{i}^{k}&\text{if }P^{k}_{ii}=0,\end{cases} \tag{10}\]
where \(k\geq 0\), and \(s_{i}^{0}\in\mathcal{S}_{i}\). Note that in (10), each player \(i\) scales their gradient step by \(1/\bar{P}_{ii}\) to compensate for missing updates when \(P^{k}_{ii}=0\).
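The participation-aware update in (10) differs from (9) only through the random mask and the \(1/\bar{P}_{ii}\) rescaling. A vectorized sketch of one iteration for the scalar-strategy case with box constraints is given below (illustrative code with arbitrary data, not the implementation used in Section V).

```python
import numpy as np

def participation_step(s, A_k, p_k, p_bar, q, theta, alpha, tau_k, s_lo=0.0, s_hi=10.0):
    """One iteration of (10): only participating agents move, with a 1 / p_bar rescaling."""
    N = len(s)
    grad = q * s - theta - (alpha / N) * ((A_k * p_k) @ s)  # network A^k P^k: inactive columns zeroed
    s_new = np.clip(s - (tau_k / p_bar) * grad, s_lo, s_hi)
    return np.where(p_k == 1.0, s_new, s)                   # non-participants keep their strategy

# Example usage with Bernoulli participation.
rng = np.random.default_rng(2)
N = 5
s = np.ones(N)
A_k = (rng.random((N, N)) < 0.5).astype(float); np.fill_diagonal(A_k, 0)
p_bar = np.full(N, 0.9)
p_k = (rng.random(N) < p_bar).astype(float)                 # P^k_ii ~ Ber(0.9)
s = participation_step(s, A_k, p_k, p_bar, q=np.ones(N), theta=np.ones(N), alpha=0.5, tau_k=0.05)
print(s)
```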
Despite these random interruptions to the sequence of strategy updates, we show that the players learn to play the Nash equilibrium of the game \(\mathcal{G}(Q,\theta,\alpha,\bar{A}\bar{P})\) played over the expected network \(\bar{A}\bar{P}\). Similar to Proposition 1, the key step to prove this result is to rewrite the dynamics (10) as stochastic gradient dynamics converging to the aforementioned Nash equilibrium.
**Proposition 2**: _Suppose that Assumptions 1 (with \(\tilde{A}=\bar{A}\bar{P}\)) and 2 hold. If \(\|\theta_{i}\|\leq\theta_{\max}\) for all \(i\in\mathcal{N}\), then the gradient dynamics described in (10) converge to the unique Nash equilibrium of the LQ game played over the expected network \(\bar{A}\bar{P}\) almost surely, for any \(s^{0}\in\mathcal{S}\)._
Similar to Corollary 1, the Nash equilibrium of the expected network can be shown to be an approximate Nash equilibrium to the stage game with high probability.
**Corollary 2**: _Suppose that Assumption 1 (for \(\tilde{A}=\bar{A}\bar{P}\)) holds. Fix any \(k\geq 0\) and \(\delta>0\). Then, with probability at least \(1-\delta\), the Nash equilibrium of the LQ game played over the network \(\bar{A}\bar{P}\) is an \(\epsilon_{N,\delta}\)-Nash equilibrium of the stage game played over the network \(A^{k}P^{k}\), with_
\[\epsilon_{N,\delta}:=2|\alpha|s_{\max}^{2}\sqrt{\frac{n\ln\left(2nN/\delta \right)}{2N}}.\]
## V Numerical experiments
In this section, we present numerical experiments demonstrating the convergence of gradient dynamics for the dynamic pricing game presented in Example 1 for both the static and the dynamic population settings presented in Sections III and IV, respectively.
Consider a market with \(N\) sellers, where each seller is endowed with a product whose demand has price sensitivity \(\eta=1\). Let \(\alpha=0.8\) be the strength of network effects on
the demand of every product. We consider two categories of products, namely category \(1\) and category \(2\). The probability that a customer is interested in co-purchasing products from categories \(m\) and \(l\) is denoted by \(\bar{a}_{ml}\), where \(m,l\in\{1,2\}\). Thus, in the notation of Example 1, \(\bar{A}_{ij}=\bar{a}_{ml}\) if product \(i\) is in category \(m\) and product \(j\) is in category \(l\). For the numerical simulations in this section, we set these probabilities to be \(\bar{a}_{11}=\bar{a}_{22}=0.8\) and \(\bar{a}_{12}=\bar{a}_{21}=0.3\). Moreover, let \(\bar{d}_{1}=2\), \(\bar{d}_{2}=10\) be the maximum demands a customer can have for products in categories \(1\) and \(2\), respectively.
We first consider the case of static population, in which all sellers participate in the market every day. Each seller starts with an arbitrary initial price for their product (set to zero in our simulations). On each day \(k\), a new set of \(M=100\) customers arrives. The total demand \(d_{i}^{k}\) for product \(i\) on day \(k\) is determined by the realization of the co-purchasing preference matrix \(A^{k}\) sampled according to the matrix \(\bar{A}\). With the gradient of their cost function computed as in (8), seller \(i\) updates the price of their product following the projected gradient dynamics in (9) with step size \(\tau_{k}=1/(Lk)\). For various values of \(N\), we illustrate in Fig. 1 (top) the deviation of the resulting sequence of price profiles \(\{s^{k}\}_{k\geq 0}\) from the Nash equilibrium \(\bar{s}\) of the game played over the expected co-purchasing matrix \(\bar{A}\), averaged over \(100\) trials. It can be observed that the prices converge to the price profile \(\bar{s}\) for each \(N\), as predicted in Proposition 1.
For each day \(k\), we compute the largest _normalized suboptimality gap_ incurred across sellers defined as
\[\bar{\epsilon}_{k}:=\max_{i\in\mathcal{N}}\left|\frac{J_{i}(s_{i}^{k},s_{-i}^ {k},A^{k})-J_{i}^{*}(A^{k})}{J_{i}^{*}(A^{k})}\right| \tag{11}\]
where \(J_{i}^{*}(A^{k}):=\inf_{s\in\mathcal{S}_{i}}J_{i}(s,s_{-i}^{k},A^{k})\) is the cost incurred by seller \(i\) by best-responding to other sellers' prices. Fig. 1 (bottom) depicts the decrease in the gap \(\bar{\epsilon}_{k}\) for different population sizes as a function of \(k\). The oscillations that can be observed in Fig. 1 (bottom) are due to the variability of the network, and are smaller for larger population sizes.
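A sketch of how the gap (11) can be estimated numerically is given below; `cost` is a hypothetical placeholder for \(J_{i}(s_{i},s_{-i},A)\), and the best response \(J_{i}^{*}(A^{k})\) is approximated by a grid search over a one-dimensional strategy set \([0,s_{\max}]\), an assumption made only for this sketch.

```python
import numpy as np

def max_normalized_gap(s, A_k, cost, s_max, grid_size=200):
    """Largest normalized suboptimality gap (11) across sellers at iteration k."""
    grid = np.linspace(0.0, s_max, grid_size)
    gaps = []
    for i in range(len(s)):
        J_now = cost(i, s[i], s, A_k)
        # approximate J_i^*(A^k) = inf over S_i of J_i(., s_-i, A^k)
        J_best = min(cost(i, c, s, A_k) for c in grid)
        gaps.append(abs((J_now - J_best) / J_best))
    return max(gaps)
```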
Next, we consider the dynamic population case, where some sellers may not participate in the market every single day. Let \(\bar{p}=0.9\) be the probability that any seller \(i\in\mathcal{N}\) participates in the market on a given day. Thus, \(\bar{P}=\bar{p}\ \mathbb{I}_{N}\). In this setting, the sellers use the gradient dynamics in (10) with a step size of \(\tau_{k}=1/(Lk)\). The deviation between the obtained sequence of prices \(\{s^{k}\}_{k\geq 0}\) from the Nash equilibrium \(\bar{s}\) of the game played over the expected co-purchasing matrix \(\bar{A}\bar{P}\) is shown in Fig. 2 (top) for various values of \(N\), averaged over 100 trials. The slower convergence of the price profile relative to the static population case is expected since sellers randomly skip updating their prices on certain days. Similar to the static population case, Fig. 2 (bottom) presents the largest suboptimality gap across sellers for each iteration \(k\), averaged across 100 trials. A similar trend to Fig. 1 (bottom) can be observed with the decrease of \(\bar{\epsilon}_{k}\) as a function of \(N\), but with larger oscillations. The latter is also to be expected given that the network \(A^{k}P^{k}\) has more variability than the network \(A^{k}\).
Fig. 1: For dynamic network \(A^{k}\) and fixed population size \(N\). Top: Convergence of product prices \(s^{k}\) (averaged over 100 trials) to the Nash equilibrium price profile \(\bar{s}\) of the game played over the expected network \(\bar{A}\). Bottom: Maximum suboptimality gap (averaged over 100 trials) across sellers per iteration as defined in (11).
Fig. 2: For dynamic network \(A^{k}P^{k}\) and dynamic population. Top: Convergence of product prices \(s^{k}\) (averaged over 100 trials) to the Nash equilibrium price profile \(\bar{s}\) of the game played over the expected network \(\bar{A}\bar{P}\). Bottom: Maximum suboptimality gap (averaged over 100 trials) across sellers per iteration as defined in (11) with \(A^{k}P^{k}\) instead of \(A^{k}\).
## VI Conclusions
In this work, we have introduced a tractable framework for learning in network games with time-varying connectivity and dynamic populations. Several extensions and future research directions can be considered. For example, in this paper, we have restricted our attention to myopic learning dynamics. However, one could define a discounted cumulative/averaged payoff over all iterations and examine the outcome of forward-looking dynamics. Other interesting directions would be to extend the model to generic cost functions and evolving graphs that densify with time [39].
|
2309.13804 | Symmetric Functions over Finite Fields | The number of linear independent algebraic relations among elementary
symmetric polynomial functions over finite fields is computed. An algorithm
able to find all such relations is described. It is proved that the basis of
the ideal of algebraic relations found by the algorithm consists of polynomials
having coefficients in the prime field F_p. | Mihai Prunescu | 2023-09-25T01:17:38Z | http://arxiv.org/abs/2309.13804v1 | # Symmetric Functions over Finite Fields
###### Abstract
The number of linearly independent algebraic relations among elementary symmetric polynomial functions over finite fields is computed. An algorithm able to find all such relations is described. It is proved that the basis of the ideal of algebraic relations found by the algorithm consists of polynomials having coefficients in the prime field \(\mathbb{F}_{p}\).
A.M.S.-Classification: 14-04, 15A03.
## 1 Introduction
The interpolation problem for symmetric functions over finite fields does not have a unique solution in terms of elementary symmetric polynomials. That is why it is useful to know more about the set of solutions, as sometimes we need a solution which is easier to express or faster to evaluate. The general situation will be described in the lines below. Some steps are detailed in the following sections.
Let \(\mathbb{F}_{q}\) be a finite field of characteristic \(p\), a prime number. Let \(\mathbb{F}_{q}[X_{1},\ldots,X_{n}]\) be the \(\mathbb{F}_{q}\)-algebra of polynomials in variables \(X_{1},\ldots,X_{n}\) over \(\mathbb{F}_{q}\). The symmetric group \(S_{n}\) acts on \(\mathbb{F}_{q}[X_{1},\ldots,X_{n}]\) as a group of automorphisms of \(\mathbb{F}_{q}\)-algebras and the subalgebra \(Q\) of fixed points is usually called the ring of symmetric polynomials. The algebra \(Q\) is again a polynomial ring freely generated by the elementary symmetric polynomials
\[e_{i}=\sum_{1\leq j_{1}<j_{2}<\cdots<j_{i}\leq n}X_{j_{1}}\ldots X_{j_{i}}\]
for \(i=1,\ldots,n\). So \(Q=\mathbb{F}_{q}[e_{1},\ldots,e_{n}]\).
The vector space \(Q\) has a basis given by the monomial symmetric functions \(e_{1}^{k_{1}}\ldots e_{n}^{k_{n}}\).
There is an obvious homomorphism of \(\mathbb{F}_{q}\)-algebras from \(\mathbb{F}_{q}[X_{1},\ldots,X_{n}]\) to the \(\mathbb{F}_{q}\)-algebra \(\mathscr{F}(q,n)\) of functions from \(\mathbb{F}_{q}^{n}\) to \(\mathbb{F}_{q}\). Considering the natural action of \(S_{n}\) on \(\mathscr{F}(q,n)\) one can easily see that this homomorphism is \(S_{n}\)-equivariant, i. e. it commutes with the action of \(S_{n}\). From this it follows that \(Q\) maps to the subring \(\mathscr{S}(q,n)\) of the symmetric functions in \(\mathscr{F}(q,n)\).
Since \(x^{q}=x\) for all \(x\in\mathbb{F}_{q}\), it is clear that the above homomorphism factors through the quotient:
\[\mathbb{F}_{q}\{X_{1},\ldots,X_{n}\}=\mathbb{F}_{q}[X_{1},\ldots,X_{n}]/<X_{1 }^{q}-X_{1},\ldots,X_{n}^{q}-X_{n}>,\]
whose basis consists of those monomials whose exponents are smaller than \(q\). So \(\mathbb{F}_{q}\{X_{1},\ldots,X_{n}\}\) is finite of dimension \(q^{n}\) and has \(q^{q^{n}}\) elements. By using interpolation, one can deduce that the above homomorphism gives an isomorphism \(\mathbb{F}_{q}\{X_{1},\ldots,X_{n}\}\simeq\mathscr{F}(q,n)\). This will follow by a cardinality argument.
The ideal \(I=<X_{1}^{q}-X_{1},\ldots,X_{n}^{q}-X_{n}>\) is obviously stable under the action of \(S_{n}\). Hence \(\mathbb{F}_{q}\{X_{1},\ldots,X_{n}\}\) is an \(S_{n}\)-module for the previously given action, and the isomorphism \(\mathbb{F}_{q}\{X_{1},\ldots,X_{n}\}\simeq\mathscr{F}(q,n)\) is an isomorphism of \(S_{n}\)-modules. Under this map we see that the basis of \(Q\) made of monomial symmetric functions is mapped to a basis of the symmetric function algebra \(\mathscr{S}(q,n)\), so that the image of \(Q\) in \(\mathbb{F}_{q}\{X_{1},\ldots,X_{n}\}\) is isomorphic to \(\mathscr{S}(q,n)\).
If \(E_{1},\ldots,E_{n}\) are new variables, one can also consider the quotient
\[\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}=\mathbb{F}_{q}[E_{1},\ldots,E_{n}]/<E_{1 }^{q}-E_{1},\ldots,E_{n}^{q}-E_{n}>,\]
which is again finite of dimension \(q^{n}\) over \(\mathbb{F}_{q}\). Consider the embedding
\[\mathbb{F}_{q}[E_{1},\ldots,E_{n}]\to\mathbb{F}_{q}[X_{1},\ldots,X_{n}]\]
generated by the substitutions \(E_{k}\leadsto e_{k}(X_{1},\ldots,X_{n})\). This embedding descends to a homomorphism
\[\Psi:\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}\to\mathbb{F}_{q}\{X_{1},\ldots,X_{n} \}\simeq\mathscr{F}(q,n),\]
because the embedding transports \(J=<E_{1}^{q}-E_{1},\ldots,E_{n}^{q}-E_{n}>\) inside \(I\).
Let \(\vec{X}\) denote the tuple of variables \((X_{1},\ldots,X_{n})\). By diagram chasing one checks that \(\Psi(\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\})\) is the image of \(\mathbb{F}_{q}[e_{1}(\vec{X}),\ldots,e_{n}(\vec{X})]\) under the natural projection \(\mathbb{F}_{q}[X_{1},\ldots,X_{n}]\to\mathbb{F}_{q}\{X_{1},\ldots,X_{n}\}\). By standard combinatorics one knows that the dimension of the space of the symmetric functions \(\mathscr{S}(q,n)\) is:
\[\binom{n+q-1}{n}.\]
Summarising we have that \(\Psi(\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\})\) is isomorphic with the ring of symmetric functions \(\mathscr{S}(q,n)\) and therefore the dimension of the kernel of \(\Psi\) is:
\[q^{n}-\binom{n+q-1}{n}.\]
We call this kernel \(\mathscr{I}(q,n)\). This kernel can be seen as the set of all algebraic relations between elementary symmetric functions over a finite field. For example, if we consider the symmetric functions with two variables \(e_{1}=X_{1}+X_{2}\) and \(e_{2}=X_{1}X_{2}\) as polynomial functions \(\mathbb{F}_{2}\to\mathbb{F}_{2}\), then:
\[e_{1}e_{2}=(X_{1}+X_{2})X_{1}X_{2}=X_{1}^{2}X_{2}+X_{1}X_{2}^{2}=2X_{1}X_{2}=0,\]
so those functions fulfill the algebraic identity \(e_{1}e_{2}=0\). This does not happen when the symmetric functions are considered over \(\mathbb{C}\), because there the corresponding functions are known to be algebraically independent.
The aim of this article is to present an algorithm that computes a basis of the kernel \(\mathscr{I}(q,n)\) as an \(\mathbb{F}_{q}\)-vector space. The general Buchberger-Moller algorithm, see [3], finds a basis of the ideal in the sense of ring theory. The algorithm presented here finds a basis as a vector space. In practical applications, if someone finds an interpolation formula for some symmetric function and looks for a shorter definition, the basis as a vector space is more useful. Moreover, for this particular problem the algorithm given here works faster than the Buchberger-Moller algorithm.
## 2 Definitions and notations
Consider a finite field \(\mathbb{F}_{q}\) of characteristic \(p\). The elements of \(\mathbb{F}_{q}\) are ordered in some way, and this order is fixed. For the rest of the paper we fix a natural number \(n\geq 1\) and two sets of variables: \(E_{1}\),..., \(E_{n}\) and \(X_{1}\),..., \(X_{n}\).
**Definition 2.1**: The set \(\mathrm{Mon}(q,n)\) is the set of all monomials \(E_{1}^{\alpha_{1}}E_{2}^{\alpha_{2}}\ldots E_{n}^{\alpha_{n}}\) with \(0\leq\alpha_{i}<q\). There are \(q^{n}\) many such monomials.
**Definition 2.2**: Let \(\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}\) be the vector space over \(\mathbb{F}_{q}\) freely generated by \(\mathrm{Mon}(q,n)\).
\(\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}\) has dimension \(q^{n}\) over \(\mathbb{F}_{q}\). It has also a canonical structure of finite ring induced by the epimorphism:
\[s:\mathbb{F}_{q}[E_{1},\ldots,E_{n}]\longrightarrow\mathbb{F}_{q}\{E_{1}, \ldots,E_{n}\}\]
with \(\mathrm{Ker}(s)=(E_{1}^{q}-E_{1},\ldots,E_{n}^{q}-E_{n})\) as ideal in \(\mathbb{F}_{q}[E_{1},\ldots,E_{n}]\). Observe that \(\mathbb{F}_{q}\models\ \forall x\,x^{q}=x\).
**Definition 2.3**: For \(f:\mathbb{F}_{q}^{n}\rightarrow\mathbb{F}_{q}\) and a permutation \(\sigma\in S_{n}\) we define \(f^{\sigma}(\vec{X})=f(\sigma(\vec{X}))\) where \(\sigma(\vec{X})=(X_{\sigma(1)},\ldots,X_{\sigma(n)})\). The function \(f\) is called symmetric if for all \(\sigma\in S_{n}\), \(f=f^{\sigma}\).
**Definition 2.4**: Let \(\mathcal{F}(q,n)\) denote the set of all functions \(f:\mathbb{F}_{q}^{n}\rightarrow\mathbb{F}_{q}\) and \(\mathcal{S}(q,n)\subset\mathcal{F}(q,n)\) the subset of all symmetric functions. Both sets equipped with the point-wise operations are finite rings and finite vector spaces over \(\mathbb{F}_{q}\).
**Definition 2.5**: For every \(F\in\mathbb{F}_{q}[X_{1},\ldots,X_{n}]\) and \(\sigma\in S_{n}\) we define \(F^{\sigma}(\vec{X})=F(\sigma(\vec{X}))\), where \(\sigma(\vec{X})=(X_{\sigma(1)},\ldots,X_{\sigma(n)})\). \(F\) is called symmetric if for all \(\sigma\in S_{n}\), \(F^{\sigma}=F\).
**Definition 2.6**: For \(0\leq k\leq n\) denote by \(\mathcal{P}_{k}^{n}\) the set of \(k\)-element subsets of \(\{1,\ldots,n\}\). Recall that the elementary symmetric polynomials \(e_{k}(X_{1},\ldots,X_{n})\) are defined as:
\[e_{k}(X_{1},\ldots,X_{n})=\sum_{J\in\mathcal{P}_{k}^{n}}\prod_{i\in J}X_{i}.\]
**Definition 2.7**: Consider the function
\[\Phi:\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}\longrightarrow\mathcal{S}(q,n)\]
defined such that
\[\forall\vec{a}\in\mathbb{F}_{q}^{n}\ \Phi(f)(\vec{a})=f(e_{1}(\vec{a}),\ldots,e_{n} (\vec{a})),\]
where \(f\in\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}\). Here we understand \(E_{k}\) as a symbol for the elementary symmetric polynomial \(e_{k}\). \(\Phi\) is a well defined homomorphism of finite rings and finite \(\mathbb{F}_{q}\)-vector spaces.
**Definition 2.8**: The ideal \(\mathcal{I}(q,n)=\mathrm{Ker}(\Phi)\subset\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}\) is the **ideal of algebraic relations** between elementary symmetric functions over \(\mathbb{F}_{q}\).
\(\mathcal{I}(q,n)\) is also a vector subspace of \(\mathbb{F}_{q}\{e_{1},\ldots,e_{n}\}\).
We also consider the following chain of homomorphisms:
\[\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}\xrightarrow{\Psi}\mathbb{F}_{q}\{X_{1}, \ldots,X_{n}\}\xrightarrow{\Gamma}\mathcal{F}(q,n),\]
where \(\Psi(P)=P(\vec{e}(\vec{X}))\) is the substitution homomorphism and \(\Gamma\) is the homomorphism associating to every polynomial \(Q\) its polynomial function. Of course \(\Phi=\Gamma\circ\Psi\).
## 3 The number of algebraic relations
**Definition 3.1**: Let \(\mathrm{WM}(q,n)\) be the set of all weakly monotone increasing tuples \((a_{1},\ldots,a_{n})\) in \(\mathbb{F}_{q}\) according to the fixed order. Denote by \(\mathrm{wm}(q,n)\) the cardinality of the set \(\mathrm{WM}(q,n)\).
**Lemma 3.2**: \[\dim_{\mathbb{F}_{q}}\mathcal{S}(q,n)=\mathrm{wm}(q,n)=\binom{n+q-1}{n}.\]
**Proof**: In order to define an \(f\in\mathcal{S}(q,n)\), it is enough to define its values for every \(\overline{\imath}\in\mathrm{WM}(q,n)\). The number of such tuples is the same as the number of ways to put \(n\) indistinguishable balls in \(q\) numbered urns, see [2]. As done there, the \(q\) urns can be represented by \(q+1\) vertical bars and the \(n\) balls by \(n\) circles. A possible distribution is as follows:
\[|\circ||\circ\circ||\]
Because the first and the last symbols in the distribution must be bars, we have to distribute the \(n\) circles among the remaining \(n+q-1\) positions and to fill the other positions with bars. There are
\[\binom{n+q-1}{n}\]
possibilities to do so. \(\Box\)
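A quick numeric check of the count \(\mathrm{wm}(q,n)\) from Lemma 3.2, together with the resulting dimension \(q^{n}-\mathrm{wm}(q,n)\) of \(\mathcal{I}(q,n)\), can be done as in the sketch below (nothing beyond the definitions is assumed):

```python
from itertools import combinations_with_replacement
from math import comb

def wm(q, n):
    # weakly monotone tuples over a fixed ordering of the q field elements
    return sum(1 for _ in combinations_with_replacement(range(q), n))

for q, n in [(2, 2), (2, 3), (3, 3), (5, 5), (7, 4)]:
    assert wm(q, n) == comb(n + q - 1, n)
    print(q, n, wm(q, n), q**n - comb(n + q - 1, n))   # wm(q, n) and dim I(q, n)
```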
**Definition 3.3**: Consider the following matrix \(M(q,n)\in\mathcal{M}(\mathrm{wm}(q,n)\times q^{n},\mathbb{F}_{q})\). The rows of \(M(q,n)\) are indexed using the tuples \(\overline{\imath}\in\mathrm{WM}(q,n)\), the columns are indexed using the monomials \(m\in\mathrm{Mon}(q,n)\), and if \(M(q,n)=(a(\overline{\imath},m)\mid\overline{\imath}\in\mathrm{WM}(q,n),m\in \mathrm{Mon}(q,n))\),
\[a(\overline{\imath},m)=[\Phi(m)](\overline{\imath})=e_{1}^{\alpha_{1}}( \overline{\imath})\ldots e_{n}^{\alpha_{n}}(\overline{\imath}),\]
where \(m=E_{1}^{\alpha_{1}}\ldots E_{n}^{\alpha_{n}}\).
**Lemma 3.4**: _The morphism \(\Phi:\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}\to\mathcal{S}(q,n)\) with \(\mathrm{Ker}\,\Phi=\mathcal{I}(q,n)\) is surjective._
**Proof**: The proof consists of two steps.
**Step 1**: Let \(f:\mathbb{F}_{q}^{n}\to\mathbb{F}_{q}\) be some function, for the moment not necessarily symmetric. For \(a\in\mathbb{F}_{q}\) define the polynomial \(h_{a}\in\mathbb{F}_{q}[X]\):
\[h_{a}(X)=\prod_{\lambda\in\mathbb{F}_{q}\setminus\{a\}}\frac{X-\lambda}{a- \lambda}.\]
Observe that \(h_{a}(a)=1\) and \(h_{a}(\mathbb{F}_{q}\setminus\{a\})=0\). For a tuple \(\vec{a}\in\mathbb{F}_{q}^{n}\) define \(h_{\vec{a}}\in\mathbb{F}_{q}[X_{1},\ldots,X_{n}]\):
\[h_{\vec{a}}(\vec{X})=h_{a_{1}}(X_{1})\ldots h_{a_{n}}(X_{n}).\]
A polynomial interpolating \(f\) is:
\[H(\vec{X})=\sum_{\vec{a}\in\mathbb{F}_{q}^{n}}h_{\vec{a}}(\vec{X})f(\vec{a}).\]
We observe that for all \(\sigma\in S_{n}\), \((h_{\vec{a}})^{\sigma}=h_{\sigma^{-1}(\vec{a})}\). If the function \(f\) is symmetric, then \(f^{\sigma}=f\) and it follows:
\[H^{\sigma}(\vec{X})=\sum_{\vec{a}\in\mathbb{F}_{q}^{n}}h_{\vec{a}}^{\sigma}( \vec{X})f(\vec{a})=\sum_{\sigma^{-1}(\vec{a})\in\mathbb{F}_{q}^{n}}h_{\sigma^{ -1}(\vec{a})}(\vec{X})f(\sigma^{-1}(\vec{a}))=H(\vec{X}).\]
We proved that the interpolation algorithm applied to a symmetric function leads to a symmetric polynomial. Observe also that all exponents occurring in \(H\) are \(<q\).
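The following is a minimal sketch of Step 1, restricted to prime \(q=p\) so that field arithmetic is plain integer arithmetic modulo \(p\); for non-prime \(q\) one would substitute arithmetic in \(\mathbb{F}_{q}\). The interpolating polynomial is represented only through the function it defines.

```python
from itertools import product

def h(a, x, p):
    # h_a(x) = prod_{lam != a} (x - lam) / (a - lam) in F_p; equals 1 at x = a, 0 elsewhere
    num, den = 1, 1
    for lam in range(p):
        if lam != a:
            num = num * (x - lam) % p
            den = den * (a - lam) % p
    return num * pow(den, -1, p) % p

def interpolate(f, n, p):
    """Return the function on F_p^n defined by H(X) = sum_a h_a(X) f(a)."""
    points = list(product(range(p), repeat=n))
    def H(x):
        total = 0
        for a in points:
            w = f(a)
            for ai, xi in zip(a, x):
                w = w * h(ai, xi, p) % p
            total = (total + w) % p
        return total
    return H

# sanity check: interpolating a (symmetric) function reproduces its values
f = lambda x: (x[0] * x[1] + x[0] + x[1]) % 3
H = interpolate(f, 2, 3)
assert all(H(x) == f(x) for x in product(range(3), repeat=2))
```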
**Step \(2\)**: We repeat the argument that a symmetric polynomial is a polynomial in elementary symmetric polynomials as given in [6] and reformulated in [5]. The following total order is defined over the set of monomials in \(\vec{X}\): \(X_{1}^{\alpha_{1}}\ldots X_{n}^{\alpha_{n}}<X_{1}^{\beta_{1}}\ldots X_{n}^{\beta_{n}}\) if and only if \(\sum\alpha_{i}<\sum\beta_{i}\) or \(\sum\alpha_{i}=\sum\beta_{i}\) but \((\alpha_{i})<(\beta_{i})\) lexicographically. This is the graded lexicographic order. For a polynomial \(H(\vec{X})\in\mathbb{F}_{q}[\vec{X}]\) define \(\mathrm{Init}(H)\) to be the maximal monomial occurring in \(H\), according to this order. It follows from symmetry that \(\mathrm{Init}(H)\) has the form \(cX_{1}^{\gamma_{1}}\ldots X_{n}^{\gamma_{n}}\) with \(\gamma_{1}\geq\gamma_{2}\geq\cdots\geq\gamma_{n}\), where \(c\in\mathbb{F}_{q}\setminus\{0\}\). Consider the polynomial:
\[H_{1}(\vec{X})=H(\vec{X})-ce_{1}(\vec{X})^{\gamma_{1}-\gamma_{2}}e_{2}(\vec{X})^{\gamma_{2}-\gamma_{3}}\ldots e_{n-1}(\vec{X})^{\gamma_{n-1}-\gamma_{n}}e_{n}(\vec{X})^{\gamma_{n}}.\]
Observe that \(\mathrm{Init}(H_{1})<\mathrm{Init}(H)\). Continue by constructing in the same way a polynomial \(H_{2}\) with \(\mathrm{Init}(H_{2})<\mathrm{Init}(H_{1})\), and so on. This process ends in finitely many steps. Adding the \(e_{i}\)-monomials subtracted during the process, one gets a polynomial \(F\in\mathbb{F}_{q}[e_{1},\ldots,e_{n}]\) with the property that for all \(\vec{a}\in\mathbb{F}_{q}^{n}\), \(F(e_{1}(\vec{a}),\ldots,e_{n}(\vec{a}))=f(\vec{a})\). Finally, observe that all exponents occurring in \(F\) are again \(<q\), so \(F\in\mathbb{F}_{q}\{e_{1},\ldots,e_{n}\}\) and \(\Phi(F)=f\).
\(\Box\)
**Theorem 3.5**: _The rank of the matrix \(M(q,n)\) is maximal:_
\[\mathrm{rank}\;M(q,n)=\mathrm{wm}(q,n)=\binom{n+q-1}{n}.\]
_The dimension of the ideal \(\mathcal{I}(q,n)\) of algebraic relations as a vector space over \(\mathbb{F}_{q}\) is:_
\[\dim_{\mathbb{F}_{q}}\mathcal{I}(q,n)=q^{n}-\mathrm{wm}(q,n)=q^{n}-\binom{n+q -1}{n}.\]
**Proof**: Follows directly from Lemma 3.2 and Lemma 3.4. \(\Box\)
**Remark 3.6**: If \(q\) is kept constant and \(n\to\infty\),
\[\frac{\dim_{\mathbb{F}_{q}}\mathcal{I}(q,n)}{\dim_{\mathbb{F}_{q}}\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}}\;\longrightarrow\;1.\]
If \(n\) is kept constant and \(q\to\infty\),
\[\frac{\dim_{\mathbb{F}_{q}}\mathcal{I}(q,n)}{\dim_{\mathbb{F}_{q}}\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}}\;\longrightarrow\;1-\frac{1}{n!}.\]
Indeed, if \(q\) is kept constant and \(n\to\infty\),
\[\frac{q^{n}-\binom{n+q-1}{n}}{q^{n}}=1-\frac{1}{(q-1)!}\frac{(n+1)\ldots(n+q-1 )}{q^{n}}\;\longrightarrow\;1.\]
If \(n\) is kept constant and \(q\to\infty\),
\[\frac{q^{n}-\binom{n+q-1}{n}}{q^{n}}=1-\frac{1}{n!}\frac{q(q+1)\ldots(q+n-1)}{q^{n }}\ \longrightarrow\ 1-\frac{1}{n!}.\]
**Remark 3.7**: Define the set of non-monotone tuples \(\operatorname{NM}(q,n)\) as the set of tuples \((a_{1},\ldots,a_{n})\) with \(a_{i}\in\mathbb{F}_{q}\) such that there are \(1\leq i<j\leq n\) with \(a_{i}>a_{j}\) according to the order fixed on \(\mathbb{F}_{q}\). Let \(\operatorname{nm}(q,n)\) be the cardinality of the set \(\operatorname{NM}(q,n)\). According to Theorem 3.5, \(\mathcal{I}(q,n)\) has dimension \(\operatorname{nm}(q,n)\). But is there any natural correspondence between \(\operatorname{NM}(q,n)\) and a basis of the vector space \(\mathcal{I}(q,n)\)? As far as I know, this problem is open.
**Remark 3.8**: Recall the chain of homomorphisms:
\[\mathbb{F}_{q}\{E_{1},\ldots,E_{n}\}\xrightarrow{\Psi}\mathbb{F}_{q}\{X_{1},\ldots,X_{n}\}\xrightarrow{\Gamma}\mathcal{F}(q,n),\]
where \(\Psi(P)=P(\vec{e}(\vec{X}))\) is the substitution homomorphism and \(\Gamma\) is the homomorphism associating to every polynomial \(Q\) its polynomial function, and \(\Phi=\Gamma\circ\Psi\). Using the interpolation part of the proof of Lemma 3.4 one sees that \(\Gamma\) is an isomorphism of rings and vector spaces. Indeed, \(\Gamma\) is a surjective homomorphism and both rings have \(q^{q^{n}}\) elements.
**Corollary 3.9**: \(\operatorname{Ker}\Psi=\mathcal{I}(q,n)\) _and \(\operatorname{Im}\Psi=\Gamma^{-1}(\mathcal{S}(q,n))\). Consequently, the subring of symmetric polynomials in \(\mathbb{F}_{q}\{X_{1},\ldots,X_{n}\}\) is a vector space of dimension \(\operatorname{wm}(q,n)\) over \(\mathbb{F}_{q}\)._
## 4 Deduction procedure
Before presenting the specific algorithm that generates a basis of the ideal of algebraic identities, I briefly recall the Gauss Algorithm over some field \(K\).
**Gauss Algorithm**: Consider the extended matrix \(M\) of a system of \(e\) many linear equations in \(v\) unknowns. The matrix has \(e\) rows and \(v+1\) columns. The algorithm finds out if the system is solvable. It also finds out on how many parameters the solution depends and generates a parametric solution of the system.
At the beginning one initializes an array \(\pi\) with the values \(\pi(0)=0\),..., \(\pi(v-1)=v-1\). This array will accumulate the total permutation of columns (unknowns) produced by the algorithm.
Let \(r\) be a variable in which the algorithm computes the rank of the restricted matrix of the system. Let \(m=\min(e,v)\). The Gauss Algorithm is a repetition of Gauss Steps according to a variable \(s\) running from \(0\) to \(m-1\) inclusively. The Gauss Step for the current \(s\) is performed; if it returns _false_, the loop is broken, otherwise \(r\) is increased by \(1\).
The Gauss Step of \(s\) works as follows. In every column \(c\) with \(s\leq c<v\) one looks for a row \(w\) with \(s\leq w<e\) such that \(M(w,c)\neq 0\). If no such element is found, the Gauss Step returns _false_ and stops. Once such an element is found: if \(c\neq s\), the columns \(c\) and \(s\) are inter-changed, and also the values of \(\pi(s)\) and \(\pi(c)\) are inter-changed. Further, if \(s\neq w\), the rows \(w\) and \(s\) are inter-changed. Let now \(a=M(s,s)\). Then the row \(s\) is divided by \(a\). Further, from every row \(d\) with \(d>s\) one subtracts the row \(s\) multiplied by the field value \(b=M(d,s)\). Because of this, all elements of the column \(s\) which are below \(M(s,s)\) become \(0\). Finally, the Gauss Step returns _true_.
The decision on the number of solutions is made as follows. If there is an element \(M(i,v)\neq 0\) with \(i\geq r\) (a row beyond the rank with a non-zero right-hand side), there are no solutions. Otherwise, if the restricted rank \(r\) is equal to the number of unknowns \(v\), there is exactly one solution. Otherwise, the unknowns which correspond now, after all permutations of columns (unknowns), to the columns \(r\), \(r+1\),..., \(v-1\) are independent parameters. The other unknowns are computed successively by back substitution, starting with the unknown corresponding to column \(r-1\) after the permutation, continuing with the unknown corresponding to column \(r-2\), and finally ending with the unknown corresponding to column \(0\).
\(\Box\)
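A compact sketch of this procedure for a homogeneous system over a prime field \(\mathbb{F}_{p}\) is given below. For simplicity it pivots by searching rows only, without the column interchanges and the permutation array \(\pi\) described above; the computed solution space is the same, and one basis vector is produced per independent parameter.

```python
def nullspace_mod_p(M, p):
    """Basis of the solution space of M y = 0 over F_p (M given as a list of rows)."""
    M = [[x % p for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_cols, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, p)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        pivot_cols.append(c)
        r += 1
        if r == rows:
            break
    free_cols = [c for c in range(cols) if c not in pivot_cols]
    basis = []
    for fc in free_cols:                  # one basis vector per parameter (set t = 1)
        v = [0] * cols
        v[fc] = 1
        for i, pc in enumerate(pivot_cols):
            v[pc] = (-M[i][fc]) % p
        basis.append(v)
    return basis
```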
The following algorithm is able to find a basis over \(\mathbb{F}_{q}\) for the vector space \(\mathcal{I}(q,n)\) of algebraic relations between elementary symmetric functions over \(\mathbb{F}_{q}\).
1. Consider \(q^{n}\) many new unknowns \(Y_{m}\) indexed using the set \(\mathrm{Mon}(q,n)\), and the following homogenous system \(\Sigma\) of \(\mathrm{wm}(q,n)\) many linear equations indexed using the set \(\mathrm{WM}(q,n)\): \[(\overline{t}):\quad\sum_{m\in\mathrm{Mon}(q,n)}[\Phi(m)](\overline{t})Y_{m}=0.\] The matrix of this linear homogenous system is the matrix \(M(q,n)\) defined in the previous section. One sees that for any polynomial \(P\in\mathbb{F}_{q}\{e_{1},\ldots,e_{n}\}\) following holds: \[P(\vec{e})=\sum_{m\in\mathrm{Mon}(q,n)}y_{m}\;m\;(\vec{e})\;\;\in\;\mathcal{I }(q,n)\;\Leftrightarrow\;(y_{m})\in(\mathbb{F}_{q})^{q^{n}}\text{ satisfies }\Sigma.\]
2. Using Gauss' Algorithm over \(\mathbb{F}_{q}\) transform \(M(q,n)\) in an upper triangular matrix. Recall that \(M(q,n)\) has maximal rank equal with \(\mathrm{wm}(q,n)\).
3. Introduce a tuple \(\vec{t}\) of \(q^{n}-\mathrm{wm}(q,n)\) many new parameters and compute the parametric solution of \(\Sigma\), consisting of linear functions in \(\overline{t}\): \[(Y_{m}(\overline{t}))_{m\in\mathrm{Mon}(q,n)}.\]
4. Using the equivalence from (1) one has that: \[\mathcal{I}(q,n)=\Big{\{}\sum_{m\in\mathrm{Mon}(q,n)}Y_{m}(\overline{t})\;\;m \;\mid\;\overline{t}\in(\mathbb{F}_{q})^{q^{n}-\mathrm{wm}(q,n)}\;\Big{\}}.\] For \(i=1,\ldots,q^{n}-\mathrm{wm}(q,n)\) set the parameter tuple \(\vec{t}_{i}=(0,0,\ldots,t_{i}=1,0,\ldots,0)\) in the general form \[P_{\vec{t}}=\sum_{m\in\mathrm{Mon}(q,n)}Y_{m}(\overline{t})m\] and call the result of the substitution \(P_{i}=P_{\vec{t}_{i}}\). The mapping \(\vec{t}\leadsto P_{\vec{t}}\) is an isomorphism of vector spaces over \(\mathbb{F}_{q}\) because it is a surjective linear mapping between vector spaces of equal dimensions. As an isomorphism, this mapping transports basis to basis, so \(\{\,P_{i}\mid i=1,\ldots,q^{n}-\mathrm{wm}(q,n)\,\}\) is a basis of the vector space \(\mathcal{I}(q,n)\) over \(\mathbb{F}_{q}\).
The fact that this algorithm finds a basis of the ideal of identities of symmetric polynomials follows directly from the theory developed above and from its description. It remains to show only that the
coefficients of the basis polynomials always belong to the base field. This shall be shown in the next Section. Here some examples will be given.
**Example**. \(\mathbb{F}_{2}[X_{1},X_{2}]\): \(\dim_{\mathbb{F}_{2}}\mathcal{I}(2,2)=2^{2}-\mathrm{wm}(2,2)=4-3=1\). The monomial basis of the vector space \(\mathbb{F}_{2}\{E_{1},E_{2}\}\) is the set \(\{1,E_{1},E_{2},E_{1}E_{2}\}\). The matrix \(M(2,2)\) is the following:
\[\begin{array}{c|cccc}&1&e_{1}&e_{2}&e_{1}e_{2}\\ \hline 00&1&0&0&0\\ 01&1&1&0&0\\ 11&1&0&1&0\\ \end{array}\]
Linear variables \(\{Y_{1},Y_{2},Y_{3},Y_{4}\}\) are associated to the columns. The linear system of equations over \(\mathbb{F}_{2}\):
\[Y_{1} = 0,\] \[Y_{1}+Y_{2} = 0,\] \[Y_{1}+Y_{3} = 0,\]
has the parametric solution \((Y_{1},Y_{2},Y_{3},Y_{4})=(0,0,0,t)\), where \(t\in\mathbb{F}_{2}\) is a parameter. It follows that:
\[\mathcal{I}(2,2)=\{te_{1}e_{2}\,|\,t\in\mathbb{F}_{2}\}=\{0,e_{1}e_{2}\}.\]
The only one nontrivial algebraic relation between elementary symmetric functions is in this case the following:
\[e_{1}e_{2} = 0.\]
We verified in the introduction that \(e_{1}e_{2}=0\) as a function.
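The example can also be confirmed by brute force; the following sketch enumerates all 16 coefficient vectors and keeps those whose polynomial vanishes as a function on \(\mathbb{F}_{2}^{2}\).

```python
from itertools import product

def vanishes(c, p=2):
    # c = (c0, c1, c2, c3) are the coefficients of c0 + c1*e1 + c2*e2 + c3*e1*e2
    for x1, x2 in product(range(p), repeat=2):
        e1, e2 = (x1 + x2) % p, (x1 * x2) % p
        if (c[0] + c[1] * e1 + c[2] * e2 + c[3] * e1 * e2) % p != 0:
            return False
    return True

kernel = [c for c in product(range(2), repeat=4) if vanishes(c)]
print(kernel)   # [(0, 0, 0, 0), (0, 0, 0, 1)] -- the only nontrivial relation is e1*e2 = 0
```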
**Example**. \(\mathbb{F}_{2}[X_{1},X_{2},X_{3}]\): \(\dim_{\mathbb{F}_{2}}\mathcal{I}(2,3)=2^{3}-\mathrm{wm}(2,3)=8-4=4\). The monomial basis of the vector space \(\mathbb{F}_{2}\{E_{1},E_{2},E_{3}\}\) is the set \(\{1,E_{1},E_{2},E_{3},E_{1}E_{2},E_{1}E_{3},E_{2}E_{3},E_{1}E_{2}E_{3}\}\). The matrix \(M(2,3)\) is the following:
\[\begin{array}{c|cccccccc}&1&e_{1}&e_{2}&e_{3}&e_{1}e_{2}&e_{1}e_{3}&e_{2}e_{ 3}&e_{1}e_{2}e_{3}\\ \hline 000&1&0&0&0&0&0&0\\ 001&1&1&0&0&0&0&0\\ 011&1&0&1&0&0&0&0\\ 111&1&1&1&1&1&1&1\\ \end{array}\]
Gauss' Algorithm produces the following matrix:
\[\begin{array}{cccccccc}1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&1&1&1&1\\ \end{array}\]
Linear variables \(\{Y_{1},Y_{2},Y_{3},Y_{4},Y_{5},Y_{6},Y_{7},Y_{8}\}\) are associated to the columns. The linear system of equations over \(\mathbb{F}_{2}\) has the parametric solution:
\[(Y_{1},Y_{2},Y_{3},Y_{4},Y_{5},Y_{6},Y_{7},Y_{8})=(0,0,0,t_{1}+t_{2}+t_{3}+t_ {4},t_{1},t_{2},t_{3},t_{4}),\]
where \(t_{1},t_{2},t_{3},t_{4}\in\mathbb{F}_{2}\) are parameters. It follows that:
\[\mathcal{I}(2,3)=\{(t_{1}+t_{2}+t_{3}+t_{4})e_{3}+t_{1}e_{1}e_{2}+t_{2}e_{1}e_{3} +t_{3}e_{2}e_{3}+t_{4}e_{1}e_{2}e_{3}\,|\,t_{1},t_{2},t_{3},t_{4}\in\mathbb{F}_{ 2}\}.\]
By replacing the parameter vector \((t_{1},t_{2},t_{3},t_{4})\) with the unit vectors \((1,0,0,0)\), \(\ldots\), \((0,0,0,1)\), following basis of linear independent algebraic relations is found:
\[e_{3}+e_{2}e_{3} = 0\] \[e_{3}+e_{1}e_{3} = 0\] \[e_{3}+e_{1}e_{2} = 0\] \[e_{3}+e_{1}e_{2}e_{3} = 0\]
**Remark 4.1**: In this algorithm, we can skip the line corresponding to the tuple \((0,0,\ldots,0)\) and the column corresponding to the monomial \(1\). This does not change the result.
**Remark 4.2**: If \(n=1\), the monomials are \(\{1,X,X^{2},\ldots,X^{q-1}\}\) and the tuples are the \(q\) elements of \(\mathbb{F}_{q}\). It follows that the matrix \(M(q,1)\) is nothing but the Vandermonde matrix over the field \(\mathbb{F}_{q}\). The ideal \(\mathcal{I}(q,1)\) is always \(0\).
## 5 Coefficients in \(\mathbb{F}_{p}\)
**Theorem 5.1**: _Let \(\mathbb{F}_{q}\) be some finite field, \(p=\mathrm{char}\,\,\mathbb{F}_{q}\) and \(n\geq 2\). Then the algorithm presented above finds a basis of \(\mathcal{I}(q,n)\) consisting of polynomials with coefficients in the prime field \(\mathbb{F}_{p}\). In particular, such a basis always exists._
Before proving Theorem 5.1, we must fix some notation concerning Gauss' Algorithm.
**Definition**: For \(i<j\) we define:
\(A(i,j)\) means that the equation \(i\), multiplied with an appropriate element, is added to equation \(j\). The element is chosen such that the first non-zero coefficient in the equation \(j\) becomes \(0\).
\(L(i,j)\) means that equations (lines) \(i\) and \(j\) are inter-changed.
\(C(i,j)\) means that the columns \(i\) and \(j\) are inter-changed.
**Definition**: Here is step number \(i\) of the deterministic Gauss' Algorithm. If \(a_{i,i}=0\) and the whole line \(i\) consists only of zeros, find the first \(j>i\) such that the line \(j\) contains non-zero elements, and apply \(L(i,j)\). If \(a_{i,i}\) continues to be zero, find the first \(k>i\) such that \(a_{i,k}\neq 0\) and apply \(C(i,k)\). Now, for each \(r>i\): if the element \(a_{r,i}\) below \(a_{i,i}\) is non-zero, apply \(A(i,r)\).
**Definition**: Let \(K\) be some field and \(S\), \(S^{\prime}\) two systems of linear equations over \(K\); both of them consisting of \(e\) equations with \(u\) unknowns. We say that Gauss' Algorithm works in parallel for systems \(S\) and \(S^{\prime}\) if the application of Gauss' Algorithm on the systems leads to the same sequence of operations \(O_{1},O_{2},\ldots,O_{s}\) in the above notation. (For example, \(O_{1}=A(1,2)\),..., \(O_{s}=L(3,4)\), \(O_{s+1}=C(3,5)\),... and so on.)
**Lemma 5.2**: _Let \(K\) be a field and \(S\), \(S^{\prime}\) two homogenous systems of linear equations over \(K\), satisfying the following conditions:_
1. \(S\) _and_ \(S^{\prime}\) _have both_ \(e\) _equations and_ \(u\) _unknowns,_ \(k=u-e\geq 0\) _and_ \(\operatorname{rank}S=e\)_._
2. \(S^{\prime}\) _has been obtained from_ \(S\) _by some permutation of lines (equations)._
3. _Gauss' Algorithm works in parallel over_ \(S\) _and_ \(S^{\prime}\)_._
_In this situation, Gauss' Algorithm independently applied for the systems \(S\) and \(S^{\prime}\) computes the same parametrization of the common space of solutions:_
\[\forall\ i=1,\ldots,u\quad Y_{i}(\vec{t})=Y_{i}^{\prime}(\vec{t}),\]
_where \(\vec{t}=(t_{1},t_{2},\ldots,t_{k})\) is the tuple of parameters._
**Proof**: Let \(M\vec{Y}=\vec{0}\) be the system \(S\) of linear equations as in the statement. Running Gauss' Algorithm forth and back, we find that the columns with indexes \(i_{1},\ldots,i_{e}\) build a non-singular \(e\times e\) minor \(A\). Let \(j_{1},\ldots,j_{k}\) be the indexes of the remaining columns. For \(s=1,\ldots,k\), we set \(Y_{j_{s}}=t_{s}\), where \(t_{1},\ldots,t_{k}\) are independent parameters.
The parametric solution is obtained from:
\[A\begin{pmatrix}Y_{i_{1}}\\ \vdots\\ Y_{i_{e}}\end{pmatrix}=-\sum_{s=1}^{k}\vec{c}_{j_{s}}t_{s},\]
where \(\vec{c}_{j_{s}}\) are the column of \(M\) left outside \(A\). The parametric solution has the form:
\[\begin{pmatrix}Y_{i_{1}}\\ \vdots\\ Y_{i_{e}}\end{pmatrix}=\sum_{s=1}^{k}\vec{d}_{j_{s}}t_{s},\]
completed with the \(k\) many \(Y_{j_{s}}=t_{s}\) already done. Here the column vector \(\vec{d}_{j_{s}}\) is the unique solution of the system of linear equations:
\[A\begin{pmatrix}Y_{i_{1}}\\ \vdots\\ Y_{i_{e}}\end{pmatrix}=-\vec{c}_{j_{s}}.\]
Now consider the system \(S^{\prime}\) of linear equations \(M^{\prime}\vec{Y}=\vec{0}\). As Gauss' Algorithm works in parallel for the systems \(S\) and \(S^{\prime}\), by running it forth and back, we find the same columns \(i_{1},\ldots,i_{e}\) to build a non-singular minor \(A^{\prime}\). But the matrix \(M^{\prime}\) consists of the same lines as \(M\) in a permuted order, and the same permutation is used to get \(A^{\prime}\) from \(A\) and \(\vec{c}_{j_{s}}^{\prime}\) from \(\vec{c}_{j_{s}}\). The parametric solution is again:
\[\begin{pmatrix}Y_{i_{1}}\\ \vdots\\ Y_{i_{e}}\end{pmatrix}=\sum_{s=1}^{k}\vec{d}_{j_{s}}t_{s},\]
completed with the \(k\) many \(Y_{j_{s}}=t_{s}\), where the column vector \(\vec{d}_{j_{s}}\) is the unique solution of the system of linear equations:
\[A^{\prime}\begin{pmatrix}Y_{i_{1}}\\ \vdots\\ Y_{i_{e}}\end{pmatrix}=-\vec{c}_{j_{s}}^{\prime}.\]
This is the same system as the previous one, up to the order in which the equations are displayed, and permuting the equations of a system with a unique solution does not change this solution. \(\Box\)
**Proof of the Theorem 5.1**: If \(\mathbb{F}_{q}=\mathbb{F}_{p}\) there is nothing to prove. Consider some automorphism \(\varphi\in\mathrm{Gal}(\mathbb{F}_{q}/\mathbb{F}_{p})\). The fact that \(\varphi\) is a power of Frobenius' Automorphism is not relevant here. Consider the system \(\Sigma\) used in the algorithm given above and the system \(\Sigma^{\prime}=\varphi(\Sigma)\).
_Claim 1: \(\Sigma^{\prime}\) can be obtained from \(\Sigma\) by some permutation of equations._ Indeed, the elements of the matrix \(\varphi(M(q,n))\) are \(\varphi(a(\overline{t},m))=\varphi([\Phi(m)](\overline{t}))=[\Phi(m)](\varphi(\overline{t}))\). The automorphism \(\varphi\) is a permutation of \(\mathbb{F}_{q}\). Consequently, the line formerly indexed \(\overline{t}\) shall be found in \(\Sigma^{\prime}\) at the index obtained by the weakly monotone reordering of the tuple \(\varphi(\overline{t})\). As the coefficients of the system are values of symmetric functions for this tuple, the order inside the tuple does not matter.
_Claim 2: Gauss' Algorithm works in parallel over \(\Sigma\) and \(\Sigma^{\prime}\)._ The coefficients in \(\Sigma^{\prime}\) are images of corresponding coefficients in \(\Sigma\) by \(\varphi\). This situation remains true after every computation step done by Gauss Algorithm. In particular, at every step one has in both systems \(\Sigma\) and \(\Sigma^{\prime}\) the same situation concerning coefficients (matrix entries) which are zero or not. So after every step, the same decision concerning the next step shall be taken: an \(A(i,j)\) or a \(C(i,j)\).
So all conditions required by Lemma 5.2 are satisfied, and Gauss' Algorithm computes the same parametrization \((Y_{i}(\overline{t}))\) for both systems \(\Sigma\) and \(\varphi(\Sigma)\). So for all \(i\), \(Y_{i}(\overline{t})=\varphi(Y_{i}(\overline{t}))\), and this takes place for all automorphisms \(\varphi\in\mathrm{Gal}(\mathbb{F}_{q}/\mathbb{F}_{p})\). It follows that the linear functions \(Y_{i}(\overline{t})\) have coefficients in \(\mathbb{F}_{p}\), and the same is true for the basis of \(\mathcal{I}(q,n)\) found by the algorithm. \(\Box\)
## 6 Further examples
This algorithm has been implemented by the author in C# on Visual Studio, and run for the fields \(\mathbb{F}_{q}\) with \(q\in\{2,3,4,5,7,8,9,11,16,25,27,49,81\}\) and values of \(n\leq 5\). The implementation uses the fact that all the elementary symmetric polynomials in \(x_{1},\ldots,x_{n}\) can be computed at once in quadratic time and linear space. We show here only some examples.
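One standard way to obtain all elementary symmetric polynomials in quadratic time and linear space is to maintain the coefficients of \(\prod_{i}(T+x_{i})\); the sketch below does this modulo a prime \(p\) (for non-prime \(q\) the same recursion runs with \(\mathbb{F}_{q}\) arithmetic). Whether this matches the author's C# implementation is not stated; it is given only as one possible realization.

```python
def elementary_symmetric(xs, p):
    """Return [e_1, ..., e_n] of xs modulo p, in O(n^2) time and O(n) space."""
    e = [1] + [0] * len(xs)              # e[k] accumulates e_k; e[0] = 1
    for j, x in enumerate(xs, start=1):
        for k in range(j, 0, -1):        # update in place, highest degree first
            e[k] = (e[k] + x * e[k - 1]) % p
    return e[1:]

assert elementary_symmetric([1, 1, 1], 2) == [1, 1, 1]   # e_1 = e_2 = 3 = 1, e_3 = 1 in F_2
```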
\(\mathbb{F}_{3}[x_{1},x_{2}]\). \(\dim_{\mathbb{F}_{3}}\mathcal{I}(3,2)=3^{2}-\mathrm{wm}(3,2)=9-6=3\). The following basis has been found:
\[2e_{1}e_{2}+e_{1}e_{2}^{2} = 0\] \[e_{2}+e_{2}^{2}+e_{1}^{2}e_{2} = 0\] \[e_{2}+e_{2}^{2}+e_{1}^{2}e_{2}^{2} = 0\]
\(\mathbb{F}_{3}[x_{1},x_{2},x_{3}]\). \(\dim_{\mathbb{F}_{3}}\mathcal{I}(3,3)=3^{3}-\mathrm{wm}(3,3)=27-10=17\). The following basis has been found:
\[2e_{2}e_{3}^{2}+e_{1}e_{3} = 0\] \[2e_{2}e_{3}+e_{1}e_{2}^{2} = 0\] \[e_{2}e_{3}+2e_{1}e_{2}^{2} = 0\] \[2e_{2}e_{3}^{2}+e_{1}e_{2}e_{3} = 0\] \[e_{2}e_{3}+e_{1}e_{2}e_{3} = 0\] \[e_{2}e_{3}+e_{1}e_{2}e_{3} = 0\] \[e_{2}e_{3}+2e_{1}e_{2}+e_{1}e_{2}^{2} = 0\] \[2e_{2}e_{3}^{2}+e_{1}e_{2}^{2}e_{3} = 0\] \[2e_{2}e_{3}+e_{1}e_{2}^{2}e_{3} = 0\] \[e_{2}e_{3}+e_{2}^{2}e_{3} = 0\] \[e_{2}e_{3}+e_{2}^{2}e_{3} = 0\] \[e_{2}e_{3}+e_{1}^{2}e_{3} = 0\] \[e_{2}e_{3}^{2}+e_{1}^{2}e_{3} = 0\] \[e_{2}e_{3}+e_{1}^{2}e_{2}e_{3} = 0\] \[2e_{2}e_{3}+e_{1}^{2}e_{2}e_{3} = 0\] \[2e_{2}e_{3}+e_{1}^{2}e_{2}e_{3} = 0\] \[2e_{2}e_{3}+e_{1}^{2}e_{2}e_{3} = 0\] \[2e_{2}e_{3}^{2}+e_{1}^{2}e_{2}e_{3}^{2} = 0\] \[e_{2}e_{3}^{2}+e_{1}^{2}e_{2}e_{3}^{2} = 0\]
\(\mathbb{F}_{4}[x_{1},x_{2}]\): The field \(\mathbb{F}_{4}\) has been realized using the polynomial \(X^{2}+X+1\) over \(\mathbb{F}_{2}\). \(\dim_{\mathbb{F}_{4}}\mathcal{I}(4,2)=4^{2}-\mathrm{wm}(4,2)=16-10=6\). The basis found has coefficients in \(\mathbb{F}_{2}\):
\[e_{1}e_{2}+e_{1}^{2}e_{2}^{2} = 0\] \[e_{1}e_{2}^{2}+e_{1}^{2}e_{2}^{3} = 0\] \[e_{1}e_{2}^{3}+e_{1}^{2}e_{2} = 0\] \[e_{1}e_{2}^{2}+e_{1}^{3}e_{2} = 0\] \[e_{1}e_{2}^{3}+e_{1}^{3}e_{2}^{2} = 0\] \[e_{1}e_{2}+e_{1}^{3}e_{2}^{3} = 0\]
\(\mathbb{F}_{5}[x_{1},\ldots,x_{5}]\): \(\dim_{\mathbb{F}_{5}}\mathcal{I}(5,5)=5^{5}-\mathrm{wm}(5,5)=3125-126=2999\). The first identity found was:
\[4e_{4}^{2}e_{5}+e_{4}^{4}e_{5}+3e_{3}e_{4}e_{5}^{2}+4e_{3}e_{4}^{3}e_{5}^{2}+ 3e_{3}e_{4}^{4}e_{5}^{2}+4e_{3}^{2}e_{4}e_{5}^{3}+e_{3}^{2}e_{4}^{4}e_{5}^{3}=0.\]
\(\mathbb{F}_{7}[x_{1},\ldots,x_{4}]\): \(\dim_{\mathbb{F}_{7}}\mathcal{I}(7,4)=7^{4}-\mathrm{wm}(7,4)=2401-210=2191\). The first identity found was:
\[e_{4}+6e_{4}^{4}+6e_{3}^{6}e_{4}+e_{3}^{6}e_{4}^{4}+5e_{2}e_{4}^{2}+2e_{2}e_{ 4}^{5}+5e_{2}e_{3}^{2}e_{4}^{2}+2e_{2}e_{3}^{2}e_{4}^{5}+5e_{2}e_{3}^{4}e_{4}^{ 2}+\] \[+2e_{2}e_{3}^{4}e_{4}^{5}+6e_{2}^{2}e_{4}^{3}+e_{2}^{2}e_{4}^{6}+ 6e_{2}^{2}e_{3}^{2}e_{4}^{3}+e_{2}^{2}e_{3}^{2}e_{4}^{6}+6e_{2}^{2}e_{3}^{4}e_{ 4}^{3}+e_{2}^{2}e_{3}^{4}e_{4}^{6}=0.\] |
2309.12529 | Curriculum Reinforcement Learning via Morphology-Environment
Co-Evolution | Throughout long history, natural species have learned to survive by evolving
their physical structures adaptive to the environment changes. In contrast,
current reinforcement learning (RL) studies mainly focus on training an agent
with a fixed morphology (e.g., skeletal structure and joint attributes) in a
fixed environment, which can hardly generalize to changing environments or new
tasks. In this paper, we optimize an RL agent and its morphology through
``morphology-environment co-evolution (MECE)'', in which the morphology keeps
being updated to adapt to the changing environment, while the environment is
modified progressively to bring new challenges and stimulate the improvement of
the morphology. This leads to a curriculum to train generalizable RL, whose
morphology and policy are optimized for different environments. Instead of
hand-crafting the curriculum, we train two policies to automatically change the
morphology and the environment. To this end, (1) we develop two novel and
effective rewards for the two policies, which are solely based on the learning
dynamics of the RL agent; (2) we design a scheduler to automatically determine
when to change the environment and the morphology. In experiments on two
classes of tasks, the morphology and RL policies trained via MECE exhibit
significantly better generalization performance in unseen test environments
than SOTA morphology optimization methods. Our ablation studies on the two MECE
policies further show that the co-evolution between the morphology and
environment is the key to the success. | Shuang Ao, Tianyi Zhou, Guodong Long, Xuan Song, Jing Jiang | 2023-09-21T22:58:59Z | http://arxiv.org/abs/2309.12529v1 | # Curriculum Reinforcement Learning via Morphology-Environment Co-Evolution
###### Abstract
Throughout long history, natural species have learned to survive by evolving their physical structures adaptive to the environment changes. In contrast, current reinforcement learning (RL) studies mainly focus on training an agent with a fixed morphology (e.g., skeletal structure and joint attributes) in a fixed environment, which can hardly generalize to changing environments or new tasks. In this paper, we optimize an RL agent and its morphology through "morphology-environment co-evolution (MECE)", in which the morphology keeps being updated to adapt to the changing environment, while the environment is modified progressively to bring new challenges and stimulate the improvement of the morphology. This leads to a curriculum to train generalizable RL, whose morphology and policy are optimized for different environments. Instead of hand-crafting the curriculum, we train two policies to automatically change the morphology and the environment. To this end, (1) we develop two novel and effective rewards for the two policies, which are solely based on the learning dynamics of the RL agent; (2) we design a scheduler to automatically determine when to change the environment and the morphology. In experiments on two classes of tasks, the morphology and RL policies trained via MECE exhibit significantly better generalization performance in unseen test environments than SOTA morphology optimization methods. Our ablation studies on the two MECE policies further show that the co-evolution between the morphology and environment is the key to the success.
## 1 Introduction
Deep Reinforcement learning (RL) has achieved unprecedented success in some challenging tasks (Lillicrap et al., 2016; Mnih et al., 2015). Although current RL can excel on a specified task in a fixed environment through massive training, it usually struggles to generalize to unseen tasks and/or adapt to new environments. A promising strategy to overcome this problem is to train the agent on multiple tasks in different environments (Wang et al., 2019; Portelas et al., 2019; Gur et al., 2021; Jaderberg et al., 2017) via multi-task learning or meta-learning (Salimans et al., 2017; Finn et al., 2017). However, this increases the training cost, and the space of possible environments/tasks can be too large to be fully explored by RL. So how to select the most informative and representative environments/tasks to train an RL agent to evolve generalizable skills becomes a critical open challenge.
Unlike RL agents that do not actively seek new environments to improve their learning capability, natural species have full motivations to do so to survive in the competitive world, and one underlying mechanism to drive them is evolution. Evolution is a race with the changing environment for every
species, and a primary goal is to accelerate its adaptation to new environments. Besides merely optimizing its control policy, evolution more notably changes the morphology of species, i.e., the skeletal structure and the attributes for each part, in order to make them adapt to the environment. In fact, the improvement in morphology can be more critical because there could exist a variety of actions or skills an agent cannot do (no matter how the control policy is optimized) without certain structures, e.g., more than one leg, a long-enough limb, or a 360-degree rotation joint. For RL, we claim that **A good morphology should improve the agent's adaptiveness and versatility, i.e., learning faster and making more progress in different environments.** From prehistoric humans to modern Homo sapiens, there is a definite association between the rise of civilization and the optimization of the human physical form for improved tool use. Unfortunately, the morphology in many RL studies is pre-defined and fixed, so it could be sub-optimal for the targeted environments/tasks and may restrain the potential of RL. Although morphology can be optimized using RL (Sims, 1994; Wang et al., 2019b; Kurin et al., 2021; Yuan et al., 2022) to improve the final performance for a specific task or environment, it was not optimized for the adaptiveness to varying environments. Moreover, instead of re-initializing the control/RL policy after every morphology modification or environment change, the millions of years of evolution demonstrate the effectiveness of continual and progressive learning of the control policy over generations of morphology evolution along with a sequence of varying environments. How to optimize the morphology and control policy of an RL agent for the above goal is still an underexplored problem.
Given that the agent has the freedom to improve its morphology and control policy for faster adaptation to different environments/tasks, the remaining question is: what environment can improve this learning process and incentivize the agent to keep finding better morphology (rather than staying with the same one)? The learning environment plays an essential role in RL since it determines the data and feedback the agent can collect for its training. In this paper, we adopt a simple criterion: **A good environment should accelerate the evolution of the agent and help it find better morphology sooner,** which will enhance the agent's adaptiveness and versatility over new environments/tasks.
The interplay between morphology and environment has lasted for billions of years. In this endless game, species that find the appropriate environments to evolve their morphology and policy may survive and keep being improved via adaptation to new environments/tasks. Inspired by this, we propose a _morphology-environment co-evolution (MECE)_ scheme that automatically generates a curriculum of varying environments to optimize the agent's morphology and its RL policy so they can be generalized to unseen environments and adapted to new morphologies, respectively. In MECE, we train two RL policies to modify the morphology and change the environment, which creates a sequence of learning scenarios to train the control policy continuously. This differs from previous work that selects the morphology or the environment from a pre-defined set of candidates. Specifically, unlike zeroth-order optimization approaches (Wang et al., 2019b; Hejna et al., 2021; Wang et al., 2018), a morphology policy network can generate modifications based on all the morphologies evaluated in diverse environments from history. On the other hand, an environment policy can produce a sequence of progressively modified environments instead of randomly selected or sampled ones (Portelas et al., 2019; Wang et al., 2019a). In Fig. 1, we illustrate the co-evolution process of MECE and compare it with an evolution-strategy (ES) based method (Wang et al., 2019) and Transform2Act (Yuan et al., 2022), which uses RL to optimize the morphology in a fixed environment.
Figure 1: **MECE vs. previous methods. (a) Evolution strategy (ES)-based method for morphology optimization. It starts from agents with different morphology, eliminates those with poor performance, and then applies random mutation to the survivors. (b) Transform2Act, which trains an RL policy to modify the morphology of an agent in a fixed environment. (c) MECE (ours). We train two policies to optimize the morphology and change the environment to form a curriculum over the course of training a control policy.**
Inspired by the foregoing discussion of "good morphology" and "good environment", we optimize the morphology policy to improve the agent's adaptiveness and versatility, and optimize the environment policy to improve the morphology's evolution. To this end, we design two rewards for the two policies based on the learning progress of the control policy, which is estimated by comparing its concurrent performance on a validation set and its historical trajectories. Therefore, the three policies are complementary and mutually benefit each other: (1) the morphology and environment policies create a curriculum to improve the generalizability of the control policy; (2) the environment policy creates a curriculum to optimize the morphology policy; (3) the control policy provides rewards to train the other two policies. In MECE, we train the control policy and automatically determine when to apply the other two policies to modify the morphology or change the environment: they are triggered by the slow progress on the current morphology and environment, respectively. In experiments over different tasks, the morphology and control policy learned through MECE significantly outperform the existing method in new environments. Moreover, MECE exhibits a much faster learning speed than the baselines. We further provide an extensive ablation study to isolate the contribution of each main component of MECE and compare it with other options.
## 2 Related Work
**Continuous Design Optimization.** A line of research in the community has studied optimizing an agent's continuous design parameters without modifying its skeletal structure, and they commonly optimize on a specific kind of robots. Baykal and Alterovitz (2017) studies an annealing-based framework for cylindrical robots optimization. Ha et al. (2017); Desai et al. (2017) optimize the design of legged robots by trajectory optimization and implicit function. Applying deep RL into design optimization becomes more popular recently. Chen et al. (2020) uses computational graphs to model robot hardware as part of the policy. CMA-ES (Luck et al., 2019) optimizes robot design via a learned value function. Ha et al. (2017) proposes a population-based policy gradient method. Another line of work (Yu et al., 2019; Exarchos et al., 2021; Jiang et al., 2021) search for the optimal robot parameters to fit an incoming domain by RL. Different from the prior works, our approach learns to find the most general skeleton of an agent while maintaining its continuous design parameters optimization to diverse environments or tasks.
**Combinatorial morphology Optimization.** One closely related line of work is the design of modular robot morphology spaces and developing algorithms for co-optimizing morphology and control (Sims, 1994; Liao et al., 2019; Gupta et al., 2022; Trabucco et al., 2022) within a design space to find task-optimized combinations of controller and robot morphology. When the control complexity is low, evolutionary strategies have been successfully applied to find diverse morphologies in expressive soft robot design space (Cheney et al., 2013, 2018). For more expressive design spaces, GNNs have been leveraged to share controller parameters (Wang et al., 2019) across generations or develop novel heuristic search methods for efficient exploration of the design space (Zhao et al., 2020). In contrast to task specific morphology optimization, Hejna et al. (2021) propose evolving morphologies without any task or reward specification. Hejna et al. (2021) employ an information-theoretic objective to evolve task-agnostic agent designs.
**Generalization to environments.** Several recent works (Wang et al., 2019; Portelas et al., 2019; Gur et al., 2021; Parker-Holder et al., 2022) show that the efficiency and generalization of RL can be improved by training the policy in different environments. By sampling the parameters of environmental features, Portelas et al. (2019) and Wang et al. (2019) teach the RL agent using a curriculum of various environments. Estimating the distribution of environments, however, requires evaluating the RL policy on numerous sampled environments, which might be expensive and inaccurate for intricate ones. The modified environments may also be infeasible or too difficult for RL exploration. In MECE, we train an environment policy to change the environments adaptively to the evolved agent's learning progress, and it is possible to regulate the level of difficulty of the modified environments.
## 3 Formulations of the Three Policies in MECE
In MECE, we have three RL policies, a control policy \(\pi\) that learns to control the agents of evolved morphology, a morphology policy \(\pi_{m}\) that learns to modify the agent's morphology for better robustness in diverse environments, and an environment policy \(\pi_{e}\) that learns to change the environment to boost the morphology evolution. During the training phase, \(\pi_{m}\) and \(\pi_{e}\) are alternately applied to evolve the agent's morphology and the training environment, respectively. Taking into account that \(\pi\) might require various environment steps for training in distinct morphologies or environments, we propose a dynamic time-scaling for \(\pi_{m}\) and \(\pi_{e}\) that is adaptive to \(\pi\)'s learning process.
**Agent control.** The problem of controlling an agent can be modeled as a Markov decision process (MDP), denoted by the tuple \(\{\mathcal{S},\mathcal{A},p,r,\gamma\}\), where \(\mathcal{S}\) and \(\mathcal{A}\) represent the set of state space and action space respectively, \(p\) means the transition model between states, and \(r\) is the reward function. An agent of morphology \(m_{i}\in M\) in state \(s_{t}\in\mathcal{S}\) at time \(t\) takes action \(a_{t}\in\mathcal{A}\) and the environment returns the agent's new state \(s_{t+1}\) according to the unknown transition function \(p(s_{t+1}|s_{t},a_{t},m_{i})\), along with associated reward \(r_{t}=r(s_{t},a_{t})\). The goal is to learn the control policy \(\pi^{*}:\mathcal{S}\rightarrow\mathcal{A}\) mapping states to actions that maximizes the expected return \(\mathbb{E}[\mathcal{R}_{t}]\), which takes into account reward at future time-steps \(J(\pi)=\mathbb{E}[\mathcal{R}_{t}]=\mathbb{E}\left[\sum_{t=0}^{H}\gamma^{t}r _{t}\right]\) with a discount factor \(\gamma\in[0,1]\), where \(H\) is the variable time horizon.
We use a graph neural network (GNN) for the control policy because (1) it can be generalized to different morphologies and learn fundamental control skills; (2) it captures the skeleton structure and encodes physical constraints. Specifically, we represent the agent's skeleton by a graph \(G=(V,A,E)\) where each node \(u\in V\) represents a joint and each edge \(e\in E\) represents a bone connecting two joints. Each joint \(u\) is associated with a representation vector \(z_{u}\) storing attributes in \(A\), e.g., bone length, motor strength, size. The GNN takes \(z_{u}\) as inputs and propagates messages across nodes on the graph to produce a hidden state for each node.
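For illustration, a minimal numpy sketch of one message-passing round over the skeleton graph is shown below; the layer sizes, the sum aggregation, and the linear action head are assumptions made for this sketch and need not match the actual architecture used in MECE.

```python
import numpy as np

def gnn_policy_forward(Z, edges, W_self, W_msg, W_act):
    """Z: (num_joints, d) joint attribute vectors z_u; edges: list of (u, v) bones."""
    H = np.tanh(Z @ W_self)                 # initial per-joint embedding
    M = np.zeros_like(H)
    for u, v in edges:                      # propagate messages along bones, both directions
        M[v] += H[u] @ W_msg
        M[u] += H[v] @ W_msg
    H = np.tanh(H + M)                      # updated hidden state for each node
    return H @ W_act                        # per-joint action output (e.g., torque mean)
```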
**Morphology evolution.** We model morphology evolution as an MDP problem, denoted by the tuple \(\{\mathcal{S}_{m},\mathcal{B},p_{m},r_{m},\rho\}\). \(\mathcal{S}_{m}\) represents the state space, and a state \(s_{\alpha t}^{m}\in\mathcal{S}_{m}\) at time-step \(\alpha t\) is defined as \(s_{\alpha t}^{m}=G_{\alpha t}\), where \(G_{\alpha t}\) is the skeleton structure of the agent. The action \(b_{\alpha t}\), sampled from the action space \(\mathcal{B}\), can modify the topology of the agent and has three choices in \(\{AddJoint,DelJoint,NoChange\}\). The transition dynamics \(p_{m}(s_{\alpha t+1}^{m}|s_{\alpha t}^{m},b_{\alpha t})\) reflects the changes in state and action. We define the reward function for \(\pi_{m}\) as the average improvement of training \(\pi\) with the current evolved morphology in different environments. The reward \(r_{m}\) at time-step \(\alpha t\) is denoted as
\[r_{\alpha t}^{m}=\frac{1}{|E|}\sum_{\theta^{E}\in E}\left(\mathcal{R}(H, \theta^{E},G_{\alpha t})-\mathcal{R}(H,\theta^{E},G_{\alpha t-1})\right)\,- \lambda\|b_{\alpha t}^{m}\|_{2}^{2}, \tag{1}\]
Note that the first term in Eq. 1 measures the average performance of the current morphology in diverse environments, and the second term is a cost for each action taken by \(\pi_{m}\) to modify the morphology of the agent. \(E\) is the set of environments used for evaluation.
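To make Eq. 1 concrete, the following minimal Python sketch computes \(r_{m}\); the rollout helper `evaluate_return` (returning the accumulated reward of the current control policy over horizon \(H\)) and the cost weight `lam` are illustrative assumptions, not part of the paper's implementation.

```python
import numpy as np

def morphology_reward(eval_env_params, morph_new, morph_old, action_vec,
                      evaluate_return, horizon, lam=1e-3):
    """Sketch of Eq. (1): average return improvement of the evolved morphology over
    the previous one across the evaluation environments E, minus an action cost.
    `evaluate_return(horizon, env_param, morphology)` is an assumed rollout helper."""
    gains = [evaluate_return(horizon, theta, morph_new) -
             evaluate_return(horizon, theta, morph_old)
             for theta in eval_env_params]
    return float(np.mean(gains)) - lam * float(np.dot(action_vec, action_vec))
```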
**Environment evolution.** We model the environment evolution as an MDP, denoted by the tuple \(\{\mathcal{S}_{e},\mathcal{C},p_{e},r_{e},\rho\}\). The state \(s_{\beta t}^{e}=(G_{\beta t},\theta_{\beta t}^{E})\) in the state space \(\mathcal{S}_{e}\) includes the agent's topology \(G_{\beta t}=(\mathcal{V}_{\beta t},\mathcal{E}_{\beta t})\) and the environment parameters \(\theta_{E}\in\Theta_{E}\). The transition dynamics \(p_{e}(s_{\beta t+1}^{e}|s_{\beta t}^{e},c_{\beta t})\) reflects the change of state under an action taken at time-step \(\beta t\). The action \(c_{\beta t}\) directly changes the environment parameters (more details in Appendix A). We define the reward for \(\pi_{e}\) as the learning progress of multiple morphologies in the current environment. Thus, training \(\pi_{e}\) encourages \(\pi_{m}\) to more quickly optimize morphologies with better generalization. The reward \(r_{e}\) at time-step \(\beta t\) is denoted as
\[r_{\beta t}^{E} =\mathcal{P}_{\beta t}-\mathcal{P}_{\beta t-1}, \tag{2}\] \[where,\,\,\,\mathcal{P}_{\beta t} =\mathcal{R}(H,\theta_{\beta t}^{E},G_{\beta t})-\mathcal{R}(H, \theta_{\beta t-1}^{E},G_{\beta t}).\]
In Eq. 2, \(\mathcal{P}_{\beta t}\) is the learning progress of the RL agent in an environment, defined via \(\mathcal{R}(H,\theta_{\beta t}^{E},G_{\beta t})\), which denotes the expected return of the RL agent \(G_{\beta t}\) over \(H\) time-steps evaluated in environment \(\theta_{\beta t}^{E}\).
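A minimal sketch of Eq. 2, assuming the same hypothetical `evaluate_return` rollout helper as above; the previous progress value is assumed to be cached from the last environment update.

```python
def learning_progress(theta_new, theta_prev, morph, evaluate_return, horizon):
    """P_{beta t} in Eq. (2): return gap between the newly generated environment and
    the previous one for the current morphology (same assumed rollout helper)."""
    return (evaluate_return(horizon, theta_new, morph) -
            evaluate_return(horizon, theta_prev, morph))

def environment_reward(progress_now, progress_prev):
    """Eq. (2): the environment policy is rewarded by the change in learning progress
    between consecutive environment updates."""
    return progress_now - progress_prev
```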
**Short evaluation window.** Since \(r_{m}\) and \(r_{e}\) are both based on the training progress of \(\pi\) on different morphologies or environments, it is costly but necessary to roll out \(\pi\) periodically to collect
data. However, Hejna et al. (2021) demonstrate that evaluating with a short horizon is enough to provide informative feedback reflecting the training progress of the current policy. In light of this, we run a short evaluation window for \(\pi\) after every period of environment steps taken by the agent. In each evaluation window, we roll out several of the best-performing morphologies under \(\pi\) in the most recent environments, and then compute \(r_{m}\) and \(r_{e}\) from the evaluation results. More details are given in Appendix C.
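A sketch of the short evaluation window; `rollout_return` is an assumed helper and the horizon value is illustrative. The returned table of accumulated rewards is what the reward computations above would consume.

```python
def short_evaluation_window(control_policy, recent_morphologies, recent_env_params,
                            rollout_return, horizon=200):
    """Roll out the best recent morphologies in the most recent environments for a
    short horizon only, and return the table of accumulated rewards used to
    compute r_m and r_e. All helper names and the horizon are illustrative."""
    returns = {}
    for i, morph in enumerate(recent_morphologies):
        for j, theta in enumerate(recent_env_params):
            returns[(i, j)] = rollout_return(control_policy, morph, theta, horizon)
    return returns
```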
In this paper, we use a standard policy gradient method, PPO (Schulman et al., 2017), to optimize all three policies. **Note that the time scales of the control policy, the morphology policy \(\pi_{m}\), and the environment policy \(\pi_{e}\) are different, as indicated by the subscripts \(\alpha\) and \(\beta\); we explain this in more detail in the following section.**
## 4 Algorithm of MECE
In MECE, we jointly train three policies: the control policy \(\pi\) learns to complete tasks, the morphology policy \(\pi_{m}\) learns to evolve the agent's morphology to be more adaptive to diverse environments, and the environment policy \(\pi_{e}\) learns to make the training environments more challenging. At first glance, training multiple RL policies may appear more difficult than training a single one, as it requires collecting additional data via interaction. However, MECE enables the three policies to assist in each other's training through the co-evolution mechanism, where each policy is trained on a curriculum of easy-to-hard tasks utilizing dense feedback from the training experience of the other policies. By iterating this co-evolution process, MECE considerably improves the training efficiency of each policy, resulting in an agent whose morphology is more generalizable across environments.
In each episode, the control policy \(\pi\) starts by training on an initial agent in a randomly sampled environment (Lines 3-9 in Alg. 1). This initial agent's skeleton structure is relatively simple and easy to learn to control owing to its naive morphology. Even though the naive agent does not generalize well due to its constrained morphology, it initially provides dense feedback for training \(\pi_{m}\) and \(\pi_{e}\). Starting instead from an agent with a random morphology is theoretically feasible but inefficient, since initializing a complicated morphology that is difficult to learn to control yields uninformative feedback for training the other policies.
```
1: Input: Current morphology \(m_{\alpha t}\), control policy \(\pi\), training environment \(\theta_{\beta t}^{E}\), dataset \(\mathcal{D}\).
2: Procedure CO-EVO(\(m_{\alpha t},\pi,\theta_{\beta t}^{E}\)):
3: Evaluate \(\pi\) with morphology \(m_{\alpha t}\) in \(\theta_{\beta t}^{E}\);
4: Calculate rewards \(r_{m}\) and \(r_{e}\) following Eq. 1 and Eq. 2;
5: if \(r_{m}\leq\delta_{m}\) then \(\triangleright\) Modify the agent's morphology
6:   Apply \(\pi_{m}\) to modify \(m_{\alpha t}\) to \(m_{\alpha t+1}\);
7:   Update \(\pi_{m}\) with PPO using samples in \(\mathcal{D}\);
8: else
9:   if \(r_{e}\leq\delta_{e}\) then \(\triangleright\) Modify the training environment
10:    Apply \(\pi_{e}\) to modify \(\theta_{\beta t}^{E}\) to \(\theta_{\beta t+1}^{E}\);
11:    Update \(\pi_{e}\) with PPO using samples in \(\mathcal{D}\);
12:  end if
13: end if
14: Return (\(m_{\alpha t+1},\theta_{\beta t+1}^{E}\))
```
**Algorithm 2** CO-EVO
We assume that the control policy has become competent on the current morphology after training. MECE then proceeds to a series of co-evolution phases, where each phase is associated with achieving robustness across the evolved morphologies and environments (Line 11 in Alg. 1). In each phase, as shown in Lines 3-4 of Alg. 2, we first evaluate the control policy to collect rewards for training \(\pi_{m}\) and \(\pi_{e}\). Note that this step is not costly, as the RL agent executes environment steps over a short horizon. After that, we alternately apply \(\pi_{m}\) and \(\pi_{e}\) to co-evolve the agent's morphology and the training environments based on two criteria on \(r_{m}\) and \(r_{e}\) (Lines 5 and 9 in Alg. 2).
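The following Python sketch mirrors one CO-EVO phase of Alg. 2; the thresholds \(\delta_{m},\delta_{e}\) and the interfaces `compute_r_m`, `compute_r_e`, `apply`, and `ppo_update` are hypothetical placeholders standing in for the short evaluation window and the PPO updates.

```python
def co_evo(morph, control_policy, env_param, dataset,
           compute_r_m, compute_r_e, pi_m, pi_e, delta_m=0.05, delta_e=0.05):
    """Sketch of one CO-EVO phase (Alg. 2); all helper names and thresholds are
    illustrative assumptions, not the paper's exact interface."""
    # Short evaluation of the control policy on the current morphology/environment.
    r_m = compute_r_m(control_policy, morph, env_param)   # Eq. (1)
    r_e = compute_r_e(control_policy, morph, env_param)   # Eq. (2)
    if r_m <= delta_m:
        # The current morphology no longer improves pi much -> evolve the morphology.
        morph = pi_m.apply(morph)
        pi_m.ppo_update(dataset)
    elif r_e <= delta_e:
        # Morphology changes barely help in this environment -> make it harder.
        env_param = pi_e.apply(env_param, morph)
        pi_e.ppo_update(dataset)
    return morph, env_param
```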
**Dynamic update window based on the reward criteria.** As mentioned in Section 3, \(r_{m}\) and \(r_{e}\) indicate the learning progress of the current morphology and of the training environment, respectively.
In particular, a small value of \(r_{m}\) reflects that the current morphology has already adapted to the different environments and that training with it cannot significantly increase \(\pi\)'s performance anymore. In this case, we should apply \(\pi_{m}\) to optimize the morphology and increase its generalization across environments. Similarly, a small \(r_{e}\) indicates that modifying the agent's morphology has a negligible effect on its performance in the current environment, and the environment should be made more challenging to boost the morphology evolution. We therefore employ two independent criteria on \(r_{m}\) and \(r_{e}\) to form a dynamic update window, which allows MECE to adapt to the learning progress of morphology and environment and to apply \(\pi_{m}\) or \(\pi_{e}\) to produce the corresponding evolution.
We now have an agent with a newly evolved morphology or a modified training environment, and we proceed to the next iteration of training the control policy \(\pi\) on them. Using \(r_{m}\) and \(r_{e}\), MECE achieves the alternating evolution of morphology and environment. MECE not only improves the robustness of the morphology to various environments, but also improves the learning efficiency of the policies through the use of dense feedback.
## 5 Experiments
We design our experiments to answer the following questions: (1) Given dynamic environments, does our method outperform previous methods in terms of convergence speed and final performance? (2) Does our method create agents that generalize better to diverse environments?
### Environments Setup
Diverse environments vary in features that require the agent to evolve different morphologies; e.g., a bipedal agent's legs may be too short to climb over a high obstacle, and a four-legged agent navigates rough terrain more smoothly than a one- or two-legged one. In our experiments, we construct three environments that require various morphologies to complete the tasks. We use a tuple \(\theta^{E}\) of environment parameters to denote the controllable physical features of an environment.
In our experiments, we have three types of environments. **(1) 2d locomotion:** In the 2D locomotion environment, agents are limited to movement in the X-Z plane. The state \(s_{T}\) is given by the final \((x,z)\) positions of the morphology joints. We evaluate morphologies on three tasks: running forwards, running backwards, and jumping. The environment policy \(\pi_{e}\) takes actions to change the roughness of the terrain, which is controlled by \(\theta^{E}\). **(2) 3d locomotion:** A 3D agent's goal is to move as fast as possible along the x-axis, and it is rewarded by its forward speed along the x-axis. Environments have a fixed number of obstacles and terrains of different roughness, both controlled by \(\theta^{E}\). \(\pi_{e}\) can not only change the roughness of the terrain but also learns to modify the average distance between obstacles. **(3) Gap crosser:** A 2D agent living in the \(xz\)-plane needs to cross periodic gaps and is rewarded by its forward speed. The width of the gap is controlled by \(\pi_{e}\). In order to avoid unlearnable environments, upper and lower limits on the gap width are imposed. More details are given in Appendix A.
**Baselines.** We compare our method with the following baselines that also optimize both the skeletal structure and the joint attributes of an agent. (1) Neural Graph Evolution (NGE) (Wang et al., 2019), an ES-based method that uses GNNs to enable weight sharing between an agent and its offspring. (2) Transform2Act (Yuan et al., 2022), which optimizes an RL policy to control the evolution of agents. Note that neither baseline uses diverse environments in its original code. For a fair comparison, we train them on an environment set randomly sampled from a uniform distribution. (3) Enhanced POET (Wang et al., 2020), which modifies the training environments that are paired with agents possessing distinct yet unchanging morphologies. Since enhanced POET performs no morphology optimization/evolution, we randomly sampled six sets of agents, each comprising five agents with distinct morphologies (30 agents in total), and selected the top-performing agent from each set for the test.
### Main results
In Fig. 2(a)-(c), we report the average performance of MECE and all baseline methods in test environments for the three settings. The accumulated rewards are averaged over 12 UNSEEN, randomly sampled environments and 6 different random seeds. In terms of learning efficiency and final performance, MECE outperforms the baseline methods in all environments. The results show that, while Transform2Act's final performance can be improved by periodically changing the training
environments, MECE's learning efficiency is still notably higher than that of Transform2Act. The best morphologies found by each approach are shown in Fig. 2(d)-(f) for easier comparison. For 2D locomotion, MECE develops a morphology resembling a crawling animal that can move quickly over terrain. In 3D locomotion, MECE develops a spider-like structure; compared to Transform2Act, MECE evolves a more streamlined structure that can more easily avoid obstacles. Finally, the Hopper-like agents developed by Transform2Act and MECE are comparable in the Gap crosser, where they can jump across gaps. Overall, MECE-optimized morphologies have superior structural symmetry and are more consistent with the characteristics of biological evolution in various environments.
### Ablation Study and Analysis
We conduct a series of ablation studies to demonstrate the effectiveness of each part of MECE. We summarize the setup of each ablation study in Tab. 1 and report the results in Fig. 3. Note that in each ablation study, we only change one part of MECE and keep the other parts the same as the original MECE, and all results are test performance evaluated on the same environment set as in Fig. 2(b).
**Ablation Study I.** In this ablation study, we focus on the effectiveness of \(\pi_{e}\) for training and report the results in Fig. 3(a). Note that the initial environment is relatively simple, while the final environment evolved by \(\pi_{e}\) is more challenging. The results show that the morphology and control policy trained in diverse environments are more general than those trained in a fixed environment, no matter whether that environment is simple or not. On the other hand, comparing the performance of MECE with MECE (periodic envs), we find that \(\pi_{e}\) helps training in terms of both efficiency and final performance. This is because training in the environments modified by \(\pi_{e}\), compared to randomly generated ones, avoids the learning gap between two extremely different environments and is smoother for \(\pi\) and \(\pi_{m}\) to learn from. Moreover, \(\pi_{e}\) learns to generate environments that accelerate the learning progress of \(\pi\) and \(\pi_{m}\), which is clearly visible in the learning curves between 10 and 20 million simulation steps in Fig. 3(a).
Figure 2: **Baselines comparison and optimized morphology.** In Fig. (a)-(c), we report the evaluation results of MECE and the baseline methods in three environments, plotting the accumulated rewards (mean \(\pm\) std over 6 random seeds) against the number of simulation steps for all methods. For a fair comparison, we add periodically changing environments randomly sampled from a fixed distribution to the training of the baseline methods. MECE has better learning efficiency and final performance than all baseline methods. In Fig. (d)-(f), we show the optimal morphology evolved by each method. Intuitively, the structural symmetry and environmental adaptability of the MECE-optimized morphology are better. Especially in 3D locomotion, the agent developed by MECE is better at navigating terrain and avoiding obstacles.
**Ablation Study II.** The purpose of this ablation study is to demonstrate how \(\pi_{m}\) produces morphologies that are more adaptable to diverse environments and increases learning effectiveness. The results are shown in Fig. 3(b). When comparing the learning curves of MECE (random morph) and MECE (original), the former has a far higher early learning efficiency than the latter. This is due to \(\pi_{e}\) remaining constant and the environment not changing significantly, which prevents the morphology from adequately adapting to a variety of environments, even similar ones. Theoretically, the "fixed morph-final" morphology should be more adaptive to environments and, with training, reach competitive performance. Although it performs better than "random morph" and "fixed morph-initial", it still falls far behind "MECE (original)". The findings indicate that training an agent with a complex morphology (i.e., MECE (fixed morph-final)) in diverse environments is difficult and ineffective. Therefore, it is imperative to implement an automated curriculum encompassing diverse environments and morphologies, as in MECE, to effectively train an adaptable agent.
**Ablation Study III.** Through a co-evolutionary process with adaptive criteria on \(r_{m}\) and \(r_{e}\), \(\pi_{m}\) and \(\pi_{e}\) learn to evolve the morphology and environment. To determine whether this design is effective, we perform an ablation experiment, with results shown in Fig. 3(c). For "MECE (fixed evaluation window)", \(\pi_{m}\) and \(\pi_{e}\) take actions to change the agent's morphology and environment after every fixed number of environment steps. The results indicate that removing the dynamic window drastically decreases MECE's learning effectiveness. It is challenging to coordinate the training progress of the three policies, particularly the control policy, without a dynamic evaluation window. As already mentioned, only a reasonably stable control policy can give the other two policies reliable feedback for training. Moreover, the performance of Transform2Act is included for easy comparison. Even though "MECE (fixed evaluation window)" is less effective, it still achieves a final performance competitive with Transform2Act.
**Ablation Study IV.** We designed this ablation study to validate the rewards of \(\pi_{m}\) and \(\pi_{e}\) in MECE, and the findings are displayed in Fig. 3(d). Note that only one reward is altered in each scenario of this experiment; all other rewards remain unchanged. We explore two scenarios for \(\pi_{m}\):
| Ablation Study | Definition of Legends |
| --- | --- |
| Ablation study-I for \(\pi_{e}\), Fig. 3(a) | **original**: periodic environments generated by \(\pi_{e}\). **periodic envs-random**: periodic environments randomly sampled. **fixed envs-initial**: fixed initial (easy) environment. **fixed envs-final**: fixed final environment generated by \(\pi_{e}\). |
| Ablation study-II for \(\pi_{m}\), Fig. 3(b) | **original**: dynamic morphology evolved by \(\pi_{m}\). **random morph**: randomly mutated morphology. **fixed morph-initial**: fixed agent with the initial morphology. **fixed morph-final**: fixed agent with the final morphology evolved by \(\pi_{m}\). |
| Ablation study-III for the dynamic update window, Fig. 3(c) | **original**: dynamic evaluation window. **fixed update window**: update \(\pi_{e}\) and \(\pi_{m}\) at a fixed frequency. |
| Ablation study-IV for \(r_{m}\) and \(r_{e}\), Fig. 3(d) | **original**: reward settings of \(\pi_{m}\) (\(\pi_{e}\)) as in Eq. 1 (Eq. 2). **reward-i**: remove the reward for \(\pi_{m}\). **reward-ii**: change \(r_{m}\) to the form of Eq. 2. **reward-iii**: change \(r_{e}\) to the form of Eq. 1. |

Table 1: List of ablation studies. This list includes definitions for each control group as well as the objectives of ablation studies I-IV. Since we validate only one design choice of MECE in each ablation study, only one setting in each control group differs from MECE (original).
Figure 3: **Ablation study.** We design four ablation studies to illustrate the advantages of each part of MECE. The results of ablation studies I and II show the effectiveness of \(\pi_{e}\) and \(\pi_{m}\) for learning efficiency and generalization. Ablation study III proves that applying \(\pi_{e}\) and \(\pi_{m}\) adaptively to the training progress of the control policy improves its robustness. In ablation study IV, we try different settings of the reward functions of \(\pi_{e}\) and \(\pi_{m}\), and the results show that the original reward setting is optimal. More details of each ablation study can be found in Tab. 1.
The first involves removing the reward ("MECE (reward-i)", the same setting as Transform2Act), and the second involves substituting the form of \(r_{e}\) for it ("MECE (reward-ii)"), i.e., rewarding accelerated learning progress. For \(\pi_{e}\), we consider employing learning progress in the form of \(r_{m}\) ("MECE (reward-iii)"). The results show that the original design has clear advantages in both learning efficiency and final performance. For the problem addressed by MECE, \(\pi_{m}\) can be thought of as a meta-learner of \(\pi\), whereas \(\pi_{e}\) is a meta-learner of the former. In order to help \(\pi\) perform better on various tasks, \(\pi_{m}\) learns to modify the agent's morphology during training, and \(\pi_{e}\) speeds up both of their learning processes. The original design is therefore the most sensible, both theoretically and in our experiments.
### Case study: Co-evolution between morphology and environment
To further understand the co-evolution of morphology and environment in MECE, we perform a case study in 2D locomotion and present the results in Fig. 4. The y-axis of the point plot indicates the roughness of the training environment. Fig. 4(a)-(f) are schematic illustrations of the current morphology and the corresponding training environment at different stages.
Generally, an effective training environment should meet two conditions simultaneously. The first is learnability: the RL agent must be able to collect training data via exploration. The second is the level of difficulty: environments that are either too easy or too challenging reduce the learning efficiency of the RL agent. Therefore, in 2D locomotion, a reasonable environment is one whose roughness is fairly large (challenging), but not so large as to be insurmountable for the agent during exploration (learnable). As the point plot shows, the environment policy keeps the ratio of the environment's unevenness to the height of the current agent at approximately 1 throughout training, which satisfies these criteria. In contrast, although a randomly generated environment can occasionally hit this ideal, it is extremely unstable (the standard deviation is visibly high), which lowers learning effectiveness.
Specifically, comparing Fig. 4(a) and (c), the environment generated by \(\pi_{e}\) in the early stage of training is unlearnable for the current agent, whereas in the middle stage of training the training environment is relatively flat but still challenging, especially in terms of the number of occurrences of a particular feature, and the roughness of the environment on the right fluctuates frequently. Note, in comparison with Fig. 4(d), that the right side of the randomly produced environment is unlearnable for the current agent in the middle of training. This phenomenon does not occur in MECE because \(\pi_{e}\) modifies the environment adaptively to the morphology. Comparing Figs. 4(e) and (f), the slope of the environment generated by \(\pi_{e}\) exhibits noticeable but relatively subtle alterations. This is because the current morphology already has a high degree of adaptation to varied environments, and environments with more extreme alterations would have little effect on the morphology's optimization. To better train the control policy, an environment of moderate difficulty should be adopted.
Figure 4: **Case study results.** This figure illustrates the co-evolution of morphology and environment during training. The ordinate shows the average roughness of the environments as training proceeds (6 random seeds). Although the training environments generated by \(\pi_{e}\) are similar to randomly sampled ones at the beginning, the point plot shows that \(\pi_{e}\) guarantees more challenging and stable training environments. Fig. (a)-(f) compare the effect of \(\pi_{e}\) on the training environments (blue indicates original MECE, orange indicates random environments). It is evident that having no \(\pi_{e}\) is likely to result in excessively difficult environments (Fig. (d)), while the environments produced by \(\pi_{e}\) are challenging but learnable (Fig. (c)).
## 6 Conclusion
In this paper, we offer a framework for morphology-environment co-evolution that evolves the agent's morphology and the training environment in alternation. Compared to previous morphology optimization approaches, ours is more sample-efficient and learns to develop morphologies that are more robust across diverse environments. Experiments demonstrate that our approach outperforms baseline methods in terms of convergence speed and overall performance. In the future, we are interested in using curriculum RL approaches to further improve the learning efficiency of our approach so that it can adapt to more challenging environments and tasks.
## Societal Impact
Our research centers on the co-evolution of an RL agent's morphology and its training environments. This study aims to synchronize the training of a control policy, a morphology policy, and an environment policy. Controlling the morphological development of RL agents has the potential to drive progress in multiple domains, resulting in notable advantages for society. In the domain of robotics, our investigations could facilitate the development of adaptable and versatile robotic systems that can proficiently traverse volatile surroundings, execute intricate operations, and provide aid in perilous undertakings. These technological advances could augment industrial processes, improve efficiency, and mitigate hazards to human laborers across diverse domains.
Although the management of morphological evolution presents encouraging progress, it is imperative to acknowledge potential hazards and apprehensions. A fundamental issue of utmost importance pertains to the ethical and responsible utilization of advanced RL agents. It is imperative to ensure that the behavior of these agents aligns with ethical standards and legal regulations as their morphology evolves through autonomous learning processes. It is imperative to develop all-encompassing frameworks and protocols that regulate the deployment of RL agents featuring dynamic morphologies, thereby averting any potential instances of misapplication or detrimental outcomes.
|
2309.05055 | An Overview of Formulae for the Higher-Order Kinematics of Lower-Pair
Chains with Applications in Robotics and Mechanism Theory | The motions of mechanisms can be described in terms of screw coordinates by
means of an exponential mapping. The product of exponentials (POE) describes
the configuration of a chain of bodies connected by lower pair joints. The
kinematics is thus given in terms of joint screws. The POE serves to express
loop constraints for mechanisms as well as the forward kinematics of serial
manipulators. Besides the compact formulations, the POE gives rise to purely
algebraic relations for derivatives wrt. joint variables. It is known that the
partial derivatives of the instantaneous joint screws (columns of the geometric
Jacobian) are determined by Lie brackets the joint screws. Lesser-known is that
derivative of arbitrary order can be compactly expressed by Lie brackets. This
has significance for higher-order forward/inverse kinematics and dynamics of
robots and multibody systems. Various relations were reported but are scattered
in the literature and insufficiently recognized. This paper aims to provide a
comprehensive overview of the relevant relations. Its original contributions
are closed form and recursive relations for higher-order derivatives and Taylor
expansions of various kinematic relations. Their application to kinematic
control and dynamics of robotic manipulators and multibody systems is
discussed. | Andreas Mueller | 2023-09-10T15:34:19Z | http://arxiv.org/abs/2309.05055v1 | # An Overview of Formulae for the Higher-Order Kinematics of Lower-Pair Chains with Applications
###### Abstract
_Abstract_- The motions of mechanisms can be described in terms of screw coordinates by means of an exponential mapping. The product of exponentials (POE) describes the configuration of a chain of bodies connected by lower pair joints. The kinematics is thus given in terms of joint screws. The POE serves to express loop constraints for mechanisms as well as the forward kinematics of serial manipulators. Besides the compact formulations, the POE gives rise to purely algebraic relations for derivatives w.r.t. joint variables. It is known that the partial derivatives of the instantaneous joint screws (columns of the geometric Jacobian) are determined by Lie brackets of the joint screws. Lesser known is that derivatives of arbitrary order can be compactly expressed by Lie brackets. This has significance for the higher-order forward/inverse kinematics and dynamics of robots and multibody systems. Various relations were reported but are scattered in the literature and insufficiently recognized. This paper aims to provide a comprehensive overview of the relevant relations. Its original contributions are closed-form and recursive relations for higher-order derivatives and Taylor expansions of various kinematic relations. Their application to the kinematic control and dynamics of robotic manipulators and multibody systems is discussed.
_Keywords_- Kinematics, dynamics, screws, product of exponentials, kinematic mapping, higher-order derivatives, inverse kinematics, inverse dynamics, series elastic actuators, model-based control, Taylor series
## 1 Introduction
A central part of the kinematics modeling is to express the configuration (pose, posture) of an open kinematic chain in terms of assigned joint variables. This functional relation is described by the _kinematic mapping (KM)_. The KM serves to describe the kinematics of serial robotic arms and multibody systems (MBS) with tree topology, but also the loop closure constraints of mechanisms and general MBS. The relation of the twist of the kinematic chain and the joint velocities is determined by the Jacobian of the KM, which serves as the forward kinematics Jacobian of a serial robotic arm as well as the constraint Jacobian for a kinematic loop.
An important concept in modern kinematics is the _product of exponentials (POE)_[17, 71, 89, 107]. Its fundamental advantage is that the KM of a kinematic chain with lower pair joints is completely parameterized in terms of (readily available) geometric data rather than following restrictive modeling conventions such as DH parameters. Moreover, using the POE makes it possible to derive closed form algebraic expressions for partial derivatives of the KM and the _geometric Jacobian_, but also for the time derivatives of the twist of a kinematic chain, of arbitrary order. Since such formulations are scattered in the literature, and presented in various different forms, e.g. [98, 99, 79, 60, 65, 66, 59, 60, 107, 89, 71, 92, 50, 51, 20, 22, 23, 40] they are not yet established as a generally applicable modeling approach, despite the recent interest in screw and Lie group modeling of mechanisms and MBS [82, 83, 94].
The driving force behind the research on higher-order time derivatives and partial derivatives of the geometric Jacobian has been the mobility and singularity analysis of linkages and robots [103, 102, 101, 60]. A central result is that these only require the derivatives of the instantaneous joint screws, which are given by (nested) Lie-brackets, i.e. screw products, of the joint screws. Further, a recent interest in explicit compact relations for higher-order time derivatives of twists (accelerations, jerk, jounce/snap, etc.) stems from the development of advanced methods for the optimal trajectory planning and model-based control of robots and general MBS. In particular, the control of robots equipped with series elastic actuators (SEA) requires computation of the 4th time derivative of twists (jounce/snap) [31, 91, 12, 42], and the time optimal trajectory planning must respect the bounds on higher derivatives of the inverse dynamics, which requires the derivatives of the forward kinematics [29, 97]. Accordingly, the path planning requires solving the higher-order kinematics problem of a serial robotic arms, i.e. determination of higher time derivatives of the velocity inverse kinematics solution.
The POE formulation gives rise to further closed form expressions that are relevant for the mobility and singularity analysis of linkages, as well as the control of robotic arms, which have not yet been reported in the literature. One is the Taylor series expansion of the KM. It has been observed recently that the local analysis based on higher-order time derivatives may not be sufficient to deduce the mobility of certain mechanisms [28, 70, 88], and that an exhaustive local mobility analysis must
further investigate the local geometry of the configuration space of a linkage, which can be pursued with a Taylor series expansion of the KM. Another important relation is the closed form of the time derivatives of the minors of the geometric Jacobian. The latter facilitate the detection and analysis of kinematic singularities.
While various of the relations listed above were already reported in the literature (although in different forms using different notations), several of them have not yet been published.
Therefore this paper aims to provide a comprehensive overview of the closed form expressions for higher-order kinematic relations of kinematic chains with lower pair joints using the POE in a consolidated formulation, as far as relevant for the kinematic analysis, motion planning, and control of mechanisms and robots. The following _mathematical_ topics are addressed
* multiple partial derivatives of the geometric Jacobian of arbitrary order (section 4)
* higher-order time derivatives of twists of a kinematic chain (section 5)
* Taylor series expansion of the KM (section 6)
* arbitrary time derivatives of the minors of the geometric Jacobian (section 7.1)
* Taylor series expansion of the minors of the geometric Jacobian (section 7.2)
* higher-order inverse kinematics of robotic arm (section 8)
* Taylor series expansion of the geometric Jacobian (section 9)
* Taylor series expansion of the solution of a kinematic loop in terms of time derivatives of independent joint variables (section 10)
where results that where already published are summarized (indicated by \(\circ\)) or novel relations are derived (indicated by \(\bullet\)). It is discussed how these results can be applied to the following topics in _robotics and mechanisms_
* Higher-order forward kinematics of serial robotic manipulators and MBS (section 5.4a)
* Higher-order inverse kinematics of non-redundant serial robotic manipulators (section 8)
* Local mobility and singularity analysis of linkages (section 5.4d, 6.3, and 7.3)
* Computation of gradients of kinematic manipulability measures for robot manipulators (section 5.4c)
* Determination of the generic structural mobility of linkages (section 9.3)
* Approximate solutions of the loop constraints of linkages (section 10.2)
The paper is intended as a reference where the reader can look up the relevant relations directly from the respective section while the introduction section 2 should serve as a short introduction to the notation.
The paper is organized as follows. Section 2 introduces the POE formulation for the KM. To this end, the screw coordinates for lower pair joints are introduced. The geometric Jacobian is then introduced in section 3 where its columns are identified as the instantaneous joint screws determined by frame transformations of the joint screws. The explicit form of the partial derivatives of the instantaneous joint screws are presented in section 4. The explicit form of repeated partial derivatives is presented. Higher-order time derivatives of the twist of a rigid body of the kinematic chain are presented in section 5. Closed form algebraic expressions for derivatives up to second order are presented in terms of partial derivatives of the Jacobian. For time derivatives of arbitrary order, a recursive relation is presented. A recursive \(O\left(n\right)\) formulation up to 4th-order is also presented. It is discussed how these relations can be applied to the higher-order forward kinematics and inverse dynamics of a robotic arm, to compute the gradients of dexterity measures, and to the local mobility analysis of linkages. Algebraic relations for higher-order differentials of the KM are reported in section 6. A recursive relation for arbitrary order and explicit expressions for differentials up to 4th order are given. The differentials are used for a Taylor series expansion of the KM. It is discussed how such series expansion can be used to approximate the local geometry of the configuration space of a linkage. Section 7 addresses the time derivatives and the differentials of the minors of the Jacobian. The presented closed form relations are applied to the higher-order approximation of the motions of a linkage where the constraint Jacobian exhibits a permanent drop of rank. In section 8 the higher-order inverse kinematics of a non-redundant serial chain (e.g. robotic arm) is addressed. A recursive relation for the solution of the inverse kinematics problem of arbitrary order is presented along with a closed form relation for the solution up to order 4. It is briefly discussed how these results can be used for the control of robots. Section 9 investigates how the structural properties of a kinematic chain, in particular its motion space, can be extracted from the joint screw system. To this end, the geometric Jacobian is expanded into a Taylor. Since this is given in terms of nested Lie brackets of joint screws, it is concluded that the motion space is the Lie subgroup corresponding to the Lie algebra generated by the joint screws. This result is related to the well-known structural mobility formulae. The higher-order time derivatives of the loop constraints are used to derive an algebraic expression for a higher-order approximate solution of the geometric loop constraints. This is presented in section 10. All formulations in sections 2-9 used the spatial representation of twists and screws. In various applications, however, the body-fixed or hybrid representations are used. The relation of the time derivatives of the latter are related to those of the spatial representation in section 11. This allows application of all presented results also when other representations are used, while exploiting the efficiency of the spatial
formulations. The necessary geometric background on the Lie group of rigid body motions and the exponential mapping is summarized in appendix A, which shall allow the reader to follow without the need to (immediately) consult secondary literature. For better readability, the notation used in this paper is summarized in appendix B.
The paper uses concepts and notations related to the Lie group \(SE\left(3\right)\) of rigid body motions. Details can be found in the textbooks [71, 89, 107].
Most of the reported relations have been implemented as a package for the computer algebra system Mathematica\({}^{\text{\textregistered}}\). This package, along with several examples, can be downloaded from www.dropbox.com/sh/487ghqzncegped3/ABBuEafULYvCSD8Cke6IAvV0a?d1=0.
**Remark 1**.: _In this paper the KM is understood as a mapping from joint space to \(SE\left(3\right)\), which is in accordance with its use in [34]. This must not be confused with the notion of the kinematic mapping as introduced by Blaschke [10, 11], or Study [110, 111], see also [15]. The latter is a mapping from \(SE\left(3\right)\) to \(\mathbb{P}^{7}\) that was used to provide algebraic equations for the computational analysis of mechanism kinematics [54, 95]._
## 2 Kinematic Mapping and the Product of Exponentials
In the following a general simple kinematic chain [73] comprising \(n\) 1-DOF lower pair joints and \(n\) rigid links is considered, where joint 1 connects to the ground at which a global reference frame \(\mathcal{F}_{0}\) is defined. The motion of the kinematic chain is hence due to its internal mobility. Denote with \(\mathbf{q}\in\mathbb{V}^{n}\) the vector of \(n\) joint variables, where the joint space manifold is \(\mathbb{V}^{n}=\mathbb{R}^{n_{\text{P}}}\times\mathbb{T}^{n_{\text{R}}}\), with \(n_{\text{R}}\) being the number of revolute joints and \(n_{\text{P}}\) the number of prismatic or helical joints.
At link \(i\) a body-fixed reference frame \(\bar{\mathcal{F}}_{i}\) is attached, which coincides with \(\mathcal{F}_{0}\) in the reference configuration \(\mathbf{q}=\mathbf{0}\) of the open kinematic chain. The spatial configuration (posture) of link \(i\) is uniquely represented by the \(4\times 4\) transformation matrix [5, 89, 107]
\[\bar{\mathbf{C}}_{i}=\left(\begin{array}{cc}\bar{\mathbf{R}}_{i}&\bar{ \mathbf{r}}_{i}\\ \mathbf{0}&1\end{array}\right)\in SE\left(3\right) \tag{1}\]
transforming homogeneous coordinates of a point when expressed in the frame \(\bar{\mathcal{F}}_{i}\) to those when expressed in the world frame \(\mathcal{F}_{0}\), where \(\bar{\mathbf{r}}_{i}\in\mathbb{R}^{3}\) is the position vector of the origin of \(\bar{\mathcal{F}}_{i}\) measured and resolved in \(\mathcal{F}_{0}\), and \(\bar{\mathbf{R}}_{i}\in SO\left(3\right)\) is the rotation matrix transforming coordinates of vectors resolved in \(\bar{\mathcal{F}}_{i}\) to those when resolved in \(\mathcal{F}_{0}\).
A linkage is a system of rigid bodies interconnected by lower pair joints. The joint motions can thus be described as screw motions and hence be expressed by the exponential of joint screw coordinates (sec. A.1). The configuration of link \(i\) is determined by the joint variables as \(\bar{\mathbf{C}}_{i}=f_{i}(\mathbf{q})\), with the _kinematic mapping (KM) of link \(i\)_
\[f_{i}\left(\mathbf{q}\right)=\exp\left(\mathbf{Y}_{1}q_{1}\right)\exp\left( \mathbf{Y}_{2}q_{2}\right)\cdot\ldots\exp\left(\mathbf{Y}_{i}q_{i}\right),i=1,\ldots,n. \tag{2}\]
The expression (2) is known as the product of exponentials (POE), which can be attributed to Brockett [17] as well as to Herve [51]. The POE is crucial for deriving kinematic relations in a compact form. The linkage kinematics is encoded in the
Figure 1: Definition of a screw associated to a 1-DOF lower pair joint.
screw coordinate vectors \(\mathbf{Y}_{1},\ldots,\mathbf{Y}_{n}\), where
\[\mathbf{Y}_{j}=\left(\begin{array}{c}\mathbf{e}_{j}\\ \mathbf{y}_{j}\times\mathbf{e}_{j}+h_{j}\mathbf{e}_{j}\end{array}\right)\in \mathbb{R}^{6} \tag{3}\]
is the screw coordinate vector (ray coordinates) associated with joint \(j\) in the reference configuration \(\mathbf{q}=\mathbf{0}\), represented in \(\mathcal{F}_{0}\). Therein, \(\mathbf{e}_{j}\in\mathbb{R}^{3}\) is a unit vector along the joint axis, \(\mathbf{y}_{j}\in\mathbb{R}^{3}\) is a position vector to a (any) point on that axis (fig. 1). Both vectors are resolved in \(\mathcal{F}_{0}\). The pitch \(h\) of the joint determines the amount of translation along the joint axis per rotation. Revolute and prismatic joints are special cases, with \(h=0\) and \(h=\infty\), respectively.
The advantage of the POE formula is that it does not require modeling of the relative inter-body transformations, which is a complicated step in the classical kinematics formulations, of which the Denavit-Hartenberg convention [63, 32] is best known, including two-frame conventions, e.g. [108]. A salient feature of the POE formula is that it allows for a kinematic description without the need to introduce separate body-fixed joint frames since all joint screw coordinates \(\mathbf{Y}_{i}\) are expressed in the global frame \(\mathcal{F}_{0}\).
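To illustrate the POE numerically, the following Python sketch evaluates (2) and (3) with a generic matrix exponential (scipy's `expm`); in practice a closed-form Rodrigues-type formula would be used, so this is only a minimal reference implementation.

```python
import numpy as np
from scipy.linalg import expm

def hat(x):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def joint_screw(e, y, h=0.0):
    """Screw coordinates Y = (e, y x e + h e) of a lower pair joint, Eq. (3)."""
    return np.concatenate([e, np.cross(y, e) + h * e])

def se3_matrix(Y):
    """4x4 matrix of the screw coordinate vector Y = (omega, v)."""
    X = np.zeros((4, 4))
    X[:3, :3] = hat(Y[:3])
    X[:3, 3] = Y[3:]
    return X

def poe_forward_kinematics(screws, q):
    """Kinematic mapping f_i(q) of Eq. (2): product of the joint screw exponentials."""
    C = np.eye(4)
    for Y, qi in zip(screws, q):
        C = C @ expm(se3_matrix(Y) * qi)
    return C
```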
**Remark 2**.: _In the above formulation of the KM all body-fixed reference frames are (implicitly) introduced such that, in the reference configuration \(\mathbf{q}=\mathbf{0}\), they coincide with the global frame \(\mathcal{F}_{0}\) (since \(f_{i}(\mathbf{0})=\mathbf{I}\)). In many applications, a dedicated reference frame \(\mathcal{F}_{i}\) at body \(i\) is used. Denote with \(\mathbf{A}_{i}\in\mathit{SE}\left(3\right)\) the configuration of the body-fixed frame \(\mathcal{F}_{i}\) relative to \(\bar{\mathcal{F}}_{i}\), then the spatial configuration of \(\mathcal{F}_{i}\) is given as \(\mathbf{C}_{i}=\mathbf{\bar{C}}_{i}\mathbf{A}_{i}\), and thus determined by the KM as \(\mathbf{C}_{i}=f_{i}(\mathbf{q})\,\mathbf{A}_{i}\). For simplicity, this constant transformation will be omitted, and the basic formulation (2) will be used in this paper._
**Remark 3**.: _Modeling mechanisms using 1-DOF joints has been the standard approach for MBS modeling [119] while advanced formulations explicitly use higher-DOF joints [114]. It should be noted, however, that all lower pair (and several higher pair) joints can be modeled as combinations of 1-DOF joints. The formulations in this paper assume 1-DOF joints but can be extended to multi-DOF joints. This is not shown for the sake of compactness._
## 3 Spatial Twists and the Geometric Jacobian
The twist (velocity screw) of a rigid body is the aggregate of its angular velocity and the translational velocity of a reference point. It thus depends on the selection of a reference point. Its coordinate representation further depends on the frame in which the velocity vectors are resolved.
The _spatial twist_ of link \(i\) is represented by the vector \(\mathbf{V}_{i}^{\mathrm{s}}=(\boldsymbol{\omega}_{i}^{\mathrm{s}},\mathbf{v}_{i}^{\mathrm{s}})\in\mathbb{R}^{6}\), where \(\boldsymbol{\omega}_{i}^{\mathrm{s}}\) is the angular velocity of the body relative to \(\mathcal{F}_{0}\) resolved in \(\mathcal{F}_{0}\), and \(\mathbf{v}_{i}^{\mathrm{s}}=\dot{\mathbf{r}}_{i}-\boldsymbol{\omega}_{i}^{\mathrm{s}}\times\mathbf{r}_{i}\) is the translational velocity of the (possibly imaginary) point in the body that is momentarily traveling through the origin of the global frame, measured and resolved in \(\mathcal{F}_{0}\).
Denote with \(\mathbf{u}_{j}\left(\mathbf{q}\right)\) a unit vector along the current axis of joint \(j\), and with \(\mathbf{s}_{j}\left(\mathbf{q}\right)\) the position vector from the origin of \(\mathcal{F}_{0}\) to any point on the current axis, both resolved in \(\mathcal{F}_{0}\). The spatial twist of body \(i\) is readily constructed as [82, 5]
\[\mathbf{V}_{i}^{\mathrm{s}}=\dot{q}_{1}\left(\begin{array}{c}\mathbf{u}_{1}\\ \mathbf{s}_{1}\times\mathbf{u}_{1}+h_{1}\mathbf{u}_{1}\end{array}\right)+\dot{q}_{2}\left(\begin{array}{c}\mathbf{u}_{2}\\ \mathbf{s}_{2}\times\mathbf{u}_{2}+h_{2}\mathbf{u}_{2}\end{array}\right)+\ldots+\dot{q}_{i}\left(\begin{array}{c}\mathbf{u}_{i}\\ \mathbf{s}_{i}\times\mathbf{u}_{i}+h_{i}\mathbf{u}_{i}\end{array}\right) \tag{4}\]
\[=\mathbf{J}_{i}^{\mathrm{s}}\left(\mathbf{q}\right)\dot{\mathbf{q}} \tag{5}\]
where \(h_{j}\) is the pitch of joint \(j\), and
\[\mathbf{S}_{j}:=\left(\begin{array}{c}\mathbf{u}_{j}\\ \mathbf{s}_{j}\times\mathbf{u}_{j}+h_{j}\mathbf{u}_{j}\end{array}\right) \tag{6}\]
is the _instantaneous screw coordinate_ vector in _spatial representation_ associated to joint \(j\). The latter constitute the columns of the _geometric (spatial) Jacobian_ of body \(i\)
\[\mathbf{J}_{i}^{\mathrm{s}}\left(\mathbf{q}\right):=\left(\mathbf{S}_{1}( \mathbf{q})\left|\cdots\left|\mathbf{S}_{i}(\mathbf{q})\left|\mathbf{0}\right| \cdots\left|\mathbf{0}\right.\right)\right. \tag{7}\]
The instantaneous joint screw coordinates (6) are obtained analytically by transforming the screw coordinates \(\mathbf{Y}_{j}\), in the zero reference configuration, to the current configuration of body \(j\) according to \(\mathbf{\bar{C}}_{j}=f_{j}\left(\mathbf{q}\right)\)
\[\mathbf{S}_{j}(\mathbf{q})=\mathbf{A}\mathbf{d}_{f_{j}\left(\mathbf{q}\right)} \mathbf{Y}_{j},\ j\leq i \tag{8}\]
with the Ad mapping in (156). The latter is the \(6\times 6\) matrix transforming screw coordinate vectors. In the reference configuration \(\mathbf{q}=\mathbf{0}\), the \(\mathbf{S}_{j}\) are equal to \(\mathbf{Y}_{j}\) in (3), i.e. \(\mathbf{s}_{j}\left(\mathbf{0}\right)=\mathbf{y}_{j}\) and \(\mathbf{u}_{j}\left(\mathbf{0}\right)=\mathbf{e}_{j}\). Application of the identity \(\mathbf{Ad}_{\exp(\mathbf{Y}_{j}q_{j})}\mathbf{Y}_{j}=\mathbf{Y}_{j}\) shows that \(\mathbf{S}_{j}\) depends only on the joint variables \(q_{1},\ldots,q_{j-1}\), and the expression (8) simplifies to \(\mathbf{S}_{j}(\mathbf{q})=\mathbf{Ad}_{f_{j-1}(\mathbf{q})}\mathbf{Y}_{j}\), for \(j>1\). Consequently, the screw coordinate vector of the first joint in spatial representation is constant: \(\mathbf{S}_{1}\equiv\mathbf{Y}_{1}\).
From (4) follows immediately the recursive relation
\[\mathbf{V}_{i}^{\mathrm{s}}=\mathbf{V}_{i-1}^{\mathrm{s}}+\mathbf{S}_{i} \dot{q}_{i},i=1,\ldots,n \tag{9}\]
with \(\mathbf{V}_{0}^{\mathrm{s}}=\mathbf{0}\), which could be regarded as the twist of the ground.
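A minimal numerical sketch of (8) and (9), reusing `hat`, `se3_matrix`, and `expm` from the sketch in section 2, and assuming the standard \(6\times 6\) adjoint matrix for ray coordinates (corresponding to the Ad mapping referenced above).

```python
import numpy as np
from scipy.linalg import expm
# hat() and se3_matrix() as defined in the POE sketch of section 2

def Ad(C):
    """6x6 adjoint matrix Ad_C for screw coordinates (omega, v); assumes the
    standard ray-coordinate convention."""
    R, r = C[:3, :3], C[:3, 3]
    A = np.zeros((6, 6))
    A[:3, :3] = R
    A[3:, 3:] = R
    A[3:, :3] = hat(r) @ R
    return A

def spatial_jacobian(screws, q):
    """Columns S_j(q) = Ad_{f_j(q)} Y_j of the geometric Jacobian, Eq. (8)."""
    cols, C = [], np.eye(4)
    for Y, qi in zip(screws, q):
        C = C @ expm(se3_matrix(Y) * qi)
        cols.append(Ad(C) @ np.asarray(Y))
    return np.column_stack(cols)

def spatial_twists(screws, q, qdot):
    """Recursion (9): V_i^s = V_{i-1}^s + S_i qdot_i for all bodies i."""
    J = spatial_jacobian(screws, q)
    V, twists = np.zeros(6), []
    for i in range(len(q)):
        V = V + J[:, i] * qdot[i]
        twists.append(V.copy())
    return twists
```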
**Remark 4**.: _The definition of spatial twist may seem unusual. However, its advantage is that the twists of all bodies, and the relative twists due to joint motions, are represented in one common spatially fixed frame \(\mathcal{F}_{0}\) (hence the name) so that the twists of individual bodies can simply be added without the need for a frame transformation of twists. The spatial representation prevails in kinematics and mechanism theory, but it is also increasingly used in multibody dynamics [37]. In many applications, including motion planning and control, as well as classical dynamics modeling, a body-fixed representation is used [5, 71, 83, 114]. This will be briefly discussed in sec. 11._
## 4 Partial Derivatives of Joint Screw Coordinates in Spatial Representation
The non-zero partial derivatives of the instantaneous joint screw coordinates are [71, 78, 89, 107]
\[\frac{\partial\mathbf{S}_{i}}{\partial q_{j}} =\left[\mathbf{S}_{j},\mathbf{S}_{i}\right]\] \[=\mathbf{ad}_{\mathbf{S}_{j}}\mathbf{S}_{i},\;j<i. \tag{10}\]
Derivatives w.r.t. \(q_{j},j\geq i\) vanish as \(\mathbf{S}_{i}\) only depends on \(q_{j},j\leq i\) and \(\left[\mathbf{S}_{i},\mathbf{S}_{i}\right]=\mathbf{0}\). The relation (10) gives rise to a compact expression for repeated partial derivatives. The repeated partial derivative w.r.t. the \(\nu\) variables \(q_{\alpha_{1}},\ldots,q_{\alpha_{\nu}}\) attains the closed form [78]
\[\frac{\partial^{\nu}\mathbf{S}_{i}}{\partial q_{\alpha_{1}}\partial q_{\alpha_{2}}\cdots\partial q_{\alpha_{\nu}}}=\left[\mathbf{S}_{\beta_{1}},\left[\mathbf{S}_{\beta_{2}},\left[\mathbf{S}_{\beta_{3}},\ldots\left[\mathbf{S}_{\beta_{\nu}},\mathbf{S}_{i}\right]\ldots\right]\right]\right]=\mathbf{ad}_{\mathbf{S}_{\beta_{1}}}\mathbf{ad}_{\mathbf{S}_{\beta_{2}}}\mathbf{ad}_{\mathbf{S}_{\beta_{3}}}\cdots\mathbf{ad}_{\mathbf{S}_{\beta_{\nu}}}\mathbf{S}_{i},\;\;\text{if}\;\alpha_{1},\ldots,\alpha_{\nu}<i \tag{11}\]
where \(\beta_{1}\leq\beta_{2}\leq\beta_{3}\leq\ldots\leq\beta_{\nu}<i\) is the ordered set of indexes \(\{\alpha_{1},\ldots,\alpha_{\nu}\}\) (which accounts for the fact that partial derivatives commute but the indexes are generally not ordered).
The relation (11) can be written using a multi-index \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{n})\in\mathbb{N}^{n}\), where \(a_{j}=0,1,2,\ldots\) is the number of partial derivations w.r.t. \(q_{j}\). Then (11) can be expressed in the compact form
\[\partial^{\mathbf{a}}\mathbf{S}_{i}=\mathbf{ad}_{\mathbf{S}_{1}}^{a_{1}}\mathbf{ad}_{\mathbf{S}_{2}}^{a_{2}}\cdots\mathbf{ad}_{\mathbf{S}_{i-1}}^{a_{i-1}}\mathbf{S}_{i}=\prod_{1\leq j\leq n}\mathbf{ad}_{\mathbf{S}_{j}}^{a_{j}}\mathbf{S}_{i} \tag{12}\]
and \(\partial^{\mathbf{a}}\mathbf{S}_{i}=\mathbf{0}\) if \(a_{j}\neq 0\) for some \(j\geq i\), where the ordered (right) matrix product (168) is used. With the truncated multi-index \(\mathbf{a}_{i-1}:=(a_{1},\ldots,a_{i-1},0,\ldots,0)\), this can be written as
\[\partial^{\mathbf{a}}\mathbf{S}_{i}=\partial^{\mathbf{a}_{i-1}}\mathbf{S}_{i},\text{ for }a_{i}=\ldots=a_{n}=0. \tag{13}\]
As an example, the partial derivatives of \(\mathbf{S}_{4}\) are
\[\frac{\partial^{3}}{\partial q_{2}^{3}}\frac{\partial}{\partial q_{1}}\frac{\partial^{2}}{\partial q_{3}^{2}}\mathbf{S}_{4}=\left[\mathbf{S}_{1},\left[\mathbf{S}_{2},\left[\mathbf{S}_{2},\left[\mathbf{S}_{2},\left[\mathbf{S}_{3},\left[\mathbf{S}_{3},\mathbf{S}_{4}\right]\right]\right]\right]\right]\right]=\mathbf{ad}_{\mathbf{S}_{1}}\mathbf{ad}_{\mathbf{S}_{2}}^{3}\mathbf{ad}_{\mathbf{S}_{3}}^{2}\mathbf{S}_{4},\;\;\frac{\partial}{\partial q_{4}}\mathbf{S}_{4}=\mathbf{0}.\]
The expression (10) for the partial derivative has been reported in [58, 59] and later e.g. in [89, 65, 99]. A geometric derivation of the partial derivative of a screw can be found in [99, 19, 20], which provides insight into the kinematic meaning of the Lie bracket in this context. Relations for partial derivatives up to 3rd order were presented in [60]. The relation (11) for the higher-order partial derivatives was first presented in [78].
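For reference, a small Python sketch of the screw Lie bracket (the \(\mathbf{ad}\) operator) and of the repeated partial derivative (11); indexing is 0-based, and `J_cols` is assumed to hold the instantaneous joint screws at the current configuration (e.g. from the Jacobian sketch in section 3).

```python
import numpy as np
# hat() as defined in the POE sketch of section 2

def ad(X):
    """6x6 matrix ad_X of the screw X = (omega, v), so that [X1, X2] = ad_{X1} X2."""
    A = np.zeros((6, 6))
    A[:3, :3] = hat(X[:3])
    A[3:, 3:] = hat(X[:3])
    A[3:, :3] = hat(X[3:])
    return A

def lie_bracket(X1, X2):
    return ad(X1) @ np.asarray(X2)

def repeated_partial(J_cols, i, alphas):
    """Repeated partial derivative of S_i w.r.t. q_{alpha_1},...,q_{alpha_nu}, Eq. (11):
    nested Lie brackets with the derivative indices sorted in ascending order
    (the innermost bracket uses the largest index)."""
    if any(a >= i for a in alphas):      # derivatives w.r.t. q_j with j >= i vanish
        return np.zeros(6)
    result = np.asarray(J_cols[i])
    for b in sorted(alphas, reverse=True):
        result = lie_bracket(J_cols[b], result)
    return result
```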
## 5 Time Derivatives of Twists and Joint Screws
### Closed form expressions for lower degree derivatives
The spatial twist of body \(i\) is given by (4), and the time derivatives of the twist \(\mathbf{V}_{i}^{s}\) can be written explicitly with the expressions (11) for the partial derivatives. For instance, the acceleration and jerk of body \(i\) are, respectively,
\[\dot{\mathbf{V}}_{i}^{s} =\sum_{j\leq i}\mathbf{S}_{j}\ddot{q}_{j}+\sum_{k<j\leq i}[\mathbf{S}_{k},\mathbf{S}_{j}]\,\dot{q}_{j}\dot{q}_{k} \tag{14}\]
\[\ddot{\mathbf{V}}_{i}^{s} =\sum_{j\leq i}\mathbf{S}_{j}\,\dddot{q}_{j}+2\sum_{k<j\leq i}[\mathbf{S}_{k},\mathbf{S}_{j}]\,\dot{q}_{k}\ddot{q}_{j}+\sum_{k<j\leq i}[\mathbf{S}_{k},\mathbf{S}_{j}]\,\ddot{q}_{k}\dot{q}_{j}+\sum_{l<k<j\leq i}[\left[\mathbf{S}_{l},\mathbf{S}_{k}\right],\mathbf{S}_{j}]\dot{q}_{l}\dot{q}_{k}\dot{q}_{j}+\sum_{l,k<j\leq i}[\mathbf{S}_{k},\left[\mathbf{S}_{l},\mathbf{S}_{j}\right]]\dot{q}_{l}\dot{q}_{k}\dot{q}_{j}\]
\[=\sum_{j\leq i}\mathbf{S}_{j}\,\dddot{q}_{j}+2\sum_{l<k<j\leq i}\left[\mathbf{S}_{l},\left[\mathbf{S}_{k},\mathbf{S}_{j}\right]\right]\dot{q}_{l}\dot{q}_{k}\dot{q}_{j}+\sum_{k<j\leq i}\left(\left[\mathbf{S}_{k},\mathbf{S}_{j}\right]\left(\ddot{q}_{k}\dot{q}_{j}+2\dot{q}_{k}\ddot{q}_{j}\right)+\left[\mathbf{S}_{k},\left[\mathbf{S}_{k},\mathbf{S}_{j}\right]\right]\dot{q}_{k}^{2}\dot{q}_{j}\right). \tag{15}\]
The final form of (15) is obtained using \([\mathbf{S}_{j},\left[\mathbf{S}_{l},\mathbf{S}_{k}\right]]=-[\mathbf{S}_{k}, \left[\mathbf{S}_{j},\mathbf{S}_{l}\right]]-[\mathbf{S}_{l},\left[\mathbf{S}_ {k},\mathbf{S}_{j}\right]]\), obtained with the Jacobi identity (167), and the skew symmetry of the Lie bracket for summation range \(l<k<j\leq i\). The manipulation of these relations gets involved for higher orders. This can be avoided using recursive expressions as derived in the next section.
**Remark 5**.: _From (14) it is clear that the time derivatives of the spatial twist are in fact screws [67], i.e. elements of \(se\left(3\right)\), since they are given in terms of Lie brackets. Therefore \(\dot{\mathbf{V}}_{i}^{s}\) is called the 'acceleration motor' [16] (see page 127) or the 'reduced acceleration' [98]._
The equation (14) for the acceleration was presented in [98]. A version of the relation (15) for the jerk was reported in [99, 39, 40]. An explicit relation for the 4th time derivative (the jounce) was reported in [69]. The relations (14) and (15) were also presented in [65].
The complexity of the relation for the \(k\)th-degree derivative \(\mathrm{D}^{\left(k\right)}\mathbf{V}_{i}^{s}\) is of order \(O\left(i^{k+1}\right)\). Closed form expressions for derivatives of any degree can yet be derived using the explicit relations (11). However, the number of different summations grows rapidly with the degree, and finding simple closed form relations becomes very difficult.
### Derivatives of arbitrary degree using recursive relations for time derivatives of instantaneous joint screws
The use of complex expressions can be avoided by means of a recursive formulation where the \(k\)th time derivative is expressed in terms of time derivatives of degree up to \(k-1\)[79]. To this end, introduce
\[\mathsf{S}_{i}(\mathbf{q},\dot{\mathbf{q}}):=\sum_{j\leq i}\mathbf{S}_{j} \left(\mathbf{q}\right)\dot{q}_{j}. \tag{16}\]
The expression for the velocity (4) then becomes \(\mathbf{V}_{i}=\mathsf{S}_{i}\left(\mathbf{q},\dot{\mathbf{q}}\right)\). The \(k\)th time derivative thus amounts to evaluating \(\mathrm{D}^{\left(k\right)}\mathbf{V}_{i}=\mathrm{D}^{\left(k\right)}\mathsf{S }_{i}(\mathbf{q},\dot{\mathbf{q}})\). With (16), the \(k\)th time derivative of \(\mathsf{S}_{i}\) is
\[\mathrm{D}^{\left(k\right)}\mathsf{S}_{i}=\sum_{j\leq i}\sum_{l=0}^{k}\binom{k }{l}\mathrm{D}^{\left(l\right)}\mathbf{S}_{j}q_{j}^{\left(k-l+1\right)} \tag{17}\]
which involves time derivatives of the instantaneous joint screws. The first derivative follows from (10), along with (16), as
\[\dot{\mathbf{S}}_{i}=\sum_{j\leq i}[\mathbf{S}_{j},\mathbf{S}_{i}]\dot{q}_{j}=[\sum_{j\leq i}\mathbf{S}_{j}\dot{q}_{j},\mathbf{S}_{i}]=[\mathsf{S}_{i},\mathbf{S}_{i}] \tag{18}\]
Higher derivatives are obtained by noting that \(\frac{\partial}{\partial q_{i}}\left[\mathbf{X},\mathbf{Y}\right]=[\frac{\partial}{\partial q_{i}}\mathbf{X},\mathbf{Y}]+[\mathbf{X},\frac{\partial}{\partial q_{i}}\mathbf{Y}]\), as
\[\mathrm{D}^{\left(k\right)}\mathbf{S}_{i}=\sum_{l=0}^{k-1}\binom{k-1}{l}[\mathrm{D}^{\left(l\right)}\mathsf{S}_{i},\mathrm{D}^{\left(k-l-1\right)}\mathbf{S}_{i}]. \tag{19}\]
The recursive relation (17) for the \(k\)-th time derivative of \(\mathsf{S}_{i}(\mathbf{q},\dot{\mathbf{q}})\) involves time derivatives of the screw coordinates \(\mathbf{S}_{j},j\leq i\) of preceding joints in the kinematic chain up to degree \(k\). The latter in turn involve derivatives of \(\mathsf{S}_{i}\) and \(\mathbf{S}_{i}\) up to degree
\(k-1\), according to (19). The relations (19) and (17) thus allow for a recursive symbolic construction, or a recursive numerical evaluation, of the time derivatives of the instantaneous joint screw coordinates \(\mathbf{S}_{i}\), and hence of the time derivatives of \(\mathsf{S}_{i}\), i.e. of the twist \(\mathbf{V}_{i}\). The advantage of this recursive form is that it is easy to implement and accounts for derivatives of arbitrary degree, in contrast to explicit relations such as (14) and (15). The complexity indeed remains \(O\left(i^{k+1}\right)\).
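The recursions (17) and (19) translate directly into a pair of mutually recursive procedures. The following Python sketch is a minimal numerical illustration (not part of the formulation above): it assumes the instantaneous joint screws \(\mathbf{S}_{i}(\mathbf{q})\) are given as 6-vectors in an \((\boldsymbol{\omega},\mathbf{v})\) ordering, and the function names and memoization scheme are invented for this sketch.

```python
import numpy as np
from math import comb

def lie_bracket(X, Y):
    """se(3) Lie bracket of two twists given as 6-vectors (omega, v)."""
    wX, vX, wY, vY = X[:3], X[3:], Y[:3], Y[3:]
    return np.concatenate([np.cross(wX, wY), np.cross(wX, vY) + np.cross(vX, wY)])

def twist_derivatives(S, qd, order):
    """
    D^(k) V_i^s for k = 0..order and all bodies i, following (17) and (19).
    S  : instantaneous joint screws S_1(q), ..., S_n(q) as 6-vectors
    qd : array with qd[m, j] = (m+1)-th time derivative of q_j (qd[0] = qdot)
    """
    n = len(S)
    dS_cache, dV_cache = {}, {}

    def dS(i, k):                        # k-th time derivative of S_i, eq. (19)
        if k == 0:
            return S[i]
        if (i, k) not in dS_cache:
            dS_cache[(i, k)] = sum(comb(k - 1, l) * lie_bracket(dV(i, l), dS(i, k - 1 - l))
                                   for l in range(k))
        return dS_cache[(i, k)]

    def dV(i, k):                        # k-th time derivative of the twist, eq. (17)
        if (i, k) not in dV_cache:
            acc = np.zeros(6)
            for j in range(i + 1):       # joints 1..i of the chain
                for l in range(k + 1):
                    acc = acc + comb(k, l) * dS(j, l) * qd[k - l, j]
            dV_cache[(i, k)] = acc
        return dV_cache[(i, k)]

    return {(i, k): dV(i, k) for i in range(n) for k in range(order + 1)}
```

The helper `dS` only ever requests twist derivatives of lower degree, so the recursion terminates at the given joint screws, mirroring the structure of (19) and (17).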
### Explicit recursive relations for lower degree derivatives
In various applications low-degree derivatives are required only, and it may be desirable to use explicit forms of the recursive relations. The time derivatives (19) of the instantaneous joint screws of order \(k=1,2,3\) are for instance
\[\dot{\mathbf{S}}_{i} =[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]=\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}\mathbf{S}_{i} \tag{20}\] \[\ddot{\mathbf{S}}_{i} =[\dot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{S}_{i}]+[\mathbf{V}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]=\left(\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{2}\right)\mathbf{S}_{i}\] (21) \[\dddot{\mathbf{S}}_{i} =[\ddot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{S}_{i}]+2[\dot{\mathbf{V}}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]+[\mathbf{V}_{i}^{\mathrm{s}},[\dot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]+[\mathbf{V}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]]\] \[=\left(\mathbf{ad}_{\ddot{\mathbf{V}}_{i}^{\mathrm{s}}}+2\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{3}\right)\mathbf{S}_{i}. \tag{22}\]
Inserting them into (17) yields
\[\dot{\mathbf{V}}_{i}^{\mathrm{s}} =\sum_{j\leq i}\left(\mathbf{S}_{j}\ddot{q}_{j}+[\mathbf{V}_{j}^{\mathrm{s}},\mathbf{S}_{j}]\dot{q}_{j}\right)=\sum_{j\leq i}\left(\ddot{q}_{j}\mathbf{I}+\dot{q}_{j}\mathbf{ad}_{\mathbf{V}_{j}^{\mathrm{s}}}\right)\mathbf{S}_{j} \tag{23}\] \[\ddot{\mathbf{V}}_{i}^{\mathrm{s}} =\sum_{j\leq i}\left(\mathbf{S}_{j}\dddot{q}_{j}+2[\mathbf{V}_{j}^{\mathrm{s}},\mathbf{S}_{j}]\ddot{q}_{j}+\left([\dot{\mathbf{V}}_{j}^{\mathrm{s}},\mathbf{S}_{j}]+[\mathbf{V}_{j}^{\mathrm{s}},[\mathbf{V}_{j}^{\mathrm{s}},\mathbf{S}_{j}]]\right)\dot{q}_{j}\right)=\sum_{j\leq i}\left(\dddot{q}_{j}\mathbf{I}+2\ddot{q}_{j}\mathbf{ad}_{\mathbf{V}_{j}^{\mathrm{s}}}+\dot{q}_{j}(\mathbf{ad}_{\dot{\mathbf{V}}_{j}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{j}^{\mathrm{s}}}^{2})\right)\mathbf{S}_{j} \tag{24}\] \[\dddot{\mathbf{V}}_{i}^{\mathrm{s}} =\sum_{j\leq i}\left(\mathbf{S}_{j}q_{j}^{(4)}+3[\mathbf{V}_{j}^{\mathrm{s}},\mathbf{S}_{j}]\dddot{q}_{j}+3\left([\dot{\mathbf{V}}_{j}^{\mathrm{s}},\mathbf{S}_{j}]+[\mathbf{V}_{j}^{\mathrm{s}},[\mathbf{V}_{j}^{\mathrm{s}},\mathbf{S}_{j}]]\right)\ddot{q}_{j}+\left([\ddot{\mathbf{V}}_{j}^{\mathrm{s}},\mathbf{S}_{j}]+2[\dot{\mathbf{V}}_{j}^{\mathrm{s}},[\mathbf{V}_{j}^{\mathrm{s}},\mathbf{S}_{j}]]+[\mathbf{V}_{j}^{\mathrm{s}},[\dot{\mathbf{V}}_{j}^{\mathrm{s}},\mathbf{S}_{j}]]+[\mathbf{V}_{j}^{\mathrm{s}},[\mathbf{V}_{j}^{\mathrm{s}},[\mathbf{V}_{j}^{\mathrm{s}},\mathbf{S}_{j}]]]\right)\dot{q}_{j}\right)\] \[=\sum_{j\leq i}\left(q_{j}^{(4)}\mathbf{I}+3\dddot{q}_{j}\mathbf{ad}_{\mathbf{V}_{j}^{\mathrm{s}}}+3\ddot{q}_{j}(\mathbf{ad}_{\dot{\mathbf{V}}_{j}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{j}^{\mathrm{s}}}^{2})+\dot{q}_{j}(\mathbf{ad}_{\ddot{\mathbf{V}}_{j}^{\mathrm{s}}}+2\mathbf{ad}_{\dot{\mathbf{V}}_{j}^{\mathrm{s}}}\mathbf{ad}_{\mathbf{V}_{j}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{j}^{\mathrm{s}}}\mathbf{ad}_{\dot{\mathbf{V}}_{j}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{j}^{\mathrm{s}}}^{3})\right)\mathbf{S}_{j} \tag{25}\]
These are recursive relations for the \(k\)th time derivative of \(\mathbf{V}_{i}^{\mathrm{s}}\) in terms of derivatives of twists of preceding bodies up to order \(k-1\) and the time derivatives of \(\mathbf{q}\left(t\right)\), so that the nested summations in (14) and (15) etc. are avoided. Due to the use of spatial twists, the summation for body \(i-1\) is repeated in the summation for body \(i\). Reusing the repeated term yields the recursive relations
\[\mathbf{V}_{i}^{\mathrm{s}} =\mathbf{V}_{i-1}^{\mathrm{s}}+\mathbf{S}_{i}\dot{q}_{i} \tag{26}\] \[\dot{\mathbf{V}}_{i}^{\mathrm{s}} =\dot{\mathbf{V}}_{i-1}^{\mathrm{s}}+\mathbf{S}_{i}\ddot{q}_{i}+[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]\dot{q}_{i}=\dot{\mathbf{V}}_{i-1}^{\mathrm{s}}+\left(\ddot{q}_{i}\mathbf{I}+\dot{q}_{i}\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}\right)\mathbf{S}_{i} \tag{27}\] \[\ddot{\mathbf{V}}_{i}^{\mathrm{s}} =\ddot{\mathbf{V}}_{i-1}^{\mathrm{s}}+\mathbf{S}_{i}\dddot{q}_{i}+2[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]\ddot{q}_{i}+\left([\dot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{S}_{i}]+[\mathbf{V}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]\right)\dot{q}_{i}=\ddot{\mathbf{V}}_{i-1}^{\mathrm{s}}+\left(\dddot{q}_{i}\mathbf{I}+2\ddot{q}_{i}\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}+\dot{q}_{i}(\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{2})\right)\mathbf{S}_{i} \tag{28}\] \[\dddot{\mathbf{V}}_{i}^{\mathrm{s}} =\dddot{\mathbf{V}}_{i-1}^{\mathrm{s}}+\mathbf{S}_{i}q_{i}^{(4)}+3[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]\dddot{q}_{i}+3\left([\dot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{S}_{i}]+[\mathbf{V}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]\right)\ddot{q}_{i}+\left([\ddot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{S}_{i}]+2[\dot{\mathbf{V}}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]+[\mathbf{V}_{i}^{\mathrm{s}},[\dot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]+[\mathbf{V}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]]\right)\dot{q}_{i}\] \[=\dddot{\mathbf{V}}_{i-1}^{\mathrm{s}}+\left(q_{i}^{(4)}\mathbf{I}+3\dddot{q}_{i}\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}+3\ddot{q}_{i}(\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{2})+\dot{q}_{i}(\mathbf{ad}_{\ddot{\mathbf{V}}_{i}^{\mathrm{s}}}+2\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{3})\right)\mathbf{S}_{i} \tag{29}\]
a) Higher-order forward kinematics of a robotic arm. The higher-order problem is to compute the time derivatives of \(\mathbf{V}_{n}^{s}\) for given derivatives of the joint variables \(\mathbf{q}\). This is necessary for smooth motion planning when, in addition to velocity and acceleration, limits on the jerk, jounce, and possibly higher derivatives must be respected.
The closed form relations (14) and (15) can be used to determine the EE acceleration and jerk. Alternatively, the time derivatives of the EE twist of arbitrary degree can be determined with the recursive relations (19) and (17), and in particular with (26)-(29) up to 3rd degree. This recursive approach yields the derivatives of the twists of all bodies.
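For derivatives up to the third degree, the body-wise recursions (26)-(28) amount to a single forward sweep over the chain. The following is a minimal sketch under the same assumptions and 6-vector twist convention as the previous listing; the function name is invented for this illustration.

```python
import numpy as np

def lie_bracket(X, Y):
    """se(3) Lie bracket of two twists X = (omega, v), Y = (omega, v) as 6-vectors."""
    wX, vX, wY, vY = X[:3], X[3:], Y[:3], Y[3:]
    return np.concatenate([np.cross(wX, wY), np.cross(wX, vY) + np.cross(vX, wY)])

def spatial_twist_derivatives(S, dq, ddq, dddq):
    """
    One forward sweep evaluating (26)-(28): spatial twist V_i^s, its first
    derivative (acceleration) and second derivative (jerk) for every body.
    S : instantaneous joint screws S_i(q) as 6-vectors; dq/ddq/dddq : joint rates.
    """
    V, dV, ddV = np.zeros(6), np.zeros(6), np.zeros(6)
    result = []
    for i, Si in enumerate(S):
        V = V + Si * dq[i]                                         # eq. (26)
        dV = dV + Si * ddq[i] + lie_bracket(V, Si) * dq[i]         # eq. (27)
        ddV = (ddV + Si * dddq[i] + 2.0 * lie_bracket(V, Si) * ddq[i]
               + (lie_bracket(dV, Si)
                  + lie_bracket(V, lie_bracket(V, Si))) * dq[i])   # eq. (28)
        result.append((V.copy(), dV.copy(), ddV.copy()))
    return result
```

Because the spatial representation is used, each body only adds the contribution of its own joint to the quantities of the preceding body, as the recursions state.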
**Remark 6**.: _The formulation in terms of spatial representation of twists seems computationally advantageous since it does not involve frame transformations, as apparent from (26)-(29) (twists of the preceding bodies are simply added). This has been confirmed in [90] where the recursive formulations for the velocity (i.e. first-order) forward kinematics using spatial, body-fixed, and hybrid twists were compared. Such an analysis comparing the higher-order formulation in spatial and body-fixed representation (sec. 11.1) does not yet exist._
b) Higher-order inverse dynamics of a robotic arm. The inverse dynamics problem is to determine the generalized forces \(\mathbf{Q}\) in the equations of motion (EOM)
\[\mathbf{M}\left(\mathbf{q}\right)\ddot{\mathbf{q}}+\mathbf{g}\left(\dot{\mathbf{q}},\mathbf{q}\right)=\mathbf{Q} \tag{30}\]
of a robotic arm and to relate them to the actuator forces. This simply requires evaluating the left hand side of (30) for given \(\mathbf{q}\left(t\right)\). Since this evaluation is time critical, efficient recursive \(O\left(n\right)\) inverse dynamics algorithms were developed [2, 5, 7, 36, 37, 38, 52, 92, 104, 105, 106]. Any such \(O\left(n\right)\) algorithm involves a recursive forward kinematics run where the configurations, twists, and accelerations of all bodies are determined from given \(\mathbf{q}\), \(\dot{\mathbf{q}}\), and \(\ddot{\mathbf{q}}\) (which solves the forward kinematics problem of the linkage). The number of operations thus depends on the number and type of frame transformations involved. As apparent from (26), no frame transformations are required when using the spatial representation of twists, and likewise wrenches (the twists \(\mathbf{V}_{i}^{s}\) and \(\mathbf{V}_{i-1}^{s}\) of body \(i\) and \(i-1\) are simply added and complemented by the contribution \(\mathbf{S}_{i}\dot{q}_{i}\) of the connecting joint). Therefore, the computationally most efficient \(O\left(n\right)\) algorithms use the spatial representation of twists such as the Articulated-Body and the Composite-Rigid-Body algorithm [37], where (26) and (27) provide relations in the forward kinematic run.
In various applications the time derivatives of the actuation forces \(\mathbf{Q}\) are necessary. Two such applications are 1) the optimal control taking into account technical limitations of the drives, i.e. on \(\mathbf{Q}\) and \(\dot{\mathbf{Q}}\)[29, 97], and 2) the flatness-based control of manipulators with series elastic actuators (SEA), which requires the second time derivative of \(\mathbf{Q}\)[31, 91, 12, 42]. The latter translates to the second time derivative of the EOM (30), which involves \(\dddot{\mathbf{q}}\) and \(\mathbf{q}^{(4)}\) (see section 8.1.4a). Recursive \(O\left(n\right)\) algorithms for evaluating the first time derivative of the EOM (30) have been proposed in [46, 47], and for the second time derivative in [12, 13], where the recursive forward kinematics run additionally determines the jerk and jounce of all bodies, respectively. These use the body-fixed representation of twists (see 11.1), and are formulated in terms of DH parameters rather than the (more user-friendly) joint screw coordinates. A formulation using body-fixed and hybrid representation of twists was presented in [86] using the Lie group formulation in terms of joint screws. Aiming at maximum efficiency, an \(O\left(n\right)\) algorithm can be developed using the spatial representation. The necessary relations for the higher-order forward kinematics recursion are given by (26)-(29).
c) Gradients of local dexterity measures of a robotic arm. A serial manipulator is an open kinematic chain with an end-effector (EE) attached at its terminal body \(n\). In a given configuration \(\mathbf{q}\in\mathbb{V}^{n}\) of the manipulator, the EE twist generated by joint velocities is determined with (5) as \(\mathbf{V}_{n}^{s}=\mathbf{J}_{n}^{s}(\mathbf{q})\,\dot{\mathbf{q}}\). This Jacobian serves as forward kinematics Jacobian, and is denoted with \(\mathbf{J}\) for simplicity. Kinematic dexterity measures assess the transmission of joint velocities to EE twists (respectively the transmission from EE wrenches to joint torques/forces) at a given configuration. The two established local dexterity (or manipulability) measures for (possibly kinematically redundant) serial manipulators [74, 113, 125, 64, 72] are
\[\mu=\sqrt{\det(\mathbf{J}\mathbf{J}^{T})},\;\sigma=1/\kappa(\mathbf{J} \mathbf{J}^{T}), \tag{31}\]
where \(\kappa\left(\mathbf{A}\right)=\left\|\mathbf{A}\right\|\left\|\mathbf{A}^{-1}\right\|\) is the condition number of a rectangular matrix \(\mathbf{A}\). Dexterity measures are used to select optimal poses and to find optimal designs of robots [1, 3, 44, 35, 62]. Also optimal motion planning aims to maximize dexterity, i.e. to find \(\mathbf{q}\) such that the above measures are maximized [25, 27, 76]. Gradient-based methods are often used, which necessitates the gradient (and possibly Hessian) w.r.t. the joint variables, i.e. the first and second partial derivatives.
The partial derivative of the measure \(\mu\) is \(\frac{\partial}{\partial q_{i}}\mu=\frac{1}{2}\frac{1}{\mu}\frac{\partial}{\partial q_{i}}\det(\mathbf{J}\mathbf{J}^{T})\). Defining the \(n\times n\) matrix, with columns \(\mathbf{A}_{i}\),
\[\mathbf{A}:=\mathbf{J}\mathbf{J}^{T}=\left(\mathbf{A}_{1}\left|\cdots\right| \mathbf{A}_{n}\right) \tag{32}\]
the partial derivative of \(\mu\) can be expressed as
\[\frac{\partial}{\partial q_{i}}\mu=\frac{1}{2}\frac{1}{\mu}\sum_{k=1}^{n}\det \left(\mathbf{A}_{1}\left|\cdots\right|\frac{\partial}{\partial q_{i}}\mathbf{ A}_{k}\left|\cdots\right|\mathbf{A}_{n}\right). \tag{33}\]
The partial derivative of the \(k\)th column of \(\mathbf{A}\) is determined with \(\partial_{q_{i}}\mathbf{A}=\partial_{q_{i}}\mathbf{J}\mathbf{J}^{T}+\left( \partial_{q_{i}}\mathbf{J}\mathbf{J}^{T}\right)^{T}\). The partial derivatives \(\partial_{q_{i}}\mathbf{J}\) of the Jacobian, defined in (7), are given algebraically in closed form with (10). The closed form expression (33) applies to non-redundant as well as redundant serial manipulators.
For a non-redundant robotic arm, \(\mathbf{J}\) is a square \(n\times n\) matrix. If this has full rank, i.e. \(\mu\neq 0\), then (33) can be simplified to
\[\frac{\partial}{\partial q_{i}}\mu=\mu\mathrm{tr}(\mathbf{J}^{-1}\partial_{q_{i}}\mathbf{J})=\mu\sum_{k=i+1}^{n}\mathbf{\bar{J}}_{k}^{T}\mathbf{ad}_{\mathbf{S}_{i}}\mathbf{S}_{k} \tag{34}\]
with \(\mathbf{\bar{J}}_{k}\) being the \(k\)th row of \(\mathbf{J}^{-1}\). The last term in (34) follows with (10). Although this formula involves the inverse Jacobian, it is computationally advantageous if the latter is already known, e.g. from solving the inverse kinematics.
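As an illustration of (34), the following sketch evaluates \(\mu\) and its gradient for a non-redundant arm directly from the instantaneous joint screws, using \(\partial\mathbf{S}_{k}/\partial q_{i}=[\mathbf{S}_{i},\mathbf{S}_{k}]\) for \(i<k\) from (10). It is a numerical sketch only; the function name and the \((\boldsymbol{\omega},\mathbf{v})\) twist ordering are assumptions of the example.

```python
import numpy as np

def lie_bracket(X, Y):
    """se(3) Lie bracket of two twists as 6-vectors (omega, v)."""
    wX, vX, wY, vY = X[:3], X[3:], Y[:3], Y[3:]
    return np.concatenate([np.cross(wX, wY), np.cross(wX, vY) + np.cross(vX, wY)])

def manipulability_and_gradient(S):
    """
    mu = sqrt(det(J J^T)) and its gradient via (34) for a non-redundant arm,
    with dS_k/dq_i = [S_i, S_k] for i < k (eq. (10)); S = list of joint screws.
    """
    J = np.column_stack(S)                       # geometric Jacobian, columns S_k
    mu = np.sqrt(abs(np.linalg.det(J @ J.T)))    # abs() only as a numerical safeguard
    Jinv = np.linalg.inv(J)
    grad = np.zeros(len(S))
    for i in range(len(S)):
        dJ = np.zeros_like(J)
        for k in range(i + 1, len(S)):           # only columns k > i depend on q_i
            dJ[:, k] = lie_bracket(S[i], S[k])
        grad[i] = mu * np.trace(Jinv @ dJ)
    return mu, grad
```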
The condition number of the \(n\times n\) matrix (32) can be expressed with the spectral norm, which is defined as \(\left\|\mathbf{A}\right\|_{2}=\sqrt{\sum_{i,j=1}^{n}A_{ij}^{2}}=\sqrt{\sum_{i =1}^{n}\mathbf{A}_{i}^{T}\mathbf{A}_{i}}\). Its partial derivatives are \(\frac{\partial}{\partial q_{i}}\left\|\mathbf{A}\right\|_{2}=\frac{1}{\left\| \mathbf{A}\right\|}\sum_{j=1}^{n}\frac{\partial}{\partial q_{i}}\mathbf{A}_{ j}^{T}\mathbf{A}_{j}\), and thus
\[\frac{\partial}{\partial q_{i}}\kappa(\mathbf{A})=\frac{\left\|\mathbf{A} \right\|_{2}}{\left\|\mathbf{A}^{-1}\right\|_{2}}\sum_{j=1}^{n}\frac{\partial }{\partial q_{i}}\mathbf{A}_{j}^{-T}\mathbf{A}_{j}^{-1}+\frac{\left\|\mathbf{A }^{-1}\right\|_{2}}{\left\|\mathbf{A}\right\|_{2}}\sum_{i=1}^{n}\frac{ \partial}{\partial q_{i}}\mathbf{A}_{j}^{T}\mathbf{A}_{j}. \tag{35}\]
The derivative of the inverse condition number is thus
\[\frac{\partial}{\partial q_{i}}\frac{1}{\kappa(\mathbf{A})} =-\frac{\frac{\partial}{\partial q_{i}}\kappa(\mathbf{A})}{\kappa ^{2}(\mathbf{A})}\] \[=-\frac{1}{\kappa^{2}(\mathbf{A})}\Big{(}\left\|\mathbf{A}^{-1} \right\|_{2}\sum_{j=1}^{n}\frac{\partial}{\partial q_{i}}\mathbf{A}_{j}^{T} \mathbf{A}_{j}+\left\|\mathbf{A}\right\|_{2}\sum_{j=1}^{n}\frac{\partial}{ \partial q_{i}}\mathbf{A}_{j}^{-T}\mathbf{A}_{j}^{-1}\Big{)}. \tag{36}\]
The last term in (36) can be evaluated using the identity \(\partial_{q_{i}}\mathbf{A}^{-1}=-\mathbf{A}^{-1}\partial_{q_{i}}\mathbf{A}\,\mathbf{A}^{-1}\). The final result follows by using \(\partial_{q_{i}}\mathbf{A}=\partial_{q_{i}}\mathbf{J}\,\mathbf{J}^{T}+\mathbf{J}\,\partial_{q_{i}}\mathbf{J}^{T}\). For non-redundant serial robots, \(\mathbf{A}\) is replaced by \(\mathbf{J}\).
The Hessian of the measure \(\mu\) in (31) follows directly from (33) as
\[\frac{\partial^{2}}{\partial q_{i}\partial q_{j}}\mu=-\frac{1}{4\mu^{3}}\sum_{ \begin{subarray}{c}k\leq n\\ k\neq l\end{subarray}}\det\left(\mathbf{A}_{1}\left|\cdots\right|\frac{ \partial}{\partial q_{i}}\mathbf{A}_{k}\left|\cdots\right|\frac{\partial}{ \partial q_{j}}\mathbf{A}_{l}\left|\cdots\right|\mathbf{A}_{n}\right)+\sum_{k \leq n}\det\left(\mathbf{A}_{1}\left|\cdots\right|\frac{\partial^{2}}{\partial q _{i}\partial q_{i}}\mathbf{A}_{k}\left|\cdots\right|\mathbf{A}_{n}\right). \tag{37}\]
For non-redundant robotic arms, at regular configurations (\(\det\mathbf{J}\neq 0\)) the Hessian is the partial derivative of (34)
\[\frac{\partial^{2}}{\partial q_{i}\partial q_{j}}\mu =\mu\left(\mathrm{tr}\left(\mathbf{J}^{-1}\partial_{q_{i}} \partial_{q_{j}}\mathbf{J}\right)+\mathrm{tr}\left(\mathbf{J}^{-1}\partial_{q _{i}}\mathbf{J}\right)\mathrm{tr}(\mathbf{J}^{-1}\partial_{q_{j}}\mathbf{J})+ \mathrm{tr}\left(\left(\mathbf{J}^{-1}\partial_{q_{i}}\mathbf{J}\right)( \mathbf{J}^{-1}\partial_{q_{j}}\mathbf{J})\right)\right)\] \[=\mu\left(\sum_{k=j+1}^{n}\left(\mathbf{\bar{J}}^{T}\mathbf{ad_{S }}\mathbf{ad_{S}}\mathbf{S}_{k}\right)+\left(\sum_{k=i+1}^{n}\mathbf{\bar{J}}_{ k}^{T}\mathbf{ad_{S}}\mathbf{S}_{k}\right)\!\!\left(\!\sum_{k=j+1}^{n}\mathbf{\bar{J}}_{ k}^{T}\mathbf{ad_{S}}\mathbf{S}_{k}\right)+\mathrm{tr}\left(\left(\mathbf{J}^{-1} \partial_{q_{i}}\mathbf{J}\right)(\mathbf{J}^{-1}\partial_{q_{j}}\mathbf{J}) \right)\right),i\leq j.\]
The explicit relations for the Hessian of the inverse condition number \(\sigma\) are not presented here.
**Remark 7**.: _Both measures are based on the left invariant metric on \(\text{SE}\left(3\right)\), which depends on the scaling of rotations and translations. This has been a central topic for the design of isotropic manipulators performing spatial EE motions [3, 4, 33, 126]. Amended measures were proposed to tackle this problem. A physically sensible left-invariant dexterity measure is obtained by incorporating the inertia tensor \(\boldsymbol{\Theta}\) of a manipulated object. This gives rise to the dexterity measure \(\boldsymbol{\mu_{\Theta}}:=\sqrt{\det(\mathbf{J}\boldsymbol{\Theta}^{-1} \mathbf{J}^{T})}\)[93]. For non-redundant manipulators \(\mathbf{J}\mathbf{Q}\mathbf{J}^{T}\) can be considered from a differential-geometric point of view as the inverse of the pull-back metric on \(\mathbb{V}^{n}\) induced by the metric on \(\text{SE}\left(3\right)\) defined by \(\boldsymbol{\Theta}\). These local measures have also been extended to non-holonomic manipulators [8]._
d) Local analysis of smooth motions of linkages with kinematic loops. Fixing the terminal body \(n\) of the kinematic chain at the ground yields a kinematic loop formed by the chain of \(n\) 1-DOF lower pairs. This leads to the geometric loop closure constraints \(f_{n}\left(\mathbf{q}\right)=\mathbf{I}\), where the KM (2) serves as _constraint mapping_ for the kinematic loop. The solution variety
\[V=\left\{\mathbf{q}\in\mathbb{V}^{n}|f_{n}\left(\mathbf{q}\right)=\mathbf{I}\right\} \tag{38}\]
serves as the configuration space (c-space) of the single-loop linkage. The local dimension of \(V\) at \(\mathbf{q}\) is the _local DOF_ of the linkage [107] denoted \(\delta_{\text{loc}}\left(\mathbf{q}\right)=\dim_{\mathbf{q}}V\).
The analysis of the c-space (38) is a central topic for the mobility and reconfiguration analysis of mechanisms. Since an explicit solution of the geometric constraints is impossible in general, a local analysis aims to identify tangent vectors to the motion curve in \(V\), i.e. possible velocities \(\dot{\mathbf{q}}\), through a general configuration \(\mathbf{q}\in V\) (singular or regular point of \(V\)). To this end, loop constraints expressed by the POE were first explored in [50, 51] and lead to significant contributions to the higher-order mobility analysis of general linkages [87, 88, 14, 65, 99]. The central elements of any such method are the higher-order time derivatives of the loop closure constraints. Such formulations of up to 3rd order were reported in [65], up to 4th order in [99]. The relations reported in this paper provide constraints of arbitrary order. These were already applied to mobility and singularity analysis in [87].
The velocity constraints are expressed using (4) and (16) as
\[\mathbf{0}=\mathbf{J}_{n}^{\text{s}}\left(\mathbf{q}\right)\dot{\mathbf{q}}= \mathsf{S}_{n}\left(\mathbf{q},\dot{\mathbf{q}}\right). \tag{39}\]
The differential (instantaneous) DOF of the linkage at \(\mathbf{q}\) is \(\delta_{\text{diff}}\left(\mathbf{q}\right)=n-\text{rank}\ \mathbf{J}_{n}^{\text{s}} \left(\mathbf{q}\right)\).
The \(k\)th time derivative of the velocity constraints (39) is
\[H^{(k)}\!\left(\mathbf{q},\dot{\mathbf{q}},\ldots,\mathbf{q}^{(k)}\right)= \mathbf{0} \tag{40}\]
with
\[H^{(1)}\!\left(\mathbf{q},\dot{\mathbf{q}}\right):=\mathsf{S}_{n}\left(\mathbf{q},\dot{\mathbf{q}}\right),\ H^{(2)}\!\left(\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}}\right):=\frac{d}{dt}\mathsf{S}_{n}\left(\mathbf{q},\dot{\mathbf{q}}\right),\ \ldots\,H^{(i)}\!\left(\mathbf{q},\dot{\mathbf{q}},\ldots,\mathbf{q}^{(i)}\right):=\mathrm{D}^{(i-1)}\mathsf{S}_{n}\left(\mathbf{q},\dot{\mathbf{q}}\right). \tag{41}\]
The mappings (41) are evaluated using (17), since the explicit relations (26)-(29) may not be sufficient, as the necessary order of such an analysis is not known a priori.
A finite motion through \(\mathbf{q}\in V\) satisfies the velocity constraints and all its time derivatives. Thus, a vector \(\dot{\mathbf{q}}\) is tangent to a curve through \(\mathbf{q}\in V\) if and only if \(H^{(1)}\!\left(\mathbf{q},\dot{\mathbf{q}}\right)=\mathbf{0}\) and if there is a \(\ddot{\mathbf{q}}\in\mathbb{R}^{n}\) such that \(H^{(2)}\!\left(\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}}\right)=\mathbf{0}\), and so forth. This is formalized by the _kinematic tangent cone_[65, 87], denoted \(C_{\mathbf{q}}^{\text{K}}V\), which is the set of possible velocities at \(\mathbf{q}\in V\). It is determined by the sequence
\[C_{\mathbf{q}}^{\text{K}}V=K_{\mathbf{q}}^{\kappa}\subset\ldots\subset K_{\mathbf{q}}^{3}\subset K_{\mathbf{q}}^{2}\subset K_{\mathbf{q}}^{1} \tag{42}\]
where each
\[K_{\mathbf{q}}^{i}:=\left\{\mathbf{x}|\exists\mathbf{y},\mathbf{z},\ldots\in \mathbb{R}^{n}:H^{(1)}\!\left(\mathbf{q},\mathbf{x}\right)=\mathbf{0},H^{(2) }\!\left(\mathbf{q},\mathbf{x},\mathbf{y}\right)=\mathbf{0},H^{(3)}\!\left( \mathbf{q},\mathbf{x},\mathbf{y},\mathbf{z}\right)=\mathbf{0},\cdots H^{(i)} \!\left(\mathbf{q},\mathbf{x},\mathbf{y},\mathbf{z},\ldots\right)=\mathbf{0} \right\}. \tag{43}\]
is a cone (rather than a vector space) satisfying the inclusion \(K_{\mathbf{q}}^{i}\subset K_{\mathbf{q}}^{i-1}\). There is a finite order \(\kappa\) so that (42) terminates. The kinematic tangent cone characterizes the tangent aspects of smooth finite motions, i.e. smooth finite curves in \(V\), through \(\mathbf{q}\).
This applies to regular as well as to singular configurations, so that \(V\) does not have to be a smooth manifold, and thus allows for investigation of singularities and linkage mobility.
The limitation of this local analysis is that it only reveals tangents to smooth motions but cannot capture non-smooth finite curves through a c-space singularity, where no tangent is defined (see sec. 6.3).
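The first two members of the sequence (42)-(43) can be generated symbolically once the joint screws at \(\mathbf{q}\) are known, since \(H^{(1)}\) and \(H^{(2)}\) are (39) and the time derivative (14) of the closure constraint with \(\dot{\mathbf{q}},\ddot{\mathbf{q}}\) replaced by the unknowns \(\mathbf{x},\mathbf{y}\). The following SymPy sketch sets up exactly these two conditions; it is an illustration only and does not implement the complete sequence (42).

```python
import sympy as sp

def cross(a, b):
    return sp.Matrix([a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]])

def bracket(X, Y):
    """se(3) bracket of 6x1 symbolic twists (omega, v)."""
    return sp.Matrix.vstack(cross(X[:3, :], Y[:3, :]),
                            cross(X[:3, :], Y[3:, :]) + cross(X[3:, :], Y[:3, :]))

def cone_conditions(Y, x, y):
    """
    H^(1) and H^(2) at a configuration with joint screws Y, i.e. (39) and the
    derivative (14) of the closure constraint with qdot -> x and qddot -> y.
    """
    n = len(Y)
    H1 = sum((Y[j] * x[j] for j in range(n)), sp.zeros(6, 1))
    H2 = sum((Y[j] * y[j] for j in range(n)), sp.zeros(6, 1))
    for j in range(n):
        for k in range(j):                       # bracket terms [Y_k, Y_j] x_j x_k, k < j
            H2 += bracket(Y[k], Y[j]) * x[j] * x[k]
    return sp.simplify(H1), sp.simplify(H2)
```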
Example 1: 4C-Linkage with a shaky motion mode. As a simple example consider the single-loop 4C linkage in fig. 2, which was reported in [69]. The cylindrical joints are modeled as combination of a revolute and prismatic joint. The screw coordinates in the reference configuration \(\mathbf{q}_{0}=\mathbf{0}\), represented in the shown frame \(\mathcal{F}_{0}\), are
\[\mathbf{Y}_{1}=\mathbf{Y}_{5}=\left(1,0,0,0,0,0\right)^{T},\mathbf{Y}_{2}= \mathbf{Y}_{6}=\left(0,0,0,1,0,0\right)^{T},\mathbf{Y}_{3}=\mathbf{Y}_{7}= \left(0,1,0,0,0,0\right)^{T},\mathbf{Y}_{4}=\mathbf{Y}_{8}=\left(0,0,0,0,1,0 \right)^{T}. \tag{44}\]
The mappings (41) are evaluated with (17). For instance, up to order 3 these are
\[H^{(1)}(\mathbf{q}_{0},\mathbf{x})=\left(\begin{array}{c}x_{1}+x_{5}\\ x_{3}+x_{7}\\ 0\\ x_{2}+x_{6}\\ x_{4}+x_{8}\\ 0\end{array}\right),\;\;H^{(2)}(\mathbf{q}_{0},\mathbf{x},\mathbf{y})=\left(\begin{array}{c}y_{1}+y_{5}\\ y_{3}+y_{7}\\ x_{5}\left(-x_{3}+x_{7}\right)+x_{1}\left(x_{3}+x_{7}\right)\\ y_{2}+y_{6}\\ y_{4}+y_{8}\\ -x_{4}x_{5}-x_{3}x_{6}+x_{6}x_{7}+x_{2}\left(x_{3}+x_{7}\right)+x_{5}x_{8}+x_{1}\left(x_{4}+x_{8}\right)\end{array}\right)\]
\[H^{(3)}(\mathbf{q}_{0},\mathbf{x},\mathbf{y},\mathbf{z})=\left(\begin{array} []{c}-x_{3}x_{5}\left(x_{3}-2x_{7}\right)+z_{1}+z_{5}\\ -x_{7}\left(x_{1}+x_{5}\right)^{2}-x_{1}x_{3}\left(x_{1}-2x_{8}\right)+z_{3}+z _{7}\\ x_{3}y_{1}+2x_{1}y_{3}-x_{5}y_{3}-2x_{3}y_{5}+x_{7}\left(y_{1}+y_{5}\right)+2 \left(x_{1}+x_{5}\right)y_{7}\\ 2\left(-x_{4}x_{5}+x_{8}x_{5}+x_{6}x_{7}\right)x_{3}-x_{3}^{2}x_{6}+2x_{4}x_{ 3}x_{7}+z_{2}+z_{6}\\ -2\left(-x_{4}x_{5}+x_{8}x_{5}-x_{3}x_{6}+x_{6}x_{7}+x_{2}\left(x_{3}+x_{7} \right)\right)x_{1}-x_{1}^{2}\left(x_{4}+x_{8}\right)-x_{5}\left(-2x_{2}x_{3}+ \left(2x_{2}+x_{6}\right)x_{7}+x_{5}x_{8}\right)+z_{4}+z_{8}\\ -x_{6}y_{3}+2x_{1}y_{4}-x_{5}y_{4}+x_{8}\left(y_{1}+y_{5}\right)+x_{4}\left(y_ {1}-2y_{5}\right)+x_{7}\left(y_{2}+y_{6}\right)+x_{3}\left(y_{2}-2y_{6}\right) +2x_{6}y_{7}+2x_{2}\left(y_{3}+y_{7}\right)+2\left(x_{1}+x_{5}\right)y_{8} \end{array}\right).\]
The first-order cone defined by \(H^{(1)}(\mathbf{q}_{0},\mathbf{x})=0\) is
\[K^{1}_{\mathbf{q}_{0}}=\{\mathbf{x}=(t,s,u,v,-t,-s,-u,-v)|t,s,u,v\in\mathbb{R} \}\subset\mathbb{R}^{8}. \tag{45}\]
The second-order cone is implicitly defined by the system of second-order polynomials \(H^{(1)}(\mathbf{q}_{0},\mathbf{x})=H^{(2)}(\mathbf{q}_{0},\mathbf{x},\mathbf{ y})=0\) as
\[K^{2}_{\mathbf{q}_{0}}=\{\mathbf{x}=(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6},x_{7}, x_{8})\in\mathbb{R}^{8}|x_{1}=-x_{5},x_{2}=-x_{6},x_{3}=-x_{7},x_{4}=-x_{8},x_{5}x_{7}= 0,x_{6}x_{7}=-x_{5}x_{8},x_{5}^{2}x_{8}=0\}.\]
Fig. 2: A single-loop linkage comprising four cylindrical joints. The latter are modeled as a combination of a revolute and a prismatic joint. E.g. joint 1 and 2 represent one cylindrical joint.
The polynomial system can be factorized so that \(K^{2}_{\mathbf{q}_{0}}\) is the union of three vector spaces
\[K^{2}_{\mathbf{q}_{0}}=K^{2(I)}_{\mathbf{q}_{0}}\cup K^{2(II)}_{ \mathbf{q}_{0}}\cup K^{2(III)}_{\mathbf{q}_{0}}\quad\text{with}\quad K^{2(I)}_{ \mathbf{q}_{0}}=\{\mathbf{x}=(0,0,u,v,0,0,-u,-v)|u,v\in\mathbb{R}\} \tag{46}\] \[K^{2(II)}_{\mathbf{q}_{0}}=\{\mathbf{x}=(0,s,0,v,0,-s,0,-v)|s,v\in \mathbb{R}\}\] \[K^{2(III)}_{\mathbf{q}_{0}}=\{\mathbf{x}=(t,s,0,0,-t,-s,0,0)|t,s \in\mathbb{R}\}.\]
The third- and higher-order cones are \(K^{i}_{\mathbf{q}_{0}}=K^{2}_{\mathbf{q}_{0}},i\geq 3\). The kinematic tangent cone is thus \(C^{\mathrm{K}}_{\mathbf{q}_{0}}V=K^{2}_{\mathbf{q}_{0}}\). Each of the three vector spaces in (46) is the tangent space to a manifold passing through \(\mathbf{q}_{0}\). The latter are the motion modes of the linkage (shown in Fig. 3). The point \(\mathbf{q}_{0}\) is a c-space singularity since \(K^{2}_{\mathbf{q}_{0}}\) is not a vector space (the reverse is not necessarily true). It is a bifurcation point so that any smooth curve through \(\mathbf{q}_{0}\) corresponds to one of the motion modes where the linkage has DOF \(\delta=2\).
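The first-order cone (45) of this example can be verified directly, since it is the nullspace of the constraint Jacobian assembled from the screw coordinates (44). A short SymPy check (variable names are arbitrary):

```python
import sympy as sp

# Joint screw coordinates (44) of the 4C linkage at q0 = 0, stacked as columns of J(q0).
screws = [(1, 0, 0, 0, 0, 0), (0, 0, 0, 1, 0, 0), (0, 1, 0, 0, 0, 0), (0, 0, 0, 0, 1, 0),
          (1, 0, 0, 0, 0, 0), (0, 0, 0, 1, 0, 0), (0, 1, 0, 0, 0, 0), (0, 0, 0, 0, 1, 0)]
J0 = sp.Matrix(screws).T          # 6 x 8 constraint Jacobian at the reference configuration

# K^1_{q0} is the nullspace of J(q0).
for b in J0.nullspace():
    print(b.T)
```

Each nullspace basis vector pairs a joint rate with the negative of the corresponding rate in the second half of the chain, reproducing the relations \(x_{5}=-x_{1},\ldots,x_{8}=-x_{4}\) of (45).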
Example 2: Double Evans-Linkage. There is a recent interest in linkages possessing singularities through which no smooth motions are possible (in contrast to almost all singularities considered in the literature, which are characterized by the intersection of motion modes). That is, the kinematic tangent cone (and any analysis based on higher-order time derivatives) fails to reveal the tangents to these non-smooth motions. So far, this phenomenon is known for multi-loop 1-DOF linkages whose c-space possesses cusp singularities, for which the planar linkage proposed in [28] is a well-known example, but other (spatial) mechanisms were reported recently [70].
The planar linkage constructed by combination of two Evans-linkages in fig. 4a) is another 1 DOF example, which was presented in [70]. This linkage possesses three independent kinematic loops, the fundamental cycles \(\Lambda_{1},\Lambda_{2}\), and \(\Lambda_{3}\), indicated in the topological graph in fig 4b). For each of these loops the closure constraints are formulated [85, 87]. Details are omitted here, but the detailed calculation can be found in the accompanying Mathematica files. The \(i\)th-order cone is then determined as
\[K^{i}_{\mathbf{q}}:=\left\{\mathbf{x}|\exists\mathbf{y},\mathbf{z},\ldots\in\mathbb{R}^{n}:H^{(1)}_{l}(\mathbf{q},\mathbf{x})=\mathbf{0},\cdots H^{(i)}_{l}(\mathbf{q},\mathbf{x},\mathbf{y},\mathbf{z},\ldots)=\mathbf{0},l=1,2,3\right\} \tag{47}\]
where the index \(l\) indicates the kinematic loop. The screw coordinates in the singular configuration \(\mathbf{q}_{0}=\mathbf{0}\) are presented in [70]. The above analysis yields the cones
\[K^{1}_{\mathbf{q}_{0}} = \{x_{1}=s,x_{2}=-2s,x_{3}=s,x_{4}=0,x_{6}=-2t,x_{7}=t,x_{8}=0,x_{ 9}=2s-t,x_{10}=-s+2t;\;s,t\in\mathbb{R}\} \tag{48}\] \[K^{i}_{\mathbf{q}_{0}} = \{x_{1}=t,x_{2}=-2t,x_{3}=t,x_{4}=0,x_{6}=-2t,x_{7}=t,x_{8}=0,x_{ 9}=t,x_{10}=t;\;t\in\mathbb{R}\},i=2,3,4 \tag{49}\]
and \(K^{5}_{\mathbf{q}_{0}}=\{\mathbf{0}\}\). Thus \(C^{\mathrm{K}}_{\mathbf{q}_{0}}V=\{\mathbf{0}\}\), which indicates that no smooth curve exists through the singularity \(\mathbf{q}_{0}\). It is known, however, that the linkage is mobile with finite DOF \(\delta=1\)[70]. The above analysis does not reveal this mobility since the linkage must
stop when traversing this singularity, i.e. there is no smooth motion through that configuration. Deducing the correct finite DOF necessitates an analysis of the solution variety \(V\), as discussed in the next section.
## 6 Series Expansion of the Kinematic Mapping
### Taylor series expansion of the kinematic mapping
The kinematic mapping (2) is analytic and thus admits a Taylor series expansion. W.l.o.g. the KM \(f_{n}\) of the terminal body is considered, and the index \(n\) is omitted. The Taylor series of \(f\) at \(\mathbf{q}\in\mathbb{V}^{n}\) is
\[f\left(\mathbf{q}+\mathbf{x}\right)=f\left(\mathbf{q}\right)+\sum_{k\geq 1}\frac{1}{k!}\left(\sum_{1\leq i\leq n}x_{i}\frac{\partial}{\partial q_{i}}\right)^{k}f=f\left(\mathbf{q}\right)+\sum_{k\geq 1}\frac{1}{k!}\mathrm{d}^{k}f_{\mathbf{q}}\left(\mathbf{x}\right) \tag{50}\]
with the \(k\)th differential of \(f\)
\[\mathrm{d}^{k}f_{\mathbf{q}}\left(\mathbf{x}\right) =\Bigg{(}\sum_{1\leq i\leq n}x_{i}\frac{\partial}{\partial q_{i}}\Bigg{)}^{k}f=\sum_{|\mathbf{a}|=k}\frac{k!}{a_{1}!a_{2}!\cdots a_{n}!}\frac{\partial^{k}f}{\partial q_{1}^{a_{1}}\partial q_{2}^{a_{2}}\cdots\partial q_{n}^{a_{n}}}x_{1}^{a_{1}}x_{2}^{a_{2}}\cdots x_{n}^{a_{n}}=\sum_{|\mathbf{a}|=k}\frac{k!}{\mathbf{a}!}\mathbf{x}^{\mathbf{a}}\partial^{\mathbf{a}}f=\sum_{\alpha_{1},\alpha_{2},\ldots,\alpha_{k}}x_{\alpha_{1}}x_{\alpha_{2}}\cdots x_{\alpha_{k}}\frac{\partial^{k}f}{\partial q_{\alpha_{1}}\partial q_{\alpha_{2}}\ldots\partial q_{\alpha_{k}}} \tag{51}\]
evaluated at \(\mathbf{q}\) (which is not indicated for sake of simplicity).
**Remark 8**.: _The differential can be expressed as \(\mathrm{d}^{k}f_{\mathbf{q}}\left(\mathbf{x}\right)=\left(\mathrm{D}_{\mathbf{ q}^{T}}^{k}f\right)(\mathbf{x}^{\otimes k}\otimes\mathbf{I}_{4})\), where \(\otimes\) denotes the Kronecker product and \(\mathbf{x}^{\otimes k}\) is \(k\)-fold tensor product \(\mathbf{x}^{\otimes k}=\mathbf{x}\otimes\ldots\otimes\mathbf{x}\)[109, 115, 57]. Moreover, higher-order derivatives can be expressed compactly as Kronecker products. Since this merely serves for 'bookkeeping', but does not yield more insight, it will not be used in this paper._
Fig. 4: a) Double-Evans linkage in a singular configuration, where the c-space possesses a cusp (link numbers are omitted for clarity). b) Topological graph and the fundamental cycles used for the analysis.
### Higher-order differentials of the kinematic mapping
The point of departure is the right-trivialized differential of the KM \(f:\mathbb{V}^{n}\to SE\left(3\right)\) at a configuration \(\mathbf{q}\in\mathbb{V}^{n}\), which yields
\[\mathrm{d}f_{\mathbf{q}}\left(\mathbf{x}\right)f\left(\mathbf{q}\right)^{-1}= \sum_{i\leq n}\widehat{\mathbf{S}}_{i}\left(\mathbf{q}\right)x_{i}=\widehat{ \mathbf{S}}_{n}\left(\mathbf{q},\mathbf{x}\right) \tag{52}\]
with \(\mathbf{S}_{n}\) in (8), where \(f\left(\mathbf{q}\right)^{-1}\) is the inverse of the matrix \(f\left(\mathbf{q}\right)\in SE\left(3\right)\). Notice that higher-order differentials \(\mathrm{d}^{k}f_{\mathbf{q}}\) of the KM are not the \(k\)-th differentials of \(\mathbf{S}_{n}\) when \(f\left(\mathbf{q}\right)=\mathbf{I}\), since the equality \(\mathrm{d}f_{\mathbf{q}}\left(\mathbf{x}\right)=\mathrm{d}f_{\mathbf{q}}\left(\mathbf{x}\right)f\left(\mathbf{q}\right)^{-1}\) only holds for \(k=1\). Introduce the following mappings \(h_{\mathbf{q}}^{(k)}:\mathbb{R}^{n}\to se\left(3\right)\)
\[h_{\mathbf{q}}^{\left(1\right)}(\mathbf{x}):=\sum_{i\leq n} \widehat{\mathbf{S}}_{i}\left(\mathbf{q}\right)x_{i},\;\;h_{\mathbf{q}}^{ \left(2\right)}(\mathbf{x}):=\sum_{j,i\leq n}x_{i}x_{j}\frac{\partial\widehat{ \mathbf{S}}_{i}}{\partial q_{j}},\;\;\;h_{\mathbf{q}}^{\left(3\right)}( \mathbf{x}):=\sum_{l,j,i\leq n}x_{i}x_{j}x_{l}\frac{\partial^{2}\widehat{ \mathbf{S}}_{i}}{\partial q_{l}\partial q_{j}}\] \[h_{\mathbf{q}}^{\left(k\right)}(\mathbf{x}):=\sum_{\alpha_{1}, \ldots,\alpha_{k-1},i\leq n}x_{i}x_{\alpha_{1}}\cdots x_{\alpha_{k-1}}\frac{ \partial^{k-1}\widehat{\mathbf{S}}_{i}}{\partial q_{\alpha_{1}}\cdots\partial q _{\alpha_{k-1}}}=\sum_{i\leq n}x_{i}\mathrm{d}^{k-1}\widehat{\mathbf{S}}_{i, \mathbf{q}}\left(\mathbf{x}\right),k>1 \tag{53}\]
where \(\widehat{\mathbf{S}}_{i}\) is the \(4\times 4\) matrix in (148). The involved \(k\)-th differential of \(\mathbf{S}_{i}\) at \(\mathbf{q}\) is defined as
\[\mathrm{d}^{k}\mathbf{S}_{i,\mathbf{q}}\left(\mathbf{x}\right)=\sum_{\alpha_{1},\ldots,\alpha_{k}<i}x_{\alpha_{1}}\cdots x_{\alpha_{k}}\frac{\partial^{k}\mathbf{S}_{i}}{\partial q_{\alpha_{1}}\cdots\partial q_{\alpha_{k}}}=\sum_{\left|\mathbf{a}_{i-1}\right|=k}\frac{k!}{\mathbf{a}_{i-1}!}\mathbf{x}^{\mathbf{a}_{i-1}}\partial^{\mathbf{a}_{i-1}}\mathbf{S}_{i} \tag{54}\]
The last term in (54) follows by taking into account the index range in (11) and (12). Application of Leibniz' rule to (52) yields
\[h_{\mathbf{q}}^{\left(k\right)}(\mathbf{x})=\sum_{i=0}^{k-1}\binom{k-1}{i} \mathrm{d}^{i+1}f_{\mathbf{q}}\left(\mathbf{x}\right)\mathrm{d}^{k-i-1}f_{ \mathbf{q}}^{-1}\left(\mathbf{x}\right)=\mathrm{d}^{k}f_{\mathbf{q}}\left( \mathbf{x}\right)f\left(\mathbf{q}\right)^{-1}+\sum_{i=1}^{k-1}\binom{k-1}{i-1 }\mathrm{d}^{i+1}f_{\mathbf{q}}\left(\mathbf{x}\right)\mathrm{d}^{k-i}f_{ \mathbf{q}}^{-1}\left(\mathbf{x}\right) \tag{55}\]
where, with slight abuse of notation, \(\mathrm{d}^{k}f_{\mathbf{q}}^{-1}\) denotes the differential of the inverse of the matrix \(f\left(\mathbf{q}\right)\in SE\left(3\right)\) as function of \(\mathbf{q}\). The differential of \(f\) follows from (55) as
\[\mathrm{d}^{k}f_{\mathbf{q}}\left(\mathbf{x}\right)=h_{\mathbf{q}}^{\left(k \right)}(\mathbf{x})\,f\left(\mathbf{q}\right)-\sum_{i=1}^{k-1}\binom{k-1}{i- 1}\mathrm{d}^{i}f_{\mathbf{q}}\left(\mathbf{x}\right)\mathrm{d}^{k-i}f_{ \mathbf{q}}^{-1}\left(\mathbf{x}\right)f\left(\mathbf{q}\right). \tag{56}\]
This is a recursive relation for \(\mathrm{d}^{k}f_{\mathbf{q}}\) in terms of \(\mathrm{d}^{i}f_{\mathbf{q}},i<k\), but also involves the differentials of \(f\left(\mathbf{q}\right)^{-1}\) up to order \(k-1\).
The differential of order \(k\) of \(\mathbf{I}=f\left(\mathbf{q}\right)f\left(\mathbf{q}\right)^{-1}\) leads to
\[\mathbf{0}=\sum_{i=0}^{k}\binom{k}{i}\mathrm{d}^{i}f_{\mathbf{q}}\left(\mathbf{ x}\right)\mathrm{d}^{k-i}f_{\mathbf{q}}^{-1}(\mathbf{x})=f\left(\mathbf{q} \right)\mathrm{d}^{k}f_{\mathbf{q}}^{-1}(\mathbf{x})+\sum_{i=1}^{k}\binom{k}{i }\mathrm{d}^{i}f_{\mathbf{q}}\left(\mathbf{x}\right)\mathrm{d}^{k-i}f_{ \mathbf{q}}^{-1}(\mathbf{x}) \tag{57}\]
and thus
\[\mathrm{d}^{k}f_{\mathbf{q}}^{-1}\left(\mathbf{x}\right)=-f\left(\mathbf{q} \right)^{-1}\sum_{i=1}^{k}\binom{k}{i}\mathrm{d}^{i}f_{\mathbf{q}}\left( \mathbf{x}\right)\mathrm{d}^{k-i}f_{\mathbf{q}}^{-1}(\mathbf{x}). \tag{58}\]
The relation (58) involves \(\mathrm{d}^{i}f_{\mathbf{q}}^{-1},i<k\) and \(\mathrm{d}^{i}f_{\mathbf{q}},i\leq k\). Recursive evaluation of the latter, using (56), requires \(\mathrm{d}^{i}f_{\mathbf{q}},i<k\) and thus \(\mathrm{d}^{i}f_{\mathbf{q}}^{-1}(\mathbf{x}),i<k\).
It remains to determine the mappings \(h_{\mathbf{q}}^{\left(k\right)}\), which, according to (53), requires the differentials of \(\mathbf{S}_{i}\). The differential follows with (10) as
\[\mathrm{d}\mathbf{S}_{i,\mathbf{q}}\left(\mathbf{x}\right)=\sum_{j<i}[\mathbf{ S}_{j}\left(\mathbf{q}\right),\mathbf{S}_{i}\left(\mathbf{q}\right)]x_{j}. \tag{59}\]
Applying Leibniz' rule to (59) yields the relation for general order \(k\) in terms of differentials of order less than \(k\)
\[\mathrm{d}^{k}\mathbf{S}_{i,\mathbf{q}}\left(\mathbf{x}\right)=\sum_{j<i}\sum_{l=0}^{k-1}\binom{k-1}{l}[\mathrm{d}^{l}\mathbf{S}_{j,\mathbf{q}}\left(\mathbf{x}\right),\mathrm{d}^{k-l-1}\mathbf{S}_{i,\mathbf{q}}\left(\mathbf{x}\right)]x_{j},k\geq 1. \tag{60}\]
Inserting this into (53) finally yields
\[h_{\mathbf{q}}^{(k)}(\mathbf{x})=\sum_{j<i\leq n}x_{i}x_{j}\sum_{l=0}^{k-2}\binom{k-2}{l}[\mathrm{d}^{l}\widehat{\mathbf{S}}_{j,\mathbf{q}}\left(\mathbf{x}\right),\mathrm{d}^{k-l-2}\widehat{\mathbf{S}}_{i,\mathbf{q}}\left(\mathbf{x}\right)],k>1. \tag{61}\]
In summary, the differential \(\mathrm{d}^{k}f_{\mathbf{q}}\) of the kinematic mapping (2) at \(\mathbf{q}\) is determined recursively via the relation (56). This involves \(\mathrm{d}^{k}f_{\mathbf{q}}^{-1}\), which is determined recursively by (58) in terms of \(\mathrm{d}^{l}f_{\mathbf{q}},i<k\). It also involves \(h_{\mathbf{q}}^{(k)}\), which deliver the actual contributions of the joint screw according to (61) along with (60). For \(k=0\) it is \(\mathrm{d}^{k}\mathbf{S}_{j,\mathbf{q}}=\mathbf{S}_{j}\left(\mathbf{q}\right)\) and \(\mathrm{d}^{k}f_{\mathbf{q}}^{-1}=f^{-1}\left(\mathbf{q}\right)\).
The \(\mathrm{d}^{k}f_{\mathbf{q}}\left(\mathbf{x}\right),\mathrm{d}^{k}f_{\mathbf{q}}^{-1}\left(\mathbf{x}\right),h_{\mathbf{q}}^{(k)}\left(\mathbf{x}\right),\mathrm{d}^{k}\mathbf{S}_{i,\mathbf{q}}\left(\mathbf{x}\right)\) are homogeneous polynomials in \(\mathbf{x}\) of degree \(k\).
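The recursive construction summarized above is straightforward to implement numerically. The sketch below evaluates \(\mathrm{d}^{k}f_{\mathbf{q}}(\mathbf{x})\) from the \(4\times 4\) matrices of the instantaneous joint screws, following (60), (61), (56), and (58). It is an illustrative implementation only (it is not the accompanying Mathematica package), and the helper names are invented for this sketch.

```python
import numpy as np
from math import comb

def hat(S):
    """4x4 matrix form of a twist S = (omega, v)."""
    w, v = S[:3], S[3:]
    W = np.array([[0.0, -w[2], w[1]], [w[2], 0.0, -w[0]], [-w[1], w[0], 0.0]])
    M = np.zeros((4, 4)); M[:3, :3] = W; M[:3, 3] = v
    return M

def km_differentials(S_hat, f, x, order):
    """Recursive evaluation of d^k f_q(x), k = 1..order, via (56), (58), (60), (61)."""
    n, Z = len(S_hat), np.zeros((4, 4))
    finv = np.linalg.inv(f)
    dS = [list(S_hat)]                       # dS[k][i] = d^k S_{i,q}(x), eq. (60)
    for k in range(1, max(order - 1, 1)):
        row = []
        for i in range(n):
            acc = Z.copy()
            for j in range(i):
                for l in range(k):
                    C = dS[l][j] @ dS[k - 1 - l][i] - dS[k - 1 - l][i] @ dS[l][j]
                    acc += comb(k - 1, l) * C * x[j]
            row.append(acc)
        dS.append(row)

    def h(k):                                # mappings h_q^(k), eqs. (53) and (61)
        if k == 1:
            return sum((S_hat[i] * x[i] for i in range(n)), Z.copy())
        acc = Z.copy()
        for i in range(n):
            for j in range(i):
                for l in range(k - 1):
                    C = dS[l][j] @ dS[k - 2 - l][i] - dS[k - 2 - l][i] @ dS[l][j]
                    acc += comb(k - 2, l) * C * x[i] * x[j]
        return acc

    df, dfinv = {0: f}, {0: finv}
    for k in range(1, order + 1):            # recursions (56) and (58)
        df[k] = h(k) @ f - sum((comb(k - 1, i - 1) * df[i] @ dfinv[k - i] @ f
                                for i in range(1, k)), Z.copy())
        dfinv[k] = -finv @ sum((comb(k, i) * df[i] @ dfinv[k - i]
                                for i in range(1, k + 1)), Z.copy())
    return df
```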
**Remark 9**.: _The mappings (53) can be evaluated using the explicit expression (12) for the partial derivatives as_
\[h_{\mathbf{q}}^{(k+1)}(\mathbf{x}) =\sum_{\alpha_{1},\ldots,\alpha_{k},i\leq n}x_{i}x_{\alpha_{1}} \cdots x_{\alpha_{k}}\frac{\partial^{l}\widehat{\mathbf{S}}_{i}}{\partial q_{ \alpha_{1}}\cdots\partial q_{\alpha_{k}}}=\sum_{i\leq n}x_{i}\mathrm{d}^{k} \widehat{\mathbf{S}}_{i,\mathbf{q}}\left(\mathbf{x}\right)=\sum_{i\leq n}\sum _{|\mathbf{a}_{i-1}|=k}\frac{k!}{\mathbf{a}_{i-1}!}x_{i}\mathbf{x}^{\alpha_{i -1}}\widehat{\mathbf{S}}_{i}\] \[=\sum_{i\leq n}\sum_{|\mathbf{a}_{i-1}|=k}\frac{k!}{\mathbf{a}_{ i-1}!}x_{i}\mathbf{x}^{\alpha_{i-1}}\Big{(}\prod_{j=1}^{i-1}\mathbf{ad}_{ \mathbf{S}_{j}}^{a_{j}}(\mathbf{S}_{i})\Big{)}. \tag{62}\]
_Evaluation of (62) in closed form requires repeated evaluation of all involved Lie brackets, respectively the adjoint operations. The recursive formulation (61), on the other hand, is computationally more efficient and easier to implement._
**Remark 10**.: _If a low-order truncation of the Taylor series (50) is sufficient, (56) along with (58) can be rolled out explicitly and the following relations (up to order 4, for instance) be used directly_
\[\mathrm{d}^{1}f_{\mathbf{q}}\left(\mathbf{x}\right) =h_{\mathbf{q}}^{(1)}(\mathbf{x})\,f\left(\mathbf{q}\right) \tag{63}\] \[\mathrm{d}^{2}f_{\mathbf{q}}\left(\mathbf{x}\right) =\left(h_{\mathbf{q}}^{(2)}(\mathbf{x})+h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})\right)f\left(\mathbf{q}\right) \tag{64}\] \[\mathrm{d}^{3}f_{\mathbf{q}}\left(\mathbf{x}\right) =\left(h_{\mathbf{q}}^{(3)}(\mathbf{x})+h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(2)}(\mathbf{x})+2h_{\mathbf{q}}^{(2)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})+h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})\right)f\left(\mathbf{q}\right) \tag{65}\] \[\mathrm{d}^{4}f_{\mathbf{q}}\left(\mathbf{x}\right) =\left(h_{\mathbf{q}}^{(4)}(\mathbf{x})+h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(3)}(\mathbf{x})+3h_{\mathbf{q}}^{(3)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})+3h_{\mathbf{q}}^{(2)}(\mathbf{x})\,h_{\mathbf{q}}^{(2)}(\mathbf{x})+3h_{\mathbf{q}}^{(2)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})\right.\] \[\qquad\left.+2h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(2)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})+h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(2)}(\mathbf{x})+h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})\,h_{\mathbf{q}}^{(1)}(\mathbf{x})\right)f\left(\mathbf{q}\right). \tag{66}\]
**Remark 11**.: _The second-order term \(h_{\mathbf{q}}^{(2)}(\mathbf{x})\) reads explicitly_
\[h_{\mathbf{q}}^{(2)}(\mathbf{x})=\sum_{j<i\leq n}x_{i}x_{j}\frac{\partial\widehat{\mathbf{S}}_{i}}{\partial q_{j}}=\sum_{j<i\leq n}x_{i}x_{j}\,[\widehat{\mathbf{S}}_{j},\widehat{\mathbf{S}}_{i}]. \tag{67}\]
_The \(h_{\mathbf{q}}^{(k)}\) map to \(se\left(3\right)\), so \(h_{\mathbf{q}}^{(2)}(\mathbf{x})\in se\left(3\right)\) can be represented as 6-vector. Then_
\[\sum_{j<l\leq n}x_{i}x_{j}\mathbf{ad}_{\mathbf{S}_{j}}\mathbf{S}_{i }=\left(\mathbf{x}^{T}\otimes\mathbf{I}_{6}\right)\mathrm{diag}\ \left(\mathbf{ad}_{\mathbf{S}_{1}},\ldots,\mathbf{ad}_{\mathbf{S}_{n}}\right) \left(\begin{array}{cccc}\mathbf{0}&\mathbf{S}_{2}&\mathbf{S}_{3}&\cdots& \mathbf{S}_{n-1}&\mathbf{S}_{n}\\ \mathbf{0}&\mathbf{S}_{3}&\cdots&\mathbf{S}_{n-1}&\mathbf{S}_{n}\\ \mathbf{0}&\ddots&\vdots&\vdots\\ \mathbf{0}&\mathbf{0}&\mathbf{S}_{n}\\ \mathbf{0}\end{array}\right)\mathbf{x}\] \[=\frac{1}{2}\left(\mathbf{x}^{T}\otimes\mathbf{I}_{6}\right) \left(\begin{array}{ccccc}\mathbf{0}&\mathbf{ad}_{\mathbf{S}_{1}}\mathbf{S }_{2}&\mathbf{ad}_{\mathbf{S}_{1}}\mathbf{S}_{3}&\mathbf{ad}_{\mathbf{S}_{1}} \mathbf{S}_{4}&\cdots&\mathbf{ad}_{\mathbf{S}_{1}}\mathbf{S}_{n-1}&\mathbf{ ad}_{\mathbf{S}_{1}}\mathbf{S}_{n}\\ \mathbf{ad}_{\mathbf{S}_{1}}\mathbf{S}_{2}&\mathbf{0}&\mathbf{ad}_{\mathbf{S}_ {2}}\mathbf{S}_{3}&\mathbf{ad}_{\mathbf{S}_{2}}\mathbf{S}_{4}&\cdots&\mathbf{ ad}_{\mathbf{S}_{2}}\mathbf{S}_{n-1}&\mathbf{ad}_{\mathbf{S}_{2}}\mathbf{S}_{n}\\ \mathbf{ad}_{\mathbf{S}_{1}}\mathbf{S}_{3}&\mathbf{ad}_{\mathbf{S}_{2}}\mathbf{ S}_{3}&\mathbf{0}&\mathbf{S}_{4}&\cdots&\mathbf{S}_{n-1}&\mathbf{S}_{n}\\ \mathbf{ad}_{\mathbf{S}_{1}}\mathbf{S}_{4}&\mathbf{ad}_{\mathbf{S}_{2}}\mathbf{ S}_{4}&\mathbf{ad}_{\mathbf{S}_{2}}\mathbf{S}_{n}&\mathbf{0}&\mathbf{S}_{n-1}& \mathbf{S}_{n}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \mathbf{ad}_{\mathbf{S}_{1}}\mathbf{S}_{n-1}&\mathbf{ad}_{\mathbf{S}_{2}} \mathbf{S}_{n-1}&-\mathbf{S}_{3}&-\mathbf{S}_{4}&\cdots&\mathbf{0}&\mathbf{S} _{n}\\ \mathbf{ad}_{\mathbf{S}_{1}}\mathbf{S}_{n}&\mathbf{ad}_{\mathbf{S}_{2}}\mathbf{ S}_{n}&-\mathbf{S}_{3}&-\mathbf{S}_{4}&\cdots&-\mathbf{S}_{n-1}&\mathbf{0} \end{array}\right)\mathbf{x}. \tag{68}\]
_This quadratic form has been used in [124] to identify immobile shaky linkages._
**Remark 12**.: _For the analysis of a particular configuration, the joint variables can always be defined so that the configuration of interest is the reference configuration with \(\mathbf{q}=\mathbf{0}\). In this case the instantaneous joints screw coordinates are \(\mathbf{S}_{j}\left(\mathbf{0}\right)=\mathbf{Y}_{j}\). Then (50) is the MacLaurin series of \(f\)._
### Application: Local Approximation of the Configuration Space
The KM defines the c-space \(V\) of a closed loop linkage in (38). Its (local) dimension at \(\mathbf{q}\in V\) is the (local) finite DOF of the linkage. The local geometry of \(V\) reveals the finite mobility and c-space singularities of the linkages. A global analysis of \(V\) is not possible in general. A local approximation of \(V\) is given by replacing \(f\) in (38) with a finite truncation of its series expansion (50). Since \(f\left(\mathbf{q}\right)=\mathbf{I}\) for \(\mathbf{q}\in V\), the \(k\)th-order local approximation of the c-space at \(\mathbf{q}\in V\) is
\[V_{\mathbf{q}}^{k}:=\{\mathbf{x}\in\mathbb{R}^{n}|\mathrm{d}f_{\mathbf{q}} \left(\mathbf{x}\right)+\frac{1}{2}\mathrm{d}^{2}f_{\mathbf{q}}\left(\mathbf{x }\right)+\ldots+\frac{1}{k!}\mathrm{d}^{k}f_{\mathbf{q}}\left(\mathbf{x} \right)=\mathbf{0}\}. \tag{69}\]
\(V_{\mathbf{q}}^{k}\) is an algebraic variety of degree \(k\). The dimension of \(V_{\mathbf{q}}^{k}\) is the \(k\)th-order local DOF at \(\mathbf{q}\in V\). There exist a neighborhood \(U\left(\mathbf{q}\right)\) and an order \(\kappa\) such that \(V_{\mathbf{q}}^{\kappa}\cap U\left(\mathbf{q}\right)=V\cap U\left(\mathbf{q}\right)\). That is, a finite approximation of order \(\kappa\) is sufficient and thus \(\delta_{\mathrm{loc}}\left(\mathbf{q}\right)=\dim V_{\mathbf{q}}^{\kappa}\).
The local mobility determination is an open problem in mechanism theory. An approach that is applicable to general mechanisms is the local analysis of the c-space \(V\). No general computational framework has been proposed so far, and the above relations may serve as a basis for such a higher-order local approximation, as proposed in [88]. One publication where special cases are addressed is [21]. A Taylor series approximation is used in [21] in order to characterize possible motions at singularities and to check for the mobility of overconstrained single loop linkages, in particular the Bennett conditions for 4R linkages. The Taylor expansion was derived specifically for 4R linkages.
**Remark 13**.: \(V_{\mathbf{q}}^{k}\) _is an algebraic variety of order \(k\) approximating the analytic variety \(V\). The approximation order necessary to determine the local DOF is not known a priori and depends on the linkage as well as the configuration. At regular configurations of non-overconstrained linkages, for which the rank of the constraint Jacobian \(\mathbf{J}\) in (39) is locally constant, a first-order approximation is sufficient. In singular configurations, where the rank of \(\mathbf{J}\) is not locally constant, higher-order approximations are necessary. Moreover, the mobility analysis demands a higher-order approximation since it is not known beforehand whether the configuration is regular or singular, and whether the linkage is overconstrained or not. The same applies to underconstrained (shaky) linkages [80, 122, 123], which possess a higher differential DOF than local DOF even in regular configurations. While the above relations deliver the \(k\)th-order polynomials defining a local approximation it remains to analyze the corresponding algebraic variety._
Example 1 (cont.): 4C-Linkage with a shaky motion mode. The differentials of \(f\) are determined with the recursive relation (56). The first and second differential are
\[\mathrm{d}f_{\mathbf{q}_{0}}\left(\mathbf{x}\right)=\left(\begin{array}{cccc}0& 0&x_{3}+x_{7}&x_{2}+x_{6}\\ 0&0&-x_{1}-x_{5}&x_{4}+x_{8}\\ -x_{3}-x_{7}&x_{1}+x_{5}&0&0\\ 0&0&0&0\end{array}\right)\]
\[\mathrm{d}f_{\mathbf{q}_{0}}\left(\mathbf{x}\right)+\frac{1}{2}\mathrm{d}^{2} f_{\mathbf{q}_{0}}\left(\mathbf{x}\right)=\left(\begin{array}{cccc}-\frac{1}{2} \left(x_{3}+x_{7}\right)^{2}&x_{3}x_{5}&x_{3}+x_{7}&x_{2}+x_{6}\\ x_{5}x_{7}+x_{1}\left(x_{3}+x_{7}\right)&-\frac{1}{2}\left(x_{1}+x_{5}\right)^ {2}&-x_{1}-x_{5}&0\\ -x_{3}-x_{7}&x_{1}+x_{5}&-\frac{1}{2}\left(\left(x_{1}+x_{5}\right)^{2}+\left( x_{3}+x_{7}\right)^{2}\right)&-x_{3}x_{6}+x_{5}x_{8}+x_{1}\left(x_{4}+x_{8} \right)\\ 0&0&0&0\end{array}\right).\]
The first-order approximation (always) is \(V_{\mathbf{q}_{0}}^{1}=\ker\mathrm{d}f_{\mathbf{q}_{0}}=\ker\mathbf{J}\left( \mathbf{q}_{0}\right)=K_{\mathbf{q}_{0}}^{1}\) in (45). The second-order approximation is defined by \(\mathrm{d}f_{\mathbf{q}_{0}}\left(\mathbf{x}\right)+\frac{1}{2}\mathrm{d}^{2} f_{\mathbf{q}_{0}}\left(\mathbf{x}\right)=\mathbf{0}\), which can be simplified to
\[V_{\mathbf{q}_{0}}^{2}=\{\mathbf{x}\in\mathbb{R}^{8}|x_{3}+x_{7}=0,x_{1}+x_{5 }=0,x_{3}x_{5}=0,x_{5}x_{7}+x_{1}\left(x_{3}+x_{7}\right)=0,-x_{3}x_{6}+x_{5} x_{8}+x_{1}\left(x_{4}+x_{8}\right)=0\}.\]
This can be solved, and it turns out that \(V_{\mathbf{q}_{0}}^{2}=C_{\mathbf{q}_{0}}^{\mathrm{K}}V=K_{\mathbf{q}_{0}}^{2\left(I\right)}\cup K_{\mathbf{q}_{0}}^{2\left(II\right)}\cup K_{\mathbf{q}_{0}}^{2\left(III\right)}\), with \(C_{\mathbf{q}_{0}}^{\mathrm{K}}V\) in (46). Also for all higher approximations it is \(V_{\mathbf{q}_{0}}^{k}=C_{\mathbf{q}_{0}}^{\mathrm{K}}V,k\geq 2\), which is the union of vector spaces. Consequently, for all possible finite motions there is a linear relation between the joint variables. It also shows that all motion branches correspond to smooth finite motions through the singularity, i.e. \(\mathbf{q}_{0}\) is a bifurcation point. Details can be found in the provided Mathematica notebook.
Example 2 (cont.): Double Evans-Linkage. It was concluded in sec. 5.4 that this linkage does not admit smooth motions through the singular configuration, although it is mobile. The presented approach to the local analysis can be applied to multi-loop linkages by considering the topologically independent kinematic loops. Denote with \(f_{l}\) the mapping (2) defining the geometric closure condition, then the c-space is \(V=\{\mathbf{q}\in\mathbb{V}^{n}|f_{l}\left(\mathbf{q}\right)=\mathbf{I},l=1,2,3\}\). For \(\Lambda_{1}\) the closure mapping is \(f_{1}\left(\mathbf{q}\right)=\exp(\mathbf{Y}_{1}q_{1})\exp(\mathbf{Y}_{2}q_{2})\exp(\mathbf{Y}_{3}q_{3})\exp(\mathbf{Y}_{4}q_{4})\), for \(\Lambda_{2}\) it is \(f_{2}\left(\mathbf{q}\right)=\exp(\mathbf{Y}_{8}q_{8})\exp(\mathbf{Y}_{7}q_{7})\exp(\mathbf{Y}_{6}q_{6})\exp(\mathbf{Y}_{5}q_{5})\), and for \(\Lambda_{3}\) it is \(f_{3}\left(\mathbf{q}\right)=\exp(\mathbf{Y}_{1}q_{1})\exp(\mathbf{Y}_{2}q_{2})\exp(\mathbf{Y}_{9}q_{9})\exp(\mathbf{Y}_{10}q_{10})\exp(\mathbf{Y}_{6}q_{6})\exp(\mathbf{Y}_{5}q_{5})\). Their series expansions, defining the approximation \(V_{\mathbf{q}_{0}}^{k}\) in (69), are easily constructed with the above recursive relations (details are omitted again, for the sake of readability). This yields a system of polynomials of degree \(k\). The crucial step, however, is checking the real dimension of the so defined algebraic varieties \(V_{\mathbf{q}_{0}}^{k}\), or to even explicitly solve these systems in order to obtain a parameterization of the motion curve. This will possibly require using dedicated algorithms from algebraic geometry [30, 117]. The first-order approximation is, by definition, always \(V_{\mathbf{q}}^{1}=K_{\mathbf{q}}^{1}\), i.e. \(\dim V_{\mathbf{q}_{0}}^{1}=2\), see (48). The polynomials defining \(V_{\mathbf{q}_{0}}^{k}\) were determined with the recursive relations (56) and (58) using the provided Mathematica package, while the dimension of the algebraic variety \(V_{\mathbf{q}_{0}}^{k}\) defined by this system of polynomials was determined with the software Singular [45]. Details can be found in the accompanying Mathematica notebook. The higher-order approximations all have \(\dim V_{\mathbf{q}_{0}}^{k}=1\). It is hence concluded that the linkage has finite local mobility \(\delta(\mathbf{q}_{0})=1\).
## 7 Time Derivatives and Series Expansion of Minors of the Geometric Jacobian
### Time derivatives of Minors of the Geometric Jacobian
W.l.o.g. the spatial Jacobian \(\mathbf{J}_{n}^{s}\) of body \(n\) is considered, and for the sake of simplicity, this is denoted with \(\mathbf{J}\).
In the following, \(\mathbf{J}_{\boldsymbol{\alpha\beta}}\) denotes the \(k\times k\) submatrix of \(\mathbf{J}\), consisting of the elements of the rows \(\alpha_{i}\in\{1,\ldots,6\}\) and columns \(\beta_{j}\in\{1,\ldots,n\}\) of \(\mathbf{J}\) summarized in the index sets \(\boldsymbol{\alpha}=\{\alpha_{1},\ldots,\alpha_{k}\},\alpha_{i-1}<\alpha_{i}\) and \(\boldsymbol{\beta}=\{\beta_{1},\ldots,\beta_{k}\},\beta_{j-1}<\beta_{j}\). The determinant of this matrix is denoted with \(m_{\boldsymbol{\alpha\beta}}\) and referred to as the \(\boldsymbol{\alpha\beta}\)-minor of \(\mathbf{J}\) of order \(k\). Notice that this differs from the conventional definition of minors where the index sets would indicate the rows and columns that are eliminated from \(\mathbf{J}\). Apparently, if \(\mathbf{J}\) is a square \(k\times k\) matrix, then the only \(k\)-minor is \(m_{\boldsymbol{\alpha\beta}}=\det\mathbf{J}\). Denoting the columns of \(\mathbf{J}_{\boldsymbol{\alpha\beta}}\) with \(\mathbf{S}_{\boldsymbol{\alpha}\beta_{j}}\), the time derivatives of the minors are obtained by differentiating the columns
using the property of determinants [109]. Up to 5th order, for instance, they are
\[\frac{d}{dt}m_{\boldsymbol{\alpha\beta}} =\sum_{\mu\in\boldsymbol{\beta}}\left|\mathbf{S}_{\boldsymbol{\alpha}\beta_{1}}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\mathbf{S}_{\boldsymbol{\alpha}\beta_{k}}\right|\] \[\frac{d^{2}}{dt^{2}}m_{\boldsymbol{\alpha\beta}} =\sum_{\mu\in\boldsymbol{\beta}}\left|\mathbf{S}_{\boldsymbol{\alpha}\beta_{1}}\cdots\ddot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\mathbf{S}_{\boldsymbol{\alpha}\beta_{k}}\right|+\sum_{\mu\neq\nu\in\boldsymbol{\beta}}\left|\mathbf{S}_{\boldsymbol{\alpha}\beta_{1}}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\mathbf{S}_{\boldsymbol{\alpha}\beta_{k}}\right|\] \[\frac{d^{3}}{dt^{3}}m_{\boldsymbol{\alpha\beta}} =\sum_{\mu\in\boldsymbol{\beta}}\left|\mathbf{S}_{\boldsymbol{\alpha}\beta_{1}}\cdots\dddot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\mathbf{S}_{\boldsymbol{\alpha}\beta_{k}}\right|+3\sum_{\mu\neq\nu\in\boldsymbol{\beta}}\left|\mathbf{S}_{\boldsymbol{\alpha}\beta_{1}}\cdots\ddot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\mathbf{S}_{\boldsymbol{\alpha}\beta_{k}}\right|+\sum_{\mu\neq\nu\neq\lambda\in\boldsymbol{\beta}}\left|\mathbf{S}_{\boldsymbol{\alpha}\beta_{1}}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\lambda}\cdots\mathbf{S}_{\boldsymbol{\alpha}\beta_{k}}\right|\] \[\frac{d^{4}}{dt^{4}}m_{\boldsymbol{\alpha\beta}} =\sum_{\mu\in\boldsymbol{\beta}}\left|\cdots\mathbf{S}_{\boldsymbol{\alpha}\mu}^{(4)}\cdots\right|+4\sum_{\mu\neq\nu\in\boldsymbol{\beta}}\left|\cdots\dddot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\right|+3\sum_{\mu\neq\nu\in\boldsymbol{\beta}}\left|\cdots\ddot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\ddot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\right|+6\sum_{\mu\neq\nu\neq\lambda\in\boldsymbol{\beta}}\left|\cdots\ddot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\lambda}\cdots\right|+\sum_{\mu\neq\nu\neq\lambda\neq\rho\in\boldsymbol{\beta}}\left|\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\lambda}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\rho}\cdots\right|\] \[\frac{d^{5}}{dt^{5}}m_{\boldsymbol{\alpha\beta}} =\sum_{\mu\in\boldsymbol{\beta}}\left|\cdots\mathbf{S}_{\boldsymbol{\alpha}\mu}^{(5)}\cdots\right|+5\sum_{\mu\neq\nu\in\boldsymbol{\beta}}\left|\cdots\mathbf{S}_{\boldsymbol{\alpha}\mu}^{(4)}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\right|+10\sum_{\mu\neq\nu\in\boldsymbol{\beta}}\left|\cdots\dddot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\ddot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\right|+10\sum_{\mu\neq\nu\neq\lambda\in\boldsymbol{\beta}}\left|\cdots\dddot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\lambda}\cdots\right|\] \[\qquad+15\sum_{\mu\neq\nu\neq\lambda\in\boldsymbol{\beta}}\left|\cdots\ddot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\ddot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\lambda}\cdots\right|+10\sum_{\mu\neq\nu\neq\lambda\neq\rho\in\boldsymbol{\beta}}\left|\cdots\ddot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\lambda}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\rho}\cdots\right|+\sum_{\mu\neq\nu\neq\lambda\neq\rho\neq\sigma\in\boldsymbol{\beta}}\left|\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\mu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\nu}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\lambda}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\rho}\cdots\dot{\mathbf{S}}_{\boldsymbol{\alpha}\sigma}\cdots\right|\]
In the shortened notation of the 4th- and 5th-order expressions, the columns that are not displayed remain undifferentiated, and the sums run over pairwise distinct column indices.
### Series Expansion of Minors of the Geometric Jacobian
The \(\boldsymbol{\alpha\beta}\)-minor of order \(k\) of the Jacobian can be expanded in a Taylor series at \(\mathbf{q}\in\mathbb{V}^{n}\)
\[m_{\boldsymbol{\alpha\beta}}(\mathbf{q}+\mathbf{x})=m_{\boldsymbol{\alpha\beta }}(\mathbf{q})+\mathrm{d}m_{\boldsymbol{\alpha\beta},\mathbf{q}}(\mathbf{x})+ \frac{1}{2}\mathrm{d}^{2}m_{\boldsymbol{\alpha\beta},\mathbf{q}}(\mathbf{x})+ \ldots+\frac{1}{\nu!}\mathrm{d}^{\nu}m_{\boldsymbol{\alpha\beta},\mathbf{q}}( \mathbf{x})\,.\]
The relation (70) can be immediately carried over to the differentials, which yields
\[\mathrm{d}^{i}m_{\boldsymbol{\alpha\beta},\mathbf{q}}=\sum_{|\mathbf{a}_{k}|=i}\left|\mathrm{d}^{a_{1}}\mathbf{S}_{\boldsymbol{\alpha\beta}_{1},\mathbf{q}}\left(\mathbf{x}\right)\ \mathrm{d}^{a_{2}}\mathbf{S}_{\boldsymbol{\alpha\beta}_{2},\mathbf{q}}\left(\mathbf{x}\right)\ \cdots\ \mathrm{d}^{a_{k}}\mathbf{S}_{\boldsymbol{\alpha\beta}_{k},\mathbf{q}}\left(\mathbf{x}\right)\right|\frac{i!}{\mathbf{a}_{k}!\,n_{1}!\cdots n_{l}!}. \tag{71}\]
This can be evaluated using the relations (60) for the differentials of instantaneous joint screw coordinates.
### Applications
#### 7.3.1 Local Analysis of Smooth Motions with certain Rank
C-space singularities of linkages with kinematic loops are characterized by a rank deficient constraint Jacobian. A deeper understanding of the singularities of a linkage is gained by investigating possible motions with certain rank.
Denote with \(L_{k}\) the subvariety of \(V\) where the rank of \(\mathbf{J}\) is less than \(k\). The Jacobian \(\mathbf{J}\) has rank less than \(k\) iff all \(k\)-minors vanish: \(m_{\boldsymbol{\alpha\beta}}=0,|\boldsymbol{\alpha}|=|\boldsymbol{\beta}|=k\), where \(|\boldsymbol{\alpha}|\) is the cardinality (number of elements) of the set \(\boldsymbol{\alpha}\). This gives rise to the following definition:
\[L_{k}:\,=\left\{\mathbf{q}\in\mathbb{V}^{n}|f\left(\mathbf{q}\right)=\mathbf{I },m_{\boldsymbol{\alpha\beta}}(\mathbf{q})=0\ \forall\boldsymbol{\alpha}\subseteq\{1,\ldots,6\},\boldsymbol{\beta}\subseteq \{1,\ldots,n\},|\boldsymbol{\alpha}|=|\boldsymbol{\beta}|=k\right\} \tag{72}\]
A finite motion \(\mathbf{q}\left(t\right)\) in \(L_{k}\), i.e. where rank \(\mathbf{J}<k\), satisfies all higher-order constraints (40) and all time derivatives of \(m_{\boldsymbol{\alpha\beta}}\) vanish. The set of tangents to curves through \(\mathbf{q}\in V\) where rank \(\mathbf{J}<k\) forms the _kinematic tangent cone to \(L_{k}\)_, denoted with \(C_{\mathbf{q}}^{\mathbf{K}}L_{k}\subseteq C_{\mathbf{q}}^{\mathbf{K}}V\). To simplify notation, define the functions
\[M_{\boldsymbol{\alpha\beta}}^{(1)}(\mathbf{q},\dot{\mathbf{q}}):=\frac{d}{dt}m_{\boldsymbol{\alpha\beta}}(\mathbf{q}),\ M_{\boldsymbol{\alpha\beta}}^{(2)}(\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}}):=\frac{d^{2}}{dt^{2}}m_{\boldsymbol{\alpha\beta}}(\mathbf{q}),\ \ldots,\ M_{\boldsymbol{\alpha\beta}}^{(i)}(\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}},\ldots,\mathbf{q}^{(i)}):=\frac{d^{i}}{dt^{i}}m_{\boldsymbol{\alpha\beta}}(\mathbf{q}). \tag{73}\]
The kinematic tangent cone to \(L_{k}\) is then determined by the sequence
\[C_{\mathbf{q}}^{\mathbf{K}}L_{k}=K_{\mathbf{q}}^{k,\kappa}\subset\ldots\subset K_{\mathbf{q}}^{k,3}\subset K_{\mathbf{q}}^{k,2}\subset K_{\mathbf{q}}^{k,1} \tag{74}\]
with
\[\begin{array}{c}K_{\mathbf{q}}^{k,i}:=\{\mathbf{x}|\exists\mathbf{y},\mathbf{z},\ldots\in\mathbb{R}^{n}:H^{(1)}(\mathbf{q},\mathbf{x})=H^{(2)}(\mathbf{q},\mathbf{x},\mathbf{y})=\ldots=H^{(i)}(\mathbf{q},\mathbf{x},\mathbf{y},\mathbf{z},\ldots)=\mathbf{0},\\ M_{\boldsymbol{\alpha\beta}}^{(1)}(\mathbf{q},\mathbf{x})=M_{\boldsymbol{\alpha\beta}}^{(2)}(\mathbf{q},\mathbf{x},\mathbf{y})=\ldots=M_{\boldsymbol{\alpha\beta}}^{(i)}(\mathbf{q},\mathbf{x},\mathbf{y},\mathbf{z},\ldots)=0,\\ \forall\boldsymbol{\alpha}\subseteq\{1,\ldots,6\},\boldsymbol{\beta}\subseteq\{1,\ldots,n\},|\boldsymbol{\alpha}|=|\boldsymbol{\beta}|=k\}\,.\end{array} \tag{75}\]
In general, each \(K_{\mathbf{q}}^{k,i}\) is a cone. The sequence terminates with a finite \(\kappa\). The latter indicates the order of differential motions that do not correspond to finite motions with rank less than \(k\).
The higher-order analysis of smooth motions with rank deficient Jacobian has only been reported briefly in [87].
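To make the definition (72) concrete numerically, the following Python sketch (not part of the original formulation; function names are illustrative) enumerates all \(k\times k\) minors of a given Jacobian and tests whether a configuration lies in \(L_{k}\), i.e. whether rank \(\mathbf{J}<k\):

```
import itertools
import numpy as np

def k_minors(J, k):
    """Return all k x k minors m_alpha_beta of the 6 x n Jacobian J."""
    rows, cols = J.shape
    minors = []
    for alpha in itertools.combinations(range(rows), k):
        for beta in itertools.combinations(range(cols), k):
            minors.append(np.linalg.det(J[np.ix_(alpha, beta)]))
    return np.array(minors)

def in_L_k(J, k, tol=1e-10):
    """True if all k-minors vanish, i.e. rank J < k and the configuration lies in L_k."""
    return bool(np.all(np.abs(k_minors(J, k)) < tol))

# hypothetical example: a 6 x 8 Jacobian of rank 4 (two repeated column blocks)
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
J = np.hstack([A, A])                 # rank 4 by construction
print(in_L_k(J, 5), in_L_k(J, 4))     # True (rank < 5), False (rank = 4)
```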
Example 1 (cont.): 4C-Linkage with a shaky motion mode. The joint screw coordinate vectors in the reference configuration are given in (44). At \(\mathbf{q}_{0}=\mathbf{0}\) the rank is rank \(\mathbf{J}\left(\mathbf{q}_{0}\right)=4\), and thus \(\mathbf{q}_{0}\in L_{k},k\geq 5\). Since the maximal rank of \(\mathbf{J}\) is 6, only \(L_{5}\) and \(L_{6}\) need to be analyzed.
All \(M_{\boldsymbol{\alpha\beta}}^{(1)}(\mathbf{q}_{0},\mathbf{x})\,,|\boldsymbol{ \alpha}|=|\boldsymbol{\beta}|=6\) vanish, so that \(K_{\mathbf{q}_{0}}^{6,1}=K_{\mathbf{q}_{0}}^{1}\). The non-trivial second derivatives of the 6-minors are
\[\{M_{\boldsymbol{\alpha\beta}}^{(2)}(\mathbf{q}_{0},\mathbf{x})\,,|\boldsymbol{ \alpha}|=|\boldsymbol{\beta}|=6\}=\{-2x_{3}^{2},2x_{3}^{2},-2x_{3}x_{5},2x_{3}x_{5},-2x_{5}^{2},2x_{5}^{2},2x_{4}x_{5}-2x_{3}x_{6},-2x_{4}x_{5}+2x_{3}x_{6}\}. \tag{76}\]
This yields
\[K_{\mathbf{q}_{0}}^{6,2}=\{\mathbf{x}=(0,t,0,s,0,-t,0,-s)\,|\,s,t\in\mathbb{R}\}\subset\mathbb{R}^{8}. \tag{77}\]
Proceeding for higher derivatives shows that \(C_{\mathbf{q}_{0}}^{\mathbf{K}}L_{6}=K_{\mathbf{q}_{0}}^{6,i},i\geq 2\).
The non-trivial derivatives of the 5-minors are
\[\{M_{\boldsymbol{\alpha\beta}}^{(1)}(\mathbf{q}_{0},\mathbf{x})\,,|\boldsymbol{\alpha}|=|\boldsymbol{\beta}|=5\}=\{-x_{3},x_{3},-x_{4},x_{4},-x_{5},x_{5},-x_{6},x_{6}\}. \tag{78}\]
It follows that \(K_{\mathbf{q}_{0}}^{5,1}=\{\mathbf{0}\}\subset\mathbb{R}^{8}\). Since the sequence (74) is non-increasing, it follows that \(C_{\mathbf{q}_{0}}^{\mathbf{K}}L_{5}=\{\mathbf{0}\}\). This shows that there is no smooth finite motion with rank \(\mathbf{J}=4\), so that at any point in the neighborhood of \(\mathbf{q}_{0}\) the rank (thus the differential mobility) increases. There are 2-dimensional smooth finite motions through \(\mathbf{q}_{0}\) with rank \(\mathbf{J}\leq 5\) whose tangents are given by \(C_{\mathbf{q}_{0}}^{\mathbf{K}}L_{6}\). Since \(\mathbf{J}\) has rank 4 only at \(\mathbf{q}_{0}\), these are 2-dim motions with rank \(\mathbf{J}=5\). Because of \(K_{\mathbf{q}_{0}}^{2(II)}=C_{\mathbf{q}_{0}}^{\mathbf{K}}L_{6}\) in (46), the linkage is shaky in the corresponding motion mode II with rank \(\mathbf{J}=5\) (Fig. 3b), since in this motion mode the local finite DOF is \(\delta_{\text{loc}}(\mathbf{q}_{0})=2\) while the differential (instantaneous) DOF is \(\delta_{\text{diff}}(\mathbf{q}_{0})=n-\text{rank }\mathbf{J}(\mathbf{q}_{0})=3\). This 2-dim manifold of configurations with rank 5 is characterized by \(q_{1}=q_{3}=q_{5}=q_{7}=0\). Thus the 4C linkage is underconstrained. It should be noticed that this could not be identified by a first-order analysis of the singularity \(\mathbf{q}_{0}\) nor by the higher-order analysis of the finite mobility (sec. 5.4) as pursued in [69].
#### 7.3.2 Local Approximation of the Set of Configurations with certain Rank
In addition to bifurcations of motion branches, there may be non-smooth motions with rank-deficient Jacobian. Such 'stationary singularities' are e.g. manifested as cusps in the c-space \(V\). Local analysis requires an approximation of the local geometry of \(L_{k}\) at \(\mathbf{q}\). The subvariety of configurations with rank \(\mathbf{J}<k\) is
\[L_{k,\mathbf{q}}^{i}=\{\mathbf{x}\in\mathbb{R}^{n}|\text{d}f_{\mathbf{q}}(\mathbf{x})+\frac{1}{2}\text{d}^{2}f_{\mathbf{q}}(\mathbf{x})+\ldots+\frac{1}{i!}\text{d}^{i}f_{\mathbf{q}}(\mathbf{x})=\mathbf{0},\text{d}m_{\boldsymbol{\alpha\beta},\mathbf{q}}(\mathbf{x})+\frac{1}{2}\text{d}^{2}m_{\boldsymbol{\alpha\beta},\mathbf{q}}(\mathbf{x})+\ldots+\frac{1}{i!}\text{d}^{i}m_{\boldsymbol{\alpha\beta},\mathbf{q}}(\mathbf{x})=0,|\boldsymbol{\alpha}|=|\boldsymbol{\beta}|=k\} \tag{79}\]
with the differentials of \(f\) and \(m_{\boldsymbol{\alpha\beta}}\) in (51) and (71), respectively. There is a neighborhood \(U\left(\mathbf{q}\right)\) and an order \(\kappa\) so that \(L_{k,\mathbf{q}}^{\kappa}\cap U\left(\mathbf{q}\right)=L_{k}\cap U\left(\mathbf{q}\right)\).
This gives rise to a stratification of \(V\) according to the rank.
Example 1 (cont.): 4C-Linkage with a shaky motion mode. The analysis yields \(L_{6,\mathbf{q}_{0}}^{i}=K_{\mathbf{q}_{0}}^{6,i},i=1,2,3,\ldots\), thus the finite motions with rank 5 are smooth. The reader is referred to the provided Mathematica notebook for details.
## 8 Higher-Order Inverse Kinematics
### Inverse kinematics of a robotic arm
#### 8.1.1 Recursive higher-order inverse kinematics
A robotic arm is a serial kinematic chain with an EE attached at the terminal link \(n\). The \(k\)th-order inverse kinematics problem consists in finding the \(k\)th time derivatives of the joint variables \(\mathbf{q}\left(t\right)\) for a given EE motion, i.e. given EE pose, EE twist, and its time derivatives up to order \(k-1\).
With (5), the spatial twist of the terminal link \(n\) is \(\mathbf{V}_{n}^{\text{s}}=\mathbf{J}_{n}^{\text{s}}\dot{\mathbf{q}}\). For the sake of simplicity, the Jacobian is denoted with \(\mathbf{J}\).
**Assumption 1**.: _In the following, the robotic arm is assumed to be non-redundant. That is, the DOF (i.e. the number \(n\) of joint variables) is equal to the dimension of the image space of the KM, so that the Jacobian \(\mathbf{J}\) is a full rank \(n\times n\) matrix._
With \(\text{D}^{(k)}\mathbf{V}_{n}^{\text{s}}=\text{D}^{(k)}\mathbf{S}_{i}\), the expression (17) for time derivatives of the twist of terminal body \(n\) can be written as
\[\text{D}^{(k)}\mathbf{V}_{n}^{\text{s}}=\mathbf{J}\left(\mathbf{q}\right)\mathbf{q}^{(k+1)}+\sum_{i\leq n}\sum_{l=1}^{k}\binom{k}{l}\text{D}^{(l)}\mathbf{S}_{i}\left(\mathbf{q}\right)q_{i}^{(k-l+1)}. \tag{80}\]
With the full-rank Jacobian (except at singularities), (80) can be solved as
\[\mathbf{q}^{(k)}=\mathbf{J}^{-1}(\mathbf{q})\left(\text{D}^{(k-1)}\mathbf{V}_{n}^{\text{s}}-\sum_{i\leq n}\sum_{l=1}^{k-1}\binom{k-1}{l}\text{D}^{(l)}\mathbf{S}_{i}\left(\mathbf{q}\right)q_{i}^{(k-l)}\right). \tag{81}\]
This is the \(k\)th-order inverse kinematics solution, which gives the \(k\)th time derivative \(\mathbf{q}^{(k)}\) for given EE twist \(\mathbf{V}_{n}^{\mathrm{s}}\) and given joint variables as well as their respective time derivatives up to order \(k-1\). The configuration \(\mathbf{q}\) is known from (numerically) solving the geometric inverse kinematics problem, i.e. solving \(\mathbf{C}_{n}=f_{n}\left(\mathbf{q}\right)\) for \(\mathbf{q}\) for given \(\mathbf{C}_{n}\).
#### 8.1.2 Higher-order inverse kinematics algorithm for a robotic arm
The expression (81) involves time derivatives of the joint screw coordinates, given by (19), of the twists of all bodies of the kinematic chain, as well as time derivatives of \(\mathbf{q}\). Therefore, (81) must be evaluated consecutively for increasing order. Starting with the given configuration \(\mathbf{q}\) and \(\mathbf{V}_{n}^{\mathrm{s}}\), the first-order inverse kinematics is solved to obtain \(\dot{\mathbf{q}}\). This is then propagated through the kinematic chain in order to obtain \(\mathbf{V}_{i}^{\mathrm{s}},i<n\). Next, with the known \(\mathbf{q},\dot{\mathbf{q}},\mathbf{V}_{i}^{\mathrm{s}},i=1,\ldots,n\), the second-order inverse kinematics is solved for \(\ddot{\mathbf{q}}\), which is then propagated via forward kinematics to obtain \(\dot{\mathbf{V}}_{i}^{\mathrm{s}},i<n\). These steps are repeated until the \(k\)th-order solution is found. This is summarized in the following inverse kinematics algorithm:
```
Input: \(\mathbf{q},\mathbf{V}_{n}^{\mathrm{s}},\dot{\mathbf{V}}_{n}^{\mathrm{s}},\ddot{\mathbf{V}}_{n}^{\mathrm{s}},\ldots,\mathrm{D}^{(k-1)}\mathbf{V}_{n}^{\mathrm{s}}\)
FOR \(r=1,\ldots,k\)
  1) Evaluate (81) to get \(\mathbf{q}^{(r)}\)  (\(r\)th-order inverse kinematics solution of manipulator)
  2) Evaluate (17) to get \(\mathrm{D}^{(r-1)}\mathbf{V}_{i}^{\mathrm{s}},i<n\)  (\(r\)th-order forward kinematics solution of linkage)
END
Output: \(\dot{\mathbf{q}},\ldots,\mathbf{q}^{(k)}\) (primary results), \(\mathbf{V}_{i}^{\mathrm{s}},\dot{\mathbf{V}}_{i}^{\mathrm{s}},\ddot{\mathbf{V}}_{i}^{\mathrm{s}},\ldots,\mathrm{D}^{(k-1)}\mathbf{V}_{i}^{\mathrm{s}},i=1,\ldots,n\) (secondary results)
```
The individual recursive runs 1) and 2) have complexity \(O\left(n\right)\). The overall complexity of the algorithm is dictated by the inversion of the Jacobian. This can be alleviated, for wrist-partitioned robotic arms (which are predominately used in praxis), since then the inversion of the \(6\times 6\) Jacobian splits into the inversion of three \(3\times 3\) submatrices [5]. Alternatively, the linear system \(\mathbf{J}\mathbf{q}^{(k)}=\mathrm{rhs}\). in (81) can be solved using a tailored numerical solution scheme.
For evaluation of \(\mathrm{D}^{(r-1)}\mathbf{V}_{i}^{\mathrm{s}}\) in step 2), the derivatives \(\mathrm{D}^{(l)}\mathbf{S}_{i},l\leq r-1\) already obtained in the preceding evaluations of (17) are reused.
#### 8.1.3 Explicit expressions for low orders
For low order (see section 8.1.4a), it can be helpful to explicitly roll out the relation (81). Up to order 4, this yields
\[\dot{\mathbf{q}} =\mathbf{J}^{-1}\mathbf{V}_{n}^{\mathrm{s}} \tag{82}\]
\[\ddot{\mathbf{q}} =\mathbf{J}^{-1}\Big(\dot{\mathbf{V}}_{n}^{\mathrm{s}}-\sum_{i\leq n}\dot{\mathbf{S}}_{i}\dot{q}_{i}\Big)=\mathbf{J}^{-1}\Big(\dot{\mathbf{V}}_{n}^{\mathrm{s}}-\sum_{i\leq n}\dot{q}_{i}[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]\Big)=\mathbf{J}^{-1}\Big(\dot{\mathbf{V}}_{n}^{\mathrm{s}}-\sum_{i\leq n}\dot{q}_{i}\,\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}\mathbf{S}_{i}\Big) \tag{83}\]
\[\dddot{\mathbf{q}} =\mathbf{J}^{-1}\Big(\ddot{\mathbf{V}}_{n}^{\mathrm{s}}-\sum_{i\leq n}\big(2\ddot{q}_{i}[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]+\dot{q}_{i}([\dot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{S}_{i}]+[\mathbf{V}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]])\big)\Big)=\mathbf{J}^{-1}\Big(\ddot{\mathbf{V}}_{n}^{\mathrm{s}}-\sum_{i\leq n}\big(2\ddot{q}_{i}\,\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}+\dot{q}_{i}(\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{2})\big)\mathbf{S}_{i}\Big) \tag{84}\]
\[\begin{aligned}\mathbf{q}^{(4)} &=\mathbf{J}^{-1}\Big(\dddot{\mathbf{V}}_{n}^{\mathrm{s}}-\sum_{i\leq n}\big(3\dddot{q}_{i}[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]+3\ddot{q}_{i}([\dot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{S}_{i}]+[\mathbf{V}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]])+\dot{q}_{i}([\ddot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{S}_{i}]+2[\dot{\mathbf{V}}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]+[\mathbf{V}_{i}^{\mathrm{s}},[\dot{\mathbf{V}}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]+[\mathbf{V}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},[\mathbf{V}_{i}^{\mathrm{s}},\mathbf{S}_{i}]]])\big)\Big)\\ &=\mathbf{J}^{-1}\Big(\dddot{\mathbf{V}}_{n}^{\mathrm{s}}-\sum_{i\leq n}\big(3\dddot{q}_{i}\,\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}+3\ddot{q}_{i}(\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{2})+\dot{q}_{i}(\mathbf{ad}_{\ddot{\mathbf{V}}_{i}^{\mathrm{s}}}+2\,\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{3})\big)\mathbf{S}_{i}\Big)\end{aligned} \tag{85}\]
These are evaluated together with (27)-(29). First (82) is evaluated, which delivers the joint velocities for given EE twist. Then (27) is used to propagate the velocity and to compute the twists of all links. These are used in (83) to obtain the joint accelerations, which are then used to compute the accelerations of all links with (28). This is continued analogously for the higher derivatives. The complexity for evaluating the terms in brackets in (83-85) is still \(O\left(n\right)\).
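As a minimal numerical sketch of the first two steps, (82) and (83) can be coded directly, assuming a non-redundant chain whose instantaneous joint screws are given in spatial representation as rows of an array; the se(3) bracket and the twist recursion \(\mathbf{V}_{i}^{\mathrm{s}}=\mathbf{V}_{i-1}^{\mathrm{s}}+\mathbf{S}_{i}\dot{q}_{i}\) are written out explicitly (helper names are illustrative, not from the original text):

```
import numpy as np

def lie_bracket(X1, X2):
    """se(3) Lie bracket of twists X = (omega, v): [X1, X2] = ad_{X1} X2."""
    w1, v1 = X1[:3], X1[3:]
    w2, v2 = X2[:3], X2[3:]
    return np.hstack([np.cross(w1, w2), np.cross(w1, v2) + np.cross(v1, w2)])

def inverse_kinematics_order2(S, V_ee, dV_ee):
    """Evaluate (82) and (83): joint rates and accelerations for given EE twist
    and its time derivative. S: (n, 6) array of instantaneous joint screws."""
    J = S.T                                   # 6 x n spatial Jacobian (n = 6 assumed)
    qd = np.linalg.solve(J, V_ee)             # (82): qdot = J^{-1} V_n^s
    V = np.zeros(6)                           # spatial twist of body i
    bias = np.zeros(6)
    for i in range(S.shape[0]):
        V = V + S[i] * qd[i]                  # V_i^s = V_{i-1}^s + S_i qdot_i
        bias += qd[i] * lie_bracket(V, S[i])  # qdot_i [V_i^s, S_i] = Sdot_i qdot_i
    qdd = np.linalg.solve(J, dV_ee - bias)    # (83)
    return qd, qdd

# smoke test with hypothetical screws of a 6-joint arm
rng = np.random.default_rng(1)
qd, qdd = inverse_kinematics_order2(rng.standard_normal((6, 6)),
                                    rng.standard_normal(6), rng.standard_normal(6))
```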
#### 8.1.4 Applications
a) Flatness-based control of robots with elastic actuators. It is known that robotic arms with series elastic actuators (SEA) are differentially flat control systems, which means that the derivatives of the joint variables (inputs) can be expressed as functions of the derivatives of the EE twist (outputs), see sec. 5.4b). For robotic arms consisting of rigid links and SEA, the vector relative degree of \(\mathbf{q}\) is 4 [31, 91]. Thus, derivatives of \(\mathbf{q}\) up to 4th-order are necessary for application of flatness-based control algorithms (so that the second time derivatives of the EOM are required, see sec. 5.4).
The trajectory planning is usually carried out in the robot workspace, so that the derivatives \(\dot{\mathbf{q}},\ddot{\mathbf{q}},\dddot{\mathbf{q}},\mathbf{q}^{(4)}\) are not given a priori but must be determined from derivatives of the EE twist up to 3rd-order by solving the inverse kinematics problem. This problem is not discussed in any of the relevant publications; rather, these derivatives are assumed to be given.
b) Control of robots with structural flexibility. Controlling inherently flexible robots (such as elastic lightweight arms) must ensure that the trajectories are sufficiently smooth so as to avoid excitation of vibrations. To this end, the higher-order time derivatives of the joint coordinates up to a certain order \(k\) must be bounded. This applies in particular to the optimal control, e.g. time-optimal control, of robotic arms. Within the trajectory planning, by numerical solution of the optimal control problem, the constraints \(\mathbf{q}_{\min}^{(k)}\leq\mathbf{q}^{(k)}\left(t\right)\leq\mathbf{q}_{\max}^{(k)}\) must be included, with a selected order \(k\). It should be mentioned that for redundant manipulators the solution needs to be amended in order to account for possible self motions [97].
### Generalized Inverse Kinematics of a Kinematic Chain
When the motions of all bodies in the kinematic chain are known, the inverse kinematics problem is to determine the corresponding joint motions. This is the counterpart of the forward kinematic problem of a linkage, which is to determine the motion of all bodies for given joint motion.
The equation (26) is an overdetermined system in \(\dot{q}_{i}\), and so are (27)-(29) in \(\ddot{q}_{i}\), \(\dddot{q}_{i}\), and \(q_{i}^{(4)}\), respectively. They possess unique solutions that give rise to the following recursive relations
\[\dot{q}_{i} =\mathbf{S}_{i}^{+}\left(\mathbf{V}_{i}^{\mathrm{s}}-\mathbf{V}_{i-1}^{\mathrm{s}}\right) \tag{86}\]
\[\ddot{q}_{i} =\mathbf{S}_{i}^{+}\left(\dot{\mathbf{V}}_{i}^{\mathrm{s}}-\dot{\mathbf{V}}_{i-1}^{\mathrm{s}}-\dot{q}_{i}\,\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}\mathbf{S}_{i}\right) \tag{87}\]
\[\dddot{q}_{i} =\mathbf{S}_{i}^{+}\left(\ddot{\mathbf{V}}_{i}^{\mathrm{s}}-\ddot{\mathbf{V}}_{i-1}^{\mathrm{s}}-\left(2\ddot{q}_{i}\,\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}+\dot{q}_{i}(\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{2})\right)\mathbf{S}_{i}\right) \tag{88}\]
\[q_{i}^{(4)} =\mathbf{S}_{i}^{+}\left(\dddot{\mathbf{V}}_{i}^{\mathrm{s}}-\dddot{\mathbf{V}}_{i-1}^{\mathrm{s}}-\left(3\dddot{q}_{i}\,\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}+3\ddot{q}_{i}(\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{2})+\dot{q}_{i}(\mathbf{ad}_{\ddot{\mathbf{V}}_{i}^{\mathrm{s}}}+2\,\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}\mathbf{ad}_{\dot{\mathbf{V}}_{i}^{\mathrm{s}}}+\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}^{3})\right)\mathbf{S}_{i}\right) \tag{89}\]
with \(\mathbf{S}_{i}^{+}=\mathbf{S}_{i}^{T}/\left(\mathbf{S}_{i}^{T}\mathbf{S}_{i}\right)=\mathbf{S}_{i}^{T}/\left\|\mathbf{S}_{i}\right\|^{2}\), where \(\left\|\mathbf{S}_{i}\right\|^{2}=\mathbf{S}_{i}^{T}\mathbf{S}_{i}=\left\|\boldsymbol{\xi}_{i}\right\|^{2}+\left\|\boldsymbol{\eta}_{i}\right\|^{2}\).
The explicit form of the recursive expressions can be simplified. Noticing, with (164) and \(\mathbf{S}_{i}=\left(\boldsymbol{\xi}_{i},\boldsymbol{\eta}_{i}\right),\mathbf{V}_{i}^{\mathrm{s}}=\left(\boldsymbol{\omega}_{i}^{\mathrm{s}},\mathbf{v}_{i}^{\mathrm{s}}\right)\), that
\[-\mathbf{S}_{i}^{T}\mathbf{ad}_{\mathbf{V}_{i}^{\mathrm{s}}}\mathbf{S}_{i} =\mathbf{S}_{i}^{T}\mathbf{ad}_{\mathbf{S}_{i}}\mathbf{V}_{i}^{\mathrm{s}}=\left(\begin{array}{cc}\boldsymbol{\xi}_{i}^{T}\hat{\boldsymbol{\xi}}_{i}+\boldsymbol{\eta}_{i}^{T}\hat{\boldsymbol{\eta}}_{i}&\boldsymbol{\eta}_{i}^{T}\hat{\boldsymbol{\xi}}_{i}\end{array}\right)\mathbf{V}_{i}^{\mathrm{s}}=\left(\begin{array}{cc}\mathbf{0}&\boldsymbol{\eta}_{i}^{T}\hat{\boldsymbol{\xi}}_{i}\end{array}\right)\mathbf{V}_{i}^{\mathrm{s}}=-\left(\boldsymbol{\xi}_{i}\times\boldsymbol{\eta}_{i}\right)^{T}\mathbf{v}_{i}^{\mathrm{s}}\]
the relation (87) for the acceleration, for instance, becomes
\[\ddot{q}_{i}=\mathbf{S}_{i}^{+}\left(\dot{\mathbf{V}}_{i}^{\mathrm{s}}-\dot{\mathbf{V}}_{i-1}^{\mathrm{s}}\right)-\frac{\dot{q}_{i}}{\left\|\mathbf{S}_{i}\right\|^{2}}\left(\boldsymbol{\xi}_{i}\times\boldsymbol{\eta}_{i}\right)^{T}\mathbf{v}_{i}^{\mathrm{s}}. \tag{90}\]
The last term in (90) allows a geometric interpretation recalling that \(\boldsymbol{\xi}_{i}\times\boldsymbol{\eta}_{i}\) is the coordinate vector to the point on the instantaneous joint screw axis which is closest to the origin of \(\mathcal{F}_{0}\).
The above solutions (87-89) are exact as long as the twists of all bodies and their derivatives are consistent with the kinematics, i.e. that they satisfy the inter-body constraints due to the joints. If this is not the case, then (87-89) represent the unique solution with minimum error. This is for instance the case when processing measurement data of motion capture systems to generate motions of a human body model.
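A small sketch of (86) and (90), assuming twists ordered as \((\boldsymbol{\omega},\mathbf{v})\) and \(\mathbf{S}_{i}=(\boldsymbol{\xi}_{i},\boldsymbol{\eta}_{i})\); the function names are illustrative only:

```
import numpy as np

def joint_rate(S_i, V_i, V_im1):
    """(86): qdot_i = S_i^+ (V_i^s - V_{i-1}^s) with S_i^+ = S_i^T / ||S_i||^2."""
    return S_i @ (V_i - V_im1) / (S_i @ S_i)

def joint_accel(S_i, dV_i, dV_im1, V_i, qd_i):
    """(90): qddot_i = S_i^+ (dV_i - dV_{i-1}) - qd_i (xi_i x eta_i)^T v_i^s / ||S_i||^2."""
    xi, eta = S_i[:3], S_i[3:]
    v_i = V_i[3:]
    return (S_i @ (dV_i - dV_im1) - qd_i * (np.cross(xi, eta) @ v_i)) / (S_i @ S_i)
```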
**Remark 16**.: _In the above relations (86)-(89), the screw coordinate vector \(\mathbf{S}_{i}\) is regarded as a vector in \(\mathbb{R}^{6}\) with Euclidean norm \(\left\|\mathbf{S}_{i}\right\|^{2}\), and \(\mathbf{S}_{i}^{+}\) is the pseudoinverse of \(\mathbf{S}_{i}\) according to the metric of \(\mathbb{R}^{6}\). While these relations follow with basic
linear algebra, it should be remarked that there is no frame invariant inner product of screws. Geometrically, \(\mathbf{S}_{i}^{T}\mathbf{S}_{i}\) must be interpreted as the pairing of the screw coordinates representing the twist \(\mathbf{S}_{i}\dot{q}_{i}\) with some screw coordinates representing a wrench with intensity \(f\) that is not reciprocal to the former, i.e. a wrench that performs work on the twist. The latter is given in ray coordinates by \(f\mathbf{S}_{i}\)._
## 9 Taylor-Series Expansion of the Spatial Jacobian
### Taylor-series of instantaneous joint screws
The joint screw coordinates as function of the joint variables \(\mathbf{q}\) admit the Taylor series
\[\mathbf{S}_{i}(\mathbf{q}+\mathbf{x}) =\mathbf{S}_{i}(\mathbf{q})+\sum_{k\geq 1}\frac{1}{k!}\mathrm{d}^{k}\mathbf{S}_{i,\mathbf{q}}(\mathbf{x}) \tag{91}\]
\[\begin{aligned} &=\mathbf{S}_{i}(\mathbf{q})+\sum_{j<i}x_{j}\frac{\partial\mathbf{S}_{i}}{\partial q_{j}}+\frac{1}{2!}\sum_{k\leq j<i}x_{k}x_{j}\frac{\partial^{2}\mathbf{S}_{i}}{\partial q_{k}\partial q_{j}}+\frac{1}{3!}\sum_{l\leq k\leq j<i}x_{l}x_{k}x_{j}\frac{\partial^{3}\mathbf{S}_{i}}{\partial q_{l}\partial q_{k}\partial q_{j}}+\ldots\\ &=\mathbf{S}_{i}(\mathbf{q})+\sum_{j<i}x_{j}\left[\mathbf{S}_{j},\mathbf{S}_{i}\right]+\frac{1}{2!}\sum_{k\leq j<i}x_{k}x_{j}\left[\mathbf{S}_{k},\left[\mathbf{S}_{j},\mathbf{S}_{i}\right]\right]+\frac{1}{3!}\sum_{l\leq k\leq j<i}x_{l}x_{k}x_{j}\left[\mathbf{S}_{l},\left[\mathbf{S}_{k},\left[\mathbf{S}_{j},\mathbf{S}_{i}\right]\right]\right]+\ldots\end{aligned} \tag{92}\]
where the differential (54) is now written explicitly as sum of Lie brackets (11). The series (92) can be evaluated directly, or the recursive relation (60) can be used to evaluate (91).
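The direct evaluation of (92) up to second order can be sketched as follows (an illustrative sketch, not part of the original formulation; the se(3) Lie bracket is written out for twists ordered \((\boldsymbol{\omega},\mathbf{v})\)):

```
import numpy as np

def lie_bracket(X1, X2):
    """se(3) Lie bracket of twists (omega, v)."""
    w1, v1 = X1[:3], X1[3:]
    w2, v2 = X2[:3], X2[3:]
    return np.hstack([np.cross(w1, w2), np.cross(w1, v2) + np.cross(v1, w2)])

def screw_taylor_2nd(S, i, x):
    """Second-order Taylor approximation (92) of S_i(q + x) around q,
    given the instantaneous screws S[j] = S_j(q) as rows of an (n, 6) array."""
    out = S[i].copy()
    for j in range(i):                      # first-order terms x_j [S_j, S_i]
        out += x[j] * lie_bracket(S[j], S[i])
    for j in range(i):                      # second-order terms, k <= j < i
        for k in range(j + 1):
            out += 0.5 * x[k] * x[j] * lie_bracket(S[k], lie_bracket(S[j], S[i]))
    return out
```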
### Taylor-series of the Jacobian and the involutive closure of the image space of the kinematic mapping
As in section a), consider the terminal link \(n\) of a kinematic chain, and, for simplicity, denote its KM with \(f\) and its spatial Jacobian with \(\mathbf{J}\). The series expansion of the spatial Jacobian of the serial chain with \(n\) joints is
\[\mathbf{J}\left(\mathbf{q}+\mathbf{x}\right):=\left(\mathbf{S}_{1}(\mathbf{q})\left|\mathbf{S}_{2}(\mathbf{q}+\mathbf{x})\left|\mathbf{S}_{3}(\mathbf{q}+\mathbf{x})\right.\right|\cdots\left|\mathbf{S}_{n}(\mathbf{q}+\mathbf{x})\right.\right) \tag{93}\]
with \(\mathbf{S}_{i}=\mathbf{S}_{i}(\mathbf{q})\) and the expansion of the instantaneous joint screws (92). In accordance with (8), the screw coordinate of the first joint is constant. The \(i\)th column of the spatial Jacobian depends on the increments of joint variables of the preceding joints \(j=1,\ldots,i-1\).
The image space of \(\mathbf{J}\left(\mathbf{q}\right)\) is the \(se\left(3\right)\)-subspace of possible twists of the terminal link \(n\) of the open kinematic chain at \(\mathbf{q}\). The space of possible twist at another configuration is determined by inserting a particular \(\mathbf{x}\in\mathbb{R}^{n}\) in (93). The form (92) allows to relate these spaces and thus to estimate the image space of the KM.
From (92) follows that, for any \(\mathbf{x}\in\mathbb{R}^{n}\),
\[\mathrm{im}\ \mathbf{J}\left(\mathbf{q}+\mathbf{x}\right)\subseteq\mathrm{span}\ \left(\mathbf{S}_{i},\left[\mathbf{S}_{j},\mathbf{S}_{i}\right],\left[\mathbf{S}_{k},\left[\mathbf{S}_{j},\mathbf{S}_{i}\right]\right],\left[\mathbf{S}_{l},\left[\mathbf{S}_{k},\left[\mathbf{S}_{j},\mathbf{S}_{i}\right]\right]\right],\left[\mathbf{S}_{m},\left[\mathbf{S}_{l},\left[\mathbf{S}_{k},\left[\mathbf{S}_{j},\mathbf{S}_{i}\right]\right]\right]\right],\ldots\right) \tag{94}\]
with \(\mathbf{S}_{i}=\mathbf{S}_{i}(\mathbf{q})\). A basis for any subalgebra of the semi-direct product \(se\left(3\right)=so\left(3\right)\ltimes\mathbb{R}^{3}\) (semidirect product of two 3-dim Lie algebras) is obtained after at most three-fold application of the Lie bracket (this was discussed for \(se\left(3\right)\) in [49]). Consequently, the image space of \(\mathbf{J}\) at any \(\mathbf{q}\) is a vector subspace of the Lie algebra \(\overline{D}=\mathrm{span}\ \left(\mathbf{S}_{i},\left[\mathbf{S}_{j},\mathbf{S}_{i}\right],\left[\mathbf{S}_{k},\left[\mathbf{S}_{j},\mathbf{S}_{i}\right]\right],\left[\mathbf{S}_{l},\left[\mathbf{S}_{k},\left[\mathbf{S}_{j},\mathbf{S}_{i}\right]\right]\right]\right)\). The latter is the involutive closure of the screw system \(\left\{\mathbf{S}_{1},\ldots,\mathbf{S}_{n}\right\}\). The expression of \(f_{i}\) in (2) along with (8) shows that \(\mathrm{Ad}_{f_{i}}\overline{D}=\overline{D},i=1,\ldots,n\), which simply means that the motion of any joint of the chain does not change the space of possible twists that the terminal link can perform for all possible configurations. Thus, the smallest \(se\left(3\right)\)-subalgebra, i.e. the vector space, to which the terminal twist belongs for any possible configuration of the chain, is
\[\overline{D}=\mathrm{span}\ \left(\mathbf{Y}_{i},\left[\mathbf{Y}_{j},\mathbf{Y}_{i}\right],\left[\mathbf{Y}_{k},\left[\mathbf{Y}_{j},\mathbf{Y}_{i}\right]\right],\left[\mathbf{Y}_{l},\left[\mathbf{Y}_{k},\left[\mathbf{Y}_{j},\mathbf{Y}_{i}\right]\right]\right],\ i,j,k,l=1,\ldots,n\right). \tag{95}\]
The corresponding Lie group, denoted \(G:=\exp\overline{D}\), is the smallest subgroup containing the image space of the KM.
Thus, an involutive closure of the image space of the KM can be determined by 3-fold Lie brackets of the joint screws \(\mathbf{Y}_{i}\) in the reference configuration. The \(\mathbf{Y}_{i}\) can be determined in any reference configuration, even singular.
### Applications
a) Structural mobility formulae for estimating the finite DOF of linkages. The local finite DOF \(\delta_{\mathrm{loc}}\left(\mathbf{q}\right)\) of a single-loop linkage in configuration \(\mathbf{q}\) is the local dimension of the c-space \(V\) at \(\mathbf{q}\), which is the solution variety (38) of the loop constraints \(f\left(\mathbf{q}\right)=\mathbf{I}\). The maximal possible number \(m\) of independent constraints is the dimension of the image space of \(f\), which is the maximal rank of \(\mathbf{J}\left(\mathbf{q}\right)\) for \(\mathbf{q}\in\mathbb{V}^{n}\). Since the dimension of the loop algebra is never lower than that of \(\mathrm{im}\,f\), it follows that \(\delta_{\mathrm{loc}}\left(\mathbf{q}\right)\geq n-\dim\mathrm{im}\,f\geq n-\dim\overline{D}\). It thus provides an upper bound on the number of independent constraints. The dimension of the loop algebra (respectively of the motion group) is the parameter appearing in all _structural mobility_ formulae [43], which estimate the (internal) finite DOF by relating the DOFs of bodies and the number of constraints imposed by the joints. Best known is the Chebychev-Kutzbach-Grübler (CKG) formula that computes the structural mobility as \(\delta_{\mathrm{str}}=g\left(n_{\mathrm{B}}-1\right)-\sum_{i}^{n_{\mathrm{J}}}\left(g-f_{i}\right)=\sum_{i}^{n_{\mathrm{J}}}f_{i}-g\left(n_{\mathrm{J}}-n_{\mathrm{B}}+1\right)\), wherein the characteristic parameter \(g\) indicates the assumed DOF of the unconstrained bodies, i.e. before assembling the mechanism. Common choices are \(g=3\) for 'planar' and 'spherical', and \(g=6\) for 'spatial' mechanisms. That is, members of the mechanism are a priori supposed to be restricted to a certain motion subspace. The CKG formula can be written as \(\delta_{\mathrm{str}}=\sum_{i}^{n_{\mathrm{J}}}f_{i}-g\gamma\), where \(\gamma=n_{\mathrm{J}}-n_{\mathrm{B}}+1\) is the number of fundamental cycles (FCs) \(\Lambda_{1},\ldots,\Lambda_{\gamma}\) of the mechanism's topological graph [84, 85]. Separating this for the individual FCs yields \(\delta_{\mathrm{str}}=\sum_{i}f_{i}-\left(g_{1}+g_{2}+\ldots+g_{\gamma}\right)\). The CKG formula computes the correct DOF for mechanisms without kinematic loops. For a closed loop mechanism (\(\gamma>0\)), \(g_{l}\) is the number of constraints imposed on the FC \(\Lambda_{l}\) in order to close the corresponding kinematic loop. For a given linkage, the number of constraints that are independent for a general configuration \(\mathbf{q}\in V\) depends on the particular geometry. The fact that the latter cannot be inferred directly from the geometry for overconstrained linkages is still a topic of research. However, since each FC corresponds to a kinematic chain, the dimension of the loop algebra provides an upper estimate of the number of independent constraints: \(g_{l}=\dim\overline{D}_{l}\). Possible values are \(g_{l}=1,2,3,4,6\) since there is no 5-dimensional \(SE\left(3\right)\)-subgroup. This admits a systematic treatment without the need to guess the motion characteristic \(g\), which is usually assumed known a priori rather than determined from the linkage kinematics.
The motion subgroup associated to a kinematic loop has been the central element in the classification proposed by Hervé [50, 51]. According to this classification, a linkage is called 'trivial' if the CKG formula with \(g_{l}=\dim\overline{D}_{l}\) yields the correct finite mobility. It is called 'exceptional' if the mobility can be explained by the intersection of the motion subgroups associated to two subchains (obtained by opening the loop), which can also be determined by (95). Otherwise the linkage is called 'paradoxical'.
Rico & Ravani [100, 101] developed mobility formulae based on the Lie algebra \(\overline{D}\) generated by the chain. These mobility criteria were refined by using the intersection of Lie algebras generated by subchains in a certain order [102, 103]. These closure algebras are also determined by (95).
The different motion spaces generated by individual kinematic chains were also addressed in [112] for the generation of motion equations of parallel manipulators.
Example 1 (cont.): 4C-Linkage with a shaky motion mode. The loop algebra of the 4C linkage in fig. 2 is determined with the screw coordinates (44). The loop algebra is invariant and can even be deduced from this singular configuration. According to (95), the nested Lie brackets yield \(\overline{D}=se\left(3\right)\). As shown in sec. 5.4 the linkage has local DOF \(\delta=2\). The 4C linkage thus obeys the CKG formula with \(g=\dim\overline{D}=6\) and \(\sum_{i}f_{i}=8\), i.e. \(\delta_{\mathrm{str}}=8-6=2\). According to the terminology proposed by Hervé [50, 51], this linkage is trivial.
Example 2: 4-bar with 2R and 2C joints. The linkage shown in fig. 5a) comprises two revolute and two cylindrical joints (modeled as a revolute followed by a prismatic joint). The links have equal lengths \(L\). The joint screw coordinates in the shown reference configuration are
\[\mathbf{Y}_{1} =\left(0,0,1,0,0,0\right)^{T},\mathbf{Y}_{2}=\left(0,0,1,0,-L,0 \right)^{T},\mathbf{Y}_{3}=\left(0,0,1,L,-L,0\right)^{T}\] \[\mathbf{Y}_{4} =\left(0,0,0,0,0,1\right)^{T},\mathbf{Y}_{5}=\left(0,0,1,L,0,0 \right)^{T},\mathbf{Y}_{6}=\left(0,0,0,0,0,1\right)^{T}. \tag{96}\]
The closure algebra is readily found as \(\overline{D}=se\left(2\right)\times\mathbb{R}\), the algebra of planar motions and translations along the plane normal. Thus \(g=\dim\overline{D}=4\), and the CKG formula yields \(\delta_{\mathrm{str}}=6-4=2\). This is a correct estimation, and the linkage is trivial. The calculation can be found in the provided Mathematica notebook.
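The closure algebra can also be checked numerically. The following sketch computes the dimension of the span of the screws (96) and their nested Lie brackets up to three-fold depth, cf. (95); with the (arbitrary) choice \(L=1\) it returns \(g=4\) and hence \(\delta_{\text{str}}=6-4=2\) (helper names are illustrative):

```
import itertools
import numpy as np

def lie_bracket(X1, X2):
    w1, v1 = X1[:3], X1[3:]
    w2, v2 = X2[:3], X2[3:]
    return np.hstack([np.cross(w1, w2), np.cross(w1, v2) + np.cross(v1, w2)])

def closure_dim(Y, depth=3, tol=1e-9):
    """Dimension of the involutive closure (95): span of the screws and
    up to 'depth'-fold nested Lie brackets (three-fold suffices in se(3))."""
    gens = [np.asarray(y, float) for y in Y]
    basis = list(gens)
    for _ in range(depth):
        basis += [lie_bracket(a, b) for a, b in itertools.product(gens, basis)]
    return int(np.linalg.matrix_rank(np.column_stack(basis), tol=tol))

L = 1.0   # link length; the closure dimension does not depend on its value
Y = [(0,0,1, 0, 0,0), (0,0,1, 0,-L,0), (0,0,1, L,-L,0),
     (0,0,0, 0, 0,1), (0,0,1, L, 0,0), (0,0,0, 0, 0,1)]
g = closure_dim(Y)        # -> 4  (se(2) x R)
print(g, 6 - g)           # structural mobility: sum f_i - g = 6 - 4 = 2
```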
Example 3: Delassus 4H linkage. The linkage comprising four helical joints with respective pitch \(h_{1},h_{2},h_{3},h_{4}\) is shown in fig. 5b). The condition for mobility is that \(h_{1}+h_{3}=h_{2}+h_{4}\). Links 1 and 3 have length \(a\), and links 2 and 4 have length \(b\). In the shown reference configuration the joint screw coordinates are
\[\mathbf{Y}_{1}=\left(0,0,1,0,0,h_{1}\right)^{T},\mathbf{Y}_{2}=\left(0,0,1,0,-a, h_{2}\right)^{T},\mathbf{Y}_{3}=\left(0,0,1,b,-a,h_{3}\right)^{T},\mathbf{Y}_{4}= \left(0,0,1,b,0,h_{4}\right)^{T}.\]
The closure algebra is found as \(\overline{D}=so\left(2\right)\times\mathbb{R}^{3}\), which is the algebra of Schonflies motions (Scara motions) with \(g=\dim\overline{D}=4\). The CKG formula would incorrectly estimate the DOF \(\delta_{\text{str}}=4-4=0\). The linkage is paradoxical.
For the special case that \(h=h_{1}=h_{2}=h_{3}=h_{4}\) the closure algebra becomes
\[\overline{D}=\text{span}\,\left(\left(0,0,0,0,1,0\right)^{T},\left(0,0,0,1,0,0 \right)^{T},\left(0,0,1,0,0,h\right)^{T}\right)=H_{h}\ltimes\mathbb{R}^{2}\]
which is the algebra of planar motions and screw motions with pitch \(h\) perpendicular to the plane of motion. Then \(g=\dim\overline{D}=3\) and the CKG formula gives the correct result \(\delta_{\text{str}}=4-3=1\), i.e. the linkage with this geometry is trivial. Again, the calculation can be found in the provided Mathematica notebook.
b) Elimination of redundant loop constraints. The existence of redundant loop constraints is a critical issue in computational kinematics and multibody dynamics [55, 120] since they lead to velocity constraints with a singular coefficient matrix. Simulation tools usually do not distinguish between different motion spaces and always assign the maximal number of constraints. The standard approach is to employ numerically robust algorithms for solving redundant linear systems (e.g. SVD), which leads to a significant increase in computational effort.
However, instead of such a purely numerical approach, the problem can be addressed by making use of the loop algebra.
Consider a single kinematic loop. The velocity constraints are given by (39). The maximal rank of the Jacobian is generally less than 6 depending on the motion space of the kinematic chain, for which the loop algebra provides an upper bound. The idea is to reduce the constraints to this subalgebra. A basis for the loop algebra is given by (95). Denote with \(\mathbf{B}\) the matrix with \(\mathbf{Y}_{i}\) and the Lie brackets in (95) as its columns. It has rank \(g=\dim\overline{D}\leq 6\) and admits a singular value decomposition \(\mathbf{B}=\mathbf{U}^{T}\mathbf{\Sigma}\mathbf{V}\), with \(\mathbf{U}^{T}\mathbf{U}=\mathbf{I}_{6},\mathbf{V}^{T}\mathbf{V}=\mathbf{I}_{g}\) and \(\mathbf{\Sigma}=\text{diag}\left(\sigma_{1},\ldots,\sigma_{g},0,\ldots,0\right)\), where \(\sigma_{1},\ldots,\sigma_{g}\) are the non-zero singular values. The constraint Jacobian has rank \(\mathbf{J}\leq g\). It can be reduced to the \(g\times n\) matrix \(\overline{\mathbf{J}}:=\overline{\mathbf{U}}\mathbf{J}\), where \(\overline{\mathbf{U}}\) is the \(g\times 6\) matrix consisting of the first \(g\) rows of \(\mathbf{U}\). The reduced constraints are then \(\overline{\mathbf{J}}\dot{\mathbf{q}}=\mathbf{0}\). This method never removes independent constraints. It always removes the correct number of redundant constraints for so-called trivial and exceptional linkages. It is numerically well-posed since the basis matrix (consisting of screw coordinates) is well-conditioned. Its application to multi-loop linkages was presented in [77].
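A numerical sketch of this reduction (using numpy's SVD convention \(\mathbf{B}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{T}\), so the first \(g\) columns of numpy's \(\mathbf{U}\) span \(\mathrm{im}\,\mathbf{B}\); this is an illustrative sketch, not the implementation of [77]):

```
import numpy as np

def reduce_constraints(J, B, tol=1e-9):
    """Reduce the 6 x n velocity constraint Jacobian J to a g x n matrix,
    given a basis matrix B whose columns span the loop algebra (95).
    Since im(J) lies in im(B), the reduced constraints U[:, :g]^T J qdot = 0
    retain all independent constraints."""
    U, s, Vh = np.linalg.svd(B)
    g = int(np.sum(s > tol * s[0]))   # g = dim of the loop algebra
    return U[:, :g].T @ J, g
```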
## 10 Higher-Order Solution of Loop Closure Constraints
Due to the closure constraints, the joint coordinates of a kinematic loop are dependent. A set of independent coordinates can be selected to parameterize the configuration of the linkage. The motion of the linkage can be determined from a prescribed motion of the independent joint coordinates by solution of the geometric loop constraints. The latter cannot be solved analytically, but a higher-order approximate solution of the closure constraints in form of a power series can be used. This problem has been addressed in [75] where a series expansion was derived in terms of screw coordinates. It was also mentioned as an application of the formulation presented in [61]. In the recent publication [57] a systematic approach to the higher-order solution was presented.
Figure 5: a) 2-DOF linkage comprising two revolute and two cylindrical joints. b) 1-DOF linkage comprising 4 helical joints.
### Time derivatives of the solution of loop closure constraints
Possible joint velocities \(\dot{\mathbf{q}}\) of a closed loop must satisfy the velocity loop constraints (39). Admissible accelerations are the solutions of the acceleration constraints, i.e. of the time derivative of (39). Finally, the \(k\)th time derivative of \(\mathbf{q}\) must satisfy the constraints (40).
**Assumption 2**.: _The Jacobian in the velocity loop constraints (39) is a full rank \(m\times n\) matrix, so that the differential and local DOF of the kinematic loop is \(\delta_{\mathrm{diff}}\left(\mathbf{q}\right)=\delta_{\mathrm{loc}}\left( \mathbf{q}\right)=n-m\) (i.e. the linkage is not shaky/underconstrained). It is assumed that its rank is locally constant near the considered point \(\mathbf{q}\in\mathbb{R}^{n}\)._
The solution of the velocity constraints (39) can be expressed in terms of \(\delta\) independent joint velocities. To this end, the joint coordinate vector \(\mathbf{q}\) is split into the vector \(\mathbf{u}\in\mathbb{V}^{\delta}\) of \(\delta\) selected independent coordinates and the vector \(\mathbf{d}\in\mathbb{V}^{m}\) comprising the \(m\) dependent coordinates, which are collected in the (rearranged) vector of joint coordinates \(\overline{\mathbf{q}}:=\left(\mathbf{d},\mathbf{u}\right)\). The constraints are accordingly written as
\[\mathbf{0}=\mathbf{J}_{\mathrm{d}}\dot{\mathbf{d}}+\mathbf{J}_{\mathrm{u}} \dot{\mathbf{u}} \tag{97}\]
with corresponding partitioning of the Jacobian
\[\begin{aligned}\mathbf{J}_{\mathrm{d}} &=\left(\mathbf{S}_{\alpha_{1}},\ldots,\mathbf{S}_{\alpha_{m}}\right),\ \alpha_{1}<\ldots<\alpha_{m}\in I_{\mathrm{d}}\\ \mathbf{J}_{\mathrm{u}} &=\left(\mathbf{S}_{\alpha_{1}},\ldots,\mathbf{S}_{\alpha_{\delta}}\right),\ \alpha_{1}<\ldots<\alpha_{\delta}\in I_{\mathrm{u}}\end{aligned} \tag{98}\]
where \(I_{\mathrm{d}}\) and \(I_{\mathrm{u}}\) are the index sets of the \(m\) dependent and the \(\delta\) independent joint coordinates, respectively. A solution of (39) is given in terms of the orthogonal complement \(\mathbf{F}\) of \(\mathbf{J}\) as
\[\dot{\overline{\mathbf{q}}}=\mathbf{F}\dot{\mathbf{u}},\text{ with }\mathbf{F}:=\left(\begin{array}{c}\mathbf{D}\\ \mathbf{I}_{\delta}\end{array}\right),\ \mathbf{D}:=-\mathbf{J}_{\mathrm{d}}^{-1}\mathbf{J}_{\mathrm{u}}. \tag{99}\]
Deriving solutions for higher time derivatives of \(\mathbf{q}\) in terms of those of \(\mathbf{u}\) boils down to time derivatives of \(\dot{\mathbf{d}}=\mathbf{D}\dot{\mathbf{u}}\):
\[\mathbf{d}^{(k)}=\sum_{l=0}^{k-1}\binom{k-1}{l}\mathrm{D}^{(l)}\mathbf{D}\,\mathbf{u}^{(k-l)}. \tag{100}\]
According to the definition (99), it is
\[\mathrm{D}^{(k)}\mathbf{D}=-\sum_{l=0}^{k}\binom{k}{l}\mathrm{D}^{(l)}\mathbf{ J}_{\mathrm{d}}^{-1}\mathrm{D}^{(k-l)}\mathbf{J}_{\mathrm{u}}. \tag{101}\]
Rearranging the derivatives of \(\mathbf{J}_{\mathrm{d}}\mathbf{J}_{\mathrm{d}}^{-1}=\mathbf{I}\), the derivatives of \(\mathbf{J}_{\mathrm{d}}^{-1}\) are found as
\[\mathrm{D}^{(k)}\mathbf{J}_{\mathrm{d}}^{-1}=-\mathbf{J}_{\mathrm{d}}^{-1} \sum_{l=1}^{k}\binom{k}{l}\mathrm{D}^{(l)}\mathbf{J}_{\mathrm{d}}\mathrm{D}^{( k-l)}\mathbf{J}_{\mathrm{d}}^{-1}. \tag{102}\]
Time derivatives of \(\mathbf{J}_{\mathrm{d}}\) and \(\mathbf{J}_{\mathrm{u}}\) are known with those of the \(\mathbf{S}_{i},i=1,\ldots,n\). However, the screw coordinates must be considered as functions of the \(\delta\) independent coordinates \(\mathbf{u}\). This dependence is indicated by the notation \(\overline{\mathbf{S}}_{i}\left(\mathbf{u}\right):=\mathbf{S}_{i}\left(\mathbf{q}\left(\mathbf{u}\right)\right)\) and \(\overline{\mathbf{F}}\left(\mathbf{u}\right):=\mathbf{F}\left(\mathbf{q}\left(\mathbf{u}\right)\right)\).
Due to (99), partial derivatives of \(\overline{\mathbf{S}}_{i}\) w.r.t. the independent coordinates are
\[\frac{\partial}{\partial u_{l}}\overline{\mathbf{S}}_{i}=\sum_{j\leq n}\frac{ \partial}{\partial q_{j}}\mathbf{S}_{i}F_{jl}=\sum_{j\leq n}[\overline{ \mathbf{S}}_{j},\overline{\mathbf{S}}_{i}]F_{jl}. \tag{103}\]
Thus the time derivatives of \(\overline{\mathbf{S}}_{i}\) are
\[\dot{\overline{\mathbf{S}}}_{i}=\sum_{l\leq\delta}\sum_{j\leq i}[\overline{\mathbf{S}}_{j},\overline{\mathbf{S}}_{i}]F_{jl}\dot{u}_{l}=[\overline{\mathbf{V}}_{i},\overline{\mathbf{S}}_{i}] \tag{104}\]
where now, in analogy to (16),
\[\overline{\mathbf{V}}_{i}:=\sum_{l\leq\delta}\sum_{j\leq i}\overline{\mathbf{S}}_{j}F_{jl}\dot{u}_{l}. \tag{105}\]
Higher-order time derivatives then follow with Leibniz' rule, similarly to (19) and (17), as
\[\mathrm{D}^{(k)}\overline{\mathbf{S}}_{i} =\sum_{l=0}^{k-1}\binom{k-1}{l}[\mathrm{D}^{(l)}\overline{\mathbf{V}}_{i},\mathrm{D}^{(k-l-1)}\overline{\mathbf{S}}_{i}] \tag{106}\]
\[\mathrm{D}^{(k)}\overline{\mathbf{V}}_{i} =\sum_{j\leq i}\sum_{l=0}^{k}\binom{k}{l}\mathrm{D}^{(l)}\overline{\mathbf{S}}_{j}q_{j}^{(k-l+1)}. \tag{107}\]
Noting (99), the time derivatives of \(\mathbf{q}\) in (107) are determined by those of \(\mathbf{u}\) as
\[\overline{\mathbf{q}}^{(k)}=\sum_{l=0}^{k-1}\binom{k-1}{l}\mathrm{D}^{(l)} \mathbf{F}\mathbf{u}^{(k-l)}. \tag{108}\]
Finally, with (99), the derivatives of the orthogonal complement are available with (101) as
\[\mathrm{D}^{(k)}\mathbf{F}=\left(\begin{array}{c}\mathrm{D}^{(k)}\mathbf{D}\\ \mathbf{0}_{\delta}\end{array}\right),k\geq 1 \tag{109}\]
In summary, the relations (100,101,102) allow to determine solutions of the higher-order loop constraints, where the time derivatives of the joint screw coordinates are given by (106,107). These relations are recursive.
**Remark 17**.: _The curve parameter \(t\) does not have to be time. It can be any parameter describing the curve \(\mathbf{q}\left(t\right)\)._
**Remark 18**.: _The treatment presented in the following is still applicable even if the assumption 2, that the constraint Jacobian has full rank, is not satisfied. It must only be assumed that the rank is locally constant in \(V\) (i.e. the linkage is not in a singularity). Then \(\mathbf{J}_{d}\) is not full rank and its pseudoinverse must be used._
### Approximate Solution of Closure Constraints
The higher-order solutions can be used to compute \(k\)th-order approximate solutions of the motion of a closed-loop linkage. Given the independent joint coordinates \(\mathbf{u}\left(t\right)\) and their time derivatives up to order \(k\) as function of a parameter \(t\) (e.g. time), then a \(k\)th-order approximate solution for the motion of the linkage is
\[\mathbf{q}\left(t+\Delta t\right)=\mathbf{q}\left(t\right)+ \Delta t\dot{\mathbf{q}}\left(t\right)+\frac{\Delta t^{2}}{2}\ddot{\mathbf{q}} \left(t\right)+\ldots+\frac{\Delta t^{k}}{k!}\mathbf{q}^{(k)}\left(t\right). \tag{110}\]
In (110) the solution (108) in terms of the independent coordinates is used.
Example: Planar 4-Bar linkage. As a simple example consider the planar 4-bar linkage in the configuration shown in Fig. 6. The screw coordinates are then
\[\mathbf{Y}_{1}=\left(0,0,1,0,0,0\right)^{T},\mathbf{Y}_{2}=\left(0,0,1,0,-2,0 \right)^{T},\mathbf{Y}_{3}=\left(0,0,1,1,-1,0\right)^{T},\mathbf{Y}_{4}=\left( 0,0,1,1,0,0\right)^{T}.\]
a) First \(q_{4}\) is used as the independent joint angle, i.e. it serves as input for a kinematic motion analysis. The index sets of dependent and independent coordinates are \(I_{\text{d}}=\{1,2,3\}\) and \(I_{\text{u}}=\{4\}\), respectively. The recursive relations yield the time derivatives of \(\mathbf{q}\left(t\right)=\left(q_{1}\left(t\right),q_{2}\left(t\right),q_{3}\left(t\right),q_{4}\left(t\right)\right)\) up to 4th order, for instance, in terms of those of \(q_{4}\left(t\right)\) as
\[\dot{\mathbf{q}}=\left(\begin{array}{c}-\frac{1}{2}\dot{q}_{4}\\ \frac{1}{2}\dot{q}_{4}\\ -\dot{q}_{4}\\ \dot{q}_{4}\end{array}\right),\ \ddot{\mathbf{q}}=\left(\begin{array}{c}\frac{1}{4}\left(\dot{q}_{4}^{2}-2\ddot{q}_{4}\right)\\ \frac{1}{4}\left(2\ddot{q}_{4}+\dot{q}_{4}^{2}\right)\\ -\ddot{q}_{4}-\frac{1}{2}\dot{q}_{4}^{2}\\ \ddot{q}_{4}\end{array}\right),\ \dddot{\mathbf{q}}=\left(\begin{array}{c}\frac{1}{4}\left(3\dot{q}_{4}^{3}+3\dot{q}_{4}\ddot{q}_{4}-2\dddot{q}_{4}\right)\\ \frac{1}{4}\left(2\dddot{q}_{4}+3\dot{q}_{4}\ddot{q}_{4}\right)\\ -\dddot{q}_{4}-\frac{3}{4}\dot{q}_{4}^{3}-\frac{3}{2}\dot{q}_{4}\ddot{q}_{4}\\ \dddot{q}_{4}\end{array}\right),\ \mathbf{q}^{(4)}=\left(\begin{array}{c}\frac{1}{4}\left(8\dot{q}_{4}^{4}+18\dot{q}_{4}^{2}\ddot{q}_{4}+3\ddot{q}_{4}^{2}+4\dot{q}_{4}\dddot{q}_{4}-2q_{4}^{(4)}\right)\\ \frac{1}{4}\left(2\dot{q}_{4}^{4}+3\ddot{q}_{4}^{2}+4\dot{q}_{4}\dddot{q}_{4}+2q_{4}^{(4)}\right)\\ -\frac{1}{2}\left(5\dot{q}_{4}^{4}+9\dot{q}_{4}^{2}\ddot{q}_{4}+3\ddot{q}_{4}^{2}+4\dot{q}_{4}\dddot{q}_{4}+2q_{4}^{(4)}\right)\\ q_{4}^{(4)}\end{array}\right).\]
These expressions can be used to construct a 4th-order approximation of the solution of the closure constraints, denoted \(\mathbf{q}^{\left[4\right]}\left(t\right)\). For the particular case of \(q_{4}\left(t\right)=\sin t\), this is
\[\mathbf{q}^{\left[4\right]}\left(t\right)=\left(\begin{array}{c}\frac{1}{24} \left(t^{4}+5t^{3}+3t^{2}-12t\right),-\frac{1}{48}\left(t^{4}+4t^{3}-6t^{2}-2 4t\right),-\frac{1}{48}\left(t^{4}-2t^{3}+12t^{2}+48t\right),\sin t\end{array} \right)^{T}.\]
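The first-order coefficients of case a) can be reproduced numerically from the partitioning (98)-(99); a minimal sketch (the pseudoinverse is used since \(\mathbf{J}_{\mathrm{d}}\) is a \(6\times 3\) matrix of full column rank):

```
import numpy as np

Y1 = np.array([0,0,1, 0, 0,0]); Y2 = np.array([0,0,1, 0,-2,0])
Y3 = np.array([0,0,1, 1,-1,0]); Y4 = np.array([0,0,1, 1, 0,0])

Jd = np.column_stack([Y1, Y2, Y3])   # dependent coordinates I_d = {1,2,3}
Ju = Y4.reshape(6, 1)                # independent coordinate  I_u = {4}
D  = -np.linalg.pinv(Jd) @ Ju        # (99): qdot_d = D * qdot_4
print(D.ravel())                     # -> [-0.5  0.5 -1. ]
```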
b) Now \(q_{1}\) is used as the independent coordinate. The index sets are \(I_{\text{d}}=\{2,3,4\}\) and \(I_{\text{u}}=\{1\}\), and the time derivatives are determined as
\[\dot{\mathbf{q}}=\left(\begin{array}{c}\dot{q}_{1}\\ -\dot{q}_{1}\\ 2\dot{q}_{1}\\ -2\dot{q}_{1}\end{array}\right),\ \ddot{\mathbf{q}}=\left(\begin{array}{c}\ddot{q}_{1}\\ 2\dot{q}_{1}^{2}-\ddot{q}_{1}\\ 2\ddot{q}_{1}-4\dot{q}_{1}^{2}\\ 2\left(\dot{q}_{1}^{2}-\ddot{q}_{1}\right)\end{array}\right),\ \dddot{\mathbf{q}}=\left(\begin{array}{c}\dddot{q}_{1}\\ -\dddot{q}_{1}-12\dot{q}_{1}^{3}+6\dot{q}_{1}\ddot{q}_{1}\\ 2\left(\dddot{q}_{1}+15\dot{q}_{1}^{3}-6\dot{q}_{1}\ddot{q}_{1}\right)\\ -2\left(\dddot{q}_{1}+9\dot{q}_{1}^{3}-3\dot{q}_{1}\ddot{q}_{1}\right)\end{array}\right),\ \mathbf{q}^{(4)}=\left(\begin{array}{c}q_{1}^{(4)}\\ -q_{1}^{(4)}+154\dot{q}_{1}^{4}+6\ddot{q}_{1}^{2}+8\dot{q}_{1}\dddot{q}_{1}-72\dot{q}_{1}^{2}\ddot{q}_{1}\\ 2\left(q_{1}^{(4)}-184\dot{q}_{1}^{4}-6\ddot{q}_{1}^{2}-8\dot{q}_{1}\dddot{q}_{1}+90\dot{q}_{1}^{2}\ddot{q}_{1}\right)\\ -2\left(q_{1}^{(4)}-107\dot{q}_{1}^{4}-3\ddot{q}_{1}^{2}-4\dot{q}_{1}\dddot{q}_{1}+54\dot{q}_{1}^{2}\ddot{q}_{1}\right)\end{array}\right).\]
The 4th-order approximation in terms of the independent coordinate \(q_{1}\) with \(q_{1}\left(t\right)=\sin t\) is
\[\mathbf{q}^{\left[4\right]}\left(t\right)=\left(\sin t,\frac{73}{12}t^{4}- \frac{11}{6}t^{3}+t^{2}-t,-\frac{2}{3}\left(22t^{4}-7t^{3}+3t^{2}-3t\right), \frac{103}{12}t^{4}-\frac{8}{3}t^{3}+t^{2}-2t\right)^{T}.\]
For illustration purposes, the approximations are compared with the exact solutions in Fig. 7.
Figure 6: Planar 4-bar linkage in reference configuration \(\mathbf{q}=\mathbf{0}\).
## 11 Other Representations of Twists
The advantage of the spatial representation is that the twists of all bodies are measured and resolved in the same frame, namely the IFR \(\mathcal{F}_{0}\). For this reason it is used in theoretical kinematics and recently also in MBS dynamics. For motion control of robotic manipulators and in the classical formulations of the motion equations for MBS, the body-fixed and the hybrid representation of twists is used. The kinematics and dynamics formulations of MBS are traditionally expressed in terms of body-fixed twists [2, 5, 7, 38, 92, 52]. Using the spatial representation, however, leads to computationally more efficient algorithms, which is already apparent from the recursive relations (26-29) as they do not involve a frame transformation of twists. Recent formulations therefore employ the spatial representations of twists (and dually of wrenches) that stem (at least conceptually) from the spatial operator algebra introduced in [104, 105, 106] and advanced in [56]. The Articulated-Body and the Composite-Rigid-Body algorithms [36, 37] are the most prominent.
In various situations it is necessary to relate the different representations. The relevant relations are presented in the following.
### Body-Fixed Representation
#### 11.1.1 Instantaneous joint screws and the body-fixed Jacobian
The twist of body \(i\) in _body-fixed_ representation is expressed by the coordinate vector
\[\mathbf{V}_{i}^{\text{b}}=\left(\begin{array}{c}\boldsymbol{\omega}_{i}^{ \text{b}}\\ \mathbf{v}_{i}^{\text{b}}\end{array}\right) \tag{111}\]
where \(\boldsymbol{\omega}_{i}^{\text{b}}\) is the angular velocity of the body-fixed frame \(\mathcal{F}_{i}\) relative to the world frame \(\mathcal{F}_{0}\) resolved in \(\mathcal{F}_{i}\), and \(\mathbf{v}_{i}^{\text{b}}=\mathbf{R}_{i}^{T}\dot{\mathbf{r}}_{i}\) is the translational velocity of the origin of \(\mathcal{F}_{i}\) relative to \(\mathcal{F}_{0}\) resolved in \(\mathcal{F}_{i}\).
Denote with \({}^{i}\mathbf{b}_{i,j}\left(\mathbf{q}\right)\) the instantaneous position vector of a point on the axis of joint \(j\) measured in the body-fixed frame \(\mathcal{F}_{i}\), and with \({}^{i}\mathbf{e}_{j}\left(\mathbf{q}\right)\) a unit vector along the axis resolved in \(\mathcal{F}_{i}\). The body-fixed twist coordinate vector is readily constructed geometrically as [82, 89, 5]
\[\mathbf{V}_{i}^{\text{b}} =\dot{q}_{1}\left(\begin{array}{c}{}^{i}\mathbf{e}_{1}\\ {}^{i}\mathbf{b}_{i,1}\times{}^{i}\mathbf{e}_{1}+{}^{i}\mathbf{e}_{1}h_{1} \end{array}\right)+\dot{q}_{2}\left(\begin{array}{c}{}^{i}\mathbf{e}_{2}\\ {}^{i}\mathbf{b}_{i,2}\times{}^{i}\mathbf{e}_{2}+{}^{i}\mathbf{e}_{2}h_{2} \end{array}\right)+\ldots+\dot{q}_{i}\left(\begin{array}{c}{}^{i}\mathbf{e}_{ i}\\ {}^{i}\mathbf{b}_{i,i}\times{}^{i}\mathbf{e}_{i}+{}^{i}\mathbf{e}_{i}h_{i} \end{array}\right) \tag{112}\] \[=\dot{q}_{1}\mathbf{B}_{i,1}+\dot{q}_{2}\mathbf{B}_{i,2}+\ldots+ \dot{q}_{i}\mathbf{B}_{i,i}\] \[=\mathbf{J}_{i}^{\text{b}}\left(\mathbf{q}\right)\dot{\mathbf{q}} \tag{113}\]
where
\[\mathbf{B}_{i,j}=\left(\begin{array}{c}{}^{i}\mathbf{e}_{j}\\ {}^{i}\mathbf{b}_{i,j}\times{}^{i}\mathbf{e}_{j}+{}^{i}\mathbf{e}_{j}h_{j} \end{array}\right) \tag{114}\]
is the instantaneous screw coordinate vector of joint \(j\) represented in frame \(\mathcal{F}_{i}\) fixed at body \(i\). The _body-fixed Jacobian_ of
Figure 7: Exact (solid line) and approximate solution (dashed line) of \(\mathbf{q}(t)\) for the planar 4-bar linkage when a) \(q_{4}(t)=\sin t\) and b) \(q_{1}(t)=\sin t\) is used as input.
body \(i\) is thus
\[\mathbf{J}_{i}^{\mathrm{b}}\left(\mathbf{q}\right):=\left(\mathbf{B}_{i,1}(\mathbf{q})\left|\cdots\left|\mathbf{B}_{i,i}(\mathbf{q})\left|\mathbf{0}\right|\cdots\left|\mathbf{0}\right.\right.\right). \tag{115}\]
The body-fixed joint screws are determined analytically as
\[\mathbf{B}_{i,j}(\mathbf{q})=\mathbf{A}\mathbf{d}_{\mathbf{C}_{i,j}\mathbf{A} _{j}^{-1}}\mathbf{Y}_{j},\ j\leq i \tag{116}\]
where \(\mathbf{C}_{i,j}:=\mathbf{C}_{i}^{-1}\mathbf{C}_{j}=\mathbf{A}_{i}^{-1}\exp\left(-\mathbf{Y}_{i}q_{i}\right)\cdot\ldots\exp\left(-\mathbf{Y}_{j+1}q_{j+1}\right)\mathbf{A}_{j}\) is the configuration of body \(j\) relative to body \(i\). Consequently, the screw of joint \(j\) represented in frame \(\mathcal{F}_{i}\) on body \(i\) depends on the joint variables \(q_{j+1},\ldots,q_{i}\).
For \(i=j\), relation (116) shows that the screw coordinate vector of joint \(i\) represented in the body-fixed frame \(\mathcal{F}_{i}\) is constant, and is denoted with\({}^{1}\) \(\mathbf{X}_{i}:=\mathbf{A}\mathbf{d}_{\mathbf{A}_{i}^{-1}}\mathbf{Y}_{i}\). The latter are thus related to the joint screw coordinates (3) in spatial representation, determined in the reference configuration, by the frame transformation \(\mathbf{Y}_{i}=\mathbf{A}\mathbf{d}_{\mathbf{A}_{i}}\mathbf{X}_{i}\).
The main difference to the spatial representation is that the body-fixed screw coordinates (116) of joint \(j\) depend on the body \(i\). Whereas in the spatial representation the non-zero joint screws (8) in the Jacobian (7) are identical for all bodies, the body-fixed screws must be determined for the specific body \(i\) to construct the Jacobian (115). Yet the body-fixed twist admits the recursive relation, similarly to (9),
Footnote 1: To be precise, it should be indicated by \({}^{i}\mathbf{X}_{i}\) that the screw coordinates of joint \(i\) are represented in frame \(\mathcal{F}_{i}\), as in [82]. This is omitted for simplicity.
\[\mathbf{V}_{i}^{\mathrm{b}}=\mathbf{A}\mathbf{d}_{\mathbf{C}_{i,i-1}}\mathbf{ V}_{i-1}^{\mathrm{b}}+\mathbf{X}_{i}\dot{q}_{i} \tag{117}\]
and accordingly
\[\mathbf{B}_{i,j}=\left\{\begin{aligned} &\mathbf{Ad}_{\mathbf{C}_{i,i-1}}\mathbf{B}_{i-1,j},\ j<i\\ &\mathbf{X}_{i},\ j=i.\end{aligned}\right. \tag{118}\]
**Remark 19**.: _The body-fixed representation is traditionally used for dynamics modeling, and MBS dynamics in particular [5, 119], basically because the inertia tensor of body \(i\) is constant when represented in \(\mathcal{F}_{i}\)._
#### 11.1.2 Partial derivatives of joint screw coordinates in body-fixed representation
The instantaneous screw coordinates (116) of joint \(j\) expressed in the frame affixed at body \(i\) depend on the joint variables \(q_{j+1},\ldots,q_{i}\). The nonvanishing partial derivatives are [78]
\[\frac{\partial\mathbf{B}_{i,j}}{\partial q_{k}} =[\mathbf{B}_{i,j},\mathbf{B}_{i,k}]\] \[=\mathbf{ad}_{\mathbf{B}_{i,j}}\mathbf{B}_{i,k},\ j<k\leq i \tag{119}\]
and the nonvanishing repeated vth-order partial derivatives are
\[\frac{\partial^{\mathrm{v}}\mathbf{B}_{i,j}}{\partial q_{\alpha_{ 1}}\partial q_{\alpha_{2}}\cdots\partial q_{\alpha_{\mathrm{v}}}} =[\ldots[[[\mathbf{B}_{i,j},\mathbf{B}_{i,\beta_{1}}],\mathbf{B} _{i,\beta_{2}}],\mathbf{B}_{i,\beta_{3}}]\ldots,\mathbf{B}_{i,\beta_{\mathrm{v }}}],\] \[=(-1)^{\mathrm{v}}\mathbf{ad}_{\mathbf{B}_{i,\beta_{\mathrm{v}}}} \mathbf{ad}_{\mathbf{B}_{i,\beta_{\mathrm{v}-1}}}\mathbf{ad}_{\mathbf{B}_{i, \beta_{\mathrm{v}-2}}}\cdots\mathbf{ad}_{\mathbf{B}_{i,\beta_{1}}}\mathbf{B}_{i, j},\ \ \ \mathrm{if}\ j<\alpha_{1},\ldots,\alpha_{\mathrm{v}}\leq i \tag{120}\]
where \(j<\beta_{1}\leq\beta_{2}\leq\ldots\leq\beta_{\mathrm{v-1}}\leq\beta_{\mathrm{v}}\leq i\) is the ordered set of indexes \(\{\alpha_{1},\ldots,\alpha_{\mathrm{v}}\}\). The sign \((-1)^{\mathrm{v}}\) is caused by swapping the arguments in the Lie brackets, which is done in order to rearrange the screws in decreasing order so as to avoid long expressions with nested subscripts of the \(\mathbf{ad}\) matrices.
Like (12) the partial derivatives can be expressed in terms of the multi-index as
\[\partial^{\mathbf{a}}\mathbf{B}_{i,j}=(-1)^{\mathrm{v}}\,\mathbf{ad}_{\mathbf{B}_{i,i}}^{a_{i}}\mathbf{ad}_{\mathbf{B}_{i,i-1}}^{a_{i-1}}\cdots\mathbf{ad}_{\mathbf{B}_{i,j+1}}^{a_{j+1}}\mathbf{B}_{i,j},\ \mathrm{for}\ a_{k}\neq 0,\ j<k\leq i,\ \mathrm{v}=|\mathbf{a}|. \tag{121}\]
#### 11.1.3 Time derivatives of low degree using closed form relations for partial derivatives
The time derivative of the body-fixed twist of body \(i\) (113) is
\[\dot{\mathbf{V}}_{i}^{\rm b}=\sum_{j\leq i}\mathbf{B}_{i,j}\ddot{q}_{j}+\sum_{j< k\leq i}[\mathbf{B}_{i,j},\mathbf{B}_{i,k}]\dot{q}_{j}\dot{q}_{k}. \tag{122}\]
The second time derivative follows by differentiating (122), using the partial derivatives (119),

\[\ddot{\mathbf{V}}_{i}^{\rm b}=\sum_{j\leq i}\mathbf{B}_{i,j}\dddot{q}_{j}+2\sum_{j<k\leq i}[\mathbf{B}_{i,j},\mathbf{B}_{i,k}]\,\ddot{q}_{j}\dot{q}_{k}+\sum_{j<k\leq i}[\mathbf{B}_{i,j},\mathbf{B}_{i,k}]\,\dot{q}_{j}\ddot{q}_{k}+\sum_{j<k\leq i}\big{(}[\dot{\mathbf{B}}_{i,j},\mathbf{B}_{i,k}]+[\mathbf{B}_{i,j},\dot{\mathbf{B}}_{i,k}]\big{)}\dot{q}_{j}\dot{q}_{k} \tag{123}\]

with \(\dot{\mathbf{B}}_{i,j}=\sum_{j<l\leq i}[\mathbf{B}_{i,j},\mathbf{B}_{i,l}]\dot{q}_{l}\), so that all terms are expressed by nested Lie brackets of the instantaneous joint screws according to (120).

### 11.2 Hybrid representation of twists
The hybrid twist is determined analogously to the body-fixed twist (112), except that all vectors are resolved in the inertial frame. Denote with \(\mathbf{b}_{i,j}\) the position vector of a point on the axis of joint \(j\) measured from the origin of \(\mathcal{F}_{i}\), and with \(\mathbf{e}_{j}\) a unit vector along the axis, where both are expressed in \(\mathcal{F}_{0}\). The hybrid twist of body \(i\) is given by
\[\mathbf{V}_{i}^{\text{h}} =\dot{q}_{1}\begin{pmatrix}\mathbf{e}_{1}\\ \mathbf{b}_{i,1}\times\mathbf{e}_{1}+h_{1}\mathbf{e}_{1}\end{pmatrix}+\dot{q}_{2}\begin{pmatrix}\mathbf{e}_{2}\\ \mathbf{b}_{i,2}\times\mathbf{e}_{2}+h_{2}\mathbf{e}_{2}\end{pmatrix}+\ldots+\dot{q}_{i}\begin{pmatrix}\mathbf{e}_{i}\\ \mathbf{b}_{i,i}\times\mathbf{e}_{i}+h_{i}\mathbf{e}_{i}\end{pmatrix} \tag{130}\] \[=\dot{q}_{1}\mathbf{H}_{i,1}+\dot{q}_{2}\mathbf{H}_{i,2}+\ldots+\dot{q}_{i}\mathbf{H}_{i,i}\] \[=\mathbf{J}_{i}^{\text{h}}\left(\mathbf{q}\right)\dot{\mathbf{q}}\]
where \(\mathbf{H}_{i,j}\) is the hybrid form of the screw coordinates of joint \(j\), and the _hybrid Jacobian_ of body \(i\) is
\[\mathbf{J}_{i}^{\text{h}}\left(\mathbf{q}\right):=\big{(}\mathbf{H}_{i,1}(\mathbf{q})\ |\cdots|\ \mathbf{H}_{i,i}(\mathbf{q})\ |\ \mathbf{0}\ |\cdots|\ \mathbf{0}\big{)}. \tag{131}\]
The reference point where the velocity is measured is the same for the hybrid and the body-fixed twist. The only difference is the reference frame in which the vectors are resolved. Therefore the hybrid twist is given as \(\mathbf{V}_{i}^{\text{h}}=\mathbf{Ad}_{\mathbf{R}_{i}}\mathbf{V}_{i}^{\text{b}}\) (using (160)) and the instantaneous joint screw coordinates are
\[\mathbf{H}_{i,j}=\mathbf{Ad}_{\mathbf{R}_{i}}\mathbf{B}_{i,j} \tag{132}\]
where \(\mathbf{R}_{i}\) is the rotation matrix of body \(i\) in (1). From (118) follows the recursive relation
\[\mathbf{H}_{i,j}=\mathbf{Ad}_{\mathbf{r}_{i,i-1}}\mathbf{H}_{i-1,j},j<i \tag{133}\]
with the relative displacement \(\mathbf{r}_{i,i-1}=\mathbf{r}_{i}-\mathbf{r}_{i-1}\) of body \(i\) (i.e. the origin of \(\mathcal{F}_{i}\)) and body \(i-1\) (i.e. the origin of \(\mathcal{F}_{i-1}\)).
#### 11.2.2 Explicit recursive relations for low-order
The first and second time derivatives of (132) are [83]

\[\dot{\mathbf{H}}_{i,j} =\big{(}\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}+\mathbf{Ad}_{\mathbf{r}_{i,j}}\mathbf{ad}_{\boldsymbol{\omega}_{j}}\big{)}\mathbf{H}_{j,j} \tag{134}\]
\[\ddot{\mathbf{H}}_{i,j} =\big{(}\mathbf{ad}_{\ddot{\mathbf{r}}_{i,j}}+2\,\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}\mathbf{ad}_{\boldsymbol{\omega}_{j}}+\mathbf{Ad}_{\mathbf{r}_{i,j}}\big{(}\mathbf{ad}_{\dot{\boldsymbol{\omega}}_{j}}+\mathbf{ad}_{\boldsymbol{\omega}_{j}}\mathbf{ad}_{\boldsymbol{\omega}_{j}}\big{)}\big{)}\mathbf{H}_{j,j} \tag{135}\]

where \(\mathbf{r}_{i,j}=\mathbf{r}_{i}-\mathbf{r}_{j}\), and \(\boldsymbol{\omega}_{j}\) denotes the angular velocity of body \(j\) resolved in \(\mathcal{F}_{0}\), regarded as the screw \((\boldsymbol{\omega}_{j},\mathbf{0})\).
This yields the explicit expressions for the acceleration and jerk in hybrid representation
\[\dot{\mathbf{V}}_{i}^{\text{h}} =\sum_{j\leq i}\big{(}\mathbf{J}_{i,j}^{\text{h}}\ddot{q}_{j}+\big{(}\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}+\mathbf{Ad}_{\mathbf{r}_{i,j}}\mathbf{ad}_{\boldsymbol{\omega}_{j}}\big{)}\mathbf{H}_{j,j}\dot{q}_{j}\big{)} \tag{136}\]
\[\ddot{\mathbf{V}}_{i}^{\text{h}} =\sum_{j\leq i}\Big{(}\mathbf{J}_{i,j}^{\text{h}}\dddot{q}_{j}+\Big{(}2\,\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}\ddot{q}_{j}+\big{(}\mathbf{ad}_{\ddot{\mathbf{r}}_{i,j}}+2\,\mathbf{ad}_{\dot{\mathbf{r}}_{i,j}}\mathbf{ad}_{\boldsymbol{\omega}_{j}}\big{)}\dot{q}_{j}+\mathbf{Ad}_{\mathbf{r}_{i,j}}\big{(}2\,\mathbf{ad}_{\boldsymbol{\omega}_{j}}\ddot{q}_{j}+\big{(}\mathbf{ad}_{\dot{\boldsymbol{\omega}}_{j}}+\mathbf{ad}_{\boldsymbol{\omega}_{j}}\mathbf{ad}_{\boldsymbol{\omega}_{j}}\big{)}\dot{q}_{j}\big{)}\Big{)}\mathbf{H}_{j,j}\Big{)} \tag{137}\]
These are the core relations in the so-called 'spatial vector' formulation (i.e. using the hybrid representation of twists) [104, 106, 68, 56]. In this context the Lie bracket, respectively the screw product (163), has been termed the 'spatial cross product' [36, 37].
### Relation of the different representations
While the spatial representation is dominantly used in kinematics and mechanism theory, in various robotic applications the body-fixed (111) and hybrid representations (129) are used to describe the EE twist. These representations are also traditionally used in MBS dynamics. Yet the spatial representation is deemed to be computationally more efficient, which is documented by the recently proposed MBS dynamics algorithms [37]. The recursive relations for higher time derivatives (17) and (127) for the twist of body \(i\) have the same complexity. However, when computing this for another body, the spatial version can reuse the derivatives (17) since they are body independent. It therefore seems desirable to employ the results
for the spatial twist even when using the body-fixed or hybrid version. To this end, in the following, the twists and their derivatives are related to the spatial twist derivatives.
The three representations of twists are related as follows
\[\begin{array}{l}\mathbf{V}_{i}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}_{i}}\mathbf{V}_{i}^{\mathrm{b}},\ \ \ \mathbf{V}_{i}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{r}_{i}}\mathbf{V}_{i}^{\mathrm{h}},\ \ \ \mathbf{V}_{i}^{\mathrm{h}}=\mathbf{Ad}_{\mathbf{R}_{i}}\mathbf{V}_{i}^{\mathrm{b}}\\ \mathbf{V}_{i}^{\mathrm{b}}=\mathbf{Ad}_{\mathbf{C}_{i}}^{-1}\mathbf{V}_{i}^{\mathrm{s}},\ \ \mathbf{V}_{i}^{\mathrm{h}}=\mathbf{Ad}_{-\mathbf{r}_{i}}\mathbf{V}_{i}^{\mathrm{s}},\ \ \mathbf{V}_{i}^{\mathrm{b}}=\mathbf{Ad}_{\mathbf{R}_{i}}^{-1}\mathbf{V}_{i}^{\mathrm{h}}.\end{array} \tag{138}\]
using the notations (159) and (160), respectively. The instantaneous screw coordinates and Jacobians transform accordingly. With \(\frac{d}{dt}\mathbf{Ad}_{\mathbf{C}}^{-1}=-\mathbf{Ad}_{\mathbf{C}}^{-1}\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}\), the relation of the time derivatives of spatial and body-fixed twist is obtained from (138) as \(\dot{\mathbf{V}}^{\mathrm{b}}=\mathbf{Ad}_{\mathbf{C}}^{-1}\dot{\mathbf{V}}^{\mathrm{s}}\). For higher time derivatives this yields
\[\mathrm{D}^{(k)}\mathbf{V}^{\mathrm{b}}=\mathbf{A}\mathbf{d}_{\mathrm{C}}^{-1 }\big{(}\sum_{i=0}^{k-1}\binom{k-1}{i}\,(-1)^{i}\,\mathbf{a}\mathbf{d}_{\mathrm{ V}_{i}}^{\mathrm{i}}\mathrm{D}^{(k-i)}\mathbf{V}^{\mathrm{s}}\big{)},k\geq 1. \tag{139}\]
With (159) and (166) it is \(\frac{d}{dt}\mathbf{Ad}_{-\mathbf{r}}=-\mathbf{ad}_{\dot{\mathbf{r}}}\), and the time derivatives of spatial and hybrid twists are related by
\[\mathrm{D}^{(k)}\mathbf{V}^{\mathrm{h}}=\mathbf{A}\mathbf{d}_{-\mathrm{r}} \mathrm{D}^{(k)}\mathbf{V}^{\mathrm{s}}-\sum_{i=1}^{k}\binom{k}{i}\mathbf{a} \mathbf{d}_{\mathrm{r}^{(i)}}\mathrm{D}^{(k-i)}\mathbf{V}^{\mathrm{s}},k\geq 1. \tag{140}\]
The explicit expressions of derivatives of \(\mathbf{V}^{\mathrm{b}}\) in terms of those of \(\mathbf{V}^{\mathrm{s}}\), up to fourth-order, are then
\[\begin{array}{l}\mathbf{V}^{\mathrm{b}}=\mathbf{Ad}_{\mathbf{C}}^{-1}\mathbf{V}^{\mathrm{s}},\ \ \dot{\mathbf{V}}^{\mathrm{b}}=\mathbf{Ad}_{\mathbf{C}}^{-1}\dot{\mathbf{V}}^{\mathrm{s}},\ \ \ddot{\mathbf{V}}^{\mathrm{b}}=\mathbf{Ad}_{\mathbf{C}}^{-1}\big{(}\ddot{\mathbf{V}}^{\mathrm{s}}-\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}\dot{\mathbf{V}}^{\mathrm{s}}\big{)}\\ \dddot{\mathbf{V}}^{\mathrm{b}}=\mathbf{Ad}_{\mathbf{C}}^{-1}\big{(}\dddot{\mathbf{V}}^{\mathrm{s}}-2\,\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}\ddot{\mathbf{V}}^{\mathrm{s}}+\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}^{2}\dot{\mathbf{V}}^{\mathrm{s}}\big{)}\\ \ddddot{\mathbf{V}}^{\mathrm{b}}=\mathbf{Ad}_{\mathbf{C}}^{-1}\big{(}\ddddot{\mathbf{V}}^{\mathrm{s}}-3\,\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}\dddot{\mathbf{V}}^{\mathrm{s}}+\big{(}3\,\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}^{2}-2\,\mathbf{ad}_{\dot{\mathbf{V}}^{\mathrm{s}}}\big{)}\ddot{\mathbf{V}}^{\mathrm{s}}+\big{(}\mathbf{ad}_{\dot{\mathbf{V}}^{\mathrm{s}}}\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}-\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}^{3}\big{)}\dot{\mathbf{V}}^{\mathrm{s}}\big{)}\end{array} \tag{141}\]
and of the derivatives of \(\mathbf{V}^{\mathrm{h}}\) are
\[\begin{array}{l}\mathbf{V}^{\mathrm{h}}=\mathbf{Ad}_{-\mathbf{r}}\mathbf{V}^{\mathrm{s}},\ \ \dot{\mathbf{V}}^{\mathrm{h}}=\mathbf{Ad}_{-\mathbf{r}}\dot{\mathbf{V}}^{\mathrm{s}}-\mathbf{ad}_{\dot{\mathbf{r}}}\mathbf{V}^{\mathrm{s}},\ \ \ddot{\mathbf{V}}^{\mathrm{h}}=\mathbf{Ad}_{-\mathbf{r}}\ddot{\mathbf{V}}^{\mathrm{s}}-2\,\mathbf{ad}_{\dot{\mathbf{r}}}\dot{\mathbf{V}}^{\mathrm{s}}-\mathbf{ad}_{\ddot{\mathbf{r}}}\mathbf{V}^{\mathrm{s}}\\ \dddot{\mathbf{V}}^{\mathrm{h}}=\mathbf{Ad}_{-\mathbf{r}}\dddot{\mathbf{V}}^{\mathrm{s}}-3\,\mathbf{ad}_{\dot{\mathbf{r}}}\ddot{\mathbf{V}}^{\mathrm{s}}-3\,\mathbf{ad}_{\ddot{\mathbf{r}}}\dot{\mathbf{V}}^{\mathrm{s}}-\mathbf{ad}_{\dddot{\mathbf{r}}}\mathbf{V}^{\mathrm{s}}\\ \ddddot{\mathbf{V}}^{\mathrm{h}}=\mathbf{Ad}_{-\mathbf{r}}\ddddot{\mathbf{V}}^{\mathrm{s}}-4\,\mathbf{ad}_{\dot{\mathbf{r}}}\dddot{\mathbf{V}}^{\mathrm{s}}-6\,\mathbf{ad}_{\ddot{\mathbf{r}}}\ddot{\mathbf{V}}^{\mathrm{s}}-4\,\mathbf{ad}_{\dddot{\mathbf{r}}}\dot{\mathbf{V}}^{\mathrm{s}}-\mathbf{ad}_{\ddddot{\mathbf{r}}}\mathbf{V}^{\mathrm{s}}.\end{array} \tag{142}\]
The relations (139) and (140), or their explicit versions (141) and (142), make it possible to determine the time derivatives in body-fixed and hybrid representation from given time derivatives of spatial twists, e.g. for the higher-order forward kinematics in sec. 5.4 using (17) or the relations (26)-(29). Then the spatial twists merely serve as algorithmic variables.
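As a numerical illustration of the transformation rules (138), the following NumPy sketch converts an arbitrarily chosen body-fixed twist to the spatial and hybrid representations and checks that both routes to \(\mathbf{V}^{\mathrm{h}}\) agree; the configuration data and helper names are assumptions of this example only.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix (v-tilde) so that skew(v) @ w = v x w."""
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def Ad(R, r):
    """6x6 adjoint matrix (156) of C = (R, r), acting on (angular, linear) twist coordinates."""
    A = np.zeros((6, 6))
    A[:3, :3] = R
    A[3:, :3] = skew(r) @ R
    A[3:, 3:] = R
    return A

# illustrative absolute configuration C_i = (R_i, r_i) of body i (not taken from the paper)
theta = 0.3
R_i = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
r_i = np.array([0.5, -0.2, 1.0])

V_b = np.array([0.1, 0.0, 0.4, 0.2, -0.3, 0.05])   # body-fixed twist (arbitrary example values)

# relations (138): spatial and hybrid twist obtained from the body-fixed one
V_s = Ad(R_i, r_i) @ V_b                    # V^s = Ad_{C_i} V^b
V_h = Ad(np.eye(3), -r_i) @ V_s             # V^h = Ad_{-r_i} V^s
V_h_direct = Ad(R_i, np.zeros(3)) @ V_b     # V^h = Ad_{R_i} V^b

print(np.allclose(V_h, V_h_direct))         # True: both routes agree
```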
The relation \(\mathbf{V}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}}\mathbf{V}^{\mathrm{b}}\), along with \(\frac{d}{dt}\mathbf{Ad}_{\mathbf{C}}=\mathbf{ad}_{\mathbf{V}^{\mathrm{s}}}\mathbf{Ad}_{\mathbf{C}}\), yields \(\dot{\mathbf{V}}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}}\dot{\mathbf{V}}^{\mathrm{b}}\), and thus
\[\mathrm{D}^{(k)}\mathbf{V}^{\mathrm{s}}=\mathbf{A}\mathbf{d}_{\mathrm{C}}\big{(} \sum_{i=0}^{k-1}\binom{k-1}{i}\mathbf{a}\mathbf{d}_{\mathrm{V}^{\mathrm{b}}} \mathrm{D}^{(k-i)}\mathbf{V}^{\mathrm{b}}\big{)}. \tag{143}\]
The relation \(\mathbf{V}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{r}}\mathbf{V}^{\mathrm{h}}\), along with \(\frac{d}{dt}\mathbf{Ad}_{\mathbf{r}}=\mathbf{ad}_{\dot{\mathbf{r}}}\), yields
\[\mathrm{D}^{(k)}\mathbf{V}^{\mathrm{s}}=\mathbf{A}\mathbf{d}_{\mathrm{r}} \big{(}\mathrm{D}^{(k)}\mathbf{V}^{\mathrm{h}}+\sum_{i=1}^{k}\binom{k}{i} \mathbf{a}\mathbf{d}_{\mathrm{r}^{(i)}}\mathrm{D}^{(k-i)}\mathbf{V}^{ \mathrm{h}}\big{)}. \tag{144}\]
The relations (143) for the derivatives of \(\mathbf{V}^{\mathrm{s}}\) in terms of those of \(\mathbf{V}^{\mathrm{b}}\) are, explicitly up to degree \(k=4\),
\[\begin{array}{l}\mathbf{V}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}}\mathbf{V}^{\mathrm{b}},\ \ \dot{\mathbf{V}}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}}\dot{\mathbf{V}}^{\mathrm{b}},\ \ \ddot{\mathbf{V}}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}}\big{(}\ddot{\mathbf{V}}^{\mathrm{b}}+\mathbf{ad}_{\mathbf{V}^{\mathrm{b}}}\dot{\mathbf{V}}^{\mathrm{b}}\big{)}\\ \dddot{\mathbf{V}}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}}\big{(}\dddot{\mathbf{V}}^{\mathrm{b}}+2\,\mathbf{ad}_{\mathbf{V}^{\mathrm{b}}}\ddot{\mathbf{V}}^{\mathrm{b}}+\mathbf{ad}_{\mathbf{V}^{\mathrm{b}}}^{2}\dot{\mathbf{V}}^{\mathrm{b}}\big{)}\\ \ddddot{\mathbf{V}}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{C}}\big{(}\ddddot{\mathbf{V}}^{\mathrm{b}}+3\,\mathbf{ad}_{\mathbf{V}^{\mathrm{b}}}\dddot{\mathbf{V}}^{\mathrm{b}}+\big{(}3\,\mathbf{ad}_{\mathbf{V}^{\mathrm{b}}}^{2}+2\,\mathbf{ad}_{\dot{\mathbf{V}}^{\mathrm{b}}}\big{)}\ddot{\mathbf{V}}^{\mathrm{b}}+\big{(}\mathbf{ad}_{\dot{\mathbf{V}}^{\mathrm{b}}}\mathbf{ad}_{\mathbf{V}^{\mathrm{b}}}+\mathbf{ad}_{\mathbf{V}^{\mathrm{b}}}^{3}\big{)}\dot{\mathbf{V}}^{\mathrm{b}}\big{)}\end{array} \tag{145}\]
and in terms of the derivatives of \(\mathbf{V}^{\text{h}}\) these are
\[\begin{array}{l}\mathbf{V}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{r}}\mathbf{V}^{\mathrm{h}},\ \ \dot{\mathbf{V}}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{r}}\dot{\mathbf{V}}^{\mathrm{h}}+\mathbf{ad}_{\dot{\mathbf{r}}}\mathbf{V}^{\mathrm{h}},\ \ \ddot{\mathbf{V}}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{r}}\ddot{\mathbf{V}}^{\mathrm{h}}+2\,\mathbf{ad}_{\dot{\mathbf{r}}}\dot{\mathbf{V}}^{\mathrm{h}}+\mathbf{ad}_{\ddot{\mathbf{r}}}\mathbf{V}^{\mathrm{h}}\\ \dddot{\mathbf{V}}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{r}}\dddot{\mathbf{V}}^{\mathrm{h}}+3\,\mathbf{ad}_{\dot{\mathbf{r}}}\ddot{\mathbf{V}}^{\mathrm{h}}+3\,\mathbf{ad}_{\ddot{\mathbf{r}}}\dot{\mathbf{V}}^{\mathrm{h}}+\mathbf{ad}_{\dddot{\mathbf{r}}}\mathbf{V}^{\mathrm{h}}\\ \ddddot{\mathbf{V}}^{\mathrm{s}}=\mathbf{Ad}_{\mathbf{r}}\ddddot{\mathbf{V}}^{\mathrm{h}}+4\,\mathbf{ad}_{\dot{\mathbf{r}}}\dddot{\mathbf{V}}^{\mathrm{h}}+6\,\mathbf{ad}_{\ddot{\mathbf{r}}}\ddot{\mathbf{V}}^{\mathrm{h}}+4\,\mathbf{ad}_{\dddot{\mathbf{r}}}\dot{\mathbf{V}}^{\mathrm{h}}+\mathbf{ad}_{\ddddot{\mathbf{r}}}\mathbf{V}^{\mathrm{h}}\end{array} \tag{146}\]

where \(\dot{\mathbf{r}},\ddot{\mathbf{r}},\dddot{\mathbf{r}},\ddddot{\mathbf{r}}\) are known as parts of \(\mathbf{V}^{\mathrm{h}},\dot{\mathbf{V}}^{\mathrm{h}},\ddot{\mathbf{V}}^{\mathrm{h}},\dddot{\mathbf{V}}^{\mathrm{h}}\).
These expressions make it possible to use the inverse kinematics algorithm in sec. 8.1.2. Then (145), respectively (146), are used to determine the EE twist \(\mathbf{V}^{\text{s}}_{n}\) and its derivatives. The necessary derivatives of \(\mathbf{V}^{\text{s}}_{i},i\leq n\) of degree less than \(k\) are determined in step 2 of the algorithm by evaluating (17). Likewise the explicit recursions (82-85) together with (27)-(29) can be used up to degree 4. Notice that the spatial twists again only serve as algorithmic variables.
### Conclusion and Outlook
Formulating the kinematics of serial and closed loop linkages using the POE provides a flexible modeling approach, in terms of readily available geometric data, and gives rise to very compact and efficient expressions for all derivatives necessary for various kinematic and dynamic tasks. The basic (including some novel) relations were summarized in this paper, and it was shown how they can be applied to some of the basic tasks in robotics and mechanism analysis. The formulations were derived for the spatial representation of twists. The latter leads to efficient recursive formulations, which is why it is used in mechanism theory and recently also in MBS dynamics. Yet, in several robotics applications the motion is prescribed using the body-fixed or hybrid representation of twists. Their relations have been briefly discussed in section 11. The derivation of the reported formulae using these representations is an open research topic. Future research will in particular investigate and compare the numerical efficiency of the relations using the different representations.
In order to make the formulations more easily accessible to the reader, several of the presented relations have been implemented in a Mathematica\({}^{\circledR}\) package that is available at www.dropbox.com/sh/487ghqzncegped3/AABuEafULYvCSD8Cke6IAvV0a?dl=0.
## Appendix A Geometric background
### Exponential mapping
In the following, the relevant facts about the geometry of rigid body motions and screws are presented as far as necessary to follow the paper. A thorough account of the Lie group structure and its relevance for kinematics can be found in the textbooks [71, 89, 107].
Consider two frames \(\bar{\mathcal{F}}_{1}\) and \(\bar{\mathcal{F}}_{2}\) that are moving relative to one another, while initially both frames coincide. The transformation of homogeneous point coordinates from \(\bar{\mathcal{F}}_{2}\) to \(\bar{\mathcal{F}}_{1}\) is described by a matrix
\[\mathbf{\bar{C}}_{12}=\left(\begin{array}{cc}\mathbf{\bar{R}}_{12}&{}^{1} \mathbf{\bar{r}}_{12}\\ \mathbf{0}&1\end{array}\right)\in SE\left(3\right), \tag{147}\]
where \({}^{1}\bar{\mathbf{r}}_{12}\) is the position vector from the origin of \(\bar{\mathcal{F}}_{1}\) to that of \(\bar{\mathcal{F}}_{2}\) resolved in \(\bar{\mathcal{F}}_{1}\) (the superscript indicates the frame in which a vector is resolved), and \(\bar{\mathbf{R}}_{12}\in SO\left(3\right)\) is the rotation matrix transforming coordinate vectors resolved in \(\bar{\mathcal{F}}_{2}\) to those resolved in \(\bar{\mathcal{F}}_{1}\). For instance, consider a point \(P\) fixed to \(\bar{\mathcal{F}}_{2}\). Denoting with \({}^{2}\mathbf{x}_{\text{2P}}\) its position vector expressed in \(\bar{\mathcal{F}}_{2}\), then \({}^{1}\mathbf{x}_{\text{1P}}=\bar{\mathbf{R}}_{12}{}^{2}\mathbf{x}_{\text{2P}}+{}^{1}\bar{\mathbf{r}}_{12}\) is the position vector of \(P\) expressed in \(\bar{\mathcal{F}}_{1}\). Since this is true for any point fixed in \(\bar{\mathcal{F}}_{2}\), \(\bar{\mathbf{C}}_{12}\) itself describes the relative configuration of \(\bar{\mathcal{F}}_{2}\) w.r.t. \(\bar{\mathcal{F}}_{1}\).
The motion of a frame, i.e. of a rigid body, is a screw motion that can be expressed in terms of an instantaneous screw axis. Let \({}^{1}\mathbf{e}\) be a unit vector along the screw axis, and \({}^{1}\mathbf{p}\in\mathbb{R}^{3}\) be a vector to any point on that axis. Then the screw coordinates associated with the motion of \(\bar{\mathcal{F}}_{2}\) relative to \(\bar{\mathcal{F}}_{1}\), represented in \(\bar{\mathcal{F}}_{1}\), are \({}^{1}\mathbf{X}=\left({}^{1}\mathbf{e},{}^{1}\mathbf{p}\times{}^{1}\mathbf{e}+h\,{}^{1}\mathbf{e}\right)^{T}\), where \(h\) is the pitch of the screw. It should be noticed that screws are traditionally denoted with the symbol \(\mathbf{\hat{S}}\)[53].
There is a unique correspondence of a screw coordinate vector and a \(4\times 4\) matrix as follows (omitting superscripts)
\[\mathbf{X}=\left(\begin{array}{c}\mathbf{e}\\ \mathbf{p}\times\mathbf{e}+h\mathbf{e}\end{array}\right)\ \in\mathbb{R}^{6}\ \leftrightarrow\ \mathbf{\widehat{X}}=\left(\begin{array}{cc}\widetilde{\mathbf{e}}&\mathbf{p}\times\mathbf{e}+h\mathbf{e}\\ \mathbf{0}&0\end{array}\right)\in se\left(3\right), \tag{148}\]
and generally, for an arbitrarily given screw coordinate vector,
\[\mathbf{X}=\left(\begin{array}{c}\boldsymbol{\xi}\\ \boldsymbol{\eta}\end{array}\right)\ \in\mathbb{R}^{6}\ \leftrightarrow\ \widehat{\mathbf{X}}=\left(\begin{array}{cc}\widetilde{\boldsymbol{\xi}}&\boldsymbol{\eta}\\ \mathbf{0}&0\end{array}\right)\in se\left(3\right). \tag{149}\]
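For illustration, the correspondences (148) and (149) can be evaluated numerically; the following NumPy sketch (with illustrative helper names) builds the screw coordinate vector of a joint axis and its \(4\times 4\) matrix.

```python
import numpy as np

def hat3(v):
    """3x3 skew-symmetric matrix v-tilde with hat3(v) @ w = v x w."""
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def hat6(X):
    """4x4 se(3) matrix X-hat of a screw coordinate vector X = (xi, eta), as in (149)."""
    xi, eta = X[:3], X[3:]
    M = np.zeros((4, 4))
    M[:3, :3] = hat3(xi)
    M[:3, 3] = eta
    return M

# screw coordinates of a joint axis as in (148): direction e, point p on the axis, pitch h
e = np.array([0.0, 0.0, 1.0])
p = np.array([1.0, 0.5, 0.0])
h = 0.1
X = np.concatenate([e, np.cross(p, e) + h * e])
print(hat6(X))
```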
The screw motion of \(\bar{\mathcal{F}}_{2}\) relative to \(\bar{\mathcal{F}}_{1}\) is given as \(\bar{\mathbf{C}}_{12}=\exp({}^{1}\widehat{\mathbf{X}}\varphi)\). The exponential mapping attains the closed form
\[\exp(\varphi\mathbf{\widehat{X}})=\left(\begin{array}{cc}\exp(\varphi \mathbf{\widehat{e}})&(\mathbf{I}-\exp(\varphi\mathbf{\widehat{e}}))\mathbf{ p}+\varphi h\mathbf{e}\\ \mathbf{0}&1\end{array}\right),\ \text{for}\ h\neq\infty \tag{150}\]
with the rotation angle \(\varphi\), and for pure translations, i.e. infinite pitch,
\[\exp(\varphi\mathbf{\widehat{X}})=\left(\begin{array}{cc}\mathbf{I}& \varphi\mathbf{e}\\ \mathbf{0}&1\end{array}\right),\ \text{for}\ h=\infty \tag{151}\]
where now \(\varphi\) is the translation variable. The rotation matrix is given by the \(\exp\) mapping on \(SO\left(3\right)\), which possesses the following closed form expressions (known as Euler-Rodrigues formula) [81]
\[\exp\mathbf{\widehat{x}} =\mathbf{I}+\frac{\sin\left\|\mathbf{x}\right\|}{\left\|\mathbf{x}\right\|}\mathbf{\widehat{x}}+\frac{1-\cos\left\|\mathbf{x}\right\|}{\left\|\mathbf{x}\right\|^{2}}\mathbf{\widehat{x}}^{2} \tag{152}\] \[=\mathbf{I}+\text{sinc}\left\|\mathbf{x}\right\|\mathbf{\widehat{x}}+\frac{1}{2}\text{sinc}^{2}\frac{\left\|\mathbf{x}\right\|}{2}\mathbf{\widehat{x}}^{2}\] (153) \[=\mathbf{I}+\sin\varphi\,\mathbf{\widehat{n}}+\left(1-\cos\varphi\right)\mathbf{\widehat{n}}^{2},\qquad\mathbf{x}=\varphi\mathbf{n},\ \left\|\mathbf{n}\right\|=1. \tag{154}\]
The formulae (150) and (151) are advantageous since they describe the frame transformation matrix explicitly in terms of the screw characteristics \(\mathbf{p}\), \(\mathbf{e}\), and \(h\). Another closed form is available [81, 89] in terms of a general screw coordinate vector of the form (149), which is not relevant for this paper. For sake of simplicity, frequently, the screw coordinate vector is considered as argument of the \(\exp\) mapping, and the notion \(\exp(\varphi\mathbf{X})\) is used.
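The following NumPy sketch is one possible evaluation of the closed forms (150)-(152); it assumes a unit axis vector \(\mathbf{e}\) and treats the pure-translation case (151) separately.

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def exp_so3(e, phi):
    """Euler-Rodrigues formula (152)-(154) for a unit axis e and angle phi."""
    E = skew(e)
    return np.eye(3) + np.sin(phi) * E + (1.0 - np.cos(phi)) * E @ E

def exp_se3(e, p, h, phi):
    """Closed form (150)/(151) of exp(phi * X-hat) for a screw with unit axis e through p and pitch h."""
    C = np.eye(4)
    if np.isinf(h):                      # pure translation, (151)
        C[:3, 3] = phi * e
    else:                                # proper screw motion, (150)
        R = exp_so3(e, phi)
        C[:3, :3] = R
        C[:3, 3] = (np.eye(3) - R) @ p + phi * h * e
    return C

# example: rotation about the z-axis through the point p = (1, 0, 0), zero pitch
print(exp_se3(np.array([0., 0., 1.]), np.array([1., 0., 0.]), 0.0, np.pi / 2))
```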
Clearly, for \(\varphi=0\) the \(\exp\) mapping yields the identity matrix, \(\bar{\mathbf{C}}_{12}=\mathbf{I}\), i.e. both frames coincide in the reference configuration. The frame \(\bar{\mathcal{F}}_{2}\) is obviously rather special. The motion of a general frame \(\mathcal{F}_{2}\), which is rigidly connected to \(\bar{\mathcal{F}}_{2}\), is obtained via a subsequent transformation \(\mathbf{A}_{2}\in SE\left(3\right)\) from \(\mathcal{F}_{2}\) to \(\bar{\mathcal{F}}_{2}\), so that \(\mathbf{C}_{12}=\bar{\mathbf{C}}_{12}\mathbf{A}_{2}\) is the transformation from \(\mathcal{F}_{2}\) to \(\bar{\mathcal{F}}_{1}\). In the zero reference configuration (\(\varphi=0\)), it is \(\mathbf{C}_{12}=\mathbf{A}_{2}\). Therefore, \(\mathbf{A}_{2}\) is numerically identical to the transformation from \(\mathcal{F}_{2}\) to \(\bar{\mathcal{F}}_{1}\) in the reference configuration.
It is important to observe that all vectors in (150) and (151) are resolved in \(\bar{\mathcal{F}}_{1}\), and that the matrix \(\exp({}^{1}\widehat{\mathbf{X}}\varphi)\) in (147) transforms point coordinates measured in \(\bar{\mathcal{F}}_{2}\) back to those measured in frame \(\bar{\mathcal{F}}_{1}\).
Throughout the paper, the sub- and superscripts referring to the world frame \(\mathcal{F}_{0}\) are omitted, i.e. \(\mathbf{r}\) is used instead of \({}^{0}\mathbf{r}\), \(\mathbf{X}\) is used instead of \({}^{0}\mathbf{X}\), and \(\mathbf{C}_{k}\) instead of \(\mathbf{C}_{0k}\) for the configuration relative to the world frame.
For a constant screw coordinate vector the derivative of \(\exp\) attains the simple form
\[\frac{d}{dt}\exp(t\mathbf{\widehat{X}})=\exp(t\mathbf{\widehat{X}})\mathbf{\widehat{X}}=\mathbf{\widehat{X}}\exp(t\mathbf{\widehat{X}}). \tag{155}\]
This identity follows from the fact that the screw axis is invariant under a screw motion about that axis: \(\mathbf{\widehat{X}}=\mathrm{Ad}_{\exp(t\mathbf{\widehat{X}})}(\mathbf{\widehat{X}})=\exp(t\mathbf{\widehat{X}})\mathbf{\widehat{X}}\exp(-t\mathbf{\widehat{X}})\) and thus \(\exp(t\mathbf{\widehat{X}})\mathbf{\widehat{X}}=\mathbf{\widehat{X}}\exp(t\mathbf{\widehat{X}})\).
### Frame Transformations of Screw Coordinates
Let \(\mathbf{S}_{1,2}\) be the matrix transforming homogeneous point coordinates expressed in frame \(\mathcal{F}_{2}\) to those expressed in frame \(\mathcal{F}_{1}\). Then the screw coordinates transform according to
\[{}^{1}\mathbf{X}=\left(\begin{array}{c}{}^{1}\mathbf{e}\\ {}^{1}\mathbf{p}_{1}\times{}^{1}\mathbf{e}+{}^{1}\mathbf{e}h\end{array}\right)=\left(\begin{array}{cc}\mathbf{R}_{1,2}&\mathbf{0}\\ {}^{1}\widetilde{\mathbf{r}}_{1,2}\mathbf{R}_{1,2}&\mathbf{R}_{1,2}\end{array}\right)\left(\begin{array}{c}{}^{2}\mathbf{e}\\ {}^{2}\mathbf{p}_{2}\times{}^{2}\mathbf{e}+{}^{2}\mathbf{e}h\end{array}\right)=\mathbf{Ad}_{\mathbf{S}_{1,2}}{}^{2}\mathbf{X}\]
where
\[\mathbf{Ad_{C}}=\begin{pmatrix}\mathbf{R}&\mathbf{0}\\ \widetilde{\mathbf{r}}\mathbf{R}&\mathbf{R}\end{pmatrix} \tag{156}\]
is the _adjoint transformation_ matrix to \(\mathbf{C}\in SE\left(3\right)\). \(\mathbf{C}\) describes a frame transformation, while \(\mathbf{Ad_{C}}\) describes the corresponding transformation of screw coordinates that belong to \(se\left(3\right)\). Using the \(4\times 4\) representation of screw coordinates, the adjoint transformation \(\mathbf{Ad_{C}X}\) is described by
\[\mathrm{Ad_{C}}(\widehat{\mathbf{X}})=\mathbf{C}\widehat{\mathbf{X}}\mathbf{ C}^{-1}. \tag{157}\]
A helpful property is that
\[\mathbf{Ad_{C_{1}C_{2}}}=\mathbf{Ad_{C_{1}}Ad_{C_{2}}}. \tag{158}\]
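The following NumPy snippet checks (157) and (158) numerically for an arbitrarily chosen transformation; it merely illustrates the adjoint matrix (156) and is not part of the reported formulation.

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def hat6(X):
    """4x4 matrix of a screw coordinate vector X = (xi, eta)."""
    M = np.zeros((4, 4))
    M[:3, :3] = skew(X[:3])
    M[:3, 3] = X[3:]
    return M

def Ad(C):
    """6x6 adjoint matrix (156) of a homogeneous transformation C."""
    R, r = C[:3, :3], C[:3, 3]
    A = np.zeros((6, 6))
    A[:3, :3] = R
    A[3:, :3] = skew(r) @ R
    A[3:, 3:] = R
    return A

rng = np.random.default_rng(0)
# illustrative transformation: rotation about z plus a translation
phi = 0.7
C = np.eye(4)
C[:3, :3] = [[np.cos(phi), -np.sin(phi), 0], [np.sin(phi), np.cos(phi), 0], [0, 0, 1]]
C[:3, 3] = [0.3, -0.1, 0.8]
X = rng.standard_normal(6)

# (157): Ad_C X corresponds to C X-hat C^{-1}
print(np.allclose(hat6(Ad(C) @ X), C @ hat6(X) @ np.linalg.inv(C)))   # True

# (158): Ad_{C1 C2} = Ad_{C1} Ad_{C2}
print(np.allclose(Ad(C @ C), Ad(C) @ Ad(C)))                          # True
```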
With slight abuse of notation, the adjoint transformation for pure translations (i.e. \(\mathbf{R}=\mathbf{I}\)) is denoted with
\[\mathbf{Ad_{r}}=\begin{pmatrix}\mathbf{I}&\mathbf{0}\\ \widetilde{\mathbf{r}}&\mathbf{I}\end{pmatrix} \tag{159}\]
and for pure rotations with
\[\mathbf{Ad_{R}}=\begin{pmatrix}\mathbf{R}&\mathbf{0}\\ \mathbf{0}&\mathbf{R}\end{pmatrix}. \tag{160}\]
The tangential aspect of the frame transformation of a constant screw coordinate vector \(\mathbf{Y}\) under the screw motion \(\exp(t\widehat{\mathbf{X}})\) is determined by
\[\frac{d}{dt}\mathrm{Ad_{exp(\widehat{\mathbf{X}})}}(\widehat{ \mathbf{Y}}) =\frac{d}{dt}\exp(t\widehat{\mathbf{X}})\widehat{\mathbf{Y}}\exp (-t\widehat{\mathbf{X}})+\exp(t\widehat{\mathbf{X}})\widehat{\mathbf{Y}}\frac {d}{dt}\exp(-t\widehat{\mathbf{X}})\] \[=\widehat{\mathbf{X}}\exp(t\widehat{\mathbf{X}})\widehat{\mathbf{ Y}}\exp(-t\widehat{\mathbf{X}})-\exp(t\widehat{\mathbf{X}})\widehat{\mathbf{Y}} \exp(-t\widehat{\mathbf{X}})\widehat{\mathbf{X}}\] \[=\widehat{\mathbf{X}}\mathrm{Ad_{exp(\widehat{\mathbf{X}})}}( \widehat{\mathbf{Y}})-\mathrm{Ad_{exp(\widehat{\mathbf{X}})}}(\widehat{ \mathbf{Y}})\widehat{\mathbf{X}}\] \[=[\widehat{\mathbf{X}},\mathrm{Ad_{exp(\widehat{\mathbf{X}})}}( \widehat{\mathbf{Y}})]=:\mathrm{ad_{\widehat{\mathbf{X}}}}(\mathrm{Ad_{exp( \widehat{\mathbf{X}})}}(\widehat{\mathbf{Y}})) \tag{161}\]
where \([\widehat{\mathbf{X}}_{1},\widehat{\mathbf{X}}_{2}]=\widehat{\mathbf{X}}_{1}\widehat{\mathbf{X}}_{2}-\widehat{\mathbf{X}}_{2}\widehat{\mathbf{X}}_{1}\) is the commutator of the matrices \(\widehat{\mathbf{X}}_{1}\) and \(\widehat{\mathbf{X}}_{2}\) that defines the _Lie bracket_ on \(se\left(3\right)\). Since in particular \(\mathrm{ad}_{\widehat{\mathbf{X}}}\widehat{\mathbf{Y}}=[\widehat{\mathbf{X}},\widehat{\mathbf{Y}}]=\frac{d}{dt}\mathrm{Ad}_{\exp(t\widehat{\mathbf{X}})}\left(\widehat{\mathbf{Y}}\right)\big{|}_{t=0}\), the Lie bracket is regarded as the adjoint operator on \(se\left(3\right)\). Throughout the paper either notation is used, depending on which one simplifies the expressions. The Lie bracket reads explicitly
\[[\widehat{\mathbf{X}}_{1},\widehat{\mathbf{X}}_{2}]=\mathrm{ad_{\widehat{ \mathbf{X}}_{1}}}(\widehat{\mathbf{X}}_{2})=\left(\begin{matrix}\widetilde{ \mathbf{\xi}}_{1}&\widetilde{\mathbf{\xi}}_{2}-\widetilde{\mathbf{\xi}}_{2} \widetilde{\mathbf{\xi}}_{1}&\widetilde{\mathbf{\xi}}_{1}\boldsymbol{\eta}_{ 2}-\widetilde{\mathbf{\xi}}_{2}\boldsymbol{\eta}_{1}\\ &0&0\end{matrix}\right)=\begin{pmatrix}\widetilde{\mathbf{\xi}}_{1}\times \widetilde{\mathbf{\xi}}_{2}&\boldsymbol{\eta}_{1}\times\widetilde{\mathbf{ \xi}}_{2}+\widetilde{\mathbf{\xi}}_{1}\times\boldsymbol{\eta}_{2}\\ &0&0\end{pmatrix}. \tag{162}\]
Applying the correspondence (149), this can be represented in vector notation of screws as
\[[\mathbf{X}_{1},\mathbf{X}_{2}]:=\mathbf{ad_{\mathbf{X}_{1}}X}_{2}=\begin{pmatrix} \widetilde{\mathbf{\xi}}_{1}&\mathbf{0}\\ \widetilde{\boldsymbol{\eta}}_{1}&\widetilde{\mathbf{\xi}}_{1}\end{pmatrix} \begin{pmatrix}\mathbf{\xi}_{2}\\ \boldsymbol{\eta}_{2}\end{pmatrix}=\begin{pmatrix}\mathbf{\xi}_{1}\times \mathbf{\xi}_{2}\\ \boldsymbol{\eta}_{1}\times\mathbf{\xi}_{2}+\mathbf{\xi}_{1}\times \boldsymbol{\eta}_{2}\end{pmatrix} \tag{163}\]
where the \(\mathrm{ad_{\widehat{\mathbf{X}}}}\) operator is represented by the matrix
\[\mathbf{ad_{\mathbf{X}}}=\begin{pmatrix}\widetilde{\mathbf{\xi}}&\mathbf{0}\\ \widetilde{\boldsymbol{\eta}}&\widetilde{\mathbf{\xi}}\end{pmatrix}. \tag{164}\]
The relation (149) is thus an isomorphism of \(se\left(3\right)\) and \(\mathbb{R}^{6}\) as Lie algebra, with the corresponding Lie bracket (162) and (163), respectively. The vector form (163) of Lie bracket is referred to as the _screw product_ of the screw coordinate vectors \(\mathbf{X}_{1}\) and \(\mathbf{X}_{2}\). Throughout the paper, the vector notation is used, which is also preferable for actual implementations. In vector notation, the relation (161) reads
\[\frac{d}{dt}\mathbf{Ad}_{\exp(t\mathbf{X})}\mathbf{Y}=\mathbf{ad}_{\mathbf{X}}\mathbf{Ad}_{\exp(t\mathbf{X})}\mathbf{Y}=[\mathbf{X},\mathbf{Ad}_{\exp(t\mathbf{X})}\mathbf{Y}]. \tag{165}\]
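As a final numerical illustration, the following NumPy sketch verifies that the vector form (163)/(164) of the screw product reproduces the matrix commutator (162); the test screws are random.

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def hat6(X):
    """4x4 se(3) matrix of a screw coordinate vector X = (xi, eta)."""
    M = np.zeros((4, 4))
    M[:3, :3] = skew(X[:3])
    M[:3, 3] = X[3:]
    return M

def ad(X):
    """6x6 matrix (164) representing the screw product ad_X."""
    A = np.zeros((6, 6))
    A[:3, :3] = skew(X[:3])
    A[3:, :3] = skew(X[3:])
    A[3:, 3:] = skew(X[:3])
    return A

rng = np.random.default_rng(1)
X1, X2 = rng.standard_normal(6), rng.standard_normal(6)

bracket_vec = ad(X1) @ X2                                    # screw product, (163)
bracket_mat = hat6(X1) @ hat6(X2) - hat6(X2) @ hat6(X1)      # matrix commutator, (162)
print(np.allclose(hat6(bracket_vec), bracket_mat))           # True
```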
2308.16466 | Self-Sampling Meta SAM: Enhancing Few-shot Medical Image Segmentation
with Meta-Learning | While the Segment Anything Model (SAM) excels in semantic segmentation for
general-purpose images, its performance significantly deteriorates when applied
to medical images, primarily attributable to insufficient representation of
medical images in its training dataset. Nonetheless, gathering comprehensive
datasets and training models that are universally applicable is particularly
challenging due to the long-tail problem common in medical images. To address
this gap, here we present a Self-Sampling Meta SAM (SSM-SAM) framework for
few-shot medical image segmentation. Our innovation lies in the design of three
key modules: 1) An online fast gradient descent optimizer, further optimized by
a meta-learner, which ensures swift and robust adaptation to new tasks. 2) A
Self-Sampling module designed to provide well-aligned visual prompts for
improved attention allocation; and 3) A robust attention-based decoder
specifically designed for medical few-shot learning to capture relationship
between different slices. Extensive experiments on a popular abdominal CT
dataset and an MRI dataset demonstrate that the proposed method achieves
significant improvements over state-of-the-art methods in few-shot
segmentation, with an average improvements of 10.21% and 1.80% in terms of DSC,
respectively. In conclusion, we present a novel approach for rapid online
adaptation in interactive image segmentation, adapting to a new organ in just
0.83 minutes. Code is publicly available on GitHub upon acceptance. | Yiming Zhang, Tianang Leng, Kun Han, Xiaohui Xie | 2023-08-31T05:20:48Z | http://arxiv.org/abs/2308.16466v3 | # Self-Sampling Meta SAM: Enhancing Few-shot Medical Image Segmentation with Meta-Learning
###### Abstract
While the Segment Anything Model (SAM) excels in semantic segmentation for general-purpose images, its performance significantly deteriorates when applied to medical images, primarily attributable to insufficient representation of medical images in its training dataset. Nonetheless, gathering comprehensive datasets and training models that are universally applicable is particularly challenging due to the long-tail problem common in medical images.
To address this gap, here we present a **Self-Sampling Meta SAM** (SSM-SAM) framework for few-shot medical image segmentation. Our innovation lies in the design of three key modules: 1) An online fast gradient descent optimizer, further optimized by a meta-learner, which ensures swift and robust adaptation to new tasks. 2) A Self-Sampling module designed to provide well-aligned visual prompts for improved attention allocation; and 3) A robust attention-based decoder specifically designed for few-shot medical image segmentation to capture relationship between different slices. Extensive experiments on a popular abdominal CT dataset and an MRI dataset demonstrate that the proposed method achieves significant improvements over state-of-the-art methods in few-shot segmentation, with an average improvements of 10.21% and 1.80% in terms of DSC, respectively. In conclusion, we present a novel approach for rapid online adaptation in interactive image segmentation, adapting to a new organ in just 0.83 minutes. Code is available at [https://github.com/DragonDescentZerotsu/SSM-SAM](https://github.com/DragonDescentZerotsu/SSM-SAM)
## 1 Introduction
Medical image segmentation plays a pivotal role in clinical applications, including disease diagnosis, abnormality detection and treatment planning. Historically, the segmentation of anatomical structures was performed manually by experienced physicians, a laborious and time-consuming task.
Recent advancements in deep learning offer automated segmentation tools that deliver near-human accuracy swiftly. Nevertheless, these tools need extensive training on large annotated datasets for optimal performance, which are costly and time-intensive since they require inputs from experts with extensive clinical experience. Thus, in the field of medical imaging, few-shot learning [29, 60, 49] has gained significant interest from researchers because it is able to segment accurately without extensive labeled data. In fact, to achieve optimal performance on unseen classes, few-shot learning models must excel in extracting representative features from limited data. However, current few-shot learning frameworks for medical image segmentation [54, 50, 14] predominantly pretrain their models using data from a single medical domain, often with restricted datasets, leading to limited feature extraction ability. If we can leverage the strong feature extraction abilities obtained from extensive training data of diverse domains, the model can capture more distinctive features, enhancing its adaptability to new tasks.
Recently, the Segment Anything Model (SAM) [27] has attracted significant attention. Trained on a large segmentation dataset of over 1 billion masks across various domains of natural images, SAM has strong feature extraction and generalization abilities and can segment any object in a given image. Given SAM's excellent zero-shot
transferability, a natural idea is to directly apply SAM for medical image segmentation to address the issue of relatively scarce medical image data [19, 46]. However, recent studies [40, 47] have shown that the performance of SAM is overall moderate and varies significantly across different datasets and different cases, demonstrating the potential promise of SAM within the context of medical images but also shows that the model cannot be applied directly with high confidence. Such phenomenon is attributed to the vast differences between natural and medical images, as SAM was primarily trained on natural images. To better adapt SAM to medical images, most prior studies focused on integrating lightweight Adapters [56, 9] or on freezing the heavy image and prompt encoders, opting to fine-tune solely the mask decoder [31, 38, 22]. However, the limited data available for unseen classes will hinder the few-shot segmentation performance of these approaches.
Driven by the desire to maximize the powerful extracted features of SAM through prompts [61], and to leverage its potential zero-shot capabilities without vast training data, we introduce the Self-Sampling Meta SAM (SSM-SAM) framework, which utilizes a Meta-learning method MAML++. Our framework consists of two main parts. The first part is a redesigned backbone that removes the original SAM's prompt encoder and mask decoder; instead, we employ a self-sampling prompt encoder and Flexible Mask Attention Decoder (FMAD). Also, we integrate adapters into the image encoder as previous works do, allowing SAM to learn features of unfamiliar tasks. This enables SAM to have better transferability for few-shot learning and is termed **SS-SAM** (Self-Sampling SAM without MAML++). The second part, which we refer to as **SSM-SAM** (with MAML++), serves as our final framework for few-shot learning. It is a meta-learning based optimizer layered on top of this backbone to further boost SAM's few-shot performance on medical images.
We performed experiments for few-shot learning on an abdomen CT dataset and an MRI dataset. We also utilized a fully supervised medical image segmentation task to evaluate the performance of our SS-SAM backbone (without MAML++) in comparison to prior methods on CT dataset.
Our contributions are summarized as follows:
* An effective online parameter adaptation technique optimized by a MAML++ [1] based meta-learner to enhance SAM's generalization and few-shot learning capacities.
* A positive-negative self-sampling module that can generate aligned visual prompts to better extract the contextual relationship.
* A novel Flexible Mask Attention Decoder specifically designed for medical image few-shot segmentation.
* Our method outperformed the SOTA framework for few-shot medical image segmentation by an average of 10.21% on MICCAI15 Multi-Atlas Abdomen Labeling challenge dataset [28] and 1.80% on ISBI 2019 Combined Healthy Abdominal Organ Segmentation Challenge [26] in terms of DSC.
Each module we use is plug-and-play, aiming to facilitate the deployment of fast and robust online image segmentation systems across various industries and making our model a baseline for improving performance where foundational models like SAM have struggled in the past.
## 2 Related Work
**Foundation Models.** A foundation model is essentially a large pre-trained model, typically developed using self-supervised learning across diverse datasets. This allows it to be quickly adapted to specific tasks through mechanisms such as fine-tuning or in-context learning. At present, foundation models have reached a level of maturity in the NLP domain, with models like BERT [12] and GPT-3 [6]. In the realm of computer vision, the Segment Anything Model (SAM) [27], pre-trained on 11M images with 1B masks, first introduced a foundation model for image segmentation. This model can interpret a variety of prompts given by users, such as points, boxes, masks, and texts, to generate user-specified segmentations and is adept at segmenting any object within an image. However, despite its zero-shot generalization capabilities, SAM has demonstrated sub-optimal performance across multiple downstream tasks [25]. This observation has stimulated ongoing research efforts to enhance SAM's performance in these tasks [17, 38, 56, 63].
**Adapters.** Adapters have emerged as a powerful tool in the realm of transfer learning for NLP and computer vision. Introduced as a lightweight alternative to full model fine-tuning, adapters allow for model customization while preserving the pre-trained parameters [21]. Instead of updating all the parameters in the model, adapter methods insert small bottleneck layers in the model architecture, which are then fine-tuned while the original model parameters remain unchanged. Recently, the ViT-Adapter [11] has been developed to equip a standard ViT [15] with the capability to perform a variety of downstream tasks. In a further development, an Explicit Visual Prompting (EVP) [36] technique has facilitated the integration of explicit visual cues into the Adapters. In this work, We integrate such Adapters [9] into the image encoder of SAM to avoid training a large amount of parameters.
**Visual Prompts.** The application of visual prompts in computer vision tasks has its roots in interactive segmentation, a methodology that relies on user input, such as clicks [52, 58, 10, 24], bounding boxes [57, 43], or scribbles [33, 4, 5], or that converts spatial queries to masks and feeds them into the image backbone [35] to assist the algorithm in accurately delineating object boundaries. These visual prompts augment the segmentation process, yielding more accurate and dependable outcomes [53]. However, such forms of visual prompts either struggle to adapt to unseen prompts [27] or can be heavy in applications, as each interaction necessitates processing the image through the feature extractor [35]. Recently, a visual sampler has been introduced to transform all non-textual queries into visual prompts that reside within the same visual embedding space to address these limitations [65]. However, no prior research has endeavored to employ this approach for generating prompts with SAM, trained on an extensive image corpus. Here we seek to bridge this research gap.
**Model Agnostic Meta Learning (MAML).** One continuous challenge of few-shot medical image segmentation is the distribution mismatch between training and testing datasets, particularly the long tail problem. Fortunately, MAML [16] and MAML++ [1] offer meta-learning frameworks tailored to combat this issue. These frameworks are elegantly simple yet can find suitable model-agnostic initialization parameters that are trained through various tasks and can quickly adapt to new tasks. Many previous works like [39, 48, 59] use meta-learning to address different problems in medical image segmentation. Yet, the MAML++ framework has primarily been applied to vision tasks like classification and recognition. All these previous works inspired us to utilize MAML++ to enhance foundational models' few-shot learning ability.
## 3 Methodology
### Overview of SSM-SAM architecture
Fig. 1 depicts the architecture of our few-shot learning framework. We keep the image encoder of SAM frozen and add a learnable **Adapter** layer [36] to each of the transformer layers for parameter-efficient training. After passing through the image encoder, we perform a self-sampling operation on the first image embedding to update it. Afterward, we feed the four output image embeddings into our Flexible Mask Attention Decoder (FMAD) to refine these embeddings and generate the predicted mask. On top of this backbone, we implement a MAML++ based meta-learner to search for optimal initialization parameters for rapid adaptation to different organs.
### Few-shot Online Optimizer
Segmentation models' clinical deployment has historically been hindered by a distribution mismatch between training and testing datasets. Given the impossibility of collecting comprehensive representative data, we have innovated in our training approach. Rather than creating a universal offline model, we designed our model to recognize and adapt to new data types, ensuring it remains relevant for unseen images. In our few-shot medical image segmentation scenario, we consider a distribution over organs, denoted as \(p(O)\), to which we aim our model to adapt. We represent our model as \(f_{\theta}\) with parameter \(\theta\), which is trained on images of different organs, \(I_{o_{i}}\), following the distribution \(p(I_{o_{i}})\), where \(o_{i}\) signifies the \(i^{th}\) organ. The online optimizer optimizes the parameters via back-propagating steps as follows:
\[\theta_{i}^{{}^{\prime}}\leftarrow\theta_{i}^{{}^{\prime}}-\alpha\nabla_{ \theta_{i}^{{}^{\prime}}}L_{o_{i}}(f_{\theta_{i}^{{}^{\prime}}}) \tag{1}\]
where \(\alpha\) is the learning rate and \(\theta_{i}^{{}^{\prime}}\) represents the model parameters adapted to fit the \(i^{th}\) organ. We employ balanced cross-entropy and IoU loss to supervise our network so \(L_{o_{i}}\) can be expressed as:
\[L_{o_{i}}=L_{bce}+L_{iou} \tag{2}\]
The overview of the online optimizer is outlined in Algorithm 1.
Figure 1: Overview of our proposed framework for SSM-SAM. SSM-SAM backbone consists of three main components: a) Injecting Adapters into the encoder of SAM to quickly acquire task-related information. b) Using the Self-Sampling Module to replace the position encoding based prompt encoder to strengthen the relationship between the feature and the prompt. c) Flexible Mask Attention Decoder (FMAD) to enhance boundary information and refine the final generated mask map. On top of the backbone, we employ a meta-learning based online optimizer.
On top of this online optimizer sits the meta-learner, which is defined as follows:
\[\theta\leftarrow\theta-\beta\nabla_{\theta}\frac{1}{T}\sum_{i}^{T}L_{o_{i}}(f_{ \theta_{i}^{\prime}}) \tag{3}\]
where \(o_{i}\) denotes the \(i^{th}\) organ and \(T\) is the number of organs in a task batch used to optimize the meta-learner. We incorporate the cosine annealing learning rate as recommended in [1] to enhance our model's efficiency in fitting the training set.
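For illustration, the following PyTorch sketch shows the structure of the inner update (1) and the outer meta-update (3) in a first-order simplification, using a toy model and randomly generated "organ" batches; it omits the MAML++ refinements (e.g. per-step learning rates and the multi-step loss) and is a schematic sketch rather than the SSM-SAM training code.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy stand-ins for the SS-SAM backbone and the per-organ data loaders.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
meta_opt = torch.optim.AdamW(model.parameters(), lr=2e-4)
alpha = 1e-2                       # inner-loop learning rate in (1)
loss_fn = nn.BCEWithLogitsLoss()   # stand-in for the balanced BCE + IoU loss in (2)

def fake_organ_batch():
    x = torch.randn(8, 16)
    y = (x.sum(dim=1, keepdim=True) > 0).float()
    return x, y

for meta_iter in range(10):
    meta_opt.zero_grad()
    task_batch = [fake_organ_batch() for _ in range(4)]      # T organs per meta-batch
    for x, y in task_batch:
        fast = copy.deepcopy(model)                           # theta_i' initialised at theta
        inner_opt = torch.optim.SGD(fast.parameters(), lr=alpha)
        loss_fn(fast(x), y).backward()                        # gradient for the inner step (1)
        inner_opt.step()
        loss = loss_fn(fast(x), y)                            # task loss after adaptation
        grads = torch.autograd.grad(loss, list(fast.parameters()))
        for p, g in zip(model.parameters(), grads):           # accumulate first-order outer gradients
            p.grad = g / len(task_batch) if p.grad is None else p.grad + g / len(task_batch)
    meta_opt.step()                                           # meta-update (3)
```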
### Adapted Image Encoder
The image encoder of SAM uses a Vision Transformer (ViT) [15], which is pre-trained with MAE [18]. Given an image of any size, it is first resized to a resolution of 1024 x 1024. The ViT then processes this resized image to generate an image embedding with dimensions \(C\times H\times W\). For this research, we chose the ViT-B variant (SAM-b), where \(C\) = 256, \(H\) = 64, \(W\) = 64. To enable efficient learning with faster updates and to address the issue of excessive GPU memory usage, we keep the image encoder frozen and inject a trainable **Adapter** into each of the 12 transformer layers as mentioned before. We only train the parameters within the **Adapters**, which are tasked with learning task-specific knowledge and low-level structural information from the features extracted from image embeddings. For the \(k\)-th Adapter layer, we take the patch embedding \(\text{F}^{k}\) as input and obtain \(\text{P}^{k}\)
\[\text{P}^{k}=\text{MLP}_{\text{up}}(\text{GELU}(\text{MLP}_{\text{tune}}^{k}(F^{k}))) \tag{4}\]
where \(\text{MLP}_{\text{tune}}^{k}\) refers to a linear layer within each Adapter for generating distinct prompts. \(\text{MLP}_{\text{up}}\) is an up-projection layer shared across all Adapters designed to match the dimension of transformer features. \(\text{P}^{k}\) represents the output associated with each transformer layer.
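A schematic PyTorch version of the Adapter in (4) is given below; the transformer dimension, the bottleneck size and the way the up-projection is shared are illustrative assumptions rather than the exact configuration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Sketch of the Adapter in (4): a per-layer tuning MLP followed by a shared up-projection."""
    def __init__(self, dim=768, bottleneck=64, shared_mlp_up=None):
        super().__init__()
        self.mlp_tune = nn.Linear(dim, bottleneck)            # layer-specific MLP_tune^k
        self.act = nn.GELU()
        # MLP_up is shared across all adapters and maps back to the transformer dimension
        self.mlp_up = shared_mlp_up if shared_mlp_up is not None else nn.Linear(bottleneck, dim)

    def forward(self, f):                                     # f: patch embeddings F^k
        return self.mlp_up(self.act(self.mlp_tune(f)))        # output P^k for this layer

shared_up = nn.Linear(64, 768)
adapters = nn.ModuleList([Adapter(768, 64, shared_up) for _ in range(12)])
tokens = torch.randn(2, 196, 768)                             # (batch, tokens, dim), illustrative sizes
print(adapters[0](tokens).shape)                              # torch.Size([2, 196, 768])
```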
Also, we divided image encoder to four sub-blocks, which is carried out with the intent of deriving multi-level information [32]. After passing through the image encoder, these four image embeddings of size \(B\times 256\times 64\times 64\) are fed into the subsequent module.
### Self-Sampling Module
We employ a _positive-negative attention self-sampling_ module inspired by [65] to convert all kinds of non-textual queries to visual prompts that lie in the same visual embedding space, as opposed to the conventional position encoding in SAM [27].

Figure 2: Structure of queries and prompt interaction during training and evaluation.

Our algorithm is illustrated in Fig. 2 and can be summarized as follows:
\[P=\textbf{concatenate}([P_{positive},P_{negative}]) \tag{5}\] \[P_{v}=\textbf{Self-Sampler}(P,Z_{h}) \tag{6}\]
where \(P_{positive}\), \(P_{negative}\) stand for positive points (region we aim to segment) and negative points (background region) sampled from the image, respectively. \(Z_{h}\) is the feature maps extracted from the image and \(P\) denotes the point prompts.
Afterward, we concatenate these positive and negative points to form the point prompts. This approach ensures that the model does not focus only on the positive parts, which could potentially lead to a higher false-positive rate, but also adequately attends to the negative parts. With this method, the model can better discern the organ boundaries, enhancing segmentation performance when only one point is given to prompt the model during inference.
On receiving the point prompts, we perform direct interpolation from the feature map to get the corresponding embeddings as visual prompts. This ensures that the prompts and the image embeddings share identical semantic features. We initialize a few tokens as learnable global queries and concatenate them with these visual prompts. After that, we apply self-attention [51] between a set of learnable queries and the visual prompts. Then, cross-attention is applied only between the image embeddings and these updated queries.
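The sampling of visual prompts from the feature map can be sketched as follows; the bilinear `grid_sample`-based interpolation and the normalized \((x,y)\) coordinate convention are assumptions of this illustration.

```python
import torch
import torch.nn.functional as F

def sample_visual_prompts(feat, points):
    """Bilinearly interpolate feature vectors at prompt locations (a sketch of (5)-(6)).

    feat:   (B, C, H, W) image embedding Z_h
    points: (B, N, 2) prompt coordinates in [0, 1] x [0, 1] as (x, y), with positive and
            negative points concatenated along N; returns (B, N, C) visual prompts P_v.
    """
    grid = points * 2.0 - 1.0                 # map [0, 1] -> [-1, 1] for grid_sample
    grid = grid.unsqueeze(2)                  # (B, N, 1, 2)
    sampled = F.grid_sample(feat, grid, mode="bilinear", align_corners=False)
    return sampled.squeeze(-1).permute(0, 2, 1)   # (B, N, C)

feat = torch.randn(1, 256, 64, 64)
pos = torch.tensor([[[0.40, 0.55], [0.45, 0.50]]])    # example points inside the organ
neg = torch.tensor([[[0.05, 0.05], [0.90, 0.90]]])    # example background points
prompts = sample_visual_prompts(feat, torch.cat([pos, neg], dim=1))
print(prompts.shape)                                   # torch.Size([1, 4, 256])
```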
### Flexible Mask Attention Decoder
Since we divided each 3D volume into 12 chunks as in Section 4.1, there is a certain pattern to how each organ changes across different slices. This observation inspired us to treat the segmentation task like a tracking task. Consequently, we introduced the Flexible Mask Attention Decoder (FMAD). The core of this decoder is the Flexible Mask Attention Module (FMAM), as illustrated in Fig. 3. By leveraging cross-attention, we can expand or contract the blurred support mask to predict the mask for the query image, mitigating the challenge of direct mask prediction. The Gaussian-blurred support masks provide the model with ample room to explore potential mask positions while efficiently suppressing distractions from other parts of the image.
In decoding, the attention block bridges the Gaussian-blurred support mask and the query images to update the mask. We first apply a Gaussian filter to the original support mask \(\textbf{M}\in\mathbb{R}^{H\times W\times 1}\), repeat it \(C\) times and reshape it to \(N\times C\) to get \(\textbf{M}^{{}^{\prime}}\in\mathbb{R}^{N\times C}\). Then we can easily cast it onto the query feature \(\textbf{F}\in\mathbb{R}^{N\times C}\) extracted from the image encoder to get \(\textbf{K}\in\mathbb{R}^{N\times C}\) and \(\textbf{Q}\in\mathbb{R}^{N\times C}\) as follows:
\[\textbf{K}=(\textbf{M}^{{}^{\prime}}\odot\textbf{F})\textbf{W}^{K} \tag{7}\]
where \(\odot\) denotes the element-wise product and \(\textbf{W}^{K}\in\mathbb{R}^{C\times C}\) is the key projection matrix. Following [51], we also adopt the dot-product to compute the similarity matrix \(\textbf{A}_{K\to Q}\in\mathbb{R}^{N\times N}\) between the query and key as follows:
\[\textbf{A}_{K\to Q}=\mathrm{Atten}(\textbf{Q},\textbf{K})=\mathrm{ Softmax}_{\mathrm{col}}(\bar{\textbf{Q}}\bar{\textbf{K}}^{T}/\tau) \tag{8}\]
where \(\bar{\textbf{Q}}\) and \(\bar{\textbf{K}}\) are \(l_{2}\)-normalized features of **Q** and **K** across the channel dimension, and \(\tau\) is a temperature parameter controlling the Softmax distribution, which is the same as in [55]. Then we convert \(\textbf{M}^{{}^{\prime}}\) to \(\textbf{V}\in\mathbb{R}^{N\times C}\) through:
\[\textbf{V}=(\textbf{M}^{{}^{\prime}})\textbf{W}^{V} \tag{9}\]
where \(\mathbf{W}^{V}\in\mathbb{R}^{C\times C}\) denotes the value projection matrix. With the attention matrix \(\mathbf{A}_{K\to Q}\) from key to query, we can transform the value via \(\mathbf{A}_{K\to Q}\mathbf{V}\in\mathbb{R}^{N\times C}\).

Figure 3: Process of the Flexible Mask Attention Module (FMAM) and the Flexible Mask Attention Decoder (FMAD).
Unlike typical attention layers, at the first residual connection we use _multiply_ & _Norm_ instead of _Add_ & _Norm_ to propagate the query feature \(\mathbf{F}\) as follows:
\[\mathbf{F}^{{}^{\prime}}=\mathrm{InsNorm}(\mathbf{A}_{K\to Q}\mathbf{V} \odot\mathbf{F}) \tag{10}\]
By using the updated Gaussian mask from the attention layer, we can further refine the predicted mask. After processing each of the four image embeddings, we upsample, concatenate, and put them through a convolutional layer. This approach facilitates improved information integration across layers to predict the final mask.
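A condensed sketch of the module defined by Eqs. (7)-(10) is given below. It is an illustration under stated assumptions rather than the exact implementation: in particular, the query projection \(\textbf{Q}=\textbf{F}\textbf{W}^{Q}\) is assumed, since only \(\textbf{K}\) and \(\textbf{V}\) are spelled out above, and the support mask is taken to be already Gaussian-blurred.

```python
import torch
import torch.nn.functional as F
from torch import nn


class FMAM(nn.Module):
    """Flexible Mask Attention Module: updates the query feature with a blurred support mask."""

    def __init__(self, dim: int, tau: float = 0.07):
        super().__init__()
        self.wq = nn.Linear(dim, dim, bias=False)   # W^Q (assumed)
        self.wk = nn.Linear(dim, dim, bias=False)   # W^K of Eq. (7)
        self.wv = nn.Linear(dim, dim, bias=False)   # W^V of Eq. (9)
        self.norm = nn.InstanceNorm1d(dim)
        self.tau = tau

    def forward(self, feat: torch.Tensor, blurred_mask: torch.Tensor) -> torch.Tensor:
        # feat: (N, C) query feature F; blurred_mask: (N, 1) Gaussian-blurred support mask
        m = blurred_mask.expand_as(feat)            # M' in R^{N x C} (mask repeated C times)
        q = self.wq(feat)                           # Q (assumed projection of F)
        k = self.wk(m * feat)                       # K = (M' * F) W^K, Eq. (7)
        v = self.wv(m)                              # V = M' W^V, Eq. (9)
        q_n = F.normalize(q, dim=-1)                # l2-normalize along channels
        k_n = F.normalize(k, dim=-1)
        attn = torch.softmax(q_n @ k_n.T / self.tau, dim=0)   # column-wise softmax, Eq. (8)
        out = attn @ v                              # transformed value, (N, C)
        # "multiply & Norm" residual of Eq. (10): F' = InsNorm(A V * F)
        return self.norm((out * feat).T.unsqueeze(0)).squeeze(0).T


fmam = FMAM(dim=64)
f_prime = fmam(torch.randn(4096, 64), torch.rand(4096, 1))   # (4096, 64) updated feature
```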
## 4 Experiments
### Setup
**Datasets** To verify the generality and robustness of our approach, we perform experiments on two datasets, ABD-30 and ABD-MRI. We evaluate the complete few-shot segmentation framework (SSM-SAM) on both datasets, while the backbone (SS-SAM) is only assessed on ABD-30 to validate its efficiency.
- ABD-30 (from MICCAI15 Multi-Atlas Abdomen Labeling challenge [28]) contains 30 cases with 3779 axial abdominal clinical CT images.
- ABD-MRI (from the Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge [26] held at the IEEE International Symposium on Biomedical Imaging (ISBI) 2019) contains 20 3D abdominal MRI scans with a total of four different labels.
The liver, spleen, and left and right kidneys are used as semantic classes, following previous settings [13, 42, 50]. Within each experiment, one organ is considered as the unseen semantic class for testing while the rest are used for training.
**Evaluation Metrics** We use the Dice Similarity Coefficient (DSC) to evaluate the prediction mask \(\mathbf{m}\) against the ground truth mask \(\mathbf{g}\):
\[\text{DSC(m, g)}=\frac{2\left|m\cap g\right|}{\left|m\right|+\left|g\right|} \tag{11}\]
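In code, Eq. (11) can be computed directly from binary masks; the following NumPy snippet is an illustrative implementation, with a small epsilon added to guard against empty masks:

```python
import numpy as np


def dice_score(m: np.ndarray, g: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between a predicted mask m and a ground truth mask g."""
    m, g = m.astype(bool), g.astype(bool)
    return 2.0 * np.logical_and(m, g).sum() / (m.sum() + g.sum() + eps)


# example: a 2x2 prediction overlapping a 2x3 ground truth region
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4] = True
print(round(dice_score(pred, gt), 3))  # 2*4 / (4 + 6) = 0.8
```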
**Implementation Details** All the images extracted from the 3D volume data are reshaped to \(1024\times 1024\) to fit the SAM model. We follow the same protocol used in [42, 45, 50] to do 1-way 1-shot learning by dividing the 3D CT scans into 12 chunks and segmenting all the query slices in one chunk by using the center slice of the chunk as the support image; see Fig. 4 for a visual representation. We use the ViT-B [15] version of SAM and supervise our network using the balanced cross-entropy loss and IoU loss as in Eq. (2) between the predicted mask and the ground truth mask. The AdamW optimizer [37] is used for all experiments with an initial learning rate of 2e-4. Cosine decay [1] is applied to the learning rate. The few-shot segmentation task is trained for 50 epochs on a single NVIDIA A40 GPU, and the fully supervised segmentation task is trained for 50 epochs using PyTorch on a single NVIDIA RTX 3090 GPU.
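The chunk-based support/query pairing described above can be written down compactly; the sketch below is a plausible reading of the protocol (slice counts and array shapes are illustrative) rather than the authors' exact code:

```python
import numpy as np


def chunked_support_query_pairs(volume: np.ndarray, n_chunks: int = 12):
    """Split the slices of a 3D scan into chunks; the center slice of each chunk
    serves as the (annotated) support image for the remaining query slices."""
    for idx in np.array_split(np.arange(volume.shape[0]), n_chunks):
        center = idx[len(idx) // 2]
        queries = [i for i in idx if i != center]
        yield volume[center], volume[queries]


volume = np.zeros((120, 256, 256))  # a hypothetical CT scan with 120 axial slices
for support_slice, query_slices in chunked_support_query_pairs(volume):
    pass  # segment `query_slices` using `support_slice` as the 1-shot support example
```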
### Main Results
**Results on Few-shot Medical Image Segmentation** Table 1 shows the few-shot segmentation performance of SSM-SAM with previous work on ABD-30 and ABD-MRI respectively. SE-Net [45] represents the first architecture designed explicitly for few-shot medical image segmentation. PANet [54] is an extended version of the widely-used prototypical network [49], tailored for natural image segmentation. SSL-ALPNet [42] integrates self-supervised learning with prototypical networks. Affine denotes the accuracy result after aligning the support and query images globally using an affine transformation. RP-Net [50] stands as the state-of-the-art framework for few-shot medical image segmentation, leveraging both a context relation encoder and a recurrent module. [50] reported performance for all the methods above, so these numbers are directly quoted.
First, SSM-SAM outperforms the state-of-the-art method RP-Net by 10.21% and 1.80% on ABD-30 and ABD-MRI, respectively. Second, the use of FMAD results in improvements of 2.17% \(\sim\) 3.41% and 1.84% \(\sim\) 2.62%, respectively. Additionally, employing the MAML++ based optimizer leads to enhancements of 5.98% \(\sim\) 7.22% and 4.47% \(\sim\) 5.25%, respectively. This indicates that both our meta-optimizer and FMAD are highly effective.
**Results on Fully Supervised Medical Image Segmentation**
To evaluate the performance of our backbone (without MAML++), we compare SS-SAM with recent SOTA methods on the ABD-30 dataset without the Flexible Mask Attention Module, including U-Net [44], Att-UNet [41], TransUnet [8], Swin-Unet [7], MissFormer [23], TransDeepLab [3], HiFormer [20], DAE-Former [2] and SAMed [62], following [62]. For a fair comparison, we integrate the self-sampling module into some of the state-of-the-art models, because the original SAM and our method utilize visual prompts, which incorporate ground truth information into the model, while previous state-of-the-art models did
Figure 4: Representative chunk
not leverage such additional information. The results are highlighted in blue italics in Tab. 2. Surprisingly, since these models are unaware of how to process this information, their performance slightly declined. From Tab. 2 we also observe
\begin{table}
\begin{tabular}{|c|c|c c c c|c|} \hline Dataset & Method & Spleen \(\uparrow\) & Kidney L \(\uparrow\) & Kidney R \(\uparrow\) & Liver \(\uparrow\) & mean \(\uparrow\) \\ \hline \multirow{9}{*}{ABD-30} & SE-Net [45] & 0.23 & 32.83 & 14.34 & 0.27 & 11.91 \\ & PANet [54] & 25.59 & 32.34 & 17.37 & 38.42 & 29.42 \\ & SSL-ALPNet [42] & 60.25 & 63.34 & 54.82 & 73.65 & 63.02 \\ & Affine & 48.99 & 43.44 & 45.67 & 68.93 & 51.75 \\ & RP-Net [50] & 72.19 & 75.03 & 70.89 & 80.53 & 74.66 \\ \cline{2-7} & **SS-SAM (w/o MAML++, w/o FMAD)** & 77.24 & 72.89 & 79.78 & 72.00 & 75.48 \\ & **SS-SAM (w/o MAML++, w/ FMAD)** & 78.99 & 73.33 & 83.49 & 74.81 & 77.65 \\ & **SSM-SAM (w/ MAML++, w/o FMAD)** & 80.55 & 77.83 & 82.79 & 82.70 & 81.46 \\ \cline{2-7} & **SSM-SAM (w/ MAML++, w/ FMAD)** & **86.95** & **80.96** & **84.47** & **87.12** & **84.87** \\ \hline \hline \multirow{9}{*}{ABD-MRI} & SE-Net [45] & 51.80 & 62.11 & 61.32 & 27.43 & 50.66 \\ & PANet [54] & 50.90 & 53.45 & 38.64 & 42.26 & 46.33 \\ & SSL-ALPNet [42] & 67.02 & 73.63 & 78.39 & 73.05 & 73.02 \\ & Affine & 62.87 & 64.70 & 69.10 & 65.00 & 65.41 \\ & RP-Net [50] & 75.69 & 79.30 & **84.66** & 71.51 & 77.79 \\ \cline{2-7} & **SS-SAM (w/o MAML++, w/o FMAD)** & 71.99 & 76.52 & 72.13 & 69.36 & 72.50 \\ & **SS-SAM (w/o MAML++, w/ FMAD)** & 73.64 & 78.04 & 76.69 & 72.13 & 75.12 \\ & **SSM-SAM (w/ MAML++, w/o FMAD)** & 76.77 & 80.47 & 77.44 & 76.32 & 77.75 \\ \cline{2-7} & **SSM-SAM (w/ MAML++, w/ FMAD)** & **78.81** & **81.70** & 80.38 & **77.50** & **79.59** \\ \hline \end{tabular}
\end{table}
Table 1: DSC comparison with other methods on ABD-30 and ABD-MRI for few-shot learning (unit:%).
\begin{table}
\begin{tabular}{|c|c|c c c c c c c|} \hline Method & DSC \(\uparrow\) & Aorta & Gallbladder & Kidney(L) & Kidney(R) & Liver & Pancreas & Spleen & Stomach \\ \hline TransUNet [8] & 77.48 & 87.23 & 63.13 & 81.87 & 77.02 & 94.08 & 55.86 & 85.08 & 75.62 \\ SwinUnet [7] & 79.13 & 85.47 & 66.53 & 83.28 & 79.61 & 94.29 & 56.58 & 90.66 & 76.60 \\ MissFormer [23] & 81.96 & 86.99 & 68.65 & 85.21 & 82.00 & 94.41 & 65.67 & 91.92 & 80.81 \\ HiFormer [20] & 80.39 & 86.21 & 65.69 & 85.23 & 79.77 & 94.61 & 59.52 & 90.99 & 81.08 \\ DAE-Former [2] & 82.43 & 88.96 & 72.30 & 86.08 & 80.88 & 94.98 & 65.12 & 91.94 & 79.19 \\ PaNN [64] & 90.8 & 92.5 & 72.9 & 95.3 & 92.0 & 95.3 & - & **96.8** & - \\ \hline _MissFormer_[23] & _78.84_ & _85.89_ & _64.75_ & _84.19_ & _76.98_ & _94.18_ & _55.19_ & _89.07_ & _80.47_ \\ _HiFormer_[20] & _78.24_ & _85.94_ & _60.88_ & _84.16_ & _76.57_ & _94.54_ & _54.00_ & _89.81_ & _80.02_ \\ _DAE-Former_[2] & _80.46_ & _87.26_ & _67.46_ & _82.15_ & _77.53_ & _94.36_ & _63.32_ & _90.85_ & _80.77_ \\ \hline SAMed [62] & 81.88 & 87.77 & 69.11 & 80.45 & 79.95 & 94.80 & 72.17 & 88.72 & 82.06 \\ SAM Adapter [9] & 92.01 & 95.35 & 87.88 & **96.16** & 96.73 & 97.23 & **79.86** & 90.60 & 92.26 \\ \hline
**SS-SAM** & **93.09** & **95.96** & **89.77** & 96.09 & **96.79** & **97.61** & 79.15 & 95.15 & **94.23** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison to state-of-the-art models using a fully supervised method on the ABD-30 dataset. Best results are highlighted in bold (unit:%). For a fair comparison, the results in italics are from the corresponding model with the same self-sampling module. Notably, our modified SAM (SS-SAM) outperforms other leading models.
Figure 5: Testing process of MAML++ based model (in deep blue) and Fine-tuning of a model pretrained on the same distribution of tasks without MAML++ (in light blue)
that SS-SAM achieves state-of-the-art performance. These experiments demonstrate that our models are capable of achieving high performance and also refute the potential claim that introducing prompts is a form of "cheating".
### Ablation Study
**Effect of self-sampling.** As previously explained, the self-sampling module derives visual prompts by interpolating between the provided point coordinates and image embeddings. Subsequently, it conducts cross-attention with the image embeddings. On the other hand, SAM's point prompt employs positional encoding on the points before performing cross-attention with the image embeddings. These two approaches differ in their treatment of the sampled points. To evaluate the effectiveness of the proposed methods, we performed ablation studies on four distinct organs: the spleen, left kidney, right kidney, and liver, using a fully supervised approach. For each organ, we kept the image encoder and decoder the same, with three different prompt methods: a) no prompt, b) point prompt with position encoding, and c) point prompt with self-sampling. As presented in Tab. 3, our self-sampling method achieved the highest mean DSC.
**Effects of Meta-learner.** To highlight the few-shot learning advantages of our model achieved by integrating a meta-learner, apart from the experiment setting in Tab. 1, we set up a more rigorous experimental setting to compare performances with and without the meta-learner.
Following the configurations specified in [16, 30], we selected only 125 images for the four organs. In each experiment, one organ is chosen as the test task (e.g., liver), while the remaining three organs (spleen, left kidney, right kidney) are used as training tasks. Only five images of each training task (spleen, left kidney, right kidney) are allocated during training. During testing, we adopt a 1-way, 5-shot approach, using five images of the test task (liver) as the support set for training. The remaining 120 images serve as the query set to evaluate the model's performance.
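Schematically, the test-time procedure compared here can be sketched as below; this is an assumed simplification (plain fine-tuning on the support set, with `model`, `seg_loss` and `dice_fn` supplied by the caller) rather than the actual MAML++ optimizer:

```python
import copy
import torch


def adapt_and_evaluate(model, support_set, query_set, seg_loss, dice_fn,
                       steps: int = 20, lr: float = 2e-4) -> float:
    """Fine-tune a copy of `model` on the few support pairs of the unseen organ,
    then report the mean Dice score over the query images."""
    adapted = copy.deepcopy(model)               # keep the (meta-)trained weights intact
    optimizer = torch.optim.AdamW(adapted.parameters(), lr=lr)
    adapted.train()
    for _ in range(steps):
        for image, mask in support_set:          # e.g. the 5 annotated support images
            optimizer.zero_grad()
            loss = seg_loss(adapted(image), mask)
            loss.backward()
            optimizer.step()
    adapted.eval()
    with torch.no_grad():                        # e.g. the 120 held-out query images
        scores = [dice_fn(adapted(image), mask) for image, mask in query_set]
    return float(sum(scores) / len(scores))
```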
Our findings were illuminating. A visual representation of the online optimization process of each of the four organs is provided in Fig.5. With just a limited number of training images, models equipped with the meta-learner outperformed those without by a significant margin of 12.07% on average. Furthermore, models augmented by the meta-learner not only demonstrated swifter convergence in the initial epochs but also exhibited enhanced stability and performance in the concluding epochs.
### Qualitative Results
In Fig.6, we present masks produced by SAM and variants of our algorithm. Notably, SAM struggles to segment smaller organs or those with indistinct boundaries. Conversely, SSM-SAM with FMAM outperforms its counterpart without FMAM, effectively minimizing the impact of similar distracting regions in the CT scans.
## 5 Conclusions
In this paper, we introduce a universal approach, SSM-SAM, designed to optimize and adapt a foundational model such as the Segment Anything Model (SAM), for few-shot medical image segmentation. Our method incorporates a positive-negative Self-Sampling prompt encoder and a Flexible Mask Attention Decoder to enhance the contextual relationship and tiny boundary information essential for mask generation. Moreover, our fast online meta-learning based optimizer facilitates high performance even without extensive training data and can be plugged into other frameworks effortlessly. Experiments demonstrate that SSM-SAM outperforms the previous state-of-the-art approach by as much as 10% in terms of DSC. Furthermore, the proposed SSM-SAM can produce segmentations in just 50 seconds per organ, indicating its potential for real-time applications in medical settings.
\begin{table}
\begin{tabular}{|c|c c c c|c|} \hline Prompt & Spleen & Kidney L & Kidney R & Liver & Mean \\ \hline no prompt & 91.83 & 93.65 & 92.55 & 96.21 & 93.56 \\ w/ position encoding & 93.05 & 95.75 & **96.87** & 97.27 & 95.74 \\ w/ self-sampling & **95.15** & **96.09** & 96.79 & **97.61** & **96.41** \\ \hline \end{tabular}
\end{table}
Table 3: Ablation Study on no prompt, position encoding (pe) of points and self-sampling
Figure 6: Examples of predictions of SAM, SSM-SAM without FMAM and SSM-SAM with FMAM |
2309.16997 | Expressive Power of Infinitary Logic and Absolute co-Hopfianity | Paolini and Shelah have constructed absolutely Hopfian torsion-free abelian
groups of any given size. In contrast, we show that this is not necessarily the
case for absolutely co-Hopfian groups. We apply the infinitary logic to show a
dichotomy behavior of co-Hopfian groups, by showing that the first beautiful
cardinal is the turning point for this property. Namely, we prove that there
are no absolute co-Hopfian abelian groups above the first beautiful cardinal.
An extension of this result to the category of modules over a commutative ring
is given. | Mohsen Asgharzadeh, Mohammad Golshani, Saharon Shelah | 2023-09-29T05:50:44Z | http://arxiv.org/abs/2309.16997v1 | # Expressive power of infinitary logic and absolute co-Hopfianity
###### Abstract.
Recently, Paolini and Shelah have constructed absolutely Hopfian torsion-free abelian groups of any given size. In contrast, we show that this is not necessarily the case for absolutely co-Hopfian groups. We apply the infinitary logic to show a dichotomy behavior of co-Hopfian groups, by showing that the first beautiful cardinal is the turning point for this property. Namely, we prove that there are no absolute co-Hopfian abelian groups above the first beautiful cardinal. An extension of this result to the category of modules over a commutative ring is given.
Key words and phrases:Absolutely co-Hopfian groups; almost isomorphic; beautiful cardinal; infinitary logic; elimination of quantifiers; forcing techniques.
The third author thanks an individual who wishes to remain anonymous for generously funding typing services. Research partially supported by the grant "Independent Theories" NSF-BSF, (BSF 3013005232). References like [Sh:950, Th0.2=Ly5] mean that the internal label of Th0.2 is y5 in Sh:950. The reader should note that the version in the author's website is usually more up-to-date than the one in arXiv. This is publication number 1246 in Saharon Shelah's list.
## § 1. Introduction
An abelian group \(G\) is called _Hopfian_ (resp. _co-Hopfian_) if its surjective (resp. injective) endomorphisms are automorphisms. In other words, the co-Hopfian property is, in some sense, dual to the Hopfian property. These groups were first considered by Baer in [2], under different names. Hopf [17] himself showed that the fundamental groups of closed two-dimensional orientable surfaces are Hopfian.
There are a lot of interesting research papers in this area. Here, we recall only a short list of them. Following the book [24], Hopf, in 1932, raised the question as to whether a finitely generated group can be isomorphic to a proper factor of itself. For this and more observations, see [24]. Beaumont [4] proved that if \(G\) is an abelian group of finite rank all of whose elements have finite order, then \(G\) has no proper isomorphic subgroups. Kaplansky [18] extended this to modules over a commutative principal ideal ring \(R\) such that every proper residue class ring of \(R\) is finite. Beaumont and Pierce [3] proved that if \(G\) is co-Hopfian, then the torsion part of \(G\) is of size at most continuum, and further that \(G\) cannot be a \(p\)-group of size \(\aleph_{0}\). This naturally left open the problem of the existence of co-Hopfian \(p\)-groups of uncountable size \(\leq 2^{\aleph_{0}}\), which was later solved by Crawley [9], who proved that there exist such \(p\)-groups of size \(2^{\aleph_{0}}\). One may attempt to construct (co-)Hopfian groups of large size by taking a huge direct sum of (co-)Hopfian groups. In this regard, Baumslag [7] asked when the direct sum of two (co-)Hopfian groups is again (co-)Hopfian. Corner [8] constructed two torsion-free abelian Hopfian groups which have a non-Hopfian direct sum. See [15] for more on this and its connections with the study of algebraic entropy.
Despite its long history, only very recently the problem of the existence of uncountable (co-)Hopfian abelian groups was solved, see [1] and [22].
The usual construction of such groups is not absolute, in the sense that they may lose their property in some generic extension of the universe, and the problem of giving an explicit and constructive construction of such groups has attracted some attention in
the literature. For example, while it is known from the work of Shelah [25] that there are indecomposable abelian groups of any infinite size, the problem of the existence of arbitrary large absolutely indecomposable groups is still open, see [21]. It is worth noticing that indecomposability implies the Hopfian property.
In order to be more explicit, call a group \(G\)_absolutely co-Hopfian_ if it is co-Hopfian in any further generic extension of the universe. Similarly, one may define _absolutely Hopfian_ groups. An open problem in this area was as follows:
**Problem 1.1**.:
1. Is it possible to construct absolutely Hopfian torsion-free groups of a given size?
2. Is it possible to construct absolutely co-Hopfian torsion-free groups of a given size?
Recently, Paolini and Shelah [22, Theorem 1.3] constructed absolutely Hopfian torsion-free groups of any given size \(\lambda\), thereby answering Problem 1.1(i) in the positive. On the one hand, as Hopfian and co-Hopfian groups are dual to each other, one may predict that there is a connection between Problem 1.1 (i) and (ii). But any such dual functor may enlarge or collapse the cardinality of the corresponding groups, hence we cannot use the ideas behind the duality to answer Problem 1.1(ii). For example, Braun and Strungmann [6] showed that the existence of infinite abelian \(p\)-groups of size \(\aleph_{0}<|G|<2^{\aleph_{0}}\) of the following types is independent of ZFC:
1. both Hopfian and co-Hopfian,
2. Hopfian but not co-Hopfian,
3. co-Hopfian but not Hopfian.
Also, they proved that the above three types of groups of size \(2^{\aleph_{0}}\) exist in ZFC.
On the other hand, there are some partial positive results in the direction of Problem 1.1 (ii). Here, we recall some of these. In [29], Shelah studied and coined the concept of a _beautiful cardinal_, denoted by \(\kappa_{\rm beau}\), which is a kind of Ramsey cardinal (see Definition 3.12). This cardinal has an essential role in the study of absolutely co-Hopfian groups.
Indeed, according to [16], for any infinite cardinal \(\lambda<\kappa_{\rm beau}\) there is an absolutely endo-rigid abelian group of size \(\lambda\), but if an abelian group \(G\) has cardinality \(\geq\kappa_{\rm beau}\), then by [12] it has non-trivial monomorphisms. Let us state the remaining part of Problem 1.1:
**Problem 1.2**.: Is it possible to construct absolutely co-Hopfian torsion-free groups of size \(\geq\kappa_{\rm beau}\)?
The Hopfian and co-Hopfian properties can easily be extended to the context of modules over commutative rings, and even to the context of sheaves over schemes. However, compared to the case of abelian groups, and to our knowledge, there are very few results for modules. For example, a funny result of Vasconcelos [31, 1.2] says that any surjective \(f:M\to M\) is an isomorphism, where \(M\) is a noetherian module over a commutative noetherian ring \(R\). As a geometric example, let \(X\) be an algebraic variety over an algebraically closed field. If a morphism \(f:X\to X\) is injective, then a result of Ax and Grothendieck indicates that \(f\) is bijective; see Jean-Pierre Serre's exposition [23].
In contrast to the case of Hopfian groups, we prove the following dichotomy property of co-Hopfian groups. Indeed, we prove a more general result in the context of additive \(\tau\)-models (see Definition 2.3), which includes in particular the cases of abelian groups and \(R\)-modules, for an arbitrary commutative ring \(R\):
**Theorem 1.3**.: _The following assertions are valid:_
1. _If_ \(M\) _is an abelian group of cardinality greater or equal_ \(\kappa:=\kappa_{\rm beau}\)_, then_ \(M\) _is not absolutely co-Hopfian, indeed, after collapsing the size of_ \(M\) _into_ \(\omega\)_, there is a one-to-one endomorphism_ \(\varphi\in{\rm End(M)}\) _which is not onto._
2. _If_ \(M\) _is an_ \(R\)_-module of cardinality greater or equal_ \(\kappa=\kappa_{\rm beau}(R,\aleph_{0})\)_, then_ \(M\) _is not absolutely co-Hopfian._
3. _If_ \(M\) _is an additive_ \(\tau\)_-model of cardinality greater or equal_ \(\kappa=\kappa_{\rm beau}(\tau)\)_, then_ \(M\) _is not absolutely co-Hopfian._
Putting things together, we conclude that for every infinite cardinal \(\lambda\), there is an absolutely co-Hopfian abelian group of size \(\lambda\) if and only if \(\lambda\) is less than the first beautiful cardinal.
The organization of this paper is as follows.
Szmielew [30] developed the first-order theory of abelian groups; see also [28] for further discussion. In Section 2 we review infinitary languages and develop some parts of abelian group theory in this context. For this, we use the concept of \(\theta\)-models from [27]. We also introduce some sub-languages of the infinitary languages, which play some role in our later investigation.
For Section 3, let us fix a pair \((\lambda,\theta)\) of regular cardinals and let \(\kappa\) be as in Theorem 1.3. The new object studied here is called the general frame, and its enrichment the _additive frame_. Such a frame is of the form
\[\mathbf{f}:=(M,\mathscr{L},\lambda,\kappa,\theta,\Omega),\]
where \(M\) comes from Theorem 1.3, and so is an additive \(\tau_{M}\)-model of cardinality \(\geq\kappa\), and \(\mathscr{L}\) is a class of certain formulas in the vocabulary \(\tau_{M}\). For more details, see Definition 3.1. The main result of Section 3 is Theorem 3.14. This gives us an additive frame \(\mathbf{f}\).
Section 4 is about the concept of _algebraic closure_ in a frame (see Definition 4.1). This enables us to improve Theorem 3.14, which is needed in the sequel.
In Section 5 we put all things together and present the proof of Theorem 1.3. Let \(\chi:=|M|\geq\kappa\), and let \(\mathbb{P}:=\operatorname{Col}(\aleph_{0},\chi)\). Forcing with \(\mathbb{P}\) enables us to collapse \(|M|\) into \(\aleph_{0}\), i.e., for any \(\mathbb{P}\)-generic filter \(G_{\mathbb{P}}\) over \(V\), we have
\[V[G_{\mathbb{P}}]\models\text{``}M\text{ is countable''}.\]
We show that, in \(V[G_{\mathbb{P}}]\), there exists a 1-1 map \(\pi:M\to M\) which is not surjective.
Finally, as another application of infinitary logic, we present a new construction of Hopfian groups.
We close the introduction by noting that all groups (resp. rings) are abelian (resp. commutative) unless otherwise specified, and our notation is standard and follows that in Fuchs' books [14] and [13] and Eklof-Mekler [10].
## § 2. Infinitary languages
In the first subsection, we briefly review infinitary languages and the concept of additive \(\theta\)-models, introduced in [27]. In the second subsection, we present basic properties of affine subsets, i.e., ones closed under \(x-y+z\). The main result is Proposition 2.22.
**§ 2(A)**. **A review of infinitary languages.** In this subsection we briefly review the infinitary logic, and refer to [5] and [27] for more information.
**Convention 2.1**.: Given a model \(M\), by \(\tau_{M}\) we mean the language (or vocabulary) of the model \(M\).
_Notation 2.2_.:
1. By \(\mathrm{AB}\) we mean the class of abelian groups.
2. Given a vocabulary \(\tau\) which contains two place functions \(+,-\), we define the affine operation \(\mathrm{Affine}(x,y,z)\) as the three place function \(\mathrm{Affine}(x,y,z):=x-y+z\).
**Definition 2.3**.: Let \(M\) be a model of vocabulary \(\tau_{M}\).
1. We say \(M\) is an _additive \(\theta\)-model_ when: 1. the two place function symbols \(+,-\) and the constant symbol \(0\) belong to \(\tau_{M}\), 2. \(G_{M}=(|M|,+^{M},-^{M},0^{M})\in\mathrm{AB}\), 3. \(R^{M}\) is a subgroup of \({}^{n}(G_{M})\), for any predicate symbol \(R\in\tau_{M}\) with \(\mathrm{arity(R)}=\mathrm{n}\), 4. \(F^{M}\) is a homomorphism from \({}^{n}(G_{M})\) into \(G_{M}\), for any function symbol \(F\in\tau_{M}\) with arity \(n\), 5. \(\tau_{M}\) has cardinality \(\leq\theta\).
2. For an additive \(\theta\)-model \(M\), we say \(X\subseteq M\) is _affine_ if \(X\) is closed under the affine operation \(\mathrm{Affine}(x,y,z)\). In other words, \(a-b+c\in X\) provided that \(a,b,c\in X\).
3. _We say_ \(M\) _is an affine_ \(\theta\)_-model provided:_ 1. _we do not necessarily have_ \(+,-,0\) _in the vocabulary, but only the three place function_ \(\mathrm{Affine}(x,y,z)\)_,_ 2. _if_ \(R\in\tau_{M}\) _is an_ \(n\)_-place predicate and_ \(\overline{a}_{l}=\langle a_{l,i}:i<n\rangle\in R^{M}\) _for_ \(l=0,1,2\) _and_ \[\overline{b}:=\mathrm{Affine}(\overline{a}_{0},\overline{a}_{1},\overline{a}_{2})=\big{\langle}\mathrm{Affine}(a_{0,i},a_{1,i},a_{2,i}):i<n\big{\rangle},\] _then_ \(\overline{b}\in R^{M}\)_,_ 3. _for any_ \(n\)_-place function symbol_ \(F\in\tau_{M}\) _and_ \(\overline{a}_{l}=\langle a_{l,i}:i<n\rangle\in{}^{n}M\)_, for_ \(l=0,1,2\)_, we have_ \[F^{M}(\mathrm{Affine}(\overline{a}_{0},\overline{a}_{1},\overline{a}_{2}))=\mathrm{Affine}(F^{M}(\overline{a}_{0}),F^{M}(\overline{a}_{1}),F^{M}(\overline{a}_{2})),\] 4. \(\tau_{M}\) _has cardinality_ \(\leq\theta\)_._
4. _Suppose_ \(M\) _is an affine_ \(\theta\)_-model. We say_ \(M\) _is truly affine provided that, for some fixed_ \(a\in M\) _and for the following interpretation_ * \(x+y:=\mathrm{Affine}(x,a,y)=x-a+y\), * \(x-y:=\mathrm{Affine}(x,y,a)=x-y+a\), * \(0:=a\), _we get an abelian group, and hence an additive_ \(\theta\)_-model._
We may omit \(\theta\) if it is clear from the context.
_Remark 2.4_.: i) A natural question arises: Is every affine \(\theta\)-model truly affine? This does not necessarily hold; see Example 2.5 below.
ii) More generally, we can replace \(\{+,-,\mathrm{Affine}\}\) by a set \(\tau_{f}\) of beautiful functions from [26]. The corresponding result holds in this frame.
**Example 2.5**.: Let \(G\) be an abelian group, \(H\) be a proper subgroup of it and \(a\in G\setminus H\). Define \(M\) as follows:
* the universe of \(M\) is \(a+H\),
* \(\tau_{M}:=\{+,-,\mathrm{Affine}\}\),
* \(+^{M}\) and \(-^{M}\) are \(+^{G}\upharpoonright M\) and \(-^{G}\upharpoonright M\) respectively,
* \(\mathrm{Affine}^{M}:=\mathrm{Affine}^{G}\upharpoonright M\), where \(\mathrm{Affine}^{G}=\{x-y+z:x,y,z\in G\}\).
Then the following two assertions hold:
1. \(M\) is an affine \(\aleph_{0}\)-model, isomorphic to \(H\).
2. \(M\) is not an abelian group.
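For a concrete instance, take \(G=\mathbb{Z}\), \(H=2\mathbb{Z}\) and \(a=1\). Then the universe of \(M\) is the set \(1+2\mathbb{Z}\) of odd integers, which is closed under the operation \(x-y+z\) (for example, \(3-5+7=5\)), while \(1+1=2\notin M\); so the restriction of \(+^{G}\) does not even map \(M\times M\) into \(M\), and in particular \(M\) is not an abelian group.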
**Definition 2.6**.: (1) We say a class \(K\) of models is an _additive \(\theta\)-class_, when \(M\) is an additive \(\theta\)-model for all \(M\in K\), and
\[\tau_{M}=\tau_{N}\quad\forall M,N\in K.\]
We denote the resulting common language by \(\tau_{K}\).
(2) Similarly, one can define affine \(\theta\)-classes.
**Hypothesis 2.7**.: Let \(\Omega\) be a set of cardinals with \(1\in\Omega\) and members of \(\Omega\setminus\{1\}\) are infinite cardinals.
_Notation 2.8_.:
1. By \(\overline{x}_{[u]}\) or \(\overline{x}_{u}\) we mean \(\langle x_{\alpha}:\alpha\in u\rangle.\) So, with no repetition.
2. Suppose \(\varphi(\overline{x}_{u})\) is a formula. By \(\varphi(M)\) we mean \(\{\overline{a}\in{}^{u}M:M\models\varphi[\overline{a}]\}.\)
3. For a formula \(\varphi(\overline{x}_{[u]},\overline{y}_{[v]})\) and \(\overline{b}\in{}^{v}M,\) we let \[\varphi(M,\overline{b}):=\big{\{}\overline{a}\in{}^{u}M:M\models\varphi[ \overline{a},\overline{b}]\big{\}}.\]
4. Given a sequence \(t\), by \(\lg(t)\) we mean the length of \(t\).
**Definition 2.9**.: Suppose \(\kappa\) and \(\mu\) are infinite cardinals, which we allow to be \(\infty\). The infinitary language \(\mathcal{L}_{\mu,\kappa}(\tau)\) is defined so that its vocabulary is the same as \(\tau,\) and it has the same terms and atomic formulas as in \(\tau,\) but we also allow conjunction and disjunction of length less than \(\mu\), i.e., if \(\phi_{j},\) for \(j<\beta<\mu\), are formulas, then so are \(\bigvee_{j<\beta}\phi_{j}\) and \(\bigwedge_{j<\beta}\phi_{j}\). We also allow quantification over less than \(\kappa\) many variables (i.e., if \(\phi=\phi((v_{i})_{i<\alpha})\), where \(\alpha<\kappa,\) is a formula, then so are \(\forall_{i<\alpha}v_{i}\phi\) and \(\exists_{i<\alpha}v_{i}\phi\)).
Note that \(\mathcal{L}_{\omega,\omega}(\tau)\) is just first-order logic with vocabulary \(\tau.\) Given \(\kappa,\mu\) and \(\tau\) as above, we are sometimes interested in some special formulas from \(\mathcal{L}_{\mu,\kappa}(\tau).\)
**Definition 2.10**.: Suppose \(\kappa\) and \(\lambda\) are infinite cardinals or possibly \(\infty\). We define the logic \(\mathscr{L}_{\lambda,\kappa,\Omega}\) as follows:
1. For a vocabulary \(\tau\), the language \(\mathscr{L}_{\lambda,\kappa,\Omega}(\tau)\) is defined as the set of formulas with \(<\theta\) free variables (without loss of generality they are subsets of \(\{x_{\zeta}:\zeta<\theta\}\), see Discussion 2.11) which is the closure of the set of basic formulas, i.e., atomic and the negation of atomic formulas, under:
1. conjunction of \(<\lambda\) formulas, 2. disjunction of \(<\lambda\) formulas, 3. For any \(\sigma\in\Omega\), 1. \(\varphi(\overline{x}):=(\exists^{\sigma}\overline{x}^{\prime})\psi(\overline{x},\overline{x}^{\prime})\), or 2. \(\varphi(\overline{x}):=(\forall^{\sigma}\overline{x}^{\prime})\psi(\overline{x},\overline{x}^{\prime})\), where \(\psi(\overline{x},\overline{x}^{\prime})\) is a formula. We usually omit \(\sigma\), if \(\sigma=1\). We usually omit \(\Omega\) if it is clear from the context.
2. Satisfaction is defined as usual, where for the formulas \(\varphi(\overline{x}):=(\exists^{\sigma}\overline{x}^{\prime})\psi(\overline {x},\overline{x}^{\prime})\) and \(\varphi(\overline{x}):=(\forall^{\sigma}\overline{x}^{\prime})\psi(\overline {x},\overline{x}^{\prime})\), it is defined as: 1. If \(\varphi(\overline{x}):=(\exists^{\sigma}\overline{x}^{\prime})\psi(\overline {x},\overline{x}^{\prime})\), \(M\) is a \(\tau\)-model, and \(\overline{a}\in{}^{\lg(\overline{x})}M\), then \[M\models\varphi(\overline{a})\] if and only if there are \(\overline{b}_{\varepsilon}\in{}^{\lg(\overline{x}^{\prime})}M\) for all \(\varepsilon<\sigma\) pairwise distinct such that \(M\models\psi(\overline{a},\overline{b}_{\varepsilon})\) for all \(\varepsilon<\sigma\). 2. If \(\varphi(\overline{x}):=(\forall^{\sigma}\overline{x}^{\prime})\psi(\overline {x},\overline{x}^{\prime})\), then \[M\models\varphi(\overline{x})\iff M\models\neg\big{[}\exists^{\sigma}\overline {x}^{\prime}\neg\big{(}\psi(\overline{x},\overline{x}^{\prime})\big{)}\big{]}.\] Note that \(\neg(\psi(\overline{x},\overline{x}^{\prime}))\) is not necessarily in \(\mathscr{L}_{\lambda,\kappa,\Omega}(\tau)\).
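For instance, over the vocabulary \(\{+,-,0\}\) of abelian groups and with \(\sigma=\aleph_{0}\in\Omega\), an abelian group \(G\) satisfies \((\exists^{\aleph_{0}}y)(y+y=0)\) exactly when there are \(\aleph_{0}\) pairwise distinct elements \(b\in G\) with \(b+b=0\), i.e., when \(G\) has infinitely many elements of order dividing two.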
**Discussion 2.11**.: Given a formula \(\varphi\) in \(\mathcal{L}_{\infty,\theta}(\tau)\) with free variables \(\overline{x}_{\varphi}\), we can always assume that \(\overline{x}_{\varphi}=\langle x_{\zeta}:\zeta\in u_{\varphi}\rangle\), for some \(u_{\varphi}\in[\theta]^{<\theta}\). The key point is that if \(\varphi=\varphi(\overline{x})\), where \(\overline{x}=\langle x_{\zeta}:\zeta\in w\rangle\), where \(w\) is a set of ordinals of size less than \(\theta\), and if \(f:w\leftrightarrow u\) is a bijection where \(u\in[\theta]^{<\theta}\), and
\[\psi(\overline{x})\equiv\operatorname{Sub}_{f}^{\overline{x}}(\varphi),\]
where \(\operatorname{Sub}_{f}^{\overline{x}}(\varphi)\) is essentially the formula obtained from \(\varphi\) by replacing the variable \(x_{\zeta}\) by \(x_{f(\zeta)}\), then if \(\bar{a}\in^{w}\!M\), \(\bar{b}\in^{u}\!M\) and \(a_{\zeta}=b_{f(\zeta)}\), for \(\zeta\in w\), then
\[M\models\varphi(\bar{a})\iff M\models\psi(\bar{b}).\]
We can similarly assume that all bounded variables are from \(\{x_{i}:i<\theta\}\).
**Convention 2.12**.: In what follows, saying closed under \(\exists\) (resp. \(\forall\)) means under all \(\exists^{\sigma}\) (resp. \(\forall^{\sigma}\)).
In the next definition, we consider some classes of infinitary formulas that we will work with later.
**Definition 2.13**.: Suppose \(\theta\) is an infinite cardinal, or \(\infty\), and suppose \(\tau\) is a language. Here, we collect some infinitary sub-classes of the language \(\mathscr{L}_{\infty,\theta}(\tau)\):
1. \(\mathscr{L}_{\infty,\theta}^{\mathrm{cop}}(\tau)\) is the class of conjunction-positive formulas, i.e., the closure of atomic formulas under \(\bigwedge,\exists,\forall\).
2. \(\mathscr{L}_{\infty,\theta}^{\mathrm{cpe}}(\tau)\) is the class of conjunction-positive existential formulas, i.e., the closure of atomic formulas under \(\bigwedge\) and \(\exists\).
3. \(\mathscr{L}_{\infty,\theta}^{\mathrm{co}}(\tau)\) is the closure of atomic formulas and \(x_{i}\neq x_{j}\) under \(\bigwedge\), \(\exists\) and \(\forall\).
4. \(\mathscr{L}_{\infty,\theta}^{\mathrm{ce}}(\tau)\) is the closure of atomic formulas and \(x_{i}\neq x_{j}\) under \(\bigwedge\) and \(\exists\).
We shall use freely the following simple fact.
**Fact 2.14**.: \(\mathscr{L}_{\infty,\theta}^{\mathrm{co}}(\tau)\supseteq\mathscr{L}_{\infty, \theta}^{\mathrm{cop}}(\tau)\cup\mathscr{L}_{\infty,\theta}^{\mathrm{ce}}( \tau)\supseteq\mathscr{L}_{\infty,\theta}^{\mathrm{cop}}(\tau)\cap\mathscr{L} _{\infty,\theta}^{\mathrm{ce}}(\tau)\supseteq\mathscr{L}_{\infty,\theta}^{ \mathrm{cpe}}(\tau)\)_._
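To illustrate these classes over the vocabulary \(\{+,-,0\}\) of abelian groups: the formula \((\exists y)(x=y+y)\) lies in \(\mathscr{L}_{\infty,\theta}^{\mathrm{cpe}}\) and defines the subgroup \(2G\) in any abelian group \(G\); the sentence \((\forall y)(\exists z)(y=z+z)\) lies in \(\mathscr{L}_{\infty,\theta}^{\mathrm{cop}}\) and expresses \(2\)-divisibility; and \((\exists y)(\exists z)(y\neq z)\), which says that the model has at least two elements, uses an inequality and hence lies in \(\mathscr{L}_{\infty,\theta}^{\mathrm{ce}}\) but not in \(\mathscr{L}_{\infty,\theta}^{\mathrm{cpe}}\).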
The following lemma is easy to prove.
**Lemma 2.15**.: _Assume \(M\) is an additive \(\theta\)-model, \(\tau=\tau_{M}\) and \(\varphi(\overline{x}_{u})\in\mathscr{L}_{\infty,\infty}(\tau)\) with \(\varepsilon=\lg(\overline{x})\). The following assertions are valid:_
1. _If_ \(\varphi(\overline{x}_{u})\in\mathscr{L}_{\infty,\infty}^{\mathrm{cop}}(\tau)\)_, then_ \(\varphi(M)\) _is a sub-group of_ \({}^{u}M\)_._
2. _If_ \(\varphi(\overline{x}_{u})\in\mathscr{L}_{\infty,\infty}^{\mathrm{cpe}}(\tau)\)_,_ \(f\in\mathrm{End}(M)\) _and_ \(M\models\varphi[\overline{a}]\)_, then_ \(M\models\varphi[f(\overline{a})]\)_._
3. _If_ \(\varphi(\overline{x}_{u})\in\mathscr{L}_{\infty,\theta}^{\mathrm{cpe}}(\tau)\)_,_ \(M,N\) _are_ \(\tau\)_-models and_ \(f:M\to N\) _is a homomorphism, then_ \(f\) _maps_ \(\varphi(M)\) _into_ \(\varphi(N)\)_._
4. _If_ \(\varphi(\overline{x}_{u})\in\mathscr{L}_{\infty,\theta}^{\mathrm{ce}}(\tau)\)_,_ \(M,N\) _are_ \(\tau\)_-models and_ \(f:M\to N\) _is a 1-1 homomorphism, then_ \(f\) _maps_ \(\varphi(M)\) _into_ \(\varphi(N)\)_._
5. _If_ \(\varphi(\overline{x}_{u})\in\mathscr{L}_{\infty,\theta}^{\mathrm{co}}(\tau)\)_,_ \(M,N\) _are_ \(\tau\)_-models and_ \(f:M\to N\) _is a bijection, then_ \(f\) _is an isomorphism from_ \(\varphi(M)\) _onto_ \(\varphi(N)\)_._
6. _Assume_ \(\psi(\bar{y})\) _is obtained from_ \(\varphi(\bar{x})\) _by adding dummy variables, permuting the variables and substitution not identifying variables. Then_ \[\psi(\bar{y})\in\mathscr{L}_{\infty,\theta}^{*}(\tau)\iff\varphi(\bar{x})\in \mathscr{L}_{\infty,\theta}^{*}(\tau),\] _where_ \(*\in\{\mathrm{cop},\mathrm{cpe},\mathrm{ce},\mathrm{co}\}\)_._
Proof.: The proof is by induction on the complexity of the formulas. For completeness, we sketch the proof. If the formula is an atomic formula, then it is evident that all of the above items are satisfied. It is also easy to see that each item is preserved under \(\bigwedge\), in the sense that if \(\psi=\bigwedge_{i\in I}\varphi_{i}\) is well-defined and the lemma holds for each \(\varphi_{i}\), then it holds for \(\psi\).
We now consider the case where \(\psi(\bar{x})=(\exists^{\sigma}\bar{y})\varphi(\bar{x},\bar{y})\), and assume the induction hypothesis holds for \(\varphi.\) We consider each clause separately, assuming in each case, the formula \(\varphi\) is in the assumed language.
**Clause (1)**: Suppose \(\varphi(M)\) is a subgroup of \({}^{\lg(\bar{x})+\lg(\bar{y})}M\). We show that \(\psi(M)\) is a subgroup of \({}^{\lg(\bar{x})}M\). To see this, let \(\bar{a}_{0},\bar{a}_{1}\in\psi(M)\). Then for some \(\bar{b}_{0}\) and \(\bar{b}_{1}\) we have
\[M\models\text{``}\varphi[\bar{a}_{0},\bar{b}_{0}]\text{ and }\varphi[\bar{a}_{1}, \bar{b}_{1}]^{\prime\prime}.\]
By induction, \(M\models\varphi[\bar{a}_{0}-\bar{a}_{1},\bar{b}_{0}-\bar{b}_{1}]\), hence
\[M\models\psi[\bar{a}_{0}-\bar{a}_{1}].\]
Thus \(\bar{a}_{0}\)-\(\bar{a}_{1}\in\psi(M)\).
**Clause (2)**: Suppose \(M\models\)\(\psi[\bar{a}]\). Then for some \(\bar{b}\), we have \(M\models\varphi[\bar{a},\bar{b}]\). By the induction, \(M\models\varphi[f(\bar{a}),f(\bar{b})]\), and hence \(M\models\psi[f(\bar{a})]\), as requested.
**Clause (3)**: As in clause (2), we can show that if \(M\models\)\(\psi[\bar{a}]\), then \(N\models\)\(\psi[f(\bar{a})]\), and this gives the required result.
**Clause (4)**: As in clause (3). The assumption of \(f\) being 1-1 is used to show that if \(x_{i}\neq x_{j}\), then \(f(x_{i})\neq f(x_{j})\).
**Clause (5)**: As in clause (4).
**Clause (6)**: This is easy.
Finally, suppose that \(\psi(\bar{x})=(\forall^{\sigma}\bar{y})\varphi(\bar{x},\bar{y})\), and assume the induction hypothesis holds for \(\varphi\). We only have to consider items (1), (5) and (6).
**Clause (1)**: Suppose \(\varphi(M)\) is a subgroup of \({}^{\lg(\bar{x})+\lg(\bar{y})}M\). We show that \(\psi(M)\) is a subgroup of \({}^{\lg(\bar{x})}M\). To see this, let \(\bar{a}_{0},\bar{a}_{1}\in\psi(M)\). We have to show that \(\bar{a}_{0}\)-\(\bar{a}_{1}\in\psi(M)\). Thus let \(\bar{b}\in^{\lg(\bar{y})}M\). By the induction hypothesis,
\[M\models\text{``}\varphi[\bar{a}_{0},\bar{b}]\text{ and }\varphi[\bar{a}_{1}, \bar{0}]^{\prime\prime}.\]
Thanks to induction, \(M\models\varphi[\bar{a}_{0}-\bar{a}_{1},\bar{b}-\bar{0}]\). As this holds for all \(\bar{b}\), we have
\[M\models\psi[\bar{a}_{0}-\bar{a}_{1}].\]
Thus \(\bar{a}_{0}\)-\(\bar{a}_{1}\in\psi(M)\), as requested.
**Clause (5)**: As before, we can easily show that \(f\) maps \(\psi(M)\) into \(\psi(N).\) To see it is onto, let \(\bar{c}\in\psi(N)\). Then \(N\models\psi(\bar{c})\). As \(f\) is onto, for some \(\bar{a}\) we have \(\bar{c}=f(\bar{a})\). We have to show that \(\bar{a}\in\psi(M)\). Thus let \(\bar{b}\in^{\lg(\bar{y})}M\). Then \(\bar{d}=f(\bar{b})\in^{\lg(\bar{y})}N\), and by our assumption, \(N\models\varphi(\bar{c},\bar{d})\). As \(f\) is an isomorphism, \(M\models\varphi(\bar{a},\bar{b})\). As \(\bar{b}\) was arbitrary, \(M\models\psi(\bar{a})\), i.e., \(\bar{a}\in\psi(M)\).
**Clause (6)**: This is easy.
The lemma follows.
Let us repeat the above result in the context of \(R\)-modules:
**Corollary 2.16**.: _Let \(M\) be an \(R\)-module._
1. _If_ \(\varphi(\overline{x}_{u})\in\mathscr{L}_{\infty,\theta}^{\rm{cpe}},\) _then_ \(\varphi(M)\) _is an abelian subgroup of_ \((^{u}M,+).\)
2. _Similar result holds for formulas_ \(\varphi(\overline{x}_{u})\in\mathscr{L}_{\infty,\theta}^{\rm{cop}}.\)__
_Furthermore, if \(R\) is commutative, then in the above, \(\varphi(M)\) becomes a sub-module of \((^{u}M,+).\)_
_Remark 2.17_.: If \(R\) is not commutative, then \(\varphi(M)\) is not necessarily a submodule. To see this, suppose \(a,b,c\in R\) are such that \(abc\neq bac\), and suppose \(M\) is a left \(R\)-module. Define \(F_{a}:M\to M\) as \(F_{a}(x)=ax\). Now note that \((c,ac)\in F_{a}(M).\) If \(F_{a}(M)\) is a submodule, then we must have \((bc,bac)\in F_{a}(M).\) By definition, \((bc,bac)=F_{a}(x)=(x,ax)\) for some \(x\in M\). It then follows that \(x=bc\) and hence
\[bac=ax=abc,\]
which contradicts \(abc\neq bac.\)
**§ 2(B)**. **More on affineness.** In this subsection we try replacing subgroups by affine subsets. The main result of this subsection is Proposition 2.22. First, we fix the hypothesis and present the corresponding definitions. Here, the affinity demand relates only to the formulas, not to the content.
**Hypothesis 2.18**.:
1. \(R\) is a ring,
2. \(M\) is an \(R\)-module,
3. \(\kappa,\theta\) are as in Definition 3.1.
**Definition 2.19**.: Let \(\mathrm{Affine}_{1}\) be the set of all formulas \(\varphi(\overline{x})\in\mathscr{L}_{\infty,\theta}(\tau_{M})\) so that \(\lg(\overline{x})<\theta\) and \(\varphi(M)\) is closed under \(\overline{x}-\overline{y}+\overline{z}\). In other words, \(\overline{a}-\overline{b}+\overline{c}\in\varphi(M)\) provided that \(\overline{a},\overline{b},\overline{c}\in\varphi(M)\).
We now define another class \(\mathrm{Affine}_{2}\) of formulas of \(\mathscr{L}_{\infty,\theta}(\tau_{M})\), and show that it is included in \(\mathrm{Affine}_{1}\). To this end, we first make the following definition.
**Definition 2.20**.: Suppose \(\alpha_{*}\) is an ordinal. Let \(\varphi(\overline{x},\overline{y})\) and \(\overline{\psi}(\overline{x},\overline{y})=\langle\psi_{\alpha}(\overline{x},\overline{y}):\alpha<\alpha_{*}\rangle\) be a sequence of formulas from \(\mathscr{L}_{\infty,\theta}(\tau_{M})\). Let \(\overline{b}\in{}^{\lg(\overline{x})}M\) and \(\overline{a}\in{}^{\lg(\overline{y})}M\). Then we set
1. \(\mathrm{set}_{\overline{\psi}}(\overline{b},\overline{a})\) stands for the following set \[\mathrm{set}_{\overline{\psi}}(\overline{b},\overline{a}):=\big{\{}\alpha\in \alpha_{*}:(\widehat{b}\ \overline{a}\in\psi_{\alpha}(M))\big{\}}.\]
2. By \(\mathrm{Set}_{\varphi,\overline{\psi}}(\overline{a})\) we mean \[\mathrm{Set}_{\varphi,\overline{\psi}}(\overline{a}):=\big{\{}u\subseteq \alpha_{*}:\text{for some }\overline{c}\in\varphi(M,\overline{a})\text{ we have }u=\mathrm{set}_{\overline{\psi}}( \overline{c},\overline{a})\big{\}}.\]
3. By \(\mathrm{inter}_{\varphi,\overline{\psi}}(\overline{a})\) we mean \[\bigg{\{}(w_{0},w_{1}):w_{0}\subseteq w_{1}\subseteq\alpha_{*}\text{ and }\exists\ u_{0},u_{1}\in\mathrm{Set}_{\varphi,\overline{\psi}}(\overline{a}) \text{ s.t. }w_{1}\subseteq u_{1}\text{ and }u_{0}\cap w_{1}=w_{0}\bigg{\}}.\]
We are now ready to define the class \(\mathrm{Affine}_{2}\) of formulas:
**Definition 2.21**.: Let \(\mathrm{Affine}_{2}\) be the closure of the set of atomic formulas by:
1. arbitrary conjunctions,
2. existential quantifier \(\exists\overline{x}\), and
3. suppose for a given ordinal \(\alpha_{*}\), the formulas \(\varphi(\overline{x},\overline{y}),\langle\psi_{\alpha}(\overline{x},\overline {y}):\alpha<\alpha_{*}\rangle\) are taken from Affine\({}_{2}\) such that \(\varphi(\overline{x},\overline{y})\geq\psi_{\alpha}(\overline{x},\overline{y})\) for all \(\alpha<\alpha_{*}\). Also suppose that \[\Upsilon\subseteq\{(w_{0},w_{1}):w_{0}\subseteq w_{1}\subseteq\alpha_{*}\}.\] Then \(\vartheta(\overline{y})=\Theta_{\varphi,\overline{\psi},\Upsilon}(\overline {y})\in\text{Affine}_{2}\), where \(\vartheta(\overline{y})\) is defined such that \[M\models\vartheta[\overline{a}]\iff\Upsilon\subseteq\text{inter}_{\varphi, \overline{\psi}}(\overline{a}).\]
The main result of this section is the following.
**Proposition 2.22**.: _Adopt the previous notation. Then \(\text{Affine}_{2}\subseteq\text{Affine}_{1}.\)_
Proof.: We prove the proposition in a sequence of claims. We proceed by induction on the complexity of the formula \(\vartheta\), showing that if \(\vartheta\in\text{Affine}_{2}\), then \(\vartheta\in\text{Affine}_{1}\). This is clear if \(\vartheta\) is an atomic formula. Suppose \(\vartheta=\bigwedge_{i\in I}\vartheta_{i}\), and the claim holds for all \(\vartheta_{i},i\in I\). It is then clear from the induction hypothesis that \(\vartheta\in\text{Affine}_{1}\) as well. Similarly, if \(\vartheta=\exists\overline{x}\varphi(\overline{x})\), and if the claim holds for \(\varphi(\overline{x})\), then clearly it holds for \(\vartheta\).
Now suppose that \(\alpha_{*}\) is an ordinal, \(\varphi(\overline{x},\overline{y}),\langle\psi_{\alpha}(\overline{x}, \overline{y}):\alpha<\alpha_{*}\rangle\) are in Affine\({}_{2}\), such that \(\varphi(\overline{x},\overline{y})\geq\psi_{\alpha}(\overline{x},\overline{y})\) for all \(\alpha<\alpha_{*}.\) Also, suppose that
\[\Upsilon\subseteq\{(w_{0},w_{1}):w_{0}\subseteq w_{1}\subseteq\alpha_{*}\}.\]
Assume by the induction hypothesis that the formulas \(\varphi(\overline{x},\overline{y})\) and \(\psi_{\alpha}(\overline{x},\overline{y})\), for \(\alpha<\alpha_{*}\), are in Affine\({}_{1}.\) We have to show that \(\vartheta(\overline{y})=\Theta_{\varphi,\overline{\psi},\Upsilon}(\overline {y})\in\text{Affine}_{1}\) as well.
Now, we bring the following claim:
**Claim 2.23**.: _Adopt the above notation. Assume \(\overline{a}_{l}\in{}^{\lg(\overline{y})}M\) for \(l=0,1,2,3\) and \(\overline{a}_{3}=\overline{a}_{0}-\overline{a}_{1}+\overline{a}_{2}.\) If \(u_{j}\in\mathrm{Set}_{\varphi,\overline{\psi}}(\overline{a}_{j})\) for \(j=0,1\) and \(u=u_{0}\cap u_{1}\), then_
\[\big{\{}w\cap u:w\in\text{Set}_{\varphi,\overline{\psi}}(\overline{a}_{2}) \big{\}}=\big{\{}w\cap u:w\in\text{Set}_{\varphi,\overline{\psi}}(\overline{a }_{3})\big{\}}.\]
Proof.: Let \(\overline{b}_{0}\in\varphi(M,\overline{a}_{0})\) and \(\overline{b}_{1}\in\varphi(M,\overline{a}_{1})\) be such that \(u_{0}=\text{set}_{\overline{\psi}}(\overline{b}_{0},\overline{a}_{0})\) and \(u_{1}=\text{set}_{\overline{\psi}}(\overline{b}_{1},\overline{a}_{1})\). Suppose that \(w\in\text{Set}_{\varphi,\overline{\psi}}(\overline{a}_{2})\). Then for some \(\overline{b}_{2}\in\varphi(M,\overline{a}_{2})\) we have \(w=\text{set}_{\overline{\psi}}(\overline{b}_{2},\overline{a}_{2})\). We have to find \(w^{\prime}\in\text{Set}_{\varphi,\overline{\psi}}(\overline{a}_{3})\) such that \(w^{\prime}\cap u=w\cap u\).
Set \(\overline{b}_{3}:=\overline{b}_{0}-\overline{b}_{1}+\overline{b}_{2}\). Note that
\[l=0,1,2\implies\widehat{b_{l}}\ \overline{a}_{l}\in\varphi(M),\]
hence, as \(\varphi\in\mathrm{Affine}_{1}\), we have
\[\widehat{b_{0}}\ \overline{a}_{0}-\widehat{b_{1}}\ \overline{a}_{1}+\widehat{b_{2}}\ \overline{a}_{2}\in\varphi(M).\]
Clearly,
\[\widehat{b_{3}}\ \overline{a}_{3}=\widehat{b_{0}}\ \overline{a}_{0}-\widehat{b_{1}} \ \overline{a}_{1}+\widehat{b_{2}}\ \overline{a}_{2}\in\varphi(M).\]
According to its definition, \(\overline{b}_{3}\in\varphi(M,\overline{a}_{3})\).
Let \(w^{\prime}=\mathrm{set}_{\overline{\psi}}(\overline{b}_{3},\overline{a}_{3})\). We show that \(w^{\prime}\cap u=w\cap u.\) Suppose \(\alpha\in u.\) Then, we have \(\alpha\in u_{0}\cap u_{1},\) and hence \(\widehat{b_{j}}\ \overline{a}_{j}\in\psi_{\alpha}(M)\), for \(j=0,1\). Thus as \(\psi_{\alpha}\in\mathrm{Affine}_{1}\), we have
\[\alpha\in w^{\prime} \iff\widehat{b_{3}}\ \overline{a}_{3}\in\psi_{\alpha}(M)\] \[\iff\widehat{b_{2}}\ \overline{a}_{2}\in\psi_{\alpha}(M)\] \[\iff\alpha\in w.\]
Suppose \(w^{\prime}\in\mathrm{Set}_{\varphi,\overline{\psi}}(\overline{a}_{3}).\) By symmetry, \(w\cap u=w^{\prime}\cap u\) for some \(w\in\mathrm{Set}_{\varphi,\overline{\psi}}(\overline{a}_{2}).\) The claim follows.
Let us apply the previous claim and observe that:
**Claim 2.24**.: _Let \(\varepsilon=\lg(\overline{y})<\theta\) and \(\overline{a}_{l}\in{}^{\varepsilon}M\) for \(l=0,1,2\), and set \(\overline{a}_{3}:=\overline{a}_{0}-\overline{a}_{1}+\overline{a}_{2}\). If \(\Upsilon\subseteq\bigcap_{l\leq 2}\mathrm{inter}_{\varphi,\overline{\psi}}( \overline{a}_{l})\), then \(\Upsilon\subseteq\mathrm{inter}_{\varphi,\overline{\psi}}(\overline{a}_{3}).\)_
Proof.: Let \((w_{0},w_{1})\in\Upsilon\). We shall prove that \((w_{0},w_{1})\in\mathrm{inter}(\overline{a}_{3}).\) For \(j\leq 2,\) as \((w_{0},w_{1})\in\Upsilon\subseteq\mathrm{inter}_{\varphi,\overline{\psi}}( \overline{a}_{j}),\) there is a pair \(u_{j,0},u_{j,1}\in\mathrm{set}_{\varphi,\overline{\psi}}(\overline{a}_{j})\) witnessing it. Namely, we have
\[w_{1}\subseteq u_{j,1}\ \mathrm{and}\ u_{j,0}\cap w_{1}=w_{0}.\]
Now, we can find \(\overline{b}_{j,0},\overline{b}_{j,1}\) such that \(\mathrm{set}(\overline{b}_{\mathrm{j},0},\overline{a}_{\mathrm{j}})=\mathrm{u }_{\mathrm{j},0}\) and \(\mathrm{set}(\overline{b}_{\mathrm{j},1},\overline{a}_{\mathrm{j}})=\mathrm{u }_{\mathrm{j},1}.\) Set
* \(\overline{b}_{3,0}:=\overline{b}_{0,0}-\overline{b}_{1,0}+\overline{b}_{2,0},\)
* \(\overline{b}_{3,1}:=\overline{b}_{0,1}-\overline{b}_{1,1}+\overline{b}_{2,1}.\)
In the light of Claim 2.23, there exist \(u_{3,1}\in\mathrm{set}_{\varphi,\overline{\psi}}(\overline{b}_{3})\) and \(u_{3,0}\in\mathrm{set}_{\varphi,\overline{\psi}}(\overline{b}_{3})\) such that the following two equalities are valid:
* \(u_{3,1}\cap(u_{0,1}\cap u_{1,1})=u_{2,1}\cap(u_{0,1}\cap u_{1,1})\), and
* \(u_{3,0}\cap(u_{0,0}\cap u_{1,0})=u_{2,0}\cap(u_{0,0}\cap u_{1,0})\).
By clause (1), we have \(u_{3,1}\supseteq w_{1}\). Hence
\[w_{0} \subseteq u_{3,1}\cap w_{1}\] \[=u_{3,1}\cap(u_{0,1}\cap u_{1,1})\cap w_{1}\] \[=\big{(}u_{3,1}\cap(u_{0,1}\cap u_{1,1})\big{)}\cap(u_{0,1}\cap u _{1,1})\cap w_{1}\] \[\stackrel{{(2)}}{{=}}\big{(}u_{2,1}\cap(u_{0,1}\cap u _{1,1})\big{)}\cap(u_{0,1}\cap u_{1,1})\cap w_{1}\] \[\subseteq w_{0},\]
and so
\[u_{3,1}\cap w_{1}=w_{0}.\]
The claim follows.
Now, we are ready to complete the proof of Proposition 2.22. To this end, we fix the following data:
* \(\vartheta(\overline{y})=\theta_{\varphi,\overline{\psi},\Upsilon}(\overline{y})\),
* \(\overline{a}_{0},\overline{a}_{1},\overline{a}_{2}\in\vartheta(M)\),
* \(\overline{a}_{3}:=\overline{a}_{0}-\overline{a}_{1}+\overline{a}_{2}\).
This gives us \(\Upsilon\subseteq\text{inter}_{\varphi,\overline{\psi}}(\overline{a}_{l})\) for \(l\leq 2\). Thanks to Claim 2.24, we know \(\Upsilon\subseteq\text{inter}_{\varphi,\overline{\psi}}(\overline{a}_{3})\). According to Definition 2.21(c) one has \(\overline{a}_{3}\in\vartheta(M)\). Consequently, \(\vartheta(\overline{y})\in\text{Affine}_{1}\), and the proposition follows.
## § 3. Additive frames
In this section we introduce the concept of an additive frame. Each additive frame contains, among other things, an abelian group. We will show that each abelian group can be realized in this way. In particular, the main result of this section is Theorem 3.14.
The following is one of our main new frameworks:
**Definition 3.1**.:
* We say \[\mathbf{f}:=(M_{\mathbf{f}},\mathscr{L}_{\mathbf{f}},\lambda_{\mathbf{f}}, \kappa_{\mathbf{f}},\theta_{\mathbf{f}},\Omega_{\mathbf{f}})=(M,\mathscr{L}, \lambda,\kappa,\theta,\Omega)\] is a _general frame_ if:
1. \(M\) is a \(\tau_{M}\)-model. 2. \(\mathscr{L}\) is a class or set of formulas in the vocabulary \(\tau_{M}\), such that each \(\varphi\in\mathscr{L}\) has the form \(\varphi(\overline{x}),\ \overline{x}\) of length \(<\theta\). 3. For every \(\overline{a}\in{}^{\varepsilon}M,\ \varepsilon<\theta\), there is a formula \(\varphi_{\overline{a}}(\overline{x})\in\mathscr{L}\) such that: 1. \(\overline{a}\in\varphi_{\overline{a}}(M)\), 2. (the minimality condition) if \(\psi(\overline{x})\in\mathscr{L}\) and \(\overline{a}\in\psi(M)\), then \(\varphi_{\overline{a}}(M)\subseteq\psi(M)\). 4. 1. if \(\varphi_{\alpha}(\overline{x})\in\mathscr{L}\) for \(\alpha<\kappa\), then for some \(\alpha<\beta<\kappa\), we have \(\varphi_{\alpha}(M)\supseteq\varphi_{\beta}(M)\), 2. if \(\varphi_{\alpha,\beta}(\overline{x},\overline{y})\in\mathscr{L}\) for \(\alpha<\beta<\lambda\), then for some \(\alpha_{1}<\alpha_{2}<\alpha_{3}<\lambda\) we have \(\varphi_{\alpha_{1},\alpha_{2}}(M)\supseteq\varphi_{\alpha_{1},\alpha_{3}}(M), \varphi_{\alpha_{2},\alpha_{3}}(M)\). 5. \(\lambda,\kappa\) are regular and \(\lambda\geq\kappa\geq\theta\geq|\Omega|+|\tau_{M}|\), where \(\Omega\) is a set of cardinals such that \(1\in\Omega\) and all other cardinals in it are infinite.
2. We say a general frame \(\mathbf{f}\) is an _additive frame_ if in addition, it satisfies: 1. \((|M|,+^{M},-^{M},0^{M})\) is an abelian group. Moreover, \(M\) is an additive \(\theta\)-model. 2. If \(\varphi(\overline{x}_{u})\in\mathscr{L}\), then \(\varphi(M)\subseteq{}^{u}M\) is a sub-group of \(M\).
3. An additive frame \(\mathbf{f}\) is an _additive\({}^{+}\) frame_ if \(M_{\mathbf{f}}\) has cardinality greater or equal to \(\lambda\).
_Remark 3.2_.: Given a general frame \(\mathbf{f}\) as above, we always assume that the language \(\mathscr{L}\) is closed under permutation of variables, adding dummy variables and finite conjunction.
The next lemma is a criterion for an additive frame to be additive\({}^{+}\).
**Lemma 3.3**.: _Suppose \(\mathbf{f}=(M,\mathscr{L},\lambda,\kappa,\theta,\Omega)\) is an additive frame. Then it is an additive\({}^{+}\) frame if and only if for each \(\varepsilon\in(0,\theta)\), there exists some \(\bar{a}\in^{\varepsilon}M\) such that \(\varphi_{\bar{a}}\) has cardinality \(\geq\lambda\)._
Proof.: The assumption clearly implies that \(\mathbf{f}\) is additive\({}^{+}\). To see the other direction, suppose \(\mathbf{f}\) is an additive\({}^{+}\) frame and \(\varepsilon\in(0,\theta).\) Suppose, by way of contradiction, that \(|\varphi_{\bar{a}}|<\lambda\) for all \(\bar{a}\in^{\varepsilon}\!M\). By induction on \(\beta<\kappa\) we can find a sequence \(\langle\bar{a}_{\beta}:\beta<\kappa\rangle\) such that for each \(\beta<\kappa\), \(\bar{a}_{\beta}\notin\bigcup_{\alpha<\beta}\varphi_{\bar{a}_{\alpha}}\). This contradicts Definition 3.1(4)(a).
The following defines a partial order relation on formulas of a frame.
**Definition 3.4**.: Assume \(\mathbf{f}\) is a general frame, and let \(\psi(\overline{x}),\varphi(\overline{x})\) be in \(\mathscr{L}_{\mathbf{f}}\).
1. We say \(\psi(\overline{x})\leq\varphi(\overline{x})\) if \(\varphi(M)\supseteq\psi(M)\).
2. We say \(\psi(\overline{x})\) and \(\varphi(\overline{x})\) are equivalent, denoted by \(\psi(\overline{x})\equiv\varphi(\overline{x})\), if \(\varphi(M)=\psi(M)\).
3. Suppose \(\overline{a},\overline{b}\in{}^{\varepsilon}M\). We say \(\overline{a}\leq\overline{b}\) (resp. \(\overline{a}\equiv\overline{b}\)) if \(\varphi_{\overline{a}}\leq\varphi_{\overline{b}}\) (resp. \(\varphi_{\overline{a}}\equiv\varphi_{\overline{b}}\)).
_Notation 3.5_.: Assume \(\mathbf{f}=(M,\lambda,\kappa,\theta,\Omega)\) is an additive frame. Let \(\overline{a}_{l}=\langle a_{l,\zeta}:\zeta<\varepsilon\rangle\in{}^{ \varepsilon}M\) for \(l<n\). We set:
* \(-\overline{a}_{l}:=\langle-a_{l,\zeta}:\zeta<\varepsilon\rangle\),
* \(\overline{a}_{1}+\overline{a}_{2}:=\langle a_{1,\zeta}+a_{2,\zeta}:\zeta< \varepsilon\rangle\),
* \(\sum_{l<n}\overline{a}_{l}:=\langle\sum_{l<n}a_{l,\zeta}:\zeta<\varepsilon\rangle\),
* \(\overline{a}-\overline{b}:=\overline{a}+(-\overline{b})\).
**Lemma 3.6**.: _Suppose \(\mathbf{f}=(M,\lambda,\kappa,\theta,\Omega)\) is an additive frame, \(\overline{a}\in{}^{\varepsilon}M,\varphi=\varphi_{\overline{a}}\) and \(\overline{a}_{\alpha}\in\varphi(M)\) for \(\alpha<\lambda\). Then for some \(\overline{\beta},\overline{\gamma}\) we have:_
_(a) \(\overline{\beta}=\langle\beta_{i}:i<\lambda\rangle\in{}^{\lambda}\lambda\) is increasing,_
_(b) \(\overline{\gamma}=\langle\gamma_{i}:i<\lambda\rangle\in{}^{\lambda}\lambda\) is increasing,_
_(c) \(\beta_{i}<\gamma_{i}<\beta_{i+1},\) for all \(i<\lambda\),_
_(d) \(\overline{a}-\overline{a}_{\beta_{i}}+\overline{a}_{\gamma_{i}}\) is equivalent to \(\overline{a}\), for all \(i<\lambda\)._
Proof.: First, we reduce the lemma to the following claim:
\((*)\) It suffices to prove that, for all sequences \(\overline{a},\langle\overline{a}_{\alpha}:\alpha<\lambda\rangle\) as above, there are \(\beta<\gamma<\lambda\) such that \(\overline{a}-\overline{a}_{\beta}+\overline{a}_{\gamma}\) is equivalent to \(\overline{a}\).
To see this, suppose \((*)\) holds. By induction on \(i<\lambda\), we define the increasing sequences \(\langle\beta_{i}:i<\lambda\rangle\) and \(\langle\gamma_{i}:i<\lambda\rangle\) as requested. Thus suppose that \(i<\lambda\), and we have defined \(\langle\gamma_{j},\beta_{j}:j<i\rangle\). In order to define \((\beta_{i},\gamma_{i})\), we let
\[\alpha_{*}:=\sup\{\gamma_{j}+\beta_{j}+1:j<i\}.\]
Since \(\lambda\) is regular, \(\alpha_{*}<\lambda\). Now, apply \((*)\) to \(\overline{a}\) and \(\langle\overline{a}_{\alpha_{*}+\alpha}:\alpha<\kappa\rangle\). This gives us \(\beta<\gamma<\kappa\) such that
\[\overline{a}-\overline{a}_{\alpha_{*}+\beta}+\overline{a}_{\alpha_{*}+\gamma }\equiv\overline{a}.\]
Thus it suffices to set \(\beta_{i}=\alpha_{*}+\beta\) and \(\gamma_{i}=\alpha_{*}+\gamma.\)
We are thus reduced to showing that \((*)\) holds. To this end, we define the formula \(\varphi_{\beta,\gamma}\) as
\[(+)\qquad\quad\varphi_{\beta,\gamma}:=\varphi_{\overline{a}-\overline{a}_{\beta}+\overline{a}_{\gamma}},\]
where \(\beta<\gamma<\lambda\). Note that \(\overline{a},\overline{a}_{\beta},\overline{a}_{\gamma}\in\varphi_{\overline {a}}(M)\), hence as \(\varphi_{\overline{a}}(M)\) is a submodel, \(\overline{a}-\overline{a}_{\beta}+\overline{a}_{\gamma}\in\varphi_{\overline {a}}(M).\) Thanks to the minimality condition from Definition 3.1(3)(b), this implies that
\[\varphi_{\overline{a}}\geq\varphi_{\beta,\gamma}.\]
Thus, it is sufficient to find \(\beta<\gamma<\kappa\) such that \(\varphi_{\overline{a}}\leq\varphi_{\beta,\gamma}\). By the property presented in Definition 3.1(4)(b) there are \(\alpha_{1}<\alpha_{2}<\alpha_{3}<\kappa\) such that
\[\varphi_{\alpha_{1},\alpha_{2}}\geq\varphi_{\alpha_{1},\alpha_{3}},\varphi_{ \alpha_{2},\alpha_{3}}.\]
So,
1. \(\overline{a}-\overline{a}_{\alpha_{1}}+\overline{a}_{\alpha_{2}}\in\varphi_{ \alpha_{1},\alpha_{2}}(M),\)
2. \(\overline{a}-\overline{a}_{\alpha_{1}}+\overline{a}_{\alpha_{3}}\in\varphi_{ \alpha_{1},\alpha_{3}}(M)\subseteq\varphi_{\alpha_{1},\alpha_{2}}(M),\)
3. \(\overline{a}-\overline{a}_{\alpha_{2}}+\overline{a}_{\alpha_{3}}\in\varphi_{ \alpha_{2},\alpha_{3}}(M)\subseteq\varphi_{\alpha_{1},\alpha_{2}}(M).\)
Hence
\[\overline{a} =(1)-(2)+(3)\in\varphi_{\alpha_{1},\alpha_{2}}(M).\]
Combining this with the minimality property from Definition 3.1(3)(b) we observe that \(\varphi_{\overline{a}}\leq\varphi_{\alpha_{1},\alpha_{2}}\). Thus it suffices to take \(\beta=\alpha_{1}\) and \(\gamma=\alpha_{2}\).
**Corollary 3.7**.: _Suppose \(\mathbf{f}=(M,\lambda,\kappa,\theta,\Omega)\) is an additive frame. The following assertions are valid:_
1. _Suppose_ \(\varphi_{\overline{a}}(M)\) _has cardinality_ \(\geq\lambda\)_. Then there is_ \(\overline{b}\in\varphi_{\overline{a}}(M)\) _such that_ \(\overline{b}\neq\overline{a}\) _and_ \(\overline{b}\) _is equivalent to_ \(\overline{a}\)_._
2. _If for some_ \(\overline{c}\in^{\varepsilon}\!M\)_,_ \(\varphi_{\overline{c}}(M)\) _has cardinality_ \(\geq\lambda,\) _then for all_ \(\overline{a}\in^{\varepsilon}\!M\) _the set_ \[\{\overline{b}\in\varphi_{\overline{a}}(M):\overline{b}\text{ is equivalent to }\overline{a}\}\] _has cardinality_ \(\geq\lambda.\)__
3. _If_ \(\mathbf{f}\) _is an additive_\({}^{+}\) _frame and_ \(\varepsilon\in(0,\theta)\)_, then for all_ \(\overline{a}\in^{\varepsilon}\!M\)_, the set_ \(\{\overline{b}\in\varphi_{\overline{a}}(M):\overline{b}\text{ is equivalent to }\overline{a}\}\) _has cardinality_ \(\geq\lambda.\)__
Proof.: (1) Since \(\varphi_{\overline{a}}(M)\) has cardinality \(\geq\lambda\), we can choose a sequence \(\langle\overline{a}_{\alpha}:\alpha<\lambda\rangle\) of pairwise distinct elements of \(\varphi_{\overline{a}}(M)\). We apply Lemma 3.6 to find increasing sequences \(\overline{\beta}=\langle\beta_{i}:i<\lambda\rangle\) and \(\overline{\gamma}=\langle\gamma_{i}:i<\lambda\rangle\) such that for all \(i<\lambda\)
1. \(\beta_{i}<\gamma_{i}\),
2. \(\overline{a}-\overline{a}_{\beta_{i}}+\overline{a}_{\gamma_{i}}\) is equivalent to \(\overline{a}\).
Set \(\overline{b}_{i}:=\overline{a}-\overline{a}_{\beta_{i}}+\overline{a}_{\gamma_{ i}}\). Since \(\overline{a}_{\alpha}\)'s are distinct, we deduce that \(\overline{a}\neq\overline{b}_{i}\), for at least one \(i<\lambda\). Thanks to (b), we know \(\overline{b}_{i}\) is equivalent to \(\overline{a}\).
(2) Let \(X=\{\overline{b}\in\varphi_{\overline{a}}(M):\overline{b}\text{ is equivalent to }\overline{a}\}\), and let \(\mu=|X|.\) Suppose towards a contradiction that \(\mu<\lambda\), and let \(\langle\overline{b}_{i}:i<\mu\rangle\) enumerate \(X\). Let also \(\langle\overline{c}_{\alpha}:\alpha<\lambda\rangle\) be a sequence of pairwise distinct elements of \(\varphi_{\overline{c}}(M)\). By induction on \(\alpha<\lambda\) we can find \(\xi_{\alpha}<\lambda\) such that
\((*)_{\alpha}\): \(\overline{c}_{\xi_{\alpha}}\notin\{\overline{b}_{i}-\overline{a}+\overline{c}_{\xi_{\beta}}:\beta<\alpha,i<\mu\}.\)
By the argument of clause (1), applied to the sequence \(\langle\overline{c}_{\xi_{\alpha}}:\alpha<\lambda\rangle\), we can find some \(\beta_{\iota}<\gamma_{\iota}<\lambda\) such that \(\overline{a}-\overline{c}_{\xi_{\beta_{\iota}}}+\overline{c}_{\xi_{\gamma_{\iota}}}\) is equivalent (but not equal) to \(\overline{a}.\) Thus for some \(i<\mu\), \(\overline{a}-\overline{c}_{\xi_{\beta_{\iota}}}+\overline{c}_{\xi_{\gamma_{\iota}}}=\overline{b}_{i}\). But then
\[\overline{c}_{\xi_{\gamma_{\iota}}}=\overline{b}_{i}-\overline{a}+\overline{ c}_{\xi_{\beta_{\iota}}},\]
which contradicts \((*)_{\gamma_{\iota}}\).
(3) By Lemma 3.3 and clause (2).
**Lemma 3.8**.:
1. _Suppose_ \(\mathbf{f}=(M,\lambda,\kappa,\theta,\Omega)\) _is a general frame,_ \(\varepsilon<\theta\) _and_ \(\overline{a}_{\alpha}\in{}^{\varepsilon}M\) _for_ \(\alpha<\kappa\)_. Then there is some_ \(\alpha<\kappa\) _such that the set_ \(\{\beta<\kappa:\overline{a}_{\beta}\in\varphi_{\overline{a}_{\alpha}}(M)\}\) _is unbounded in_ \(\kappa.\)__
2. _In clause (1), we can replace_ \(\kappa\) _by any cardinal_ \(\kappa^{\prime}\geq\kappa.\)__
Proof.: We prove clause (2). Suppose, towards a contradiction, that for each \(\alpha<\kappa^{\prime}\), the set \(X_{\alpha}=\{\beta<\kappa^{\prime}:\overline{a}_{\beta}\in\varphi_{\overline{a}_{\alpha}}(M)\}\) is bounded in \(\kappa^{\prime}.\) So,
\((*)_{1}\)\(\forall\alpha<\kappa^{\prime},\exists\alpha<\beta_{\alpha}<\kappa^{\prime}\) such that \(\forall\beta\geq\beta_{\alpha}\) we have \(\varphi_{\alpha}\ngeq\varphi_{\beta}\), where \(\varphi_{\alpha}\) denotes \(\varphi_{\overline{a}_{\alpha}}\).
We define an increasing and continuous sequence \(\langle\zeta_{\alpha}:\alpha<\kappa\rangle\) of ordinals less than \(\kappa^{\prime}\), by induction on \(\alpha\) as follows:
* \(\zeta_{0}:=0\),
* \(\zeta_{\alpha+1}:=\beta_{\zeta_{\alpha}}\),
* \(\zeta_{\delta}:=\lim_{\alpha<\delta}\zeta_{\alpha}\) for limit ordinal \(\delta\).
Consider the sequence \(\{\varphi_{\zeta_{\alpha}}:\alpha<\kappa\}\), and apply the property presented in Definition 3.1(4)(a) to find \(\gamma<\delta<\kappa\) such that
\((*)_{2}\ \varphi_{\zeta_{\gamma}}\geq\varphi_{\zeta_{\delta}}\).
Since \(\gamma<\delta\), \(\zeta_{\delta}\geq\zeta_{\gamma+1}=\beta_{\zeta_{\gamma}}\). We apply \((*)_{1}\) for \(\alpha:=\zeta_{\gamma}\) and \(\beta:=\zeta_{\delta}\geq\beta_{\alpha}\). This gives us \(\varphi_{\zeta_{\gamma}}\ngeq\varphi_{\zeta_{\delta}}\), which contradicts \((*)_{2}\).
In what follows we need to use a couple of results from [29]. To make the paper more self-contained, we borrow some definitions and results from it.
**Definition 3.9**.:
* By a _tree_ we mean a partially-ordered set \((\mathscr{T},\leq)\) such that for all \(t\in\mathscr{T}\), \[pred(t):=\{s\in\mathscr{T}:s<t\},\] is a well-ordered set; moreover, there is only one element \(r\) of \(\mathscr{T}\), called the _root_ of \(\mathscr{T}\), such that \(pred(r)\) is empty.
* The order-type of \(pred(t)\) is called the height of \(t\), denoted by \(\operatorname{ht}(t)\).
* The height of \(\mathscr{T}\) is \(\sup\{\operatorname{ht}(t)+1:t\in\mathscr{T}\}\).
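For instance (a toy example, not used later), the tree \(\mathscr{T}=\{\langle\rangle,\langle 0\rangle,\langle 1\rangle,\langle 1,0\rangle\}\), ordered by the initial-segment relation, has root \(\langle\rangle\), with \(\operatorname{ht}(\langle\rangle)=0\), \(\operatorname{ht}(\langle 0\rangle)=\operatorname{ht}(\langle 1\rangle)=1\) and \(\operatorname{ht}(\langle 1,0\rangle)=2\); its height is \(3\).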
**Definition 3.10**.:
* A _quasi-order_\(\mathcal{Q}\) is a pair \((\mathcal{Q},\leq_{\mathcal{Q}})\) where \(\leq_{\mathcal{Q}}\) is a reflexive and transitive binary relation on \(\mathcal{Q}\).
* \(\mathcal{Q}\) is called \(\kappa\)_-narrow_, if there is no _antichain_ in \(\mathcal{Q}\) of size \(\kappa\), i.e., for every \(f:\kappa\to\mathcal{Q}\) there exist \(\nu\neq\mu\) such that \(f(\nu)\leq_{\mathcal{Q}}f(\mu)\).
* For a quasi-order \(\mathcal{Q}\), a \(\mathcal{Q}\)_-labeled tree_ is a pair \((\mathscr{T},\Phi_{\mathscr{T}})\) consisting of a tree \(\mathscr{T}\) of height \(\leq\omega\) and a function \(\Phi_{\mathscr{T}}:\mathscr{T}\to\mathcal{Q}\).
* \(\mathcal{Q}\) is \(\kappa\)-well ordered if for every sequence \(\langle q_{i}:i<\kappa\rangle\) of elements of \(\mathcal{Q}\), there are \(i<j<\kappa\) such that \(q_{i}\leq_{\mathcal{Q}}q_{j}\).
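As a quick illustration (not needed in the sequel): every linear order is \(\kappa\)-narrow for all \(\kappa\geq 2\), since any two of its elements are comparable, while \((\mathcal{P}(\omega),\subseteq)\) is not \(2^{\aleph_{0}}\)-narrow, since an almost disjoint family of infinite subsets of \(\omega\) of size \(2^{\aleph_{0}}\) forms a \(\subseteq\)-antichain.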
_Remark 3.11_.: On any set of \(\mathcal{Q}\)-labeled trees we define a quasi-order by: \((\mathscr{T}_{1},\Phi_{1})\preceq(\mathscr{T}_{2},\Phi_{2})\) if and only if there is a function \(\nu:\mathscr{T}_{1}\to\mathscr{T}_{2}\) with the following properties:
* for all \(t\in\mathscr{T}_{1}\), \(\Phi_{1}(t)\leq_{\mathcal{Q}}\Phi_{2}(\nu(t))\),
* \(t\leq_{\mathscr{T}_{1}}t^{\prime}\Longrightarrow\nu(t)\leq_{\mathscr{T}_{2}}\nu(t^ {\prime})\),
* for all \(t\in\mathscr{T}_{1}\), \(\operatorname{ht}_{\mathscr{T}_{1}}(t)=\operatorname{ht}_{\mathscr{T}_{2}}(\nu(t))\).
**Definition 3.12**.:
* Given infinite cardinals \(\kappa\) and \(\mu\), the notation \(\kappa\longrightarrow(\omega)_{\mu}^{<\omega}\) means that: for every function \(f:[\kappa]^{<\omega}\rightarrow\mu\), there exists an infinite subset \(X\subseteq\kappa\) and a function \(g:\omega\rightarrow\mu\) such that \(f(Y)=g(|Y|)\) for all finite subsets \(Y\) of \(X\).
* Let \(\kappa_{\text{beau}}\) denote the first beautiful1 cardinal. This is defined as the smallest cardinal \(\kappa\) such that \(\kappa\longrightarrow(\omega)_{2}^{<\omega}\). Footnote 1: This is also called the first \(\omega\)-Erdős cardinal.
* Given a ring \(R\) and an infinite cardinal \(\theta\), let \(\kappa_{\text{beau}}(R,\theta)\) denote the least cardinal \(\kappa\) such that \(\kappa\longrightarrow(\omega)_{|R|+\theta^{<\theta}}^{<\omega}\).
* Given a vocabulary \(\tau\), let \(\kappa_{\text{beau}}(\tau,\theta)\) denote the least cardinal \(\kappa\) such that \(\kappa\longrightarrow(\omega)_{|\tau|+\theta^{<\theta}}^{<\omega}\). If \(\theta=\aleph_{0}\), we may omit it.
Now, we can state:
**Fact 3.13**.: (Shelah, [29, Theorems 5.3 and 2.10]) Let \(\mathcal{Q}\) be a quasi-order of cardinality \(<\kappa_{\text{beau}}\), and \(\mathcal{S}\) be a set of \(\mathcal{Q}\)-labeled trees with \(\leq\omega\) levels. Then \(\mathcal{S}\) is \(\kappa_{\text{beau}}\)-narrow, and even \(\kappa\)-well ordered.
We are now ready to state and prove the main result of this section.
**Theorem 3.14**.:
* _Assume_ \(M\) _is an_ \(R\)_-module and_ \(\kappa=\kappa_{\text{beau}}(R,\theta)\) _and_ \(\Omega\) _is such that_ \(1\in\Omega\) _and_ \(|\Omega|\leq\theta\)_. Also assume that_ \(\lambda\geq\kappa\) _is regular and satisfies_ \(\lambda\rightarrow(\kappa+1)_{4}^{3}\)_. The following hold:_
* \(\mathbf{f}=(M,\lambda,\kappa,\theta,\Omega)\) _is an additive frame, whenever_ \[\mathscr{L}\subseteq\{\varphi(\overline{x}):\varphi\in\mathscr{L}_{\infty, \theta}^{\text{cop}}(\tau_{M}),\lg(\overline{\mathbf{x}})<\theta\}\] _is closed under arbitrary conjunctions._
* \(\mathbf{f}=(M,\lambda,\kappa,\theta,\Omega)\) _is an additive frame, whenever_ \[\mathscr{L}\subseteq\{\varphi(\overline{x}):\varphi\in\mathscr{L}_{\infty, \theta}^{\text{co}}(\tau_{M}),\lg(\overline{\mathbf{x}})<\theta\}\] _is closed under arbitrary conjunctions._
_._
2. _If_ \(M\) _is a_ \(\tau\)_-model,_ \(\kappa=\kappa_{\mathrm{beau}}(\tau,\theta)\) _and_ \(\lambda\geq\kappa\) _is regular and satisfies_ \(\lambda\to(\kappa+1)_{4}^{3}\)_. Then_ \(\mathbf{f}=(M,\lambda,\kappa,\theta,\Omega)\) _is an additive frame, whenever_ \[\mathscr{L}\subseteq\{\varphi(\overline{x}):\varphi\in\mathscr{L}_{\infty, \theta}^{\mathrm{co}}(\tau_{M}),\lg(\overline{x})<\theta\}\] _is closed under arbitrary conjunctions._
3. _Furthermore, if_ \(|M|\geq\lambda\)_, then in items (i) and (ii),_ \(\mathbf{f}\) _becomes an additive_\({}^{+}\) _frame._
Proof.: (i): We have to show that \(\mathbf{f}\) satisfies the relevant items of Definition 3.1. Items (1), (2) and (5) are true by definition. As \(M\) is an \(R\)-module, clause (6) is valid. The validity of (7) follows from Lemma 2.15.
Let us consider clause (3). Thus suppose that \(\overline{a}\in{}^{\varepsilon}M\), where \(\varepsilon<\theta\). We are going to find some \(\varphi\in\mathscr{L}\) such that:
* \(\overline{a}\in\varphi(M)\),
* if \(\psi\in\mathscr{L}\) and \(\overline{a}\in\psi(M)\), then \(\varphi(M)\subseteq\psi(M)\).
For any \(\overline{b}\in{}^{\varepsilon}M,\) if there is some formula \(\varphi(\overline{x})\) such that
\[(\dagger)_{1}\hskip 36.135ptM\models\varphi(\overline{a})\wedge\neg\varphi( \overline{b}),\]
then let \(\varphi_{\overline{b}}\in\mathscr{L}\) be such a formula. Otherwise, let \(\varphi_{\overline{b}}\in\mathscr{L}\) be any formula with \(\varphi_{\overline{b}}(M)={}^{\varepsilon}M\). Finally set
\[(\dagger)_{2}\hskip 36.135pt\varphi:=\bigwedge\{\varphi_{\overline{b}}: \overline{b}\in{}^{\varepsilon}M\}.\]
Now, we claim that \(\varphi\) is as desired. First, we check (3)(a), that is \(\overline{a}\in\varphi(M).\) As
\[\varphi(M)=\bigcap_{\overline{b}\in{}^{\varepsilon}M}\varphi_{\overline{b}}(M),\]
it suffices to show that \(\overline{a}\in\varphi_{\overline{b}}(M)\), for \(\overline{b}\in{}^{\varepsilon}M\). Fix \(\overline{b}\) as above. If there is no formula \(\varphi(\overline{x})\) as in \((\dagger)\), then \(\overline{a}\in{}^{\varepsilon}M=\varphi_{\overline{b}}(M)\), and we are done. Otherwise, by its definition, \(M\models\varphi_{\overline{b}}(\overline{a})\), and hence, again we have \(\overline{a}\in\varphi_{\overline{b}}(M)\).
To see that (3)(b) holds, let \(\psi(\overline{x})\) be such that \(\overline{a}\in\psi(M).\) We have to show that \(\varphi(M)\subseteq\psi(M).\) Suppose, by way of contradiction, that \(\varphi(M)\nsubseteq\psi(M),\) and take \(\overline{b}\in\varphi(M)\setminus\psi(M).\)
Now, \(M\models\neg\psi(\overline{b})\), and by our assumption \(M\models\psi(\overline{a}).\) In particular, by our construction, the formula \(\varphi_{\overline{b}}\) satisfies
\[(\dagger)_{3}\qquad\quad M\models\varphi_{\overline{b}}(\overline{a})\wedge\neg \varphi_{\overline{b}}(\overline{b}).\]
Now,
\(\bullet_{1}\)\(\overline{b}\in\varphi(M)\)\(\stackrel{{(+)}}{{\subseteq}}\varphi_{\overline{b}}(M),\)
\(\bullet_{2}\) by \((\dagger)_{3}\), \(M\models\neg\varphi_{\overline{b}}(\overline{b})\), and hence \(\overline{b}\notin\varphi_{\overline{b}}(M).\)
By \(\bullet_{1}\) and \(\bullet_{2}\) we get a contradiction.
Now we turn to clause (4). First let us consider (4)(a). Thus suppose that \(\varphi_{\alpha}(\overline{x})\in\mathscr{L}\), for \(\alpha<\kappa\), are given. We should find some \(\alpha<\beta<\kappa\) such that \(\varphi_{\alpha}\geq\varphi_{\beta}\), i.e., \(\varphi_{\alpha}(M)\supseteq\varphi_{\beta}(M).\) To this end, first note that we can restrict ourselves to those formulas such that both free and bounded variables appearing in them are among \(\{x_{i}:i<\theta\}\), the set of free variables of \(\varphi\) has the form \(\{x_{\zeta}:\zeta<\varepsilon\}\), and the quantifiers have the form \(\exists\bar{x}_{[\varepsilon_{0},\varepsilon_{1})}\) and \(\forall\bar{x}_{[\varepsilon_{0},\varepsilon_{1})}\), where \(\varepsilon_{0}<\varepsilon_{1}<\theta\), and \(\bar{x}_{[\varepsilon_{0},\varepsilon_{1})}=\langle x_{\xi}:\varepsilon_{0} \leq\xi<\varepsilon_{1}\rangle\). In what follows, writing a formula as \(\varphi(\bar{x})\), we mean \(\bar{x}\) lists the free variables appearing in \(\varphi\) in increasing order.
We can consider a formula \(\varphi(\bar{x})\) as a type \((\mathscr{T},c)\) such that
* \(\mathscr{T}\) is a tree with \(\leq\omega\) levels with no infinite branches,
* \(c\) is a function with domain \(\mathscr{T}\),
* if \(t\in\mathscr{T}\setminus\max(\mathscr{T})\), then \(c(t)\) is in the following set \[\biggl{\{}\wedge,\exists\bar{x},\forall\bar{x}:\bar{x}\text{ { has form }}\bar{x}_{[\varepsilon_{0},\varepsilon_{1})}\text{ for some }\varepsilon_{0}< \varepsilon_{1}<\theta\biggr{\}},\]
* if \(t\in\max(\mathscr{T})\) then \(c(t)\) is an atomic formula in \(\tau_{M}\).
Clearly, \(|\text{Rang}(c)|\leq\theta+|R|\). For each \(\alpha<\kappa\) set \(\varphi_{\alpha}(\bar{x})=(\mathscr{T}_{\alpha},c_{\alpha})\).
Let \(\mathcal{Q}\) be the range of the function \(c\). Note that it is a quasi-order under the \(\leq\) relation defined in Definition 3.4, and clearly, it has cardinality
\[|\mathcal{Q}|\leq|\tau_{M}|+\theta^{<\theta}<\kappa_{\text{beau}}.\]
In particular, by Fact 3.13, applied to the sequence
\[\langle(\mathscr{T}_{\alpha},c_{\alpha}):\alpha<\kappa\rangle.\]
we get some \(\alpha<\beta\) and a function \(f\) equipped with the following property:
* \(f\) is a 1-1 function from \(\mathscr{T}_{\alpha}\) into \(\mathscr{T}_{\beta}\), which is level, order and non-order preserving and such that \[t\in\mathscr{T}_{\alpha}\Rightarrow\big{(}c_{\beta}(f(t))=c_{\alpha}(t)\big{)}.\]
Let \(\mathscr{T}_{\alpha}^{\prime}:=\text{Rang}(f)\) and \(c_{\alpha}^{\prime}:=c_{\beta}\upharpoonright\mathscr{T}_{\alpha}^{\prime}\). By this notation, \(\varphi_{\alpha}(\bar{x})\) can also be represented by \((\mathscr{T}_{\alpha}^{\prime},c_{\alpha}^{\prime})\). For any \(t\in\mathscr{T}_{\alpha}^{\prime}\) we define \(\varphi_{t}^{1}\) to be the formula represented by
\[\bigg{(}\{s\in\mathscr{T}_{\alpha}^{\prime}:t\leq_{\mathscr{T}_{\alpha}^{\prime}}s\},\,c_{\beta}\upharpoonright\{s\in\mathscr{T}_{\alpha}^{\prime}:t\leq_{\mathscr{T}_{\alpha}^{\prime}}s\}\bigg{)},\]
and define \(\varphi_{t}^{2}\) to be the formula represented via
\[\bigg{(}\{s\in\mathscr{T}_{\beta}:t\leq_{\mathscr{T}_{\beta}}s\},\,c_{\beta}\upharpoonright\{s\in\mathscr{T}_{\beta}:t\leq_{\mathscr{T}_{\beta}}s\}\bigg{)}.\]
Note that the formula \(\varphi_{t}^{1}\) may have fewer free variables than \(\varphi_{t}^{2}\), but we can add the remaining variables. So, we may and do assume that
\[\varphi_{t}^{\ell}=\varphi_{t}^{\ell}(\bar{x}_{t})\quad\ell=1,2.\]
Now we are going to prove that for every \(t\),
\[\varphi_{t}^{1}(M)\supseteq\varphi_{t}^{2}(M).\]
This is done by induction on the depth of \(t\) inside \(\mathscr{T}_{\alpha}^{\prime}\) which is possible, as \(\mathscr{T}_{\alpha}^{\prime}\) is well-founded. By the way we defined our trees, it suffices to deal with the following cases:
* Case 1): \(c_{\beta}(t)\) is a basic formula, i.e., an atomic formula or its negation.
* Case 2): \(c_{\beta}(t)\) is \(\wedge\).
* Case 3): \(c_{\beta}(t)\) is \(\exists\overline{x}^{\prime}\) or just \(c_{\beta}(t)\) is \(\exists^{\sigma}\overline{x}^{\prime}\).
* Case 4) \(c_{\beta}(t)\) is \(\forall\overline{x}^{\prime}\) or just \(c_{\beta}(t)\) is \(\forall^{\sigma}\overline{x}^{\prime}\).
Let us discuss each case separately:
Case 1): Here, \(t\) is a maximal node of the tree \(\mathscr{T}_{\beta}\). Hence, necessarily a maximal node of \(\mathscr{T}_{\alpha}^{\prime}\), and recall that
\[\varphi_{t}^{1}=c_{\beta}(t)=\varphi_{t}^{2}.\]
Consequently, the conclusion is obvious.
Case 2): Here, we have
* \(\varphi_{t}^{2}:=\bigwedge\{\varphi_{s}^{2}:s\in\operatorname{suc}_{\mathscr{T}_{ \beta}}(t)\}\),
* \(\varphi_{t}^{1}:=\bigwedge\{\varphi_{s}^{1}:s\in\operatorname{suc}_{\mathscr{T}_{ \alpha}^{\prime}}(t)\}\).
By the choice of the \(\mathscr{T}_{\alpha}^{\prime}\) and the function \(f\), we have
\[\operatorname{suc}_{\mathscr{T}_{\alpha}^{\prime}}(t)\subseteq\operatorname{suc }_{\mathscr{T}_{\beta}}(t).\]
Now, due to the induction hypotheses
\[s\in\operatorname{suc}_{\mathscr{T}_{\alpha}^{\prime}}(t)\Longrightarrow \varphi_{s}^{1}(M)\supseteq\varphi_{s}^{2}(M).\]
According to the definition of satisfaction for \(\wedge\) we are done; indeed:
\[\varphi_{t}^{1}(M) =\bigcap\{\varphi_{s}^{1}(M):s\in\operatorname{suc}_{\mathscr{T} _{\alpha}^{\prime}}(t)\}\] \[\supseteq\bigcap\{\varphi_{s}^{2}(M):s\in\operatorname{suc}_{ \mathscr{T}_{\alpha}^{\prime}}(t)\}\] \[\supseteq\bigcap\{\varphi_{s}^{2}(M):s\in\operatorname{suc}_{ \mathscr{T}_{\beta}}(t)\}=\varphi_{t}^{2}(M).\]
Case 3): Let \(\overline{x}_{t}^{\prime}=\overline{x}_{s}{}^{\frown}\overline{x}^{\prime}\), and recall that \(\operatorname{suc}_{\mathscr{T}_{\beta}}(t)\) is a singleton, and also \(\operatorname{suc}_{\mathscr{T}_{\alpha}}(f^{-1}(t))\) is a singleton, because \(c_{\beta}(t)=c_{\alpha}(f^{-1}(t))\). This implies that \(\operatorname{suc}_{\mathscr{T}_{\alpha}^{\prime}}(t)\) is a singleton, say \(\operatorname{suc}_{\mathscr{T}_{\alpha}^{\prime}}(t)=\{s\}=\operatorname{suc }_{\mathscr{T}_{\beta}}(t)\). In order to see that \(\varphi_{t}^{1}(M)\supseteq\varphi_{t}^{2}(M)\), we take \(\overline{a}\in{}^{\operatorname{lg}(\overline{x})}M\) such that \(M\models\varphi_{t}^{2}(\overline{a})\) and shall show that \(M\models\varphi_{t}^{1}(\overline{a})\). Indeed,
\[\varphi_{t}^{\ell}(\overline{x}):=(\exists^{\sigma}\overline{x}^{\prime}) \varphi_{s}^{\ell}(\overline{x}_{s},\overline{x}^{\prime})\quad\ell=1,2,\]
and since
\[M\models(\exists^{\sigma}\overline{x}^{\prime})\varphi_{s}^{2}(\overline{a}, \overline{x}^{\prime}),\]
necessarily, for some pairwise disjoint \(\overline{b}_{\zeta}\in{}^{\operatorname{lg}(\overline{x}^{\prime})}M\), for \(\zeta<\sigma\), we have
\[M\models\varphi_{s}^{2}(\overline{a},\overline{b}_{\zeta}).\]
Thanks to the inductive hypothesis, we know
\[M\models\varphi_{s}^{1}(\overline{a},\overline{b}_{\zeta}).\]
According to the definition of satisfaction, we have
\[M\models(\exists^{\sigma}\overline{x}^{\prime})\varphi_{s}^{1}(\overline{a}, \overline{x}^{\prime}),\]
which means that \(M\models\varphi_{t}^{1}(\overline{a})\), as promised.
Case 4): Let \(\overline{x}^{\prime}_{t}=\overline{x}_{s}{}^{\frown}\overline{x}^{\prime}\), and recall that \(\operatorname{suc}_{\mathscr{T}_{\beta}}(t)\) is a singleton, and also \(\operatorname{suc}_{\mathscr{T}_{\alpha}}(f^{-1}(t))\) is a singleton, because \(c_{\alpha}(f^{-1}(t))=c_{\beta}(t)\). This implies that \(\operatorname{suc}_{\mathscr{T}^{\prime}_{\alpha}}(t)\) is a singleton, say \(\operatorname{suc}_{\mathscr{T}^{\prime}_{\alpha}}(t)=\{s\}=\operatorname{suc}_{\mathscr{T}_{\beta}}(t)\). In order to see that \(\varphi^{1}_{t}(M)\supseteq\varphi^{2}_{t}(M)\), we take \(\overline{a}\in{}^{\lg(\overline{x})}M\) such that \(M\models\varphi^{2}_{t}(\overline{a})\) and shall show that \(M\models\varphi^{1}_{t}(\overline{a})\). Similarly to Case (3), we can write
\[\varphi^{\ell}_{t}(\overline{x}):=(\forall^{\sigma}\overline{x}^{\prime}) \varphi^{\ell}_{s}(\overline{x}_{s},\overline{x}^{\prime})\quad\ell=1,2.\]
Suppose it is not the case that \(M\models\varphi^{1}_{t}(\overline{a})\). This means, by Definition 2.10(2)(b), that there are pairwise disjoint \(\overline{b}_{\zeta}\in{}^{\lg(\overline{x}^{\prime})}M\), for \(\zeta<\sigma\) such that
\[M\models\neg\varphi^{1}_{s}(\overline{a},\overline{b}_{\zeta}).\]
By the induction hypothesis, we have \(\varphi^{1}_{s}(M)\supseteq\varphi^{2}_{s}(M)\), hence for each \(\zeta<\sigma\),
\[M\models\neg\varphi^{2}_{s}(\overline{a},\overline{b}_{\zeta}).\]
The latter means that \(M\models\varphi^{2}_{t}(\overline{a})\) is not true, which contradicts our initial assumption. This completes the proof of clause (4)(a).
Finally, let us turn to check the property presented in clause (4)(b) from Definition 3.1. Suppose \(\varphi_{\alpha,\beta}(\overline{x})\in\mathscr{L}\) for \(\alpha<\beta<\lambda\) are given. We need to find some \(\alpha_{1}<\alpha_{2}<\alpha_{3}<\kappa\) such that
\[\varphi_{\alpha_{1},\alpha_{2}}\geq\varphi_{\alpha_{1},\alpha_{3}},\varphi_{ \alpha_{2},\alpha_{3}}.\]
To see this, we define a coloring \(\mathbf{c}:[\lambda]^{3}\to 2\times 2\) as follows. For \(\alpha<\beta<\gamma<\lambda\), set
\[\mathbf{c}(\{\alpha,\beta,\gamma\}):=\bigg{(}\text{truth value of }\varphi_{\alpha,\beta}\geq\varphi_{\alpha,\gamma},\text{ truth value of }\varphi_{\alpha,\gamma}\geq\varphi_{\beta,\gamma}\bigg{)}.\]
By the assumption \(\lambda\to(\kappa+1)^{3}_{4}\), there is \(X\subseteq\lambda\) of order type \(\kappa+1\) such that \(\mathbf{c}\upharpoonright[X]^{3}\) is constant. Let \(\alpha_{i}\in X\), for \(i\leq\kappa\), be an increasing enumeration of \(X\). Consider the sequence
\[\{\varphi_{\alpha_{0},\alpha_{i}}:i<\kappa\},\]
and applying clause (4)(a) to it. This gives us \(i<j<\kappa\) such that
\[\varphi_{\alpha_{0},\alpha_{i}}\geq\varphi_{\alpha_{0},\alpha_{j}}.\]
Note that this implies that \(\mathbf{c}(\alpha_{0},\alpha_{i},\alpha_{j})=(1,\iota)\), for some \(\iota\in\{0,1\}.\) Since \(\mathbf{c}\upharpoonright[X]^{3}\) is constant, it follows that for any \(\alpha<\beta<\gamma\) from \(X\), we have \(\mathbf{c}(\alpha,\beta,\gamma)=(1,\iota)\), in particular
\((*)_{1}\)\(\varphi_{\alpha,\beta}\geq\varphi_{\alpha,\gamma}\), for all \(\alpha<\beta<\gamma\) in \(X\).
Again, applying clause (4)(a) to the sequence \(\{\varphi_{\alpha_{i},\alpha_{\kappa}}:i<\kappa\}\), we can find some \(i<j<\kappa\) such that
\[\varphi_{\alpha_{i},\alpha_{\kappa}}\geq\varphi_{\alpha_{j},\alpha_{\kappa}}.\]
It follows that \(\mathbf{c}(\alpha_{i},\alpha_{j},\alpha_{\kappa})=(1,1)\), hence as \(\mathbf{c}\upharpoonright[X]^{3}\) is constant, we have \(\mathbf{c}(\alpha,\beta,\gamma)=(1,1)\), for all \(\alpha<\beta<\gamma\) from \(X\). In particular,
\((*)_{2}\)\(\varphi_{\alpha,\gamma}\geq\varphi_{\beta,\gamma}\), for any \(\alpha<\beta<\gamma\) in \(X\).
Now, combining \((*)_{1}\) with \((*)_{2}\), we get that for all \(\alpha<\beta<\gamma\) from \(X\)
\[\varphi_{\alpha,\beta}\geq\varphi_{\alpha,\gamma}\geq\varphi_{\beta,\gamma},\]
and this completes the proof of (i).
(ii): This is similar to case (i).
_Remark 3.15_.: By the Erdős-Rado partition theorem (see [11]) we have \(\beth_{2}(\kappa)^{+}\to(\kappa^{+})^{3}_{\kappa}\); hence it suffices to take \(\lambda=\beth_{2}(\kappa)^{+}\).
## 4. \(\kappa\)-algebraic closure
In this section, and among other things, we define the concept of closure of a sequence inside an additive model and present some properties of it.
**Definition 4.1**.: Suppose \(\mathbf{f}=(M,\mathscr{L},\lambda,\kappa,\theta,\Omega)\) is an additive frame, \(\varepsilon<\theta\) and \(\overline{a}\in{}^{\varepsilon}M\). We define the closure of \(\overline{a}\) in \(M\) as the following:
\[\operatorname{cl}(\overline{a},M):=\big{\{}b\in M:\varphi_{\overline{a}{}^{\frown}\langle b\rangle}(\overline{a},M)\text{ has cardinality }<\kappa\big{\}}.\]
1. _the set_ \(\operatorname{cl}(\overline{\operatorname{a}},\operatorname{M})\) _has cardinality_ \(<\kappa.\)__
2. \(\{a_{i}:i<\varepsilon\}\subseteq\operatorname{cl}(\overline{\operatorname{a}}, \operatorname{M}).\)__
3. _Assume_ \(b\in\operatorname{cl}(\overline{\operatorname{a}},\operatorname{M})\)_. Then_ \(\operatorname{cl}(\overline{\operatorname{a}}^{\frown}\langle\operatorname{ b}\rangle,\operatorname{M})\subseteq\operatorname{cl}(\overline{\operatorname{a}}, \operatorname{M}).\)__
Proof.: (a). Suppose not, and let \(\overline{a}\in{}^{\varepsilon}M\) be such that the set \(\operatorname{cl}(\overline{a},M)\) has cardinality \(\geq\kappa.\) This gives us a family \(\{b_{\alpha}:\alpha<\kappa\}\subseteq\operatorname{cl}(\overline{a},M)\) of pairwise distinct elements. Define \(\overline{a}_{\alpha}:=\overline{a}^{\frown}\langle b_{\alpha}\rangle\). In the light of Lemma 3.8, there is \(\alpha_{*}<\kappa\) such that the set
\[X:=\{\beta<\kappa:\overline{a}_{\beta}\in\varphi_{\overline{a}_{\alpha_{*}}}(M)\}\]
is unbounded in \(\kappa.\) So \(\{b_{\beta}:\beta\in X\}\subseteq\varphi_{\overline{a}_{\alpha_{*}}}(\overline{a},M)\), which implies that
\[|\varphi_{\overline{a}_{\alpha_{*}}}(\overline{a},M)|\geq|X|=\kappa.\]
This contradicts the fact that \(b_{\alpha_{*}}\in\operatorname{cl}(\overline{\operatorname{a}},\operatorname{ M})\).
(b). Let \(i<\varepsilon\), and define the formula \(\psi(\overline{x},y)\) as
\[\psi(\overline{x},y):=(y=x_{i}).\]
Then \(\psi(\overline{a},M)=\{a_{i}\}\). It is also clear that \(\psi(\overline{x},y)\) is minimal with this property. This implies that
\[\varphi_{\overline{a}^{\frown}\langle a_{i}\rangle}(\overline{a},M)=\psi( \overline{a},M).\]
In particular, \(|\varphi_{\overline{a}^{\frown}\langle a_{i}\rangle}(\overline{a},M)|=1<\kappa\), and consequently \(a_{i}\in\operatorname{cl}(\overline{\operatorname{a}},\operatorname{M})\).
(c). Suppose \(d\in\operatorname{cl}(\overline{a}^{\frown}\langle b\rangle,M)\). Thanks to Definition 4.1, \(\varphi_{\overline{a}^{\frown}\langle b\rangle^{\frown}\langle d\rangle}(\overline{a},b,M)\) has cardinality less than \(\kappa.\) As \(b\in\operatorname{cl}(\overline{a},M)\), clearly
\[(*)_{1}\qquad\qquad\text{\it the set $B:=\varphi_{\overline{a}^{\frown} \langle b\rangle}(\overline{a},M)$ has cardinality }<\kappa.\]
For \(b_{1}\in B\) let \(A_{b_{1}}:=\varphi_{\overline{a}^{\frown}\langle b\rangle^{\frown}\langle d \rangle}(\overline{a},b_{1},M).\)
We now show that
\[(*)_{2}\qquad\qquad\qquad\text{\it if $b_{1}\in B$ then $A_{b_{1}}$ has cardinality }<\kappa.\]
Assume towards a contradiction that \(|A_{b_{1}}|\geq\kappa\) for some \(b_{1}\in B\). Reformulated, this means that:
\[M\models\varphi_{\overline{a}^{\frown}\langle b\rangle}(\overline{a},b_{1}) \wedge\exists^{\kappa}z\ \varphi_{\overline{a}^{\frown}\langle b\rangle^{\frown} \langle d\rangle}(\overline{a},b_{1},z).\]
Let
\[\psi(\overline{x},y):=\varphi_{\overline{a}^{\frown}\langle b\rangle}(\overline{x},y )\wedge\exists^{\kappa}z\ \varphi_{\overline{a}^{\frown}\langle b\rangle^{\frown} \langle d\rangle}(\overline{x},y,z),\]
and recall that \(\psi\in\mathscr{L}\) and \(\overline{a}^{\frown}\langle b_{1}\rangle\in\psi(M)\). Note that \(\overline{a}^{\frown}\langle b\rangle\notin\psi(M)\), as
\[|\varphi_{\overline{a}^{\frown}\langle b\rangle^{\frown}\langle d\rangle}( \overline{a},b,M)|<\kappa.\]
Next we bring the following claim:
\[(*)_{2.1}\qquad\qquad\overline{a}^{\frown}\langle b_{1}\rangle\notin\varphi_{ \overline{a}^{\frown}\langle b\rangle}(M).\]
To see this, suppose towards a contradiction that \(\overline{a}^{\frown}\langle b_{1}\rangle\in\varphi_{\overline{a}^{\frown}\langle b\rangle}(M)\). By the minimality condition, this implies that
\[\varphi_{\overline{a}^{\frown}\langle b\rangle}(M)\subseteq\varphi_{ \overline{a}^{\frown}\langle b_{1}\rangle}(M)\subseteq\psi(M).\]
Consequently, \(\overline{a}^{\frown}\langle b\rangle\in\psi(M)\). This contradiction completes the proof of \((*)_{2.1}.\) On the other hand, since \(b_{1}\in B\), we have \(b_{1}\in\varphi_{\overline{a}^{\frown}\langle b\rangle}(\overline{a},M)\). This yields that
\[(*)_{2.2}\qquad\qquad\overline{a}^{\frown}\langle b_{1}\rangle\in\varphi_{ \overline{a}^{\frown}\langle b\rangle}(M).\]
But \((*)_{2.1}\) and \((*)_{2.2}\) together lead to a contradiction. In sum, the desired property \((*)_{2}\) is valid.
Recalling that \(\kappa\) is regular, it follows that
\[(*)_{3}\qquad\quad\bigcup_{b_{1}\in B}\varphi_{\overline{a}^{\frown} \langle b\rangle^{\frown}\langle d\rangle}(\overline{a},b_{1},M)\text{ has cardinality }<\kappa.\]
We will show that
\[(\dagger)\qquad\qquad\varphi_{\overline{a}^{\frown}\langle d\rangle}( \overline{a},M)\subseteq\bigcup_{b_{1}\in B}\varphi_{\overline{a}^{\frown} \langle b\rangle^{\frown}\langle d\rangle}(\overline{a},b_{1},M),\]
from which it will follow that \(\varphi_{\overline{a}^{\frown}\langle d\rangle}(\overline{a},M)\) has cardinality less than \(\kappa\), and hence by definition, \(d\in\operatorname{cl}(\overline{a},M)\). Let us prove \((\dagger)\). To this end, let \(d_{1}\in\varphi_{\overline{a}^{\frown}\langle d\rangle}(\overline{a},M)\). This implies that \(\overline{a}^{\frown}\langle d_{1}\rangle\in\varphi_{\overline{a}^{\frown} \langle d\rangle}(M)\). Clearly,
\[M\models\exists y\varphi_{\overline{a}^{\frown}\langle b\rangle^{\frown} \langle d\rangle}(\overline{a},y,d),\]
hence,
\[M\models\exists y\varphi_{\overline{a}^{\frown}\langle b\rangle^{\frown} \langle d\rangle}(\overline{a},y,d_{1}).\]
This gives us some \(b_{1}\) so that
\[M\models\varphi_{\overline{a}^{\frown}\langle b\rangle^{\frown}\langle d\rangle}( \overline{a},b_{1},d_{1}).\]
Then \(b_{1}\in B\), and \(d_{1}\in\varphi_{\overline{a}^{\frown}\langle b\rangle^{\frown}\langle d\rangle }(\overline{a},b_{1},M)\), and consequently, \((\dagger)\) holds. We are done.
**Definition 4.3**.: Suppose \(\mathbf{f}=(M,\mathscr{L},\lambda,\kappa,\theta,\Omega)\) is a general frame. For \(\overline{a}\in{}^{<\theta}M\), we introduce the following:
1. \(\operatorname{afn}(\mathrm{b},\overline{\mathrm{a}})=\{\mathrm{c}\in\mathrm{ cl}(\overline{\mathrm{a}},\mathrm{M}):\varphi_{\overline{\mathrm{a}}^{\frown} \langle b\rangle}\leq\varphi_{\overline{\mathrm{a}}^{\frown}\langle c\rangle}\}\).
2. Suppose \(\mathbf{f}\) is abelian. Then \(\operatorname{grp}(\mathrm{b},\overline{\mathrm{a}})=\{\mathrm{c}_{1}- \mathrm{c}_{2}:\mathrm{c}_{1},\mathrm{c}_{2}\in\operatorname{afn}(\mathrm{b}, \overline{\mathrm{a}})\}\).
**Hypothesis 4.4**.: In what follows, and up to the end of this section, let us assume that \(\mathbf{f}=(M,\mathscr{L},\lambda,\kappa,\theta,\Omega)\) is an additive frame, \(\kappa\in\Omega\) and \(\mathscr{L}\) is closed under \(\exists^{\kappa}x\).
**Lemma 4.5**.: _Let \(\overline{a}\in{}^{<\theta}M\) and \(b\in\mathrm{cl}(\overline{\mathrm{a}},\mathrm{M})\). Then \(\operatorname{afn}(\mathrm{b},\overline{\mathrm{a}})\) is a subset of \(\mathrm{cl}(\overline{\mathrm{a}})\) and it is affine._
Proof.: First, recall that "\(\operatorname{afn}(\mathrm{b},\overline{\mathrm{a}})\subseteq\mathrm{cl}( \overline{\mathrm{a}},\mathrm{M})\)" holds by the definition. For the second assertion, we have to show that \(\operatorname{afn}(\mathrm{b},\overline{\mathrm{a}})\) is closed under \(x-y+z\). To see this, let \(c_{i}\in\operatorname{afn}(\mathrm{b},\overline{\mathrm{a}})\) for \(i=1,2,3\) and set \(c:=c_{1}-c_{2}+c_{3}.\) Since \(\varphi_{\overline{a}^{\frown}\langle b\rangle}(M)\) is affine-closed,
\[\overline{a}^{\frown}\langle c\rangle=\overline{a}^{\frown}\langle c_{1} \rangle-\overline{a}^{\frown}\langle c_{2}\rangle+\overline{a}^{\frown} \langle c_{3}\rangle\in\varphi_{\overline{a}^{\frown}\langle b\rangle}(M).\]
According to the minimality, \(\varphi_{\overline{a}^{\frown}\langle b\rangle}\leq\varphi_{\overline{a}^{ \frown}\langle c\rangle}\). Thanks to Lemma 4.2, \(c\in\mathrm{cl}(\overline{\mathrm{a}},\mathrm{M})\). Hence \(c\in\operatorname{afn}(\mathrm{b},\overline{\mathrm{a}})\), and we are done.
**Lemma 4.6**.: _The following holds:_
1. \(\operatorname{grp}(\mathrm{b},\overline{\mathrm{a}})\) _is a subgroup of_ \(M.\)__
2. \(\operatorname{afn}(\mathrm{b},\overline{\mathrm{a}})=\{\mathrm{b}+\mathrm{d}: \mathrm{d}\in\operatorname{grp}(\mathrm{b},\overline{\mathrm{a}})\}.\)__
Proof.: For clause (1), let \(c_{i}\in\operatorname{grp}(\mathrm{b},\overline{\mathrm{a}})\), where \(i=1,2\). Following definition, there are some \(b_{i,1},b_{i,2}\in\operatorname{afn}(\mathrm{b},\overline{\mathrm{a}})\) such that \(c_{i}=b_{i,1}-b_{i,2}\). So,
\[c_{1}-c_{2} =(b_{1,1}-b_{1,2})-(b_{2,1}-b_{2,2})\] \[=(b_{1,1}-b_{1,2}+b_{2,2})-b_{2,1}.\]
According to Lemma 4.5, we know \(b_{2,1}^{*}=b_{1,1}-b_{1,2}+b_{2,2}\in\mathrm{afn}(\mathrm{b},\overline{\mathrm{a}})\), and hence by definition of \(\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}})\),
\[c_{1}-c_{2}=b_{2,1}^{*}-b_{2,1}\in\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}}).\]
To prove clause (2), let \(c\in\mathrm{afn}(\mathrm{b},\overline{\mathrm{a}})\). As clearly \(b\in\mathrm{afn}(\mathrm{b},\overline{\mathrm{a}})\), by clause (1), \(-b+c\in\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}})\), hence
\[c=b+(-b+c)\in\big{\{}b+d:d\in\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}}) \big{\}}.\]
Conversely, suppose \(d\in\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}})\). By its definition, there are \(c_{1},c_{2}\in\mathrm{afn}(\mathrm{b},\overline{\mathrm{a}})\) such that \(d=c_{1}-c_{2}\). Consequently, in view of Lemma 4.5, we see
\[b+d=b-c_{2}+c_{1}\in\mathrm{afn}(\mathrm{b},\overline{\mathrm{a}}).\]
The equality follows.
**Definition 4.7**.: Let \(\mathbf{f}=(M,\mathscr{L},\lambda,\kappa,\theta,\Omega)\) be an additive frame such that \(\kappa\in\Omega\) and \(\mathscr{L}\) is closed under \(\exists^{\kappa}x\). We say \(\mathbf{f}\) is very nice, if it satisfies the following extra properties:
1. \(\mathscr{L}_{\mathbf{f}}=\mathscr{L}_{\infty,\theta}^{\mathrm{pe}}(\tau_{M})\), or just \(\mathscr{L}_{\mathbf{f}}\) is closed under \(\exists\overline{x}_{u}\), up to equivalence, where \(|u|<\theta\).
2. For every \(X\subseteq{}^{\varepsilon}M\), there is a formula \(\varphi_{X}(\overline{x})\), such that \(X\subseteq\varphi_{X}(M)\), and if \(\psi(\overline{x})\) is such that \(X\subseteq\psi(M)\), then \(\varphi_{X}(M)\subseteq\psi(M)\).
**Discussion 4.8**.: By a repetition of the argument presented in the proof of Theorem 3.14 we know Definition 4.7(2) holds, when \(M\) is an \(R\)-module and \(\mathbf{f}\) is defined as in 3.14.
**Proposition 4.9**.: _Suppose the additive frame \(\mathbf{f}=(M,\mathscr{L},\lambda,\kappa,\theta,\Omega)\) is very nice. The following conditions hold._
1. _Assume_ \(b\in M\) _and_ \(\overline{a}\in{}^{\varepsilon}M\)_. There exists some formula_ \(\varphi(\overline{x},y)\) _with_ \(\lg(\overline{x})=\varepsilon\) _such that_ \(\varphi(\overline{a},M)=\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}}).\)__
2. _If_ \(c\in\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}})\) _and_ \(b\in\mathrm{cl}(\overline{\mathrm{a}})\) _then_ \(c\in\mathrm{cl}(\overline{\mathrm{a}}).\) _Moreover,_ \(\mathrm{cl}(\overline{\mathrm{a}})\) _is a subgroup of_ \(M\)_._
3. _Let_ \(b\in\mathrm{cl}(\overline{\mathrm{a}})\)_. Then_ \(\varphi_{\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}})}(M)\) _is of cardinality_ \(<\kappa.\)__
Proof.: (1): Define the formula \(\varphi\) as
\[\varphi(\overline{x},y)=(\exists y^{1},y^{2})\bigg{[}\varphi_{\overline{a}^{ \frown}\langle b\rangle}(\overline{x},y^{1})\wedge\varphi_{\overline{a}^{ \frown}\langle b\rangle}(\overline{x},y^{2})\wedge y=y^{2}-y^{1}\bigg{]}.\]
We show that \(\varphi\) is as required. By Definition 4.7, \(\varphi(\overline{x},y)\in\mathscr{L}\).
First, suppose that \(d\in\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}}),\) and let \(b_{1},b_{2}\in\mathrm{afn}(\mathrm{b},\overline{\mathrm{a}})\) be such that \(d=b_{2}-b_{1}.\) Then \(b_{1},b_{2}\) witness \(M\models\varphi(\overline{a},d).\) Hence
\[(*)_{1}\qquad\qquad\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}})\subseteq \varphi(\overline{\mathrm{a}},\mathrm{M}).\]
In order to prove the reverse inclusion, suppose that \(d\in\varphi(\overline{a},M).\) This implies that \(M\models\varphi(\overline{a},d)\). Take \(b_{1},b_{2}\) witnessing this, i.e., \(d=b_{2}-b_{1}\) and
\[M\models\varphi_{\overline{a}^{\sim}\langle b\rangle}(\overline{a},b_{1}) \wedge\varphi_{\overline{a}^{\sim}\langle b\rangle}(\overline{a},b_{2}).\]
On the other hand, for \(l=1,2\) we have
\[M\models\varphi_{\overline{a}^{\sim}\langle b\rangle}(\overline{a},b_{l}) \Rightarrow\varphi_{\overline{a}^{\sim}\langle b\rangle}\leq\varphi_{ \overline{a}^{\sim}\langle b_{l}\rangle},\]
hence \(b_{l}\in\mathrm{afn}(\mathrm{b},\overline{\mathrm{a}})\). Consequently, \(d=b_{2}-b_{1}\in\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}}).\) As \(d\) was arbitrary, we conclude that
\[(*)_{2}\qquad\qquad\varphi(\overline{a},M)\subseteq\mathrm{grp}(\mathrm{b}, \overline{\mathrm{a}}).\]
By \((*)_{1}\) and \((*)_{2}\), we have \(\varphi(\overline{a},M)=\mathrm{grp}(\mathrm{b},\overline{\mathrm{a}}),\) and we are done.
(2): In the light of Lemma 4.5, it suffices to show that \(\mathrm{cl}(\overline{\mathrm{a}})\) is a subgroup of \(M\). To this end, let \(b_{1},b_{2}\in\mathrm{cl}(\overline{\mathrm{a}}),\)\(\varepsilon=\mathrm{lg}(\overline{\mathrm{a}})\) and let \(\varphi(\overline{x},y)\) be as in clause (1). It is easily seen that:
* \(M\models\varphi(\overline{a},b_{2}-b_{1}),\)
* \(\varphi(\overline{a},M)=\{b^{\prime\prime}-b^{\prime}:b^{\prime\prime}\in \varphi_{\overline{a}^{\sim}\langle b_{2}\rangle}(\overline{a},M)\text{ and }b^{\prime}\in\varphi_{\overline{a}^{\sim} \langle b_{1}\rangle}(\overline{a},M)\},\)
* \(\varphi(\overline{a},M)\) has cardinality \(<\kappa.\)
Thanks to the minimality condition,
\[\varphi_{\overline{a}^{\sim}\langle b_{2}-b_{1}\rangle}(M)\subseteq\varphi( \overline{a},M).\]
In other words, \(|\varphi_{\overline{a}^{\sim}\langle b_{2}-b_{1}\rangle}(M)|<\kappa,\) which implies that \(b_{2}-b_{1}\in\mathrm{cl}(\overline{\mathrm{a}}).\) We have proved
\[b_{1},b_{2}\in\mathrm{cl}(\overline{\mathrm{a}})\Rightarrow\mathrm{b}_{2}- \mathrm{b}_{1}\in\mathrm{cl}(\overline{\mathrm{a}}).\]
Therefore, \(\mathrm{cl}(\overline{\mathrm{a}})\) is a subgroup of \(M.\)
(3): The proof is similar to the proof of clause (2).
## 5. Dichotomy behavior of the absolutely co-Hopfian property
This section is devoted to the proof of Theorem 1.3 from the introduction. We start by recalling the definition of (co)-Hopfian modules.
**Definition 5.1**.: Let \(M\) be an \(R\)-module.
* \(M\) is called _Hopfian_ if its surjective \(R\)-endomorphisms are automorphisms.
* \(M\) is called _co-Hopfian_ if its injective \(R\)-endomorphisms are automorphisms.
This can be extended to:
**Definition 5.2**.: Let \(M\) be a \(\tau\)-model.
* \(M\) is called _Hopfian_ if its surjective \(\tau\)-morphisms are \(\tau\)-automorphisms.
* \(M\) is called _co-Hopfian_ if its injective \(\tau\)-morphisms are \(\tau\)-automorphisms.
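To recall the standard examples (they play no role in the proofs below): the group \(\mathbb{Z}\) is Hopfian but not co-Hopfian, as \(x\mapsto 2x\) is injective but not surjective; the Prüfer group \(\mathbb{Z}(p^{\infty})\) is co-Hopfian but not Hopfian, as multiplication by \(p\) is surjective but not injective; and \(\mathbb{Q}\) is both, since every nonzero endomorphism of \(\mathbb{Q}\) is multiplication by a nonzero rational and hence an automorphism.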
For the convenience of the reader, we present the definition of potential isomorphism and discuss some basic facts about it which are used in the paper; we only sketch the proofs in most instances.
**Definition 5.3**.: Let \(M,N\) be two structures of our vocabulary. Recall that \(M\) and \(N\) are called _potentially isomorphic provided they are isomorphic in some forcing extension_.
Recall that a group \(G\) is called absolutely co-Hopfian (resp. Hopfian) if it is co-Hopfian (resp. Hopfian) in any further generic extension of the universe.
**Discussion 5.4**.: Suppose \(M\) and \(N\) are potentially isomorphic. According to [19] and [20], this holds iff for every \(\mathscr{L}_{\infty,\aleph_{0}}\)-sentence \(\varphi\),
\[(M\models\varphi)\Longleftrightarrow(N\models\varphi).\]
We denote this property by \(M\equiv_{\mathscr{L}_{\infty,\aleph_{0}}}N\).
The following is a simple variant of Discussion 5.4. We state it in our context.
**Lemma 5.5**.: _Assume \(M\) and \(N\) are two \(\tau\)-structures._
1. _Suppose for every sentence_ \(\varphi\in\mathscr{L}_{\infty,\aleph_{0}}(\tau)\)_,_ \[(M\models\varphi)\Longrightarrow(N\models\varphi).\] _Then there is an embedding of_ \(M\) _into_ \(N\) _in_ \(V[G_{\mathbb{P}}]\)_, where_ \(\mathbb{P}\) _collapses_ \(|M|+|N|\) _into_ \(\aleph_{0}\)_._
2. _In clause (_1_), it suffices to consider sentences_ \(\varphi\) _in the closure of base formulas under arbitrary conjunctions and_ \(\exists x\)_._
Proof.: We give a proof for completeness. Let \(M=\{a_{n}:n<\omega\}\) be an enumeration of \(M\) in \(V[G_{\mathbb{P}}]\). By induction on \(n\) we define a sequence \(\langle b_{n}:n<\omega\rangle\) of members of \(N\) such that for each formula \(\varphi(x_{0},\cdots,x_{n})\) from \(\mathscr{L}_{\infty,\aleph_{0}}\),
\[(*)_{n}\qquad\quad\big{(}M\models\varphi(a_{0},\cdots,a_{n})\big{)} \Rightarrow\big{(}N\models\varphi(b_{0},\cdots,b_{n})\big{)}.\]
Let
\[\Phi_{0}(x_{0})=\bigwedge\{\varphi(x_{0}):M\models\varphi(a_{0})\}\in \mathscr{L}_{\infty,\aleph_{0}}.\]
Then \(M\models\Phi_{0}(a_{0})\), and by our assumption, there exists some \(b_{0}\in N\) such that \(N\models\Phi_{0}(b_{0})\). Now suppose that \(n<\omega\) and we have defined \(b_{0},\cdots,b_{n}.\) We are going to define \(b_{n+1}\). Let
\[\Phi_{n+1}(x_{0},\ldots,x_{n+1})=\bigwedge\big{\{}\varphi(x_{0},\ldots,x_{n+1} ):M\models\varphi(a_{0},\cdots,a_{n+1})\big{\}}.\]
Clearly, \(\Phi_{n+1}(x_{0},\cdots,x_{n+1})\in\mathscr{L}_{\infty,\aleph_{0}}\). Also,
\[M\models\exists x_{n+1}\Phi_{n+1}(a_{0},\cdots,a_{n},x_{n+1}).\]
According to the induction hypothesis \((*)_{n}\), we have \(N\models\exists x_{n+1}\Phi_{n+1}(b_{0},\cdots,b_{n},x_{n+1})\); hence
\[N\models\Phi_{n+1}(b_{0},\cdots,b_{n+1})\]
for some \(b_{n+1}\in N\). This completes the construction of the sequence \(\langle b_{n}:n<\omega\rangle\). The assignment \(a_{n}\mapsto b_{n}\) defines a map \(f:M\to N\) which is an embedding of \(M\) into \(N\).
**Fact 5.6**.:
1. Let \(\lambda\geq\kappa_{\text{beau}}\) and let \(\langle G_{\alpha}:\alpha<\lambda\rangle\) be a sequence of \(\tau\)-models with \(|\tau|<\kappa_{\text{beau}}(\tau)\). Then in some forcing extension \(\mathbf{V}^{\mathbb{P}}\), \(G_{\alpha}\) is embeddable into \(G_{\beta}\) for some \(\alpha<\beta<\lambda.\) Here, \(\mathbb{P}\) collapses \(|G_{\alpha}|+|G_{\beta}|\) into \(\aleph_{0}\). Moreover, if \(x_{\gamma}\in G_{\gamma}\)
for \(\gamma<\lambda\) then for some \(\alpha<\beta<\lambda\), in some \(\mathbf{V}^{\mathbb{P}}\) there is an embedding of \(G_{\alpha}\) into \(G_{\beta}\) mapping \(x_{\alpha}\) to \({x_{\beta}}\)2. Footnote 2: This explains why [14] gets only indecomposable abelian groups (not endo-rigid).
2. Suppose \(M\) and \(N\) are abelian groups, or \(R\)-modules, and for every sentence \(\varphi\in\mathscr{L}_{\infty,\theta}^{\mathrm{ce}}(\tau)\), we have \[(M\models\varphi)\Longrightarrow(N\models\varphi).\] Then there is an embedding of \(M\) into \(N\) in \(V[G_{\mathbb{P}}]\), where \(\mathbb{P}\) collapses \(|M|+|N|\) into \(\aleph_{0}\).
3. Moreover, we can strengthen the conclusion of part (2) to the following: 1. there is a \(\mathbb{P}\)-name \(\pi\) satisfying: 1. If \(\overline{a}\in(^{<\theta}M)\cap V\) then \(\pi(\overline{a})\in(^{<\theta}N)\cap V\), 2. \(\Vdash_{\mathbb{P}}\pi\) maps \(\overline{a}\in(^{<\theta}M)\cap V\) onto \(\{\overline{b}\in{}^{<\theta}M:\mathrm{rang}(\overline{b})\subseteq\mathrm{ rang}(\pi)\}\).
Proof.: For (1), see [12]. Parts (2) and (3) are standard, see for example [19].
Now, we are ready to prove:
**Theorem 5.7**.: _The following assertions are valid:_
1. _If_ \(M\) _is an abelian group of cardinality_ \(\geq\kappa:=\kappa_{\mathrm{beau}}\)_, then_ \(M\) _is not absolutely co-Hopfian; indeed, after collapsing the size of_ \(M\) _into_ \(\aleph_{0}\)_, there is a one-to-one endomorphism_ \(\varphi\in\mathrm{End}(\mathrm{M})\) _which is not onto._
2. _If_ \(M\) _is an_ \(R\)_-module of cardinality_ \(\geq\kappa=\kappa_{\mathrm{beau}}(R)\)_, then_ \(M\) _is not absolutely co-Hopfian._
3. _If_ \(M\) _is an_ \(\tau\)_-model of cardinality_ \(\geq\kappa=\kappa_{\mathrm{beau}}(\tau)\)_, then_ \(M\) _is not absolutely co-Hopfian._
Proof.: Let \(M\) be an abelian group or an \(R\)-module of size \(|M|\geq\kappa\). Thanks to Theorem 3.14, there exists an additive frame \(\mathbf{f}\) as in that theorem such that \(M:=M_{\mathbf{f}}\), \(\mathscr{L}_{\mathbf{f}}=\mathscr{L}_{\infty,\theta}^{\mathrm{ce}}(\tau)\), \(\kappa_{\mathbf{f}}=\kappa\), \(\lambda_{\mathbf{f}}=\beth_{2}(\kappa)^{+}\) and \(\theta_{\mathbf{f}}=\aleph_{0}\).
The proof splits into two cases:
**Case 1: for some \(\varepsilon<\theta\), and \(\overline{a},\overline{b}\in{}^{\varepsilon}M\), \(\varphi_{\overline{a}}(M)\supsetneqq\varphi_{\overline{b}}(M)\).**
Consider the \(\tau\)-models \((M,\overline{b})\) and \((M,\overline{a})\). Let \(\varphi\in\mathscr{L}_{\infty,\theta}^{\rm ce}(\tau)\) be a sentence, and suppose that \((M,\overline{a})\models\varphi.\) As \(\varphi_{\overline{a}}(M)\supsetneqq\varphi_{\overline{b}}(M)\), it follows that \((M,\overline{b})\models\varphi.\) Thus by Fact 5.6(2), working in the generic extension by \(\mathbb{P}={\rm Col}(\aleph_{0},|M|)\), there exists a one-to-one endomorphism \(\pi\in{\rm End}({\rm M})\) such that \(\pi(\overline{a})=\overline{b}.\)
There is nothing to prove if \(\pi\) is not onto. So, without loss of generality we may and do assume that \(\pi\) is onto. Then \(\pi,\pi^{-1}\in{\rm Aut}({\rm M})\) and \(\pi^{-1}(\overline{b})=\overline{a}\). We claim that \(\varphi_{\overline{a}}\leq\varphi_{\overline{b}}\). Due to the minimality condition for \(\varphi_{\overline{a}}\), it is enough to show that \(\overline{a}\in\varphi_{\overline{b}}(M)\). To this end, recall that \(\overline{b}\in\varphi_{\overline{b}}(M)\). By definition, \(M\models\varphi_{\overline{b}}[\overline{b}]\). In the light of Lemma 2.15(2) we observe that
\[M\models\varphi_{\overline{b}}[\pi^{-1}(\overline{b})],\ \text{i.e.,}\ M\models\varphi_{ \overline{b}}[\overline{a}].\]
By definition, this means that \(\overline{a}\in\varphi_{\overline{b}}(M)\), as requested. Consequently, \(\varphi_{\overline{a}}(M)\subseteq\varphi_{\overline{b}}(M)\), which contradicts our assumption.
**Case 2: not case \(1\)**.
Given two sets \(A,B\subseteq M\), by \(A\|B\) we mean that \(A\subseteq B\) or \(B\subseteq A\). So in this case, the following holds:
\[(*)\colon\qquad\forall\varepsilon<\theta\ {\rm and}\ \forall\overline{a}, \overline{b}\in{}^{\varepsilon}M\ {\rm we\ have}\ \biggl{(}\varphi_{\overline{a}}(M)\|\varphi_{ \overline{b}}(M)\Rightarrow\varphi_{\overline{a}}(M)=\varphi_{\overline{b}}(M) \biggr{)}.\]
Set \(\Gamma=\{\varphi_{\overline{a}}:\overline{a}\in{}^{<\theta}M\}\). Now, we have the following easy claim.
**Claim 5.8**.: \(\Gamma\) _is a set of cardinality \(<\kappa\)._
Proof.: To see this, set \(\Gamma_{\varepsilon}:=\{\varphi_{\overline{a}}:\overline{a}\in{}^{\varepsilon} M\}\). Clearly, \(\Gamma=\bigcup_{\varepsilon<\theta}\Gamma_{\varepsilon}\), and since \(\theta<\kappa\) and \(\kappa\) is regular, it suffices to show that \(|\Gamma_{\varepsilon}|<\kappa\) for all \(\varepsilon<\theta\). Suppose not, and take \(\varepsilon<\theta\) such that \(|\Gamma_{\varepsilon}|\geq\kappa\). This enables us to find a sequence \(\langle\overline{a}_{\alpha}:\alpha<\kappa\rangle\) in \({}^{\varepsilon}M\) such that
* \(\forall\alpha<\kappa,\ \varphi_{\overline{a}_{\alpha}}\in\Gamma_{\varepsilon}\)
* \(\forall\alpha\neq\beta,\)\(\varphi_{\overline{a}_{\alpha}}(M)\neq\varphi_{\overline{a}_{\beta}}(M).\)
We apply the property presented in Definition 3.1(4)(a) to the family \(\{\varphi_{\overline{a}_{\alpha}}\}_{\alpha<\kappa}\) to find some \(\alpha<\beta<\kappa\) such that \(\varphi_{\overline{a}_{\beta}}(M)\supseteq\varphi_{\overline{a}_{\alpha}}(M).\) By \((*)\), this implies that \(\varphi_{\overline{a}_{\beta}}(M)=\varphi_{\overline{a}_{\alpha}}(M)\), which contradicts \(\bullet_{2}\).
Let \(\chi:=|M|\geq\kappa\), and let \(\mathbb{P}:=\operatorname{Col}(\aleph_{0},\chi)\). Forcing with \(\mathbb{P}\), collapses \(|M|\) into \(\aleph_{0}\), i.e., for any \(\mathbb{P}\)-generic filter \(G_{\mathbb{P}}\) over \(V\), we have
\[V[G_{\mathbb{P}}]\models\text{``}M\text{ is countable''}.\]
We are going to show that in \(V[G_{\mathbb{P}}]\) there exists a 1-1 map \(\pi:M\to M\) which is not surjective. To this end, we define approximations to such a \(\pi\):
* Let \(\mathrm{AP}\) be the set of all triples \((\overline{a},\overline{b},c)\) such that:
* \(\overline{a},\overline{b}\in\)\({}^{\varepsilon}M\) for some \(\varepsilon<\theta\),
* \(\varphi_{\overline{a}}(\overline{x})\equiv\varphi_{\overline{b}}(\overline{x})\) (in \(M\)),
* \(c\in M\) is such that \(c\notin\operatorname{cl}(\overline{\mathrm{b}},\mathrm{M})\), i.e., \(\varphi_{\overline{b}^{\,\,\sim}\langle c\rangle}(M)\) has cardinality \(\geq\kappa\).
**Claim 5.9**.: \(\mathrm{AP}\neq\emptyset\)_._
Proof.: According to Lemma 4.2(a), \(\operatorname{cl}(\overline{a},M)\) has cardinality \(<\kappa\) for every \(\overline{a}\). In particular, \(|\operatorname{cl}(\emptyset)|<\kappa\), and hence, as \(|M|\geq\kappa\), we can find some \(c\in M\setminus\operatorname{cl}(\emptyset)\); consequently, \((\langle\rangle,\langle\rangle,c)\in\mathrm{AP}\). The claim follows.
Next, we bring the following claim, which plays the key role in our proof.
**Claim 5.10**.: _Suppose \((\overline{a},\overline{b},c)\in\mathrm{AP}\) and \(d_{1}\in M\). Then there is some \(d_{2}\in M\) such that_
\[\left(\overline{a}^{\,\,\sim}\langle d_{1}\rangle,\overline{b}^{\,\,\sim} \langle d_{2}\rangle,c\right)\in\mathrm{AP}.\]
Proof.: Recall that in \(M\) we have \(\varphi_{\overline{a}}(\overline{x})\equiv\varphi_{\overline{b}}(\overline{x})\). First, we use this to find \(d\in M\) such that
\[(\dagger)\qquad\quad M\models\varphi_{\overline{a}^{\,\,\sim}\langle d_{1} \rangle}(\overline{b},d).\]
Indeed, we look at the formula
\[\psi(\overline{x}):=\exists y\varphi_{\overline{a}^{\,\,\sim}\langle d_{1} \rangle}(\overline{x},y).\]
Since \(\overline{a}\in\psi(M)\), and due to the minimality of \(\varphi_{\overline{a}}\) with respect to this property, we should have
\[\overline{b}\in\varphi_{\overline{b}}(M)=\varphi_{\overline{a}}(M)\subseteq \psi(M).\]
In other words,
\[M\models\exists y\varphi_{\overline{a}^{\,\,\sim}\langle d_{1}\rangle}( \overline{b},y).\]
Hence, for some \(d\in M\) we must have \(M\models\varphi_{\overline{a}^{\,\cdot}\langle d_{1}\rangle}(\overline{b},d).\) So, \((\dagger)\) is proved. From this, \(\overline{b}^{\,\cdot}\langle d\rangle\in\varphi_{\overline{a}^{\,\cdot}\langle d _{1}\rangle}(M)\). This implies, using the minimality condition on formulas, that \(\varphi_{\overline{a}^{\,\cdot}\langle d_{1}\rangle}(M)\subseteq\varphi_{ \overline{b}^{\,\cdot}\langle d\rangle}(M)\). Combining this along with \((*)\) yields that
\[\varphi_{\overline{a}^{\,\cdot}\langle d_{1}\rangle}(M)=\varphi_{\overline{b}^ {\,\cdot}\langle d\rangle}(M).\]
First, we deal with the case \(d\in\mathrm{cl}(\overline{\mathrm{b}},\mathrm{M})\). Thanks to Lemma 4.2(c), and recalling that \(\Omega\) contains \(\{1,\kappa\}\), we know \(\mathrm{cl}(\overline{\mathrm{b}^{\,\cdot}\langle d\rangle})=\mathrm{cl}( \overline{\mathrm{b}})\). Consequently, \(c\notin\mathrm{cl}(\overline{\mathrm{b}^{\,\cdot}\langle d\rangle})\). Following definition, one has
\[\big{(}\overline{a}^{\frown}\langle d_{1}\rangle,\overline{b}^{\frown}\langle d\rangle,c\big{)}\in\mathrm{AP},\]
and we are done by taking \(d_{2}=d.\) So, without loss of generality let us assume that \(d\notin\mathrm{cl}(\overline{\mathrm{b}},\mathrm{M}).\) It follows that the set
\[\mathbf{I}:=\left\{c^{\prime}\in M:M\models\varphi_{\overline{b}^{\,\cdot} \langle d\rangle}[\overline{b},c^{\prime}]\right\}\]
has cardinality \(\geq\kappa.\) Therefore, there is \(c^{\prime}\in\mathbf{I}\setminus\mathrm{cl}(\overline{\mathrm{b}^{\,\cdot} \langle d\rangle},\mathrm{M}).\) By the minimality condition on \(\varphi_{\overline{b}^{\,\cdot}\langle d\rangle}\) and since \(\overline{b}^{\,\cdot}\langle c^{\prime}\rangle\in\varphi_{\overline{b}^{\, \cdot}\langle d\rangle}(M)\), we have \(\varphi_{\overline{b}^{\,\cdot}\langle d\rangle}(M)\subseteq\varphi_{ \overline{b}^{\,\cdot}\langle c^{\prime}\rangle}(M)\). Now, we use this along with \((*)\) and deduce that
\[\varphi_{\overline{b}^{\,\cdot}\langle d\rangle}(M)=\varphi_{\overline{b}^{\, \cdot}\langle c^{\prime}\rangle}(M).\]
Then, in the same vein as above, we can find some \(d^{\prime}\) such that \(\varphi_{\overline{b}^{\,\cdot}\langle c,d^{\prime}\rangle}(M)=\varphi_{ \overline{b}^{\,\cdot}\langle c^{\prime},d\rangle}(M).\) This yields that \(c\notin\mathrm{cl}(\overline{\mathrm{b}^{\,\cdot}\langle d^{\prime}\rangle}, \mathrm{M})\), hence \(d_{2}=d^{\prime}\) is as required.
In \(V[G_{\mathbb{P}}]\), \(M\) is countable. Let us list \(M\) as \(\{a_{i}:i<\omega\}\). We define \(\pi:M\to M\) by evaluating \(\pi\) at \(a_{i}\). We do this by induction, in such a way that for some fixed \(c\in M\) and all \(n<\omega\), if \(\pi(a_{i})=b_{i}\), then
\[(\dagger\dagger)_{n}\qquad\quad\big{(}\langle a_{i}:i<n\rangle,\langle b_{i}:i< n\rangle,c\big{)}\in\mathrm{AP}.\]
Recall from Claim 5.9 and its proof that there is some \(c\) in \(M\) such that \((\langle\rangle\langle\rangle,c)\in\mathrm{AP}\). Let us apply Claim 5.10 to
* \(\overline{a}:=\langle\rangle\)
* \(\overline{b}:=\langle\rangle\)
* \(d_{1}:=a_{0}\).
This gives us an element \(b_{0}\in M\) such that \(\big{(}\langle a_{0}\rangle,\langle b_{0}\rangle,c\big{)}\in\text{AP}.\) Let \(\pi(a_{0})=b_{0}\). Now, suppose inductively we have defined \(\pi(a_{i})=b_{i}\) for all \(i<n\) such that \((\dagger\dagger)_{n}\) is true. Let us apply Claim 5.10 to
* \(\overline{a}:=\langle a_{i}:i<n\rangle\),
* \(\overline{b}:=\langle b_{i}:i<n\rangle\),
* \(d_{1}:=a_{n}\).
This gives us an element \(b_{n}\in M\) such that
\[\big{(}\overline{a}^{\,\cdot}\langle a_{n}\rangle,\overline{b}^{\,\cdot}\langle b_{n}\rangle,c\big{)}\in\text{AP}.\]
Let \(\pi(a_{n})=b_{n}\), and note that \((\dagger\dagger)_{n+1}\) holds as well. This completes the inductive definition of \(\pi\).
The proof becomes complete if we can show the following three items are satisfied:
1. \(\pi\) is a homomorphism.
2. \(\pi\) is 1-to-1.
3. \(\pi\) is not surjective.
Let us check these:
1. Suppose \(\varphi=\varphi_{\overline{a}}\) is a first order formula, hence, without loss of generality, the length of \(\overline{a}\) is finite and we can enlarge \(\overline{a}\) to \(\langle a_{i}:i<n\rangle\) for some \(n<\omega\). Recall that \(b_{i}:=\pi(a_{i})\). By the construction \[\big{(}\langle a_{j}:j<n\rangle,\langle b_{j}:j<n\rangle,c\big{)}\in\text{AP}.\] Assume \(M\models\varphi(\overline{a})\). We show that \(M\models\varphi(\overline{b})\). We have \(\overline{b}\in\varphi_{\overline{b}}(M)\) and by our construction, \(\varphi_{\overline{b}}(M)=\varphi_{\overline{a}}(M)=\varphi(M)\), hence \(\overline{b}\in\varphi(M)\), which means that \(M\models\varphi(\overline{b})\). From this, it immediately follows that \(\pi\) is a homomorphism.
2. Following the construction given by Claim 5.10, we can always find \(b_{n}\) so that \(b_{n}\neq b_{i}\) for all \(i<n\). So, \(\pi\) is 1-to-1.
3. Suppose by the way of contradiction that \(\pi\) is surjective. In particular, we can find some \(n<\omega\) such that \(c=\pi(a_{n})=b_{n}\). In the light of Lemma 4.2(b) we observe that \[c=b_{n}\in\text{cl}\big{(}\langle b_{i}:i<n+1\rangle,\text{M}\big{)},\]
which contradicts the following fact
\[\big{(}\langle a_{i}:i<n+1\rangle,\langle b_{i}:i<n+1\rangle,c\big{)}\in\text{AP}.\]
The proof is now complete.
(3): The proof of this item is similar to the first part.
## § 6. A new construction of Hopfian groups
In this section we give a new construction of absolutely Hopfian groups. Let us start with the following well-known definition.
**Definition 6.1**.: A group \(G\) is pure in a torsion-free abelian group \(H\) if \(G\subseteq H\) and \(nG=nH\cap G\) for every \(n\in\mathbb{Z}\).
_Notation 6.2_.: i) Let \(G\) be an abelian torsion free group and \(Y\) be a subset of \(G\). By \(\text{PC}_{G}(Y)\) we mean the minimal pure subgroup of \(G\) which includes \(Y.\)
ii) The notation \(\mathbb{Q}_{R}\) stands for the field of fractions of the integral domain \(R\).
**Lemma 6.3**.: _Suppose \(\mathbf{f}\) is an endomorphism of \(M\) such that:_
\[\begin{array}{ll}(a)&R\text{ is a commutative torsion free ring with }1_{R},\\ (b)&\mathbf{f}\text{ maps }\varphi_{0}(M)\text{ onto }\varphi_{0}(M)\text{,}\\ (c)&M\text{ is a torsion free }R\text{-module,}\\ (d)&\varphi_{\varepsilon}(x)\in\mathscr{L}_{\infty,\aleph_{0}}^{ep}(\tau_{R}) \text{, for }\varepsilon\leq\varepsilon_{*}\text{ where }ep\text{ stands for existential positive (or}\\ &\text{ just generated from the atomic formulas by }\exists\text{ and }\wedge)\text{,}\\ (e)&\langle\varphi_{\varepsilon}(M):\varepsilon\leq\varepsilon_{*}\rangle \text{ is }\subseteq\text{-decreasing continuous of sub-modules of }M\text{,}\\ (f)&\varphi_{\varepsilon_{*}}(M)=\{0_{M}\}\text{,}\\ (g)&\varphi_{\varepsilon}(M)/\varphi_{\varepsilon+1}(M)\text{ is torsion free of rank 1, so isomorphic to some sub-}R\text{-module of }\mathbb{Q}_{R}\text{ considered as an }R\text{-module,}\\ (h)&x_{\varepsilon}\in\varphi_{\varepsilon}(M)\backslash\varphi_{\varepsilon+ 1}(M)\text{ for }\varepsilon<\varepsilon_{*}\text{,}\\ (i)&\varphi_{0}(M)=\text{PC}_{M}(\{x_{\varepsilon}:\varepsilon<\varepsilon_{*} \})\text{.}\end{array}\]
_Then \(\mathbf{f}\upharpoonright\varphi_{0}(M)\) is one-to-one._
Proof.: For \(\varepsilon<\varepsilon_{*}\) let \(y_{\varepsilon}:=\mathbf{f}(x_{\varepsilon})\). As \(\varphi_{\varepsilon}(x)\in\mathscr{L}_{\infty,\aleph_{0}}(\tau_{R})\) is existential positive, clearly \(\mathbf{f}\) maps \(\varphi_{\varepsilon}(M)\) into \(\varphi_{\varepsilon}(M)\), hence \(y_{\varepsilon}\in\varphi_{\varepsilon}(M)\). Set
\[\mathscr{U}:=\{\varepsilon<\varepsilon_{*}:y_{\varepsilon}\in\varphi_{ \varepsilon+1}(M)\}.\]
**Claim 6.4**.: \(\mathscr{U}=\emptyset\)_._
Proof.: Assume towards a contradiction that \(\mathscr{U}\neq\emptyset\) and let \(\zeta\) be the first member of \(\mathscr{U}\). As we are assuming "\(\mathbf{f}\) maps \(\varphi_{0}(M)\) onto \(\varphi_{0}(M)\)" there is \(z\in\varphi_{0}(M)\) such that \(\mathbf{f}(z)=x_{\zeta}\). Let \(R^{+}:=R\setminus\{0\}\). Since \(z\in\varphi_{0}(M)\) by items (i) and (c) of the assumption of the claim we can find
* \(b\in R^{+},n\in\mathbb{N}\)
* \(\varepsilon_{0}<\ldots<\varepsilon_{n}<\varepsilon_{*}\) and
* \(a_{0},\ldots,a_{n}\in R^{+}\),
such that
\[M\models``bz=\sum\limits_{\ell\leq n}a_{\ell}x_{\varepsilon_{\ell}}".\]
Now applying \(\mathbf{f}\) we have
\[M\models``bx_{\zeta}=\sum\limits_{\ell\leq n}a_{\ell}y_{\varepsilon_{\ell}}".\]
We consider two cases:
Case 1: \(\varepsilon_{0}<\zeta\).
To see this, note that \(bx_{\zeta}\in\varphi_{\zeta}(M)\subseteq\varphi_{\varepsilon_{0}+1}(M)\) and for \(\ell>0\) we have \(a_{\ell}y_{\varepsilon_{\ell}}\in\varphi_{\varepsilon_{\ell}}(M)\subseteq \varphi_{\varepsilon_{0}+1}(M)\), so as \(bx_{\zeta}=\sum\limits_{\ell\leq n}a_{\ell}y_{\varepsilon_{\ell}}\) we get \(a_{0}y_{\varepsilon_{0}}\in\varphi_{\varepsilon_{0}+1}(M)\) and as \(a_{0}\in R^{+}\) by clause (f) we get \(y_{\varepsilon_{0}}\in\varphi_{\varepsilon_{0}+1}(M)\). This contradicts \(\varepsilon_{0}<\zeta=\min(\mathscr{U})\).
Case 2: \(\varepsilon_{0}\geq\zeta\).
On the one hand as \(b\in R^{+}\) clearly \(bx_{\zeta}\notin\varphi_{\zeta+1}(M)\). On the other hand
1. \(\varepsilon_{\ell}>\zeta\Rightarrow a_{\ell}y_{\varepsilon_{\ell}}\in\varphi_ {\varepsilon_{\ell}}(M)\subseteq\varphi_{\varepsilon_{0}+1}(M)\subseteq \varphi_{\zeta+1}(M)\) and
2. \(\varepsilon_{\ell}=\zeta\Rightarrow\ell=0\Rightarrow y_{\varepsilon_{\ell}}= y_{\zeta}\in\varphi_{\zeta+1}(M)\).
Hence \(\sum\limits_{\ell\leq n}a_{\ell}y_{\varepsilon_{\ell}}\in\varphi_{\zeta+1}(M)\). Combining these together we get a contradiction to
\[M\models``bx_{\zeta}=\sum\limits_{\ell\leq n}a_{\ell}y_{\varepsilon_{\ell}}",\]
and we are done. The claim follows.
Now, if \(x\in\varphi_{0}(M)\backslash\{0_{M}\}\), then by clause (i) of the assumption for some \(b\in R^{+},n\) and \(\varepsilon_{0}<\ldots<\varepsilon_{n}<\varepsilon_{*}\) and \(a_{\ell}\in R^{+}\) for \(\ell\leq n\) we have
\[M\models\text{``}bx=\sum_{\ell\leq n}a_{\ell}x_{\varepsilon_{\ell}}\text{''}\]
hence
\[b\mathbf{f}(x)=\sum_{\ell\leq n}a_{\ell}\mathbf{f}(x_{\varepsilon_{\ell}})=\sum_{\ell\leq n}a_{\ell}y_{\varepsilon_{\ell}}\in a_{0}y_{\varepsilon_{0}}+\varphi_{\varepsilon_{0}+1}(M).\]
As \(a_{0}\in R^{+}\), and in the light of clause (h) we observe \(y_{\varepsilon_{0}}\in\varphi_{\varepsilon_{0}}(M)\backslash\varphi_{\varepsilon_{0}+1}(M)\). By items (f) and (g) of the assumption, \(a_{0}y_{\varepsilon_{0}}\notin\varphi_{\varepsilon_{0}+1}(M)\). Due to the previous sentence, we know \(b\mathbf{f}(x)\neq 0_{M}\). Hence \(\mathbf{f}(x)\neq 0_{M}\). So, we have proved that \(\mathbf{f}\) maps any non-zero member of \(\varphi_{0}(M)\) into a non-zero member of \(\varphi_{0}(M)\). In other words, \(\mathbf{f}\upharpoonright\varphi_{0}(M)\) is one-to-one as promised.
|
2309.08571 | A Bayesian Approach to Robust Inverse Reinforcement Learning | We consider a Bayesian approach to offline model-based inverse reinforcement
learning (IRL). The proposed framework differs from existing offline
model-based IRL approaches by performing simultaneous estimation of the
expert's reward function and subjective model of environment dynamics. We make
use of a class of prior distributions which parameterizes how accurate the
expert's model of the environment is to develop efficient algorithms to
estimate the expert's reward and subjective dynamics in high-dimensional
settings. Our analysis reveals a novel insight that the estimated policy
exhibits robust performance when the expert is believed (a priori) to have a
highly accurate model of the environment. We verify this observation in the
MuJoCo environments and show that our algorithms outperform state-of-the-art
offline IRL algorithms. | Ran Wei, Siliang Zeng, Chenliang Li, Alfredo Garcia, Anthony McDonald, Mingyi Hong | 2023-09-15T17:37:09Z | http://arxiv.org/abs/2309.08571v2 | # A Bayesian Approach to Robust Inverse Reinforcement Learning
###### Abstract
We consider a Bayesian approach to offline model-based inverse reinforcement learning (IRL). The proposed framework differs from existing offline model-based IRL approaches by performing _simultaneous_ estimation of the expert's reward function and subjective model of environment dynamics. We make use of a class of prior distributions which parameterizes how accurate the expert's model of the environment is to develop efficient algorithms to estimate the expert's reward and subjective dynamics in high-dimensional settings. Our analysis reveals a novel insight that the estimated policy exhibits robust performance when the expert is believed (a priori) to have a highly accurate model of the environment. We verify this observation in the MuJoCo environments and show that our algorithms outperform state-of-the-art offline IRL algorithms.1
Footnote 1: [https://github.com/rw422scarlet/bmirl_tf](https://github.com/rw422scarlet/bmirl_tf)
Inverse Reinforcement Learning, Bayesian Inference, Robustness
## 1 Introduction
Inverse reinforcement learning (IRL) is the problem of extracting the reward function and policy of a value-maximizing agent from its behavior [1; 2]. IRL is an important tool in domains where manually specifying reward functions or policies is difficult, such as in autonomous driving [3], or when the extracted reward function can reveal novel insights about a target population and be used to devise interventions, such as in biology, economics, and human-robot interaction studies [4; 5; 6]. However, wider applications of IRL face two interrelated algorithmic challenges: 1) having access to the target deployment environment or an accurate simulator thereof and 2) robustness of the learned policy and reward function due to the covariate shift between the training and deployment environments or datasets [7; 8; 9].
In this paper, we focus on model-based offline IRL to address challenge 1). A notable class of model-based offline IRL methods estimate the dynamics and reward in a _two-stage_ fashion (see Figure 1) [10; 11; 12; 13]. In the first stage, a dynamics model is estimated from the offline dataset. Then, the dynamics model is fixed and used as a simulator to train the reward and policy in the second stage. To overcome covariate shift in the estimated dynamics, recent methods design density estimation-based "pessimistic" penalties to prevent the learner policy from entering uncertainty regions in the state-action space (i.e., space not covered in the demonstration dataset) [14; 13; 12].
We instead approach IRL from a Bayesian modeling perspective, where we _simultaneously_ estimate the expert's reward function and their _internal_ model of the environment dynamics. The core idea is that expert decisions convey their beliefs about the environment [15; 16] and thus should affect the update direction of the dynamics model. This Bayesian model of perception and action, related to
Figure 1: Objectives of the traditional two-stage IRL and the proposed simultaneous estimation approach of Bayesian model-based IRL.
Bayesian Theory of Mind [15], has been used to understand human biases encoded in the internal dynamics in simple and highly constrained domains [17; 16; 18; 19; 20; 21; 22; 23; 24]. In contrast to these works, we study how a Bayesian model enables learning high-performance and robust policies.
We first propose a class of priors parameterizing how accurate we believe the expert's model of the environment is. We then present our _key observation_ that if the expert is believed a priori to have a highly accurate model, robustness emerges naturally in the Bayesian modeling framework by a _simultaneous_ estimation approach in which planning is performed against the worst-case dynamics outside the offline data distribution. We then analyze how varying the prior affects the performance of the learner agent and pair our analysis with a set of algorithms which extend previous simultaneous estimation approaches [18; 19] to high-dimensional continuous-control settings. We show that the proposed algorithms outperform state-of-the-art (SOTA) offline IRL methods in the MuJoCo environments without the need for designing pessimistic penalties.
## 2 Preliminaries
### Markov Decision Process
We consider modeling agent behavior using infinite-horizon _entropy-regularized_ Markov decision processes (MDP; [25]) defined by tuple \((\mathcal{S},\mathcal{A},P,R,\mu,\gamma)\) with state space \(\mathcal{S}\), action space \(\mathcal{A}\), transition probability distribution \(P(s^{\prime}|s,a)\in\Delta(\mathcal{S})\), reward function \(R(s,a)\in\mathbb{R}\), initial state distribution \(\mu(s_{0})\in\Delta(\mathcal{S})\), and discount factor \(\gamma\in(0,1)\). We denote the discounted occupancy measure as \(\rho_{P}^{\pi}(s,a)=\mathbb{E}_{\mu,P,\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}P( s_{t}=s,a_{t}=a)\right]\) and the marginal state-action distribution as \(d_{P}^{\pi}(s,a)=(1-\gamma)\rho_{P}^{\pi}(s,a)\). We further denote the discounted occupancy measure starting from a specific state-action pair \((s,a)\) with \(\rho_{P}^{\pi}(\tilde{s},\tilde{a}|s,a)\). The agent selects actions from an optimal policy \(\pi(a|s)\in\Delta(\mathcal{A})\) that achieves the maximum expected discounted cumulative rewards and policy entropy \(\mathcal{H}(\pi(\cdot|s))=-\sum_{\tilde{\alpha}}\pi(\tilde{a}|s)\log\pi(\tilde {a}|s)\) in the MDP:
\[\max_{\pi}J_{P}(\pi)=\mathbb{E}_{\mu,P,\pi}\left[\sum_{t=0}^{\infty}\gamma^{t} \Big{(}R(s_{t},a_{t})+\mathcal{H}(\pi(\cdot|s_{t}))\Big{)}\right] \tag{1}\]
The optimal policy satisfies the following conditions [26]:
\[\pi(a|s) \propto\exp\left(Q(s,a)\right) \tag{2}\] \[Q(s,a) =R(s,a)+\gamma\mathbb{E}_{P(s^{\prime}|s,a)}\left[V(s^{\prime})\right]\] \[V(s) =\log\sum_{a^{\prime}}\exp\left(Q(s,a^{\prime})\right)\]
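For concreteness, the fixed-point conditions in (2) can be computed directly by soft value iteration in a small tabular MDP. The sketch below is not part of the original paper; the state/action sizes, random transition model, and rewards are made-up placeholders used only to illustrate the recursion \(Q=R+\gamma PV\), \(V=\log\sum_{a}e^{Q}\), \(\pi\propto e^{Q}\).

```python
import numpy as np

# Minimal soft value iteration for an entropy-regularized tabular MDP,
# illustrating the fixed-point conditions in Eq. (2).
rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9

P = rng.random((S, A, S))
P /= P.sum(axis=-1, keepdims=True)        # P[s, a, s'] transition probabilities
R = rng.normal(size=(S, A))               # reward R(s, a)

V = np.zeros(S)
for _ in range(500):
    Q = R + gamma * P @ V                 # Q(s,a) = R(s,a) + gamma * E_{s'}[V(s')]
    V_new = np.log(np.exp(Q).sum(axis=1)) # V(s) = log sum_a exp Q(s,a)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

pi = np.exp(Q - V[:, None])               # pi(a|s) proportional to exp Q(s,a)
assert np.allclose(pi.sum(axis=1), 1.0)
print("soft-optimal policy:\n", np.round(pi, 3))
```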
### Inverse Reinforcement Learning
The majority of contemporary IRL approaches have converged on the Maximum Causal Entropy (MCE) IRL framework, which aims to find a reward function \(R_{\theta}(s,a)\) with parameters \(\theta\) such that the entropy-regularized learner policy \(\tilde{\pi}\) has matching state-action feature with the unknown expert policy \(\pi\)[27].
A related formulation casts IRL as maximum _discounted_ likelihood (ML) estimation [28; 29; 30], subject to the constraint that the policy is entropy-regularized. Given a dataset of \(N\) expert trajectories each of length \(T\): \(\mathcal{D}=\{\tau_{i}\}_{i=1}^{N},\tau=(s_{1:T},a_{1:T})\) sampled from the expert policy in environment \(P\) with occupancy measure \(\rho_{\mathcal{D}}:=\rho_{P}^{\pi}\), ML-IRL aims to solve the following optimization problem:
\[\max_{\theta}\mathbb{E}_{(s_{t},a_{t})\sim\mathcal{D}}\left[\sum_{t=0}^{T} \gamma^{t}\log\hat{\pi}_{\theta}(a_{t}|s_{t})\right],\quad\text{s.t. }\hat{\pi}_{\theta}(a|s)=\arg\max_{\tilde{\pi}\in\Pi}\mathbb{E}_{\rho_{P}^{ \pi}}\left[R_{\theta}(s,a)+\mathcal{H}(\tilde{\pi}(\cdot|s)\right] \tag{3}\]
where the policy is implicitly parameterized by the reward parameters \(\theta\).
It can be shown that for sufficiently large \(T\rightarrow\infty\), MCE-IRL and ML-IRL are equivalent under linear reward parameterization [28; 29], however (3) permits non-linear reward parameterization
through the following surrogate optimization problem:
\[\max_{\theta}\mathbb{E}_{\rho_{\mathcal{D}}}\left[R_{\theta}(s,a)\right]-\mathbb{E }_{\rho_{\mathcal{D}}^{*}}\left[R_{\theta}(s,a)\right],\quad\text{s.t. }\hat{\pi}_{\theta}(a|s)=\arg\max_{\hat{\pi}\in\Pi}\mathbb{E}_{\rho_{ \mathcal{D}}^{*}}\left[R_{\theta}(s,a)+\mathcal{H}(\hat{\pi}(\cdot|s)\right] \tag{4}\]
(4) can be efficiently solved via alternating training of the learner policy and the reward function, similar to Generative Adversarial Network (GAN)-based algorithms [31, 32, 33, 34, 35, 36]. However, these methods all require access to the ground truth environment dynamics or a high quality simulator in order to compute or sample from the learner occupancy measure \(\rho_{\mathcal{D}}^{*}\).
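The reward half of this alternating scheme is easiest to see for a linear reward \(R_{\theta}(s,a)=\theta^{\top}\phi(s,a)\): the gradient of (4) with respect to \(\theta\) is the gap between expert and learner feature expectations. The fragment below is a minimal sketch under that linear-reward assumption; the feature batches are random placeholders standing in for sampled expert data and learner rollouts.

```python
import numpy as np

# One reward-ascent step of the surrogate objective (4) for a linear reward
# R_theta(s, a) = theta^T phi(s, a): the gradient is the difference between
# expert and learner feature expectations, estimated from sampled batches.
rng = np.random.default_rng(1)
d_feat, lr = 8, 0.1
theta = np.zeros(d_feat)

phi_expert = rng.normal(size=(256, d_feat))   # features of expert state-actions
phi_learner = rng.normal(size=(256, d_feat))  # features of learner rollouts

grad = phi_expert.mean(axis=0) - phi_learner.mean(axis=0)
theta += lr * grad                            # push reward up on expert data, down on learner data
print("reward weights after one step:", np.round(theta, 3))
```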
### Offline Model-Based IRL & RL
Existing offline model-based IRL algorithms such as [10, 11] adapt (4) using a two-stage process. First, an estimate \(\hat{P}\) of the environment dynamics is obtained from the offline dataset, e.g., using maximum likelihood estimation. Then, \(\hat{P}\) is fixed and used in place of \(P\) to compute \(\rho_{\mathcal{D}}^{*}\) while optimizing (4). However, this simple replacement incurs a gap between (4) and (3) which scales with the dynamics model error and the estimated value [13]. This puts a high demand on the accuracy of the estimated dynamics.
A related challenge is to prevent the policy from exploiting inaccuracies in the estimated dynamics, which can lead to erroneously high value estimates. This has been extensively studied in both online and offline model-based RL literature [37, 38, 39, 40]. The majority of recent offline model-based RL methods combat model-exploitation via a notion of "pessimism", which penalizes the learner policy from visiting states where the model is likely to be incorrect [37]. These pessimistic penalties are often designed based on quantifying uncertainty about transition dynamics through the estimated model [41, 42]. Drawing on these advances, recent offline IRL methods also incorporate pessimistic penalties into their RL subroutine [13, 12, 14]. However, designing pessimistic penalties involves nontrivial decisions to ensure that they can accurately capture out-of-distribution samples [43].
An alternative approach to avoid model-exploitation is to perform policy training against the worst-case dynamics in out-of-distribution states [44], similar to robust MDP [45, 46]. Rigter et al. [47] implemented this idea in the RAMBO algorithm and showed that it is competitive with pessimistic penalty-based approaches while requiring no penalty design and significantly less tuning. We will show that robust MDP corresponds to a sub-problem of IRL under the Bayesian formulation.
We list additional related work in Appendix A.
## 3 A Bayesian Approach to Model-based IRL
We consider model-based IRL under a Bayesian framework (BM-IRL), where the observed expert decisions are the results of the unknown reward function \(R_{\theta_{1}}(s,a)\) and their _internal_ model of the environment dynamics \(\hat{P}_{\theta_{2}}(s^{\prime}|s,a)\).
We denote the concatenated parameters with \(\theta=\{\theta_{1},\theta_{2}\}\) and condition the policy on \(\theta\) as \(\hat{\pi}(a|s;\theta)\) to emphasize that the expert configuration is determined by both the reward and dynamics parameters.
Upon observing a finite set of expert demonstrations \(\mathcal{D}\), BM-IRL aims to compute the posterior distribution \(\mathbb{P}(\theta|\mathcal{D})\) given a choice of a prior distribution \(\mathbb{P}(\theta)\):
\[\mathbb{P}(\theta|\mathcal{D})\propto\mathbb{P}(\mathcal{D}|\theta)\mathbb{P}( \theta)=\prod_{i=1}^{N}\prod_{t=1}^{T}\hat{\pi}(a_{i,t}|s_{i,t};\theta) \mathbb{P}(\theta) \tag{5}\]
where we have omitted \(\prod_{i=1}^{N}\prod_{t=1}^{T}P(s_{i,t+1}|s_{i,t},a_{i,t})\) from the likelihood because it does not depend on \(\theta\).
We consider a class of prior distributions of the form:
\[\mathbb{P}(\theta)\propto\exp\left(\lambda\sum_{i=1}^{N}\sum_{t=1}^{T}\log \hat{P}_{\theta_{2}}(s_{i,t+1}|s_{i,t},a_{i,t})\right) \tag{6}\]
where the prior precision hyperparameter \(\lambda\) represents how accurate we believe the expert's model of the environment is.
Let \(\mathcal{L}(\theta):=\frac{1}{NT}\log\mathbb{P}(\theta|\mathcal{D})\). It can be easily verified that
\[\mathcal{L}(\theta)=\mathbb{E}_{(s,a,s^{\prime})\sim\mathcal{D}}\left[\log\hat{ \pi}(a|s;\theta)+\lambda\log\hat{P}_{\theta_{2}}(s^{\prime}|s,a)\right]\]
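This objective can be estimated directly from transition batches. The helper below is a minimal sketch that assumes tabular arrays for \(\hat{\pi}\) and \(\hat{P}_{\theta_{2}}\) purely for illustration; the paper itself uses neural parameterizations, and the arrays and indices here are placeholders.

```python
import numpy as np

# Monte-Carlo estimate of the BM-IRL objective
#   L(theta) = E_D[ log pi_hat(a|s; theta) + lambda * log P_hat(s'|s, a) ]
# from a batch of (s, a, s') indices, assuming tabular pi_hat and P_hat.
def bmirl_objective(pi_hat, P_hat, batch, lam):
    s, a, s_next = batch  # integer index arrays of equal length
    log_pi = np.log(pi_hat[s, a])
    log_p = np.log(P_hat[s, a, s_next])
    return np.mean(log_pi + lam * log_p)

# toy usage with uniform policy/dynamics on 4 states and 2 actions
pi_hat = np.full((4, 2), 0.5)
P_hat = np.full((4, 2, 4), 0.25)
batch = (np.array([0, 1, 2]), np.array([0, 1, 0]), np.array([1, 2, 3]))
print(bmirl_objective(pi_hat, P_hat, batch, lam=1.0))
```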
In this paper, we consider finding a Maximum A Posterior (MAP) estimate of the BM-IRL model by solving the following bi-level optimization problem:
\[\max_{\theta}\mathcal{L}(\theta),\quad\text{s.t. }\hat{\pi}(a|s;\theta)=\arg \max_{\hat{\pi}\in\Pi}\mathbb{E}_{\rho_{\hat{P}}^{\ast}}\left[R_{\theta}(s,a)+ \mathcal{H}(\hat{\pi}(\cdot|s))\right] \tag{7}\]
Note that this formulation differs from (3) and the two-stage approach (see Figure 1) because it includes the log likelihood of the _internal_ dynamics in the objective (weighted by \(\lambda\)).
It should be noted that obtaining the full posterior distribution (or an approximation) is feasible using popular approximate inference methods (e.g., stochastic variational inference or Langevin dynamics [48; 49]) and does not significantly alter the proposed estimation principles and algorithms.
### Naive Solution
We start by presenting a naive solution to (7) which can be seen as an extension of the tabular simultaneous reward-dynamics estimation algorithms [18; 19] to the high-dimensional setting.
Solving (7) requires: 1) computing the optimal policy with respect to \(\theta\), and 2) computing the gradient \(\nabla_{\theta}\log\hat{\pi}(a|s;\theta)\) which requires inverting the policy optimization process itself. Both operations can be done exactly in the tabular setting as in prior works but are intractable in high-dimensional settings. We propose to overcome the intractability using sample-based approximation.
In this section, we focus on approximating the gradient of the policy \(\nabla_{\theta}\log\hat{\pi}(a|s;\theta)\) and constructing a surrogate objective similar to (4) to perform simultaneous estimation. We can show that \(\nabla_{\theta}\log\hat{\pi}(a|s;\theta)\) has the following form (see Appendix B for all proofs and derivations):
\[\nabla_{\theta}\log\hat{\pi}(a|s;\theta)=\nabla_{\theta}Q_{\theta}(s,a)- \mathbb{E}_{\hat{a}\sim\hat{\pi}(\cdot|s;\theta)}[\nabla_{\theta}Q_{\theta}(s,\hat{a})] \tag{8}\]
where \(\nabla_{\theta}Q_{\theta}(s,a)=[\nabla_{\theta_{1}}Q_{\theta}(s,a),\nabla_{ \theta_{2}}Q_{\theta}(s,a)]\) is the concatenation of reward and dynamics gradients defined as:
\[\nabla_{\theta_{1}}Q_{\theta}(s,a) =\mathbb{E}_{\rho_{\hat{P}}^{\ast}(\tilde{s},\tilde{a}|s,a)} \left[\nabla_{\theta_{1}}R_{\theta_{1}}(\tilde{s},\tilde{a})\right] \tag{9}\] \[\nabla_{\theta_{2}}Q_{\theta}(s,a) =\mathbb{E}_{\rho_{\hat{P}}^{\ast}(\tilde{s},\tilde{a}|s,a)} \left[\gamma\sum_{s^{\prime}}V_{\theta}(s^{\prime})\nabla_{\theta_{2}}\hat{P} _{\theta_{2}}(s^{\prime}|\tilde{s},\tilde{a})\right] \tag{10}\]
Given (9) and (10) are tractable to compute using sample-based approximation of expectations, we construct the following surrogate objective \(\tilde{\mathcal{L}}(\theta)\) with the same gradient as the original MAP estimation problem (7):
\[\tilde{\mathcal{L}}(\theta)=\mathbb{E}_{(s,a)\sim\mathcal{D}}[\mathcal{E}_{ \theta}(s,a)]-\mathbb{E}_{s\sim\mathcal{D},a\sim\hat{\pi}}[\mathcal{E}_{ \theta}(s,a)]+\lambda\mathbb{E}_{(s,a,s^{\prime})\sim\mathcal{D}}[\log\hat{P} _{\theta_{2}}(s^{\prime}|s,a)] \tag{11}\]
where
\[\mathcal{E}_{\theta}(s,a)=\mathbb{E}_{\rho_{\hat{P}}^{\ast}(\tilde{s},\tilde {a}|s,a)}\left[R_{\theta}(\tilde{s},\tilde{a})+\gamma EV_{\theta}(\tilde{s}, \tilde{a})\right],\quad EV_{\theta}(s,a)=\sum_{s^{\prime}}\hat{P}_{\theta_{2} }(s^{\prime}|s,a)V_{\theta}(s^{\prime}) \tag{12}\]
Optimizing (11) (subject to the same policy constraint) is now the same as optimizing (7) but tractable.
An interesting consequence of maximizing the first two terms of (11) alone (excluding the prior) is that we both increase the reward and modify the internal dynamics to generate states with higher expected value (\(EV\)) upon taking expert actions then following the learner policy \(\tilde{\pi}\), and we do the opposite when taking learner actions. Intuitively, reward and dynamics play complementary roles in determining the value of actions and thus should be regularized [50; 16; 51]. Otherwise, one cannot disentangle the effect of truly high reward and falsely optimistic dynamics. Our prior (6) alleviates this unidentifiability to some extent.
### Robust BM-IRL
We now present our main observation that robustness emerges from the BM-IRL formulation under the dynamics accuracy prior (6).
We start by analyzing a discounted, full-trajectory version of the BM-IRL likelihood (7). Note that discounting does not change the optimal solution to (7) under expressive reward and dynamics model class; nor does it require infinite data because we can truncate the summation at \(T=\text{int}\left(\frac{1}{1-\gamma}\right)\) and obtain nearly the same estimator as with infinite sequence length. Using a result from [13], we decompose the discounted likelihood as follows (see Appendix B.2):
\[\mathbb{E}_{P(\tau)}\left[\sum_{t=0}^{\infty}\gamma^{t}\log\hat{ \pi}(a_{t}|s_{t};\theta)\right]=\mathbb{E}_{P(\tau)}\left[\sum_{t=0}^{\infty} \gamma^{t}\left(Q_{\theta}(s_{t},a_{t})-V_{\theta}(s_{t})\right)\right]\] \[=\underbrace{\mathbb{E}_{\rho_{\mathcal{D}}}\bigg{[}R_{\theta_{1 }}(s_{t},a_{t})\bigg{]}-\mathbb{E}_{\mu}\bigg{[}V_{\theta}(s_{0})\bigg{]}}_{ \ell(\theta)}+\underbrace{\gamma\mathbb{E}_{\rho_{\mathcal{D}}}\bigg{[} \mathbb{E}_{s^{\prime}\sim\hat{P}_{\hat{\theta}_{2}}(\cdot|s_{t},a_{t})}V_{ \theta}(s^{\prime})-\mathbb{E}_{s^{\prime\prime}\sim P(\cdot|s_{t},a_{t})}V_{ \theta}(s^{\prime\prime})\bigg{]}}_{\mathbf{T1}} \tag{13}\]
where \(\mathbf{T1}\) corresponds to the value difference under the real and estimated dynamics. We can show that \(\mathbf{T1}\) is negligible if the estimated dynamics is accurate, e.g. \(\mathbb{E}_{(s,a)\sim P(\tau)}D_{KL}(P(\cdot|s,a)||\hat{P}(\cdot|s,a))\leq\epsilon\) for sufficiently small \(\epsilon\), under the _expert_ data distribution, which can be achieved by setting a large \(\lambda\). Then, \(\mathbf{T1}\) can be dropped and the discounted likelihood reduces to \(\ell(\theta)\).
\(\ell(\theta)\) highlights the reason why the proposed BM-IRL approach can be robust to limited dataset. It poses the offline IRL problem as maximizing the cumulative reward of expert trajectories in the real environment, and minimizing the cumulative reward generated by the learner in the estimated dynamics with respect to _both_ reward and dynamics. In other words, it aims to find performance-matching reward and policy under the _worst-case_ dynamics, which is trained adversarially outside the offline data distribution. This connects BM-IRL with the robust MDP approach to offline model-based RL [44; 47]. We refer to this version of BM-IRL as robust model-based IRL (RM-IRL).
### Proposed Algorithms
We now propose two scalable algorithms to find the MAP solution to (7) via the surrogate objective (11). The first algorithm (**BM-IRL**; 1) applies the naive solution, while the second algorithm (**RM-IRL**; 2) exploits the observation in section 3.2 to derive a more efficient algorithm for high \(\lambda\).
The estimation problem (7) has an inherently nested structure where, for each update of parameters \(\theta\) (the outer problem), we have to solve for the optimal policy \(\hat{\pi}(a|s;\theta)\) (the inner problem). Following recent ML-IRL approaches [29; 13], we perform the nested optimization using _two-timescale_ stochastic approximation [52; 53], where the inner problem is solved via stochastic gradient updates on a faster time scale than the outer problem. For both algorithms, we solve the inner problem using Model-Based Policy Optimization (MBPO; [39]) which uses Soft Actor-Critic (SAC; [26]) in a dynamics model ensemble.
**BM-IRL**: For the BM-IRL outer problem, we estimate the expectations in (11) and (12) via sampling and perform coordinate-ascent optimization. Specifically, for each update step, we first sample a mini-batch of state-action pairs \((s,a)\sim\mathcal{D}\) and a mini-batch of (fake) actions \(a_{\text{fake}}\sim\hat{\pi}(\cdot|s;\theta)\) and simulate both \((s,a)\) and \((s,a_{\text{fake}})\) forward in the estimated dynamics \(\hat{P}\) to get the real and fake trajectories \(\tau_{\text{real}}\), \(\tau_{\text{fake}}\). We then optimize the reward function first by taking a single gradient step to optimize the following objective function:
\[\max_{\theta_{1}}\quad\mathbb{E}_{(s,a)\sim\mathcal{D},\rho_{\hat{P}}^{+}( \hat{s},\hat{a}|s,a)}\left[R_{\theta_{1}}(\tilde{s},\tilde{a})\right]-\mathbb{ E}_{s\sim\mathcal{D},a_{\text{fake}}\sim\hat{\pi}(\cdot|s;\theta),\rho_{\hat{P}}^{+}( \tilde{s},\tilde{a}|s,a_{\text{fake}})}\left[R_{\theta_{1}}(\tilde{s},\tilde{a })\right] \tag{14}\]
Lastly, we optimize the dynamics model by taking a few gradient steps (a hyperparameter) to optimize the following objective function using on-policy rollouts branched from mini-batches of
expert state-actions as in RAMBO [47]:
\[\max_{\theta_{2}} \lambda_{1}\mathbb{E}_{(s,a)\sim\mathcal{D},\rho_{p}^{\star}(\tilde{ s},\tilde{a}|s,a)}\left[EV_{\theta_{2}}(\tilde{s},\tilde{a})\right]-\lambda_{1} \mathbb{E}_{s\sim\mathcal{D},a_{\text{delay}}\sim\tilde{\pi}(\cdot|s,\theta), \rho_{p}^{\star}(\tilde{s},\tilde{a}|s,a_{\text{delay}})}\left[EV_{\theta_{2}}( \tilde{s},\tilde{a})\right]\] \[+\lambda_{2}\mathbb{E}_{(s,a,s^{\prime})\sim\mathcal{D}}\left[ \log\hat{P}_{\theta_{2}}(s^{\prime}|s,a)\right] \tag{15}\]
where we have introduced weighting coefficients \(\lambda_{1}\) and \(\lambda_{2}\) to facilitate tuning the prior precision \(\lambda\) and dynamics model learning rate. We estimate the dynamics gradient using the REINFORCE method with baseline.
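To make the structure of the reward step (14) and the dynamics step (15) explicit, the sketch below gives a simplified tabular analogue of one outer update. It is not the paper's implementation: it uses one-step lookahead in place of full model rollouts, plain softmax tables in place of neural networks and MBPO/SAC, and made-up "expert" data, sizes, and step sizes, but the sign pattern of the updates and the score-function (REINFORCE) estimate with a baseline are the same.

```python
import numpy as np

# Schematic tabular analogue of one BM-IRL outer update (Eqs. 14-15).
rng = np.random.default_rng(0)
S, A, gamma, lam1, lam2, lr = 4, 2, 0.9, 0.01, 1.0, 0.5

R = np.zeros((S, A))                       # reward table (theta_1)
logits = np.zeros((S, A, S))               # dynamics logits (theta_2)
expert_s = rng.integers(S, size=64)        # made-up "expert" transitions
expert_a = rng.integers(A, size=64)
expert_s2 = rng.integers(S, size=64)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_policy_and_values(R, P):
    V = np.zeros(S)
    for _ in range(300):                   # soft value iteration in the model
        Q = R + gamma * P @ V
        V = np.log(np.exp(Q).sum(axis=1))
    return softmax(Q, axis=1), V

P_hat = softmax(logits)                    # P_hat[s, a, s']
pi, V = soft_policy_and_values(R, P_hat)

# --- reward step (Eq. 14): raise reward on expert (s, a), lower it on (s, a_fake)
a_fake = np.array([rng.choice(A, p=pi[s]) for s in expert_s])
gradR = np.zeros_like(R)
np.add.at(gradR, (expert_s, expert_a), 1.0 / len(expert_s))
np.add.at(gradR, (expert_s, a_fake), -1.0 / len(expert_s))
R += lr * gradR

# --- dynamics step (Eq. 15): data log-likelihood plus the adversarial EV term,
#     with the EV gradient estimated by REINFORCE samples using EV as baseline.
EV = P_hat @ V                             # EV(s, a) = sum_s' P_hat(s'|s, a) V(s')
grad_logits = np.zeros_like(logits)
for s, a_real, a_fk, s2 in zip(expert_s, expert_a, a_fake, expert_s2):
    # log-likelihood part: gradient of log P_hat(s2 | s, a_real) w.r.t. the logits
    grad_logits[s, a_real] += lam2 * ((np.arange(S) == s2) - P_hat[s, a_real])
    # adversarial part: +EV on expert actions, -EV on learner actions
    for a, sign in ((a_real, +1.0), (a_fk, -1.0)):
        s_next = rng.choice(S, p=P_hat[s, a])
        grad_logits[s, a] += sign * lam1 * (V[s_next] - EV[s, a]) * \
            ((np.arange(S) == s_next) - P_hat[s, a])
grad_logits /= len(expert_s)
logits += lr * grad_logits
print("updated reward table:\n", np.round(R, 3))
```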
**RM-IRL**: We adapt the BM-IRL algorithm slightly for the RM-IRL outer problem, where we only simulate a single trajectory for each state in the mini-batch and update the reward using the following objective:
\[\max_{\theta_{1}}\mathbb{E}_{\rho_{\mathcal{D}}}\left[R_{\theta_{1}}(s,a) \right]-\mathbb{E}_{\rho_{p}^{\star}}\left[R_{\theta_{1}}(s,a)\right] \tag{16}\]
For the dynamics update, we set \(\lambda_{2}\gg\lambda_{1}\) and drop the first term in (15):
\[\max_{\theta_{2}} -\lambda_{1}\mathbb{E}_{s\sim\mathcal{D},a_{\text{delay}}\sim \tilde{\pi}(\cdot|s,\theta),\rho_{p}^{\star}(\tilde{s},\tilde{a}|s,a_{\text{ delay}})}\left[EV_{\theta_{2}}(\tilde{s},\tilde{a})\right]+\lambda_{2}\mathbb{E}_{(s,a,s^{\prime})\sim\mathcal{D}}\left[\log\hat{P}_{\theta_{2}}(s^{\prime}|s,a)\right] \tag{17}\]
We provide additional details and pseudo code for the proposed algorithms in Appendix C.
### Performance Guarantees
In this section, we study how policy and dynamics estimation error affect learner performance in the real environment. Using lemma 4.1 from [54] which decomposes the real environment performance gap between the expert and the learner into their policy and model advantages in the estimated dynamics, we arrive at the follow performance bound (see Appendix B.3):
**Theorem 3.1**.: _Let \(\epsilon_{\hat{\pi}}=-\mathbb{E}_{(s,a)\sim d_{p}^{\star}}[\log\hat{\pi}_{ \hat{P}}(a|s)]\) be the policy estimation error and \(\epsilon_{\hat{P}}=\mathbb{E}_{(s,a)\sim d_{p}^{\star}}D_{KL}[P(\cdot|s,a)|| \hat{P}(\cdot|s,a)]\) be the dynamics estimation error. Let \(R_{max}=\max_{s,a}|R_{\theta}(s,a)|+\log|\mathcal{A}|\). Assuming bounded expert-learner marginal state-action density ratio \(\left\|\frac{d_{p}^{\star}(s,a)}{d_{p}^{\star}(s,a)}\right\|_{\infty}\leq C\), we have the following (absolute) performance bound for the IRL agent:_
\[|J_{P}(\hat{\pi})-J_{P}(\pi)|\leq\frac{1}{1-\gamma}\epsilon_{\hat{\pi}}+\frac {\gamma(C+1)R_{\text{max}}}{(1-\gamma)^{2}}\sqrt{2\epsilon_{\hat{P}}} \tag{18}\]
This bound shows that the performance gap between the IRL agent and the expert is linear with respect to the discount factor in the policy estimation error and quadratic in the dynamics estimation error. Thus, ensuring small dynamics estimation error is essential and prior simultaneous estimation approaches such as [18] which do not explicitly encourage dynamics accuracy are not expected to perform well in general.
## 4 Experiments
We aim to answer the following questions with our experiments: 1) How does the dynamics accuracy prior affect agent behavior? 2) How well does BM-IRL and RM-IRL perform compared to SOTA offline IRL algorithms? We investigate Q1 using a Gridworld environment. We investigate Q2 using the standard D4RL dataset on MuJoCo continuous control benchmarks.
### Gridworld Example
To understand the behavior of the BM-IRL algorithm, we use a 5x5 deterministic gridworld environment where the expert travels from the lower left to the upper right corner using a policy with a discount factor of \(\gamma=0.7\). We represent the reward function as the log probability of the target state: \(\log\tilde{P}(s)\), where the upper right corner has a target probability of 1.
We parameterize both reward and dynamics using softmax of logits. Using 100 expert trajectories of length 50, we trained 3 BM-IRL agents with \(\lambda\in\{0.001,0.5,10\}\). As a comparison, we also trained a two-stage IRL agent whose dynamics model was fixed after an initial maximum likelihood pretraining step and its reward was estimated using the same gradient update rule as BM-IRL in (14).
The ground truth and estimated target state probabilities are shown in the first row of Figure 2. All agents correctly estimated that the upper right corner has the highest reward, although not with the same precision as the ground truth sparse reward. BM-IRL agents with \(\lambda=0.5\) and \(\lambda=10\) were able to assign high reward only to states close to the true goal state, whereas the BM-IRL agent with \(\lambda=0.001\) and the two-stage IRL agent assigned high rewards to states much further away from the true goal state.
We visualize the estimated dynamics models by sampling 100 imagined rollouts using the estimated policies in the second row of Figure 2. This figure shows that the BM-IRL(\(\lambda=0.001\)) agent, which corresponds roughly to Herman et al.'s simultaneous estimation algorithm where no (or a very weak) prior is used [18], and the two-stage IRL agent take significantly more illegal transitions (i.e., diagonal transitions) than the BM-IRL agents with higher \(\lambda\). Comparing among BM-IRL agents, we see that increasing \(\lambda\) decreases the number of illegal transitions. In contrast to the two-stage IRL agent, whose illegal transitions are rather random, the illegal transitions generated by BM-IRL agents with lower \(\lambda\) have a strong tendency to point towards the goal state. This corroborates our analysis that BM-IRL learns optimistic dynamics under the expert distribution.
### MuJoCo Benchmarks
We study the performance of the proposed algorithms in the MuJoCo continuous control benchmark [55] provided by the D4RL dataset [56]. Following prior IRL evaluation protocols, our IRL agents maintain two datasets: 1) a _transition dataset_ is used to trained the dynamics model and the actor-critic networks and 2) an _expert dataset_ is used to train the reward function. The transition dataset is selected from one of the following: medium, medium-replay, and medium-expert, where medium-expert has the highest quality with the most complete coverage of expert trajectories. We fill the expert dataset with 10 randomly sampled D4RL expert trajectories. For each evaluation environment (HalfCheetah, Hopper, Walker2D) and transition dataset, we train our algorithms offline for a fixed number of epochs and repeat this process for 5 random seeds. After the final epoch, we evaluate the agent for 10 episodes in the corresponding environment. For both BM-IRL and RM-IRL, we set \(\lambda_{1}=0.01,\lambda_{2}=1\) to encourage an accurate dynamics model under the offline data distribution. We provide additional implementation and experimental details in Appendix D.
For the baseline algorithms, we choose Behavior Cloning (BC) and ValueDICE [57], both of which are model-free offline imitation learning algorithms and they do not use the transition dataset. For
Figure 2: Gridworld experiment results. Ground truth and estimated target state distributions (softmax of reward; **Row 1**) and sample path of estimated policy in estimated dynamics (**Row 2**) for two-stage and BM-IRL agents with \(\lambda=[0.001,0.5,10]\). BM-IRL agents with higher \(\lambda\) obtain more accurate reward estimates and commit fewer illegal transitions.
SOTA offline model-based IRL algorithms, we compare with the following: 1) ML-IRL uses a similar two-timescale algorithm but operates in a two-stage fashion with a pessimistic penalty which adapts to the learner policy's state-action visitation [13], 2) CLARE performs marginal density matching with a mixture of expert and transition datasets weighted by the in-distributionness of non-expert state-action pairs [12]. We expect our algorithms to outperform CLARE whose marginal density matching method requires a larger expert dataset to work well as shown in [13; 12]. We also expect to perform similarly to ML-IRL, but better when adversarial model training is more appropriate than ensemble-based pessimistic penalty for a given dataset. Lastly, we expect BM-IRL to perform better than RM-IRL since BM-IRL solves the simultaneous estimation problem exactly.
Figure 3 shows the results of our algorithms and the comparisons reported in [13]. In the Hopper environment, all algorithms performed similarly on all datasets, except for BM-IRL and ML-IRL having lower performance with much larger variance on the medium dataset. In the HalfCheetah and Walker2D environments, our algorithms and ML-IRL performed significantly better. BM-IRL and RM-IRL outperformed other algorithms in 6/9 settings, and in 2/9 settings, the differences from the best algorithm were very small. The only case where the best algorithm, ML-IRL, significantly outperformed our algorithms was HalfCheetah medium-replay. This is most likely because in this environment, the medium-replay dataset has much less coverage of expert trajectories, so that adversarial model training might have been too conservative.
As we expected, BM-IRL outperformed RM-IRL in 7/9 settings. The only case where its mean performance was significantly lower was Hopper medium where BM-IRL performance had large variance. However, as we explain in the section 5, this is because BM-IRL has higher training instability where its peak performance was in fact on par with RM-IRL.
## 5 Limitations
A limitation of the proposed algorithms is that BM-IRL can have less stable training dynamics than RM-IRL where its evaluation performance may alternate between periods of near optimal performance and periods of medium performance (thus the larger variance in Figure 3). While stability is a known issue for training energy-based models using contrastive divergence objectives (i.e., objective (11)) [58], we believe the current issue is related to BM-IRL's two-sample path method having weaker and noisier learning signal. Another source of instability is likely introduced by simultaneously training the dynamics model, which may be improved in future work by adding Lipschitz regularizations [59].
## 6 Conclusion
We showed that inverse reinforcement learning under a Bayesian simultaneous estimation framework gives rise to robust policies. This yielded a set of novel offline model-based IRL algorithms achieving SOTA performance in the MuJoCo continuous control benchmarks without ad hoc pessimistic penalty design. An interesting future direction is to identify appropriate priors to robustly infer reward and internal dynamics from sub-optimal and biased human demonstrators.
Figure 3: MuJoCo benchmark performance using 10 expert trajectories from the D4RL dataset. Bar heights and error bars represent the means and standard deviations of normalized scores, respectively, over 5 random seeds. Baseline algorithm performances are taken from [13].
#### Acknowledgments
The authors would like to acknowledge partial support from the Army Research Office grant W911NF-22-1-0213. The authors would also like to acknowledge Marc Rigter for answering questions about the RAMBO algorithm.
|
2302.03030 | The emergence of Airy stress function in two-dimensional disordered
packings of particles | The packing of hard-core particles in contact with their neighbors offers the
statically determinate problem which allows analytical investigation of the
stress tensor distribution. We construct the stress probability functional and
derive the complete set of equations for the tensor components in
two-dimensions. | Dmitry Grinev | 2022-12-18T14:01:22Z | http://arxiv.org/abs/2302.03030v1 | # The emergence of Airy stress function in two-dimensional disordered packings of particles.
###### Abstract
The packing of hard-core particles in contact with their neighbors offers the statically determinate problem which allows analytical investigation of the stress tensor distribution. We construct the stress probability functional method and derive the complete set of equations for the macroscopic tensor components in two dimensions. For the isotropic and homogeneous two-dimensional packing the classical Euler-Cauchy and Navier equation are derived. For such packing its macroscopic stress tensor can be expressed via the Airy stress function.
## 1 Introduction
First-principles derivation of the equations that govern the transmission of stress in disordered particulate media remains a challenging problem [1]. On one hand there are conventional theoretical approaches that are perfectly adequate for describing the macroscopic mechanical behavior of soils driven by external forces [2]. On the other hand there is experimental evidence of rheological behavior [3] that requires statistical-mechanical treatment. There have been various attempts to develop statistical-mechanical approaches for such systems [4]. However, they have been treated with a degree of healthy skepticism by the soil mechanics and geotechnical engineering communities. This is related to the fact that there are classical problems of granular mechanics with well-known solutions. The former should be used for sanity-checks of novel theories, and the latter should appear naturally in derivations. Recent attempts combined analytical methods with discrete element modeling to study the Janssen effect and stress propagation in hexagonal lattice arrays of discs [5]-[6]. Ordered packings are convenient and popular toy-models [7], but ultimately one must face ubiquitous disorder at various spatial scales and figure out a path leading to continuum equations. This paper offers a first-principles approach that links classical theories for the macroscopic stress distribution in two-dimensional granular media to the interparticle equations of force and torque balance.
## 2 The constitutive relation and Newton's laws
Let us consider a static array of hard-core particles in contact with their neighbors. The packing is assumed to be an assembly of discrete rigid particles whose interactions with their neighbors are localized at point-like contacts. When a static packing of incompressible particles in contact is subjected to external forces
at its boundaries, these forces are transmitted through the contact network. This network is determined by the set of contact points in our model. Therefore the description of the network of interparticle contacts is essential for the understanding of force transmission. We assume that the set of contact points \(C_{i}^{\alpha\beta}\) provides the complete geometrical specification for such static packing. We define the centroid of contacts of particle \(\alpha\) as
\[R_{i}^{\alpha}=\frac{\sum_{\beta}C_{i}^{\alpha\beta}}{z^{\alpha}} \tag{1}\]
where \(i=1,...d\) is the Cartesian index, \(z_{\alpha}\) is the coordination number of particle \(\alpha\). The distance between particles \(\alpha\) and \(\beta\) is defined as the distance between their centroids of contacts
\[R_{i}^{\alpha\beta}=R_{i}^{\beta}-R_{i}^{\alpha}=r_{i}^{\alpha\beta}-r_{i}^{ \beta\alpha} \tag{2}\]
where \(r_{i}^{\alpha\beta}\) is the \(i\)-th component of the vector joining the centroid of contact with the contact point i.e.
\[\sum_{\beta}r_{i}^{\alpha\beta}=0 \tag{3}\]
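A minimal numerical illustration of definitions (1)-(3) is given below; it is not part of the original derivation, and the contact points of the single particle are made-up values.

```python
import numpy as np

# Numerical check of definitions (1) and (3): the vectors from the centroid of
# contacts to the contact points sum to zero by construction.
contacts = np.array([[1.0, 0.0], [0.0, 1.0], [-0.5, -0.8]])  # C_i^{alpha beta}, d = 2, z = 3
centroid = contacts.mean(axis=0)                              # R_i^{alpha}, Eq. (1)
r = contacts - centroid                                       # r_i^{alpha beta}
print("sum_beta r^{alpha beta} =", r.sum(axis=0))             # ~ (0, 0), Eq. (3)
```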
In \(d\) dimensions Newton's laws of force and couple balance for each particle give us the system of \(\frac{Nd(d+1)}{2}\) equations for the interparticle forces \(f_{i}^{\alpha\beta}\)
\[\sum_{\beta}f_{i}^{\alpha\beta}+g_{i}^{\alpha}=0 \tag{4}\]
\[f_{i}^{\alpha\beta}+f_{i}^{\beta\alpha}=0 \tag{5}\]
\[\sum_{\beta}\epsilon_{ikl}f_{k}^{\alpha\beta}r_{l}^{\alpha\beta}+c_{i}^{ \alpha}=0 \tag{6}\]
where \(g_{i}^{\alpha}\) is the external body force acting on grain \(\alpha\) and \(c_{i}^{\alpha}\) is the external body couple which we take to be zero. Particles are considered to be perfectly hard, perfectly rough and each particle \(\alpha\) has a coordination number \(z_{\alpha}=d+1\). The counting of the total number of equations and the number of unknowns allows us to formulate the simplest statically determinate problem. The force moment \(S_{ij}^{\alpha}\) for grain \(\alpha\) is
\[S_{ij}^{\alpha}=\sum_{\beta}f_{i}^{\alpha\beta}r_{j}^{\alpha\beta} \tag{7}\]
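As a small sanity check of Eq. (7), not part of the original text, the force moment of a single particle can be assembled from any set of contact forces satisfying the balance equations (4)-(6); the symmetric three-contact geometry below is an arbitrary example with central (normal) contact forces.

```python
import numpy as np

# Force moment of a single 2-D particle, Eq. (7), for a made-up contact
# geometry: three contacts at 120 degrees with equal normal forces.
# Central forces balance automatically and carry zero torque, so S is symmetric.
angles = np.deg2rad([0.0, 120.0, 240.0])
r = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # r_i^{alpha beta}, |r| = 1
f = -2.0 * r                                             # compressive contact forces f_i^{alpha beta}

assert np.allclose(f.sum(axis=0), 0.0)                   # force balance, Eq. (4) with g = 0
assert np.isclose(np.sum(f[:, 0] * r[:, 1] - f[:, 1] * r[:, 0]), 0.0)  # torque balance, Eq. (6)

S = np.einsum('ci,cj->ij', f, r)                         # S_ij = sum_beta f_i r_j
print("force moment S^alpha =\n", S)                     # symmetric and isotropic here
```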
If each particle \(\alpha\) has a coordination number \(z_{\alpha}=d+1\) then the macroscopic volume average of \(S_{ij}^{\alpha}\) is defined by the sets of interparticle forces and contact points. "Coarse-graining" a tensor in a heterogeneous medium is normally a non-trivial task but within the confines of this paper we shall use a simple expression in the form of
\[\sigma_{ij}(\vec{r})=\frac{1}{V}\sum_{\alpha}^{N}S_{ij}^{\alpha}\delta(\vec{r }-\vec{R^{\alpha}}) \tag{8}\]
This symmetric tensor has \(\frac{d(d+1)}{2}\) independent components and the number of readily available equations is \(d\). These are classical Euler-Cauchy equations of the macroscopic stress equilibrium which have their origin in Newton's second law [8]. Thus in \(2D\) there is one missing equation (usually called a constitutive relation). There have been numerous attempts to discover this "missing equation" for static and quasi-static packings of particles [5]-[10]. However this paper will focus on deriving the equations, which
are already well-known. The reason for this undertaking is to offer a simple yet convincing "sanity check" of the previously proposed method [9] and perhaps offer a better understanding of the links between mechanical and structural characteristics at different spatial scales. The main idea of the formalism was to consider the probability functional for the set of force moments
\[\begin{split} P\left\{S_{ij}^{\alpha}\right\}=&\mathcal{ M}\int\prod_{\alpha,\beta}^{N}\delta\left(S_{ij}^{\alpha}-\sum_{\beta}f_{i}^{ \alpha\beta}r_{j}^{\alpha\beta}\right)\\ &\times P\left\{\vec{f}^{\alpha\beta}\right\}\mathcal{D}\vec{f}^{ \alpha\beta}\end{split} \tag{9}\]
where \(\mathcal{M}\) is determined by the contact network. Given the fixed geometry of the contact network the probability distribution of the set of interparticle forces that satisfy the system of Newton's equations of balance is given by
\[\begin{split} P\left\{\vec{f}^{\alpha\beta}\right\}=& \mathcal{N}\prod_{\alpha=1,\beta=n.n.}^{N}\delta\left(\sum_{\beta}f_{i}^{ \alpha\beta}+g_{i}^{\alpha}\right)\\ &\times\delta\left(\sum_{\beta}\epsilon_{ikl}f_{k}^{\alpha\beta} r_{l}^{\alpha\beta}\right)\\ &\times\delta\left(f_{i}^{\alpha\beta}+f_{i}^{\beta\alpha}\right) \end{split} \tag{10}\]
where \(\mathcal{N}\) is given by
\[\mathcal{N}^{-1}=\int\prod_{\alpha,\beta}^{N}P\left\{\vec{f}^{\alpha\beta} \right\}\mathcal{D}\vec{f}^{\alpha\beta} \tag{11}\]
The main idea is to transform \(P\left\{S_{ij}^{\alpha}\right\}\) into the form
\[P\left\{S_{ij}^{\alpha}\right\}=P\left\{S_{ij}^{\alpha}|force\right\}P\left\{S _{ij}^{\alpha}|geometry\right\} \tag{12}\]
Let us exponentiate all delta-functions to obtain
\[P\left\{S_{ij}^{\alpha}\right\}= \mathcal{N}\prod_{\alpha=1,\beta=n.n.}^{N}e^{iA}\mathcal{D}\vec{ f}^{\alpha\beta}\mathcal{D}\zeta_{ij}{}^{\alpha}\mathcal{D}\vec{\gamma}^{ \alpha}\mathcal{D}\vec{\lambda}^{\alpha}\vec{\eta}^{\alpha\beta} \tag{13}\]
where
\[\begin{split} A=&\sum_{\alpha}\zeta_{ij}^{\alpha} \left(S_{ij}^{\alpha}-\sum_{\beta}f_{i}^{\alpha\beta}r_{j}^{\alpha\beta} \right)+\gamma_{i}^{\alpha}\left(\sum_{\beta}f_{i}^{\alpha\beta}-g_{i}^{\alpha }\right)\\ &+\lambda_{i}^{\alpha}\left(\sum_{\beta}\varepsilon_{ikl}f_{k}^{ \alpha\beta}r_{l}^{\alpha\beta}\right)+\eta_{i}^{\alpha\beta}\left(f_{i}^{ \alpha\beta}+f_{i}^{\beta\alpha}\right).\end{split} \tag{14}\]
Integrating out the fields \(\vec{f}^{\alpha\beta},\vec{\lambda}^{\alpha},\vec{\eta}^{\alpha\beta}\) gives us
\[\begin{split} P\left\{S_{ij}^{\alpha}\right\}=& \mathcal{N}\prod_{\alpha=1,\beta=n.n.}^{N}e^{i\left(\sum_{\alpha}^{N}S_{ij}^{ \alpha}\zeta_{ij}^{\alpha}-\gamma_{i}^{\alpha}g_{i}^{\alpha}\right)}\delta \left(\zeta_{ij}^{\alpha}r_{j}^{\alpha\beta}-\gamma_{i}^{\alpha}-\zeta_{ij}^ {\beta}r_{j}^{\beta\alpha}+\gamma_{i}^{\beta}\right)\mathcal{D}\zeta_{ij}{}^ {\alpha}\mathcal{D}\vec{\gamma}^{\alpha}\end{split} \tag{15}\]
Let us observe that the third term in equation (14) gives \(\prod_{\alpha}^{N}\delta\left(S^{\alpha}_{ij}-S^{\alpha}_{ji}\right)\) so that we can interchange labels \(\alpha\) and \(\beta\) in the last term of (14). Using the well-known formula for the product of delta-functions \(\delta\left(x+y+z_{1}\right)\delta\left(x+y+z_{2}\right)=\delta\left(z_{1}-z_{ 2}\right)\delta\left(x+y+\frac{z_{1}+z_{2}}{2}\right)\) we obtain the system of linear algebraic equations for \(\zeta^{\alpha}_{ij}\) and \(\gamma^{\alpha}_{i}\)
\[\zeta^{\alpha}_{ij}r^{\alpha\beta}_{j}-\gamma^{\alpha}_{i}=\zeta^{\beta}_{ij} r^{\beta\alpha}_{j}-\gamma^{\beta}_{i} \tag{16}\]
We shall use this system of equations to derive the system of discrete equations for \(S^{\alpha}_{ij}\) and then use the simplest "coarse-graining" procedure in order to derive the system of equations for the macroscopic stress tensor in two dimensions.
## 3 First coordination shell approximation for the Euler-Cauchy equation
Let us present \(\zeta^{\alpha}_{ij}\) as the sum of two variables
\[\zeta^{\alpha}_{ij}=\zeta^{\alpha\,0}_{ij}+\zeta^{\alpha\,*}_{ij} \tag{17}\]
where \(\zeta^{\alpha\,0}_{ij}\) satisfies the equation
\[\zeta^{\alpha\,0}_{ij}r^{\alpha\beta}_{j}-\gamma^{\alpha}_{i}=\zeta^{\beta\,0 }_{ij}\,r^{\beta\alpha}_{j}-\gamma^{\beta}_{i} \tag{18}\]
Thus \(\zeta^{\alpha\,*}_{ij}\) satisfies the set of \(\frac{zNd}{2}\) equations
\[\zeta^{\alpha\,*}_{ij}r^{\alpha\beta}_{j}-\zeta^{\beta\,*}_{ij}r^{\beta\alpha} _{j}=0 \tag{19}\]
This system of linear equations gives the required number \(\frac{Nd(d-1)}{2}\) of constraints for the set of \(S^{\alpha}_{ij}\). Let us introduce a tensor \(M^{\alpha}_{ij}\) as the inverse of \(\sum_{\beta}R^{\alpha\beta}_{i}R^{\alpha\beta}_{j}\) and rewrite (18) in the following form
\[\zeta^{\alpha\,0}_{ij}=M^{\alpha}_{jl}\sum_{\beta}R^{\alpha\beta}_{l}\left( \gamma^{\alpha}_{i}-\gamma^{\beta}_{i}\right)+M^{\alpha}_{jl}\sum_{\beta}R^{ \alpha\beta}_{l}r^{\beta\alpha}_{k}\left(\zeta^{\beta\,0}_{ik}-\zeta^{\alpha\, 0}_{ik}\right) \tag{20}\]
We can iterate this equation further in order to link the left part to the variables corresponding to next nearest neighbours of the reference particle \(\alpha\). The second iteration propagates this equation further through the network of contacts, however if we limit this process to the first coordination shell of particle \(\alpha\) then we obtain the set of \(Nd\) discrete equations
\[\sum_{\beta}S^{\alpha}_{ij}M^{\alpha}_{jl}R^{\alpha\beta}_{l}-\sum_{\beta}S^{ \beta}_{ij}M^{\beta}_{jl}R^{\beta\alpha}_{l}=g^{\alpha}_{i} \tag{21}\]
The simplest "coarse-graining" procedure applied to this set of expressions gives us \(d\) Euler-Cauchy equations for macroscopic stress tensor
\[\nabla_{j}\sigma_{ij}(\vec{r})=g_{i}(\vec{r}) \tag{22}\]
During this derivation we assumed that the granular packing is homogeneous and has a constant density at the macroscopic length-scale. One can think of a more sophisticated scheme of "coarse-graining" method in order to explore effects of heterogeneity at different spatial scales [11].
## 4 The Airy stress function and Navier equation
Let us now derive the remaining \(d(d-1)/2\) equations for the macroscopic stress tensor and eventually limit our analysis to the two-dimensional case. In order to obtain \(P\left\{S^{\alpha}_{ij}|geometry\right\}\) let us investigate the set of equations (19). Firstly, it is important to observe that (19) appears to contain too many equations. Because of the presence of the linear combination \(\sum_{\beta}r^{\alpha\beta}_{i}=0\) there are many internal identities, and meticulous counting shows that it indeed contains \(dN\) equations. Let us sum equation (19) over \(\beta\) and obtain
\[\sum_{\beta}\zeta^{\beta\,*}_{ij}r^{\beta\alpha}_{j}=0 \tag{23}\]
This expression can be substituted into the integral for \(P\left\{S^{\alpha}_{ij}|geometry\right\}\)
\[P\left\{S^{\alpha}_{ij}|geometry\right\}=\int\prod_{\alpha}^{N}{\rm e}^{{\rm i }\sum_{\alpha}^{N}S^{\alpha}_{ij}S^{\alpha}_{ij}}^{*}\delta\left(\sum_{\beta} \zeta^{\beta\,*}_{ij}r^{\beta\alpha}_{j}\right){\cal D}\zeta^{\alpha\,*} \tag{24}\]
The set of delta-functions containing (19) can now be exponentiated, and after integrating out \(\zeta^{\alpha\,*}\) this gives
\[P\left\{S^{\alpha}_{ij}|geometry\right\}=\int\prod_{\alpha}^{N}\delta\left(S^ {\alpha}_{ij}-\frac{1}{2}\sum_{\beta}\left(\phi^{\beta}_{i}r^{\alpha\beta}_{j }+\phi^{\beta}_{j}r^{\alpha\beta}_{i}\right)\right){\cal D}\phi^{\alpha} \tag{25}\]
where the integration over the variable \(\phi^{\alpha}\) gives the required \(Nd(d-1)/2\) constraints for \(S^{\alpha}_{ij}\)
Let us now conduct the promised sanity-check of this method in \(2D\) and construct the simplest interpolation of \(\phi^{\beta}\)
\[\phi^{\beta}_{i}=\phi^{\alpha}_{i}+R^{\alpha\beta}_{j}\nabla_{j}\phi^{\alpha}_ {i} \tag{26}\]
After substituting it into (23) and summing over \(\beta\) we have
\[S^{\alpha}_{ij}=\nabla_{k}\phi^{\alpha}_{i}\sum_{\beta}R^{\alpha\beta}_{k}r^{ \alpha\beta}_{j} \tag{27}\]
The previously employed "coarse-graining" method gives the macroscopic stress tensor
\[\sigma_{ij}=\frac{1}{V}\sum_{\alpha}^{N}\sum_{\beta}r^{\alpha\beta}_{j}R^{ \alpha\beta}_{k}\nabla_{k}\phi^{\alpha}_{i}=F_{jk}\nabla_{k}\phi_{i} \tag{28}\]
where \(F_{ij}=\sum_{\beta}R^{\alpha\beta}_{i}R^{\alpha\beta}_{j}\) is the so called fabric tensor. We can now eliminate \(\phi_{i}\) and obtain for the isotropic packing of particles the well-known Navier equation which imposes kinematic compatibility on the stresses [12]
\[\frac{\partial^{2}\sigma_{xx}}{\partial y^{2}}+\frac{\partial^{2}\sigma_{yy}} {\partial x^{2}}-2\frac{\partial^{2}\sigma_{xy}}{\partial x\partial y}=0 \tag{29}\]
This equation implies that the stress tensor components can be expressed in terms of the Airy function [12]. In 3-D there are mathematical challenges with the "coarse-graining" procedure which are yet to be resolved.
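This can be checked symbolically: writing \(\sigma_{xx}=\partial^{2}\chi/\partial y^{2}\), \(\sigma_{yy}=\partial^{2}\chi/\partial x^{2}\), \(\sigma_{xy}=-\partial^{2}\chi/\partial x\partial y\) for an arbitrary Airy function \(\chi\) satisfies the zero-body-force balance (22) identically, while substituting it into (29) turns the compatibility condition into a constraint on \(\chi\) itself (the biharmonic equation). The short sketch below, added here only for illustration, verifies the balance identities with sympy.

```python
import sympy as sp

# Stresses derived from an arbitrary Airy function chi(x, y):
#   sigma_xx = d^2 chi / dy^2, sigma_yy = d^2 chi / dx^2, sigma_xy = -d^2 chi / dx dy
# satisfy the zero-body-force balance equations (22) identically.
x, y = sp.symbols('x y')
chi = sp.Function('chi')(x, y)

sxx = sp.diff(chi, y, 2)
syy = sp.diff(chi, x, 2)
sxy = -sp.diff(chi, x, y)

balance_x = sp.simplify(sp.diff(sxx, x) + sp.diff(sxy, y))
balance_y = sp.simplify(sp.diff(sxy, x) + sp.diff(syy, y))
print(balance_x, balance_y)   # both identically zero
```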
## 5 Conclusion
The application of our method allows the derivation of the classical Euler-Cauchy and Navier equations for an isotropic and homogeneous two-dimensional packing. This appears to be a much-needed sanity-check of the first-principles approach. Further development of this framework is in progress and has the potential to be applicable to quasi-static and three-dimensional problems of stress transmission in particulate media.
## 6 Conflict of Interest
The author declares that he has no conflict of interest.
## 7 Acknowledgments
The work on sections 3 and 4 was supported by the Russian Science Foundation (grant No. \(21-71-30011\)). The work on sections 1 and 2 was carried out within the framework of a development programme for the Regional Scientific and Educational Mathematical Center of the Yaroslavl State University with financial support from the Ministry of Science and Higher Education of the Russian Federation (Agreement on provision of subsidy from the federal budget No. \(075-02-2022-886\)).
|
2309.11746 | Discrete Spinning Tops -- Difference equations for Euler, Lagrange, and
Kowalevski tops | Several methods of time discretization are examined for integrable rigid body
models, such as Euler, Lagrange, and Kowalevski tops. Problems of Lax-Moser
pairs, conservation laws, and explicit solver algorithms are discussed. A new
discretization method is proposed for the Kowalevski top, which has the properties
$\boldsymbol{\gamma}^2=1$, and the Kowalevski integral $|\xi|^2=\text{const.}$
satisfied exactly. Numerical tests are done successfully. | Kiyoshi Sogo | 2023-09-21T02:49:55Z | http://arxiv.org/abs/2309.11746v1 | # Discrete Spinning Tops - Difference equations for Euler, Lagrange, and Kowalevski tops
###### Abstract
Several methods of time discretization are examined for integrable rigid body models, such as the Euler, Lagrange, and Kowalevski tops. Problems of Lax-Moser pairs, conservation laws, and explicit solver algorithms are discussed. A new discretization method is proposed for the Kowalevski top, which has the properties \(\boldsymbol{\gamma}^{2}=1\) and the Kowalevski integral \(|\xi|^{2}=\text{const.}\) satisfied exactly. Numerical tests are done successfully.
Institute of Computational Fluid Dynamics, 1-16-5, Haramachi, Meguroku, Tokyo, 152-0011, Japan
## 1 Introduction
### Euler-Poisson equation
Motions of general top under gravity are described by the Euler-Poisson equations [1]
\[\dot{\boldsymbol{m}}=\boldsymbol{m}\times\boldsymbol{\omega}+\boldsymbol{ \gamma}\times\boldsymbol{g},\qquad\dot{\boldsymbol{\gamma}}=\boldsymbol{\gamma }\times\boldsymbol{\omega}, \tag{1.1}\]
where the dot implies time differentiation \(d/dt\), \(\boldsymbol{m}\) is the angular momentum and \(\boldsymbol{\omega}\) the angular velocity, which are related to each other by \(\boldsymbol{m}=\text{diag}(A,B,C)\,\boldsymbol{\omega}\), that is, \(m_{1}=A\omega_{1},\ m_{2}=B\omega_{2},\ m_{3}=C\omega_{3}\) with moments of inertia \(A,\ B,\ C\); \(\boldsymbol{\gamma}=\boldsymbol{e}_{z}\) is the base vector, and
\(\boldsymbol{g}=mg(x_{0},y_{0},z_{0})\) is the coordinate of the gravitational center of the top. In components they are given by
\[A\dot{\omega}_{1} =(B-C)\omega_{2}\omega_{3}+mg(\gamma_{2}z_{0}-\gamma_{3}y_{0}),\] \[B\dot{\omega}_{2} =(C-A)\omega_{3}\omega_{1}+mg(\gamma_{3}x_{0}-\gamma_{1}z_{0}), \tag{1.2}\] \[C\dot{\omega}_{3} =(A-B)\omega_{1}\omega_{2}+mg(\gamma_{1}y_{0}-\gamma_{2}x_{0}),\] \[\dot{\gamma}_{1} =\gamma_{2}\omega_{3}-\gamma_{3}\omega_{2},\] \[\dot{\gamma}_{2} =\gamma_{3}\omega_{1}-\gamma_{1}\omega_{3},\] (1.3) \[\dot{\gamma}_{3} =\gamma_{1}\omega_{2}-\gamma_{2}\omega_{1}.\]
Before proceeding further let us note that (1.1) have three conservation laws:
\[1)\ \boldsymbol{\gamma}^{2}=1,\quad 2)\ \boldsymbol{m}\cdot \boldsymbol{\gamma}=\text{const.},\quad 3)\ E=\frac{1}{2}\boldsymbol{m}\cdot \boldsymbol{\omega}+\boldsymbol{g}\cdot\boldsymbol{\gamma}=\text{const.}, \tag{1.4}\]
which are derived easily from (1.1). If there exists the fourth integral, (1.1) becomes completely integrable. It is known that only three cases are completely integrable. [1]
\[(1)\ \text{Euler:}\quad(x_{0},y_{0},z_{0})=0,\quad\boldsymbol{m}^{2}=\text{const.}, \tag{1.5}\]
\[(2)\ \text{Lagrange:}\quad A=B,\ x_{0}=y_{0}=0,\quad\omega_{3}=\text{const.}, \tag{1.6}\]
\[(3)\ \text{Kowalevski:}\quad A=B=2C,\ y_{0}=z_{0}=0,\quad k^{2}=|\omega^{2}-c_{0}\gamma|^{2}=\text{const.}, \tag{1.7}\]
with \(\omega=\omega_{1}+i\omega_{2},\ \gamma=\gamma_{1}+i\gamma_{2},\ c_{0}=mgx_{0}\). We discuss the problem of time discretization of these cases in this paper.
Now let us rewrite (1.2), (1.3) in matrix form. If we introduce \(3\times 3\) anti-symmetric matrices
\[M=\left(\begin{array}{ccc}0&m_{3}&-m_{2}\\ -m_{3}&0&m_{1}\\ m_{2}&-m_{1}&0\end{array}\right),\quad\Omega=\left(\begin{array}{ccc}0& \omega_{3}&-\omega_{2}\\ -\omega_{3}&0&\omega_{1}\\ \omega_{2}&-\omega_{1}&0\end{array}\right), \tag{1.8}\]
\[\Gamma=\left(\begin{array}{ccc}0&\gamma_{3}&-\gamma_{2}\\ -\gamma_{3}&0&\gamma_{1}\\ \gamma_{2}&-\gamma_{1}&0\end{array}\right),\quad G=mg\left(\begin{array}{ccc }0&z_{0}&-y_{0}\\ -z_{0}&0&x_{0}\\ y_{0}&-x_{0}&0\end{array}\right), \tag{1.9}\]
the Euler-Poisson equations (1.2), (1.3) are rewritten in a matrix relation by
\[\dot{M}=[\Omega,M]+[G,\ \Gamma]\quad\Longleftrightarrow\quad \dot{\boldsymbol{m}}=\boldsymbol{m}\times\boldsymbol{\omega}+\boldsymbol{ \gamma}\times\boldsymbol{g}, \tag{1.10}\] \[\dot{\Gamma}=[\Omega,\ \Gamma]\quad\Longleftrightarrow\quad \dot{\boldsymbol{\gamma}}=\boldsymbol{\gamma}\times\boldsymbol{\omega}, \tag{1.11}\]
where the symbol \([A,\ B]=AB-BA\) is a matrix commutator. We can rewrite further these two equations in a combined form [1]
\[\frac{d}{dt}\left(\Gamma+\varepsilon M\right)=[\Omega+\varepsilon G,\ \Gamma+ \varepsilon M], \tag{1.12}\]
where \(\varepsilon\) is a Grassmann symbol \(\varepsilon^{2}=0\). Noting such \(\varepsilon\) is realized by \(2\times 2\) matrix
\[\varepsilon=\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right), \tag{1.13}\]
it is amusing to remark that (1.12) can be rewritten in a kind of Lax-Moser form
\[\frac{d\mathcal{L}}{dt}=[\mathcal{M},\ \mathcal{L}],\qquad\mathcal{L}= \left(\begin{array}{cc}\Gamma&M\\ 0&\Gamma\end{array}\right),\quad\mathcal{M}=\left(\begin{array}{cc}\Omega&G \\ 0&\Omega\end{array}\right), \tag{1.14}\]
however this _should not_ be called Lax-Moser pair, since general Euler-Poisson equations are _not_ integrable at all.
### Discrete Euler Top
Let us explain our problem of time discretization by taking the Euler top as the simplest example. Equations of motion for the Euler top are, which is the special case of \(\boldsymbol{g}=0\) in (1.1),
\[A\dot{\omega}_{1}=(B-C)\omega_{2}\omega_{3},\quad B\dot{\omega}_{2}=(C-A) \omega_{3}\omega_{1},\quad C\dot{\omega}_{3}=(A-B)\omega_{1}\omega_{2},\]
which are discretized many years ago by Hirota-Kimura [2] into
\[\omega_{1}^{n+1}-\omega_{1}^{n} =\frac{h(B-C)}{2A}\left(\omega_{2}^{n+1}\omega_{3}^{n}+\omega_{2} ^{n}\omega_{3}^{n+1}\right),\] \[\omega_{2}^{n+1}-\omega_{2}^{n} =\frac{h(C-A)}{2B}\left(\omega_{3}^{n+1}\omega_{1}^{n}+\omega_{3} ^{n}\omega_{1}^{n+1}\right), \tag{1.15}\] \[\omega_{3}^{n+1}-\omega_{3}^{n} =\frac{h(A-B)}{2C}\left(\omega_{1}^{n+1}\omega_{2}^{n}+\omega_{1} ^{n}\omega_{2}^{n+1}\right),\]
with time increment \(h\) such that \(t=nh\). We call this kind of explicit solver the HK scheme. One of the important features of the HK scheme is _time reversal symmetry_, _i.e._\(h\to-h\) corresponds to exchanging \(n\leftrightarrow n+1\).
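Although (1.15) looks implicit, it is linear in \(\omega^{n+1}\), so one step amounts to a single \(3\times 3\) linear solve. A minimal numerical sketch of such a step (the function and variable names are ours, not from [2]) is:

```python
import numpy as np

def hk_step_euler_top(w, h, A, B, C):
    """One step of the HK scheme (1.15): solve the 3x3 linear system for omega^{n+1}."""
    a = h * (B - C) / (2.0 * A)
    b = h * (C - A) / (2.0 * B)
    c = h * (A - B) / (2.0 * C)
    w1, w2, w3 = w
    # Collect all omega^{n+1} terms of (1.15) on the left-hand side
    M = np.array([[1.0,     -a * w3, -a * w2],
                  [-b * w3,  1.0,    -b * w1],
                  [-c * w2, -c * w1,  1.0   ]])
    return np.linalg.solve(M, np.asarray(w, dtype=float))
```

Iterating this map traces the discrete trajectory; as explained below, the naive invariants \(\mathbf{m}^{2}\) and \(\mathbf{m}\cdot\mathbf{\omega}\) are then not conserved exactly.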
Let us mention here the author's work [3] giving the discrete Lax-Moser pair for this Euler top, by relating it to \(2\times 2\) matrix Nahm system.[4]
This HK scheme is rewritten in vector form by
\[\mathbf{m}^{n+1}-\mathbf{m}^{n}=\frac{h}{2}\left(\mathbf{m}^{n+1}\times\mathbf{\omega}^{n}+\mathbf{m} ^{n}\times\mathbf{\omega}^{n+1}\right). \tag{1.16}\]
Then it is easy to verify that \((\mathbf{m}^{n+1})^{2}\neq(\mathbf{m}^{n})^{2}\), because
\[(\mathbf{m}^{n+1}+\mathbf{m}^{n})\cdot(\mathbf{m}^{n+1}-\mathbf{m}^{n})=\frac{h}{2 }(\mathbf{m}^{n+1}+\mathbf{m}^{n})\cdot\left(\mathbf{m}^{n+1}\times\mathbf{\omega}^{n}+\mathbf{m}^ {n}\times\mathbf{\omega}^{n+1}\right)\] \[\quad=\frac{h}{2}\left(\mathbf{m}^{n}\cdot(\mathbf{m}^{n+1}\times\mathbf{ \omega}^{n})+\mathbf{m}^{n+1}\cdot(\mathbf{m}^{n}\times\mathbf{\omega}^{n+1})\right)\] \[\quad=\frac{h}{2}\left(\mathbf{\omega}^{n}-\mathbf{\omega}^{n+1}\right) \cdot(\mathbf{m}^{n}\times\mathbf{m}^{n+1})\neq 0, \tag{1.17}\]
therefore the conservation law \(\mathbf{m}^{2}=\text{const.}\) is broken in the HK scheme, and an \(O(h)\) correction term is necessary. Hirota-Kimura [2] already discussed such corrections, also for the other broken conservation law, that of the energy (\(\mathbf{m}\cdot\mathbf{\omega}=\text{const.}\)).
To preserve the moment conservation \((\mathbf{m}^{n+1})^{2}=(\mathbf{m}^{n})^{2}\) exactly, we should use instead
\[\mathbf{m}^{n+1}-\mathbf{m}^{n}=\frac{h}{2}\ (\mathbf{m}^{n+1}+\mathbf{m}^{n})\times\mathbf{\omega}^ {n}, \tag{1.18}\]
where the last \(\mathbf{\omega}^{n}\) can be changed to \(\mathbf{\omega}^{n+1}\). Changing to \(\mathbf{\omega}^{n+1}\), however, makes the solver implicit. This kind of solver, like (1.18), should be called the BS scheme, since Bobenko-Suris [5] used a similar algorithm to discretize the Lagrange top, which will be discussed in the next section. It should be noted that such a BS scheme breaks time reversal symmetry.
Finally it should be remarked here that the difference equations
\[\mathbf{m}^{n+1}-\mathbf{m}^{n}=\frac{h}{4}\ (\mathbf{m}^{n+1}+\mathbf{m}^{n})\times(\mathbf{ \omega}^{n+1}+\mathbf{\omega}^{n}), \tag{1.19}\]
conserve both \(\mathbf{m}^{2}\) and \(\mathbf{m}\cdot\mathbf{\omega}\) exactly. The latter is shown by
\[(\mathbf{\omega}^{n+1}+\mathbf{\omega}^{n})\cdot(\mathbf{m}^{n+1}-\mathbf{m}^{n})=0\quad \Longrightarrow\quad\mathbf{m}^{n+1}\cdot\mathbf{\omega}^{n+1}=\mathbf{m}^{n}\cdot\mathbf{ \omega}^{n}, \tag{1.20}\]
where the identity \(\mathbf{m}^{n+1}\cdot\mathbf{\omega}^{n}=\mathbf{m}^{n}\cdot\mathbf{\omega}^{n+1}\) is used. This scheme has time reversal symmetry, but it is unfortunately _not_ an explicit solver.
## 2 Discrete Lagrange top
### Discrete Euler-Poisson equation in HK scheme
Our Euler-Poisson equations (1.2), (1.3) can be discretized in the HK scheme as follows:
\[\omega_{1}^{n+1}-\omega_{1}^{n}=\frac{h(B-C)}{2A}\left(\omega_{2}^{n+1}\omega_{3}^{n}+\omega_{2}^{n}\omega_{3}^{n+1}\right)+\frac{hmg}{2A}\left(z_{0}(\gamma_{2}^{n+1}+\gamma_{2}^{n})-y_{0}(\gamma_{3}^{n+1}+\gamma_{3}^{n})\right),\] \[\omega_{2}^{n+1}-\omega_{2}^{n}=\frac{h(C-A)}{2B}\left(\omega_{3}^{n+1}\omega_{1}^{n}+\omega_{3}^{n}\omega_{1}^{n+1}\right)+\frac{hmg}{2B}\left(x_{0}(\gamma_{3}^{n+1}+\gamma_{3}^{n})-z_{0}(\gamma_{1}^{n+1}+\gamma_{1}^{n})\right),\] \[\omega_{3}^{n+1}-\omega_{3}^{n}=\frac{h(A-B)}{2C}\left(\omega_{1}^{n+1}\omega_{2}^{n}+\omega_{1}^{n}\omega_{2}^{n+1}\right)+\frac{hmg}{2C}\left(y_{0}(\gamma_{1}^{n+1}+\gamma_{1}^{n})-x_{0}(\gamma_{2}^{n+1}+\gamma_{2}^{n})\right),\] \[\gamma_{1}^{n+1}-\gamma_{1}^{n}=\frac{h}{2}\left(\gamma_{2}^{n+1}\omega_{3}^{n}+\gamma_{2}^{n}\omega_{3}^{n+1}-\gamma_{3}^{n+1}\omega_{2}^{n}-\gamma_{3}^{n}\omega_{2}^{n+1}\right),\] \[\gamma_{2}^{n+1}-\gamma_{2}^{n}=\frac{h}{2}\left(\gamma_{3}^{n+1}\omega_{1}^{n}+\gamma_{3}^{n}\omega_{1}^{n+1}-\gamma_{1}^{n+1}\omega_{3}^{n}-\gamma_{1}^{n}\omega_{3}^{n+1}\right),\] \[\gamma_{3}^{n+1}-\gamma_{3}^{n}=\frac{h}{2}\left(\gamma_{1}^{n+1}\omega_{2}^{n}+\gamma_{1}^{n}\omega_{2}^{n+1}-\gamma_{2}^{n+1}\omega_{1}^{n}-\gamma_{2}^{n}\omega_{1}^{n+1}\right), \tag{2.1}\]
which is an explicit solver with time reversal symmetry. In fact, such a scheme gives \((\boldsymbol{\omega}^{n+1},\boldsymbol{\gamma}^{n+1})\) from \((\boldsymbol{\omega}^{n},\boldsymbol{\gamma}^{n})\), all six components in one step, by linear algebra. It should be noted, however, that the three conservation laws (1.4) are unfortunately all broken, although such violations are in practice very small.
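To make the linear-algebra step explicit, a minimal sketch of one step of (2.1) is given below (our own notation; it assumes the symmetrized form of the \(\gamma_{3}\) line as written above). The six unknowns at step \(n+1\) are obtained from a single \(6\times 6\) linear solve.

```python
import numpy as np

def hk_step_euler_poisson(w, g, h, A, B, C, mg, x0, y0, z0):
    """One HK step (2.1): solve a 6x6 linear system for (omega^{n+1}, gamma^{n+1})."""
    w1, w2, w3 = w
    g1, g2, g3 = g
    a, ka = h * (B - C) / (2 * A), h * mg / (2 * A)
    b, kb = h * (C - A) / (2 * B), h * mg / (2 * B)
    c, kc = h * (A - B) / (2 * C), h * mg / (2 * C)
    s = h / 2
    # Unknowns ordered as x = (w1', w2', w3', g1', g2', g3')
    M = np.array([
        [1,     -a*w3, -a*w2,  0,     -ka*z0,  ka*y0],
        [-b*w3,  1,    -b*w1,  kb*z0,  0,     -kb*x0],
        [-c*w2, -c*w1,  1,    -kc*y0,  kc*x0,  0    ],
        [0,      s*g3, -s*g2,  1,     -s*w3,   s*w2 ],
        [-s*g3,  0,     s*g1,  s*w3,   1,     -s*w1 ],
        [s*g2,  -s*g1,  0,    -s*w2,   s*w1,   1    ],
    ], dtype=float)
    rhs = np.array([
        w1 + ka * (z0*g2 - y0*g3),
        w2 + kb * (x0*g3 - z0*g1),
        w3 + kc * (y0*g1 - x0*g2),
        g1, g2, g3,
    ], dtype=float)
    sol = np.linalg.solve(M, rhs)
    return sol[:3], sol[3:]
```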
### Discrete Lagrange Top in HK scheme
Kimura-Hirota [6] applied this HK scheme to the case of the Lagrange top, which is \(A=B,\ x_{0}=y_{0}=0\). Their algorithm is (2.1) with such special parameters. They also gave discussions about correction terms for the conservation laws. Here it should be remarked further that the same HK scheme (2.1) can also be applied to the discrete Kowalevski top by setting \(A=B=2C,\ y_{0}=z_{0}=0\). The application to the Kowalevski top will be shown in the next section. In conclusion, the HK scheme difference equations have the advantage of being an explicit solver, but the disadvantage that the obtained results do not satisfy the conservation laws exactly, although the violations are acceptable in practice.
### Discrete Lagrange top in BS scheme
On the other hand, Bobenko-Suris [5] considered Lagrange top not in the moving frame but in the inertial coordinate frame, which makes the equations simpler. The author [7] derived such equations of motion, gave conservation laws and obtained exact solutions both in continuous and discrete cases. Let us summarize the results briefly.
Difference equations for discrete Lagrange top are, in dimensionless form,
\[\mathbf{m}^{n+1}-\mathbf{m}^{n}=h\ \mathbf{p}\times\mathbf{a}^{n}, \tag{2.2}\] \[\mathbf{a}^{n+1}-\mathbf{a}^{n}=\frac{h}{2}\ \mathbf{m}^{n+1}\times\left(\mathbf{a}^{n +1}+\mathbf{a}^{n}\right), \tag{2.3}\]
where \(\mathbf{m}\) is the angular momentum, \(\mathbf{a}\) is the base vector \(\mathbf{e}_{3}\), which moves in the inertial frame, and \(\mathbf{p}\) is \(\mathbf{e}_{z}\), which is a constant vector. We can easily recognize that these difference equations form a two-step explicit solver.
Now conservation laws are
\[1)\ (\mathbf{a}^{n})^{2}=1, \tag{2.4}\] \[2)\ \mathbf{m}^{n}\cdot\mathbf{p}=\text{const.}, \tag{2.5}\] \[3)\ \mathbf{m}^{n}\cdot\mathbf{a}^{n}=\text{const.}, \tag{2.6}\] \[4)\ E=\frac{1}{2}(\mathbf{m}^{n})^{2}+\mathbf{a}^{n}\cdot\mathbf{p}+\frac{h}{2}(\mathbf{a}^{n}\times\mathbf{m}^{n})\cdot\mathbf{p}=\text{const.} \tag{2.7}\]
The first law is shown from (2.3) by the same logic as for the BS scheme before. Note the \(O(h)\) term in \(E\), the energy conservation law. Other proofs, including this one, are given in the paper [7]. Exact solutions in terms of Jacobi's elliptic functions are also derived there, but are all omitted here.
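As a concrete illustration of the two-step structure, a minimal sketch (our own naming) of one update of (2.2)-(2.3) follows. Since (2.3) is linear in \(\mathbf{a}^{n+1}\), it reduces to a Cayley-type \(3\times 3\) solve, which is why the first conservation law \((\mathbf{a}^{n})^{2}=1\) holds exactly.

```python
import numpy as np

def skew(v):
    """Matrix [v]_x such that [v]_x u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def bs_step_lagrange_top(m, a, p, h):
    """One step of the discrete Lagrange top (2.2)-(2.3) in dimensionless form."""
    m_new = np.asarray(m, float) + h * np.cross(p, a)        # eq. (2.2)
    K = 0.5 * h * skew(m_new)
    # eq. (2.3) rearranged: (I - K) a^{n+1} = (I + K) a^n  (Cayley transform)
    a_new = np.linalg.solve(np.eye(3) - K, (np.eye(3) + K) @ np.asarray(a, float))
    return m_new, a_new
```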
## 3 Discrete Kowalevski top
### Kowalevski Top and Conservation Laws
For the sake of simplicity, we set \(A=B=2,\ C=1,\ y_{0}=z_{0}=0\) and \(mgx_{0}=c_{0}\) without loss of generality. We keep the parameter \(c_{0}\) for a while, although we set \(c_{0}=1\) in numerical simulations later. Then
we have
\[\dot{\omega}_{1}=\frac{1}{2}\omega_{2}\omega_{3},\quad\dot{\omega}_{2 }=-\frac{1}{2}\omega_{3}\omega_{1}+\frac{c_{0}}{2}\gamma_{3},\quad\dot{\omega}_{ 3}=-c_{0}\gamma_{2}, \tag{3.1}\] \[\dot{\gamma}_{1}=\gamma_{2}\omega_{3}-\gamma_{3}\omega_{2},\quad \dot{\gamma}_{2}=\gamma_{3}\omega_{1}-\gamma_{1}\omega_{3},\quad\dot{\gamma}_{ 3}=\gamma_{1}\omega_{2}-\gamma_{2}\omega_{1}. \tag{3.2}\]
If we introduce \(\omega=\omega_{1}+i\omega_{2},\ \overline{\omega}=\omega_{1}-i\omega_{2}\), and \(\gamma=\gamma_{1}+i\gamma_{2},\ \overline{\gamma}=\gamma_{1}-i\gamma_{2}\), we have
\[\frac{d\omega}{dt}=-\frac{i}{2}\left(\omega_{3}\omega-c_{0}\gamma_{3}\right), \quad\frac{d\overline{\omega}}{dt}=+\frac{i}{2}\left(\omega_{3}\overline{ \omega}-c_{0}\gamma_{3}\right),\quad\frac{d\omega_{3}}{dt}=-c_{0}\gamma_{2}, \tag{3.3}\]
\[\frac{d\gamma}{dt}=-i\left(\omega_{3}\gamma-\omega\gamma_{3}\right),\quad \frac{d\overline{\gamma}}{dt}=+i\left(\omega_{3}\overline{\gamma}-\overline{ \omega}\gamma_{3}\right),\quad\frac{d\gamma_{3}}{dt}=\frac{i}{2}\left( \overline{\omega}\gamma-\omega\overline{\gamma}\right), \tag{3.4}\]
from which we have especially
\[\frac{d}{dt}\omega^{2}=-i\omega_{3}\omega^{2}+ic_{0}\gamma_{3} \omega,\quad c_{0}\frac{d}{dt}\gamma=-ic_{0}\omega_{3}\gamma+ic_{0}\gamma_{3}\omega\] \[\implies\quad\frac{d}{dt}\left(\omega^{2}-c_{0}\gamma\right)=-i \omega_{3}\left(\omega^{2}-c_{0}\gamma\right), \tag{3.5}\]
which implies the existence of the fourth integral, Kowalevski's conservation law
\[\left|\omega^{2}-c_{0}\gamma\right|^{2}=\xi\overline{\xi}=\text{ const.}=k^{2}, \tag{3.6}\]
where we set \(\xi=\omega^{2}-c_{0}\gamma,\ \overline{\xi}=\overline{\omega}^{2}-c_{0} \overline{\gamma}\) according to Kowalevski [8], while her \(c_{0}\) has opposite sign due to her choosing opposite sign of potential energy, which seems to be a strange tradition of her age common to _e.g._ C.G. Jacobi, C. Neumann, and E. Cartan.
In summary, conservation laws for Kowalevski top are
\[1) \quad\boldsymbol{\gamma}^{2}=\gamma_{1}^{2}+\gamma_{2}^{2}+ \gamma_{3}^{2}=1,\] \[2) \quad 2\ell=2(\omega_{1}\gamma_{1}+\omega_{2}\gamma_{2})+\omega_{3} \gamma_{3}=\text{const.},\] \[3) \quad E=\omega_{1}^{2}+\omega_{2}^{2}+\frac{1}{2}\omega_{3}^{2}+c _{0}\gamma_{1}=\text{const.},\] \[4) \quad k^{2}=\xi\overline{\xi}=\left|\omega^{2}-c_{0}\gamma\right|^ {2}=\text{const.},\]
which constitute all four conserved quantities.
### Discrete, Explicit Schemes for Kowalevski Top
#### 3.2.1 Kowalevski top in HK scheme
Figure 1 shows calculations in the HK scheme mentioned before. For the numerical simulations we set \(c_{0}=1\) and put
\[\omega_{1}=2,\quad\omega_{2}=\omega_{3}=0,\quad\gamma_{1}=\sqrt{1-\gamma_{3}^{2} },\quad\gamma_{2}=0,\quad\gamma_{3}=0.001, \tag{3.7}\]
as the initial condition, which are the same values used by Yoshida [9]. Calculations are made for \(h=0.001\) over \(N=50000\) steps, using the GSL (GNU Scientific Library) package. The results in Fig. 1 are the three components of \(\boldsymbol{\gamma}\) (left) and \(\boldsymbol{\omega}\) (right). Our results show almost the same behaviors as Yoshida's, whose computation scheme was not described in his paper but might have been a pre-version of his famous symplectic algorithm [10]. Kowalevski's constant \(k^{2}\) oscillates between 9.000003 and 9.000011.
#### 3.2.2 Schemes to hold conservation laws
Let us propose new discretization methods which satisfy at least the following two conservation laws
\[1)\quad\boldsymbol{\gamma}^{2}=1,\text{ and } \tag{3.8}\] \[4)\quad\xi\overline{\xi}=k^{2},\qquad\xi=\omega^{2}-c_{0}\gamma,\quad\overline{\xi}=\overline{\omega}^{2}-c_{0}\overline{\gamma}, \tag{3.9}\]
exactly.
(1) Methods to keep \(\boldsymbol{\gamma}^{2}=1\)
For this purpose, we have at least three methods, one is the BS scheme
\[\mbox{(A)}\quad\mathbf{\gamma}^{n+1}-\mathbf{\gamma}^{n}=\frac{h }{2}\left(\mathbf{\gamma}^{n+1}+\mathbf{\gamma}^{n}\right) \times\mathbf{\omega}^{n}. \tag{3.10}\]
The next is to use variable transformation by introducing a complex variable \(z\)
\[\mbox{(B)}\quad\gamma=\frac{2z}{1+z\overline{z}},\quad\overline{ \gamma}=\frac{2\overline{z}}{1+z\overline{z}},\quad\gamma_{3}=\frac{1-z \overline{z}}{1+z\overline{z}} \tag{3.11}\] \[\iff\quad z=\frac{\gamma_{1}+i\gamma_{2}}{1+\gamma_{3}},\quad \overline{z}=\frac{\gamma_{1}-i\gamma_{2}}{1+\gamma_{3}},\quad z\overline{z}= \frac{1-\gamma_{3}}{1+\gamma_{3}}, \tag{3.12}\]
which is a stereo-graphic transformation. Condition \(\mathbf{\gamma}^{2}=1\) is satisfied automatically. In terms of \(z,\ \overline{z}\), (3.2) is rewritten as
\[\dot{z}=\frac{i}{2}\left(\omega-2\omega_{3}z-\overline{\omega}z^{2}\right), \qquad\dot{\overline{z}}=-\frac{i}{2}\left(\overline{\omega}-2\omega_{3} \overline{z}-\omega\overline{z}^{2}\right), \tag{3.13}\]
which are easily time discretized by _e.g._ Euler method.
The last method is to interpret (3.2) as an instantaneous rotation by \(\omega\) with time lapse \(h\)
\[\mbox{(C)}\quad\mathbf{\gamma}^{n+1}=R(\mathbf{\theta})\ \mathbf{\gamma}^{n},\qquad\mathbf{\theta}=h\mathbf{ \omega}^{n} \tag{3.14}\]
where \(3\times 3\) rotation matrix \(R\) is given by
\[R(\mathbf{\theta})=\left(\begin{array}{ccc}n_{1}^{2}+(1-n_{1}^{2}) c&n_{1}n_{2}(1-c)+n_{3}s&n_{1}n_{3}(1-c)-n_{2}s\\ n_{2}n_{1}(1-c)-n_{3}s&n_{2}^{2}+(1-n_{2}^{2})c&n_{2}n_{3}(1-c)+n_{1}s\\ n_{3}n_{1}(1-c)+n_{2}s&n_{3}n_{2}(1-c)-n_{1}s&n_{3}^{2}+(1-n_{3}^{2})c\end{array} \right), \tag{3.15}\]
\[\mathbf{n}=\mathbf{\theta}/\theta,\quad\theta=|\mathbf{\theta}|,\quad c=\cos\theta,\quad s=\sin\theta. \tag{3.16}\]
These three methods break time reversal symmetry unfortunately, and will be tested later.
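For concreteness, a small sketch of method (C) is given below (the function name is ours); it advances \(\boldsymbol{\gamma}\) by the rotation matrix (3.15)-(3.16), and since \(R\) is orthogonal, \(\boldsymbol{\gamma}^{2}=1\) is kept to machine precision.

```python
import numpy as np

def gamma_rotation_step(gamma, omega, h):
    """Method (C): gamma^{n+1} = R(h*omega^n) gamma^n with R as in eq. (3.15)."""
    theta_vec = h * np.asarray(omega, dtype=float)
    theta = np.linalg.norm(theta_vec)
    if theta == 0.0:
        return np.asarray(gamma, dtype=float)
    n1, n2, n3 = theta_vec / theta
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([
        [n1*n1 + (1 - n1*n1)*c, n1*n2*(1 - c) + n3*s,  n1*n3*(1 - c) - n2*s],
        [n2*n1*(1 - c) - n3*s,  n2*n2 + (1 - n2*n2)*c, n2*n3*(1 - c) + n1*s],
        [n3*n1*(1 - c) + n2*s,  n3*n2*(1 - c) - n1*s,  n3*n3 + (1 - n3*n3)*c],
    ])
    return R @ np.asarray(gamma, dtype=float)
```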
(2) Method to keep \(\xi\overline{\xi}=k^{2}\)
Our equation (3.5) can be integrated for small \(h\) by trapezoidal rule as follows
\[\frac{d}{dt}\log\xi=-i\omega_{3}\quad\Longrightarrow\quad\log \left(\frac{\xi^{n+1}}{\xi^{n}}\right)=-\frac{ih}{2}\left(\omega_{3}^{n+1}+ \omega_{3}^{n}\right)\] \[\Longrightarrow\quad\xi^{n+1}=e^{-i\chi}\ \xi^{n},\quad\chi=\frac{h}{2} \left(\omega_{3}^{n+1}+\omega_{3}^{n}\right) \tag{3.17}\]
that is,
\[\left(\omega^{2}-c_{0}\gamma\right)^{n+1}=e^{-i\chi}\left(\omega^{2}-c_{0}\gamma \right)^{n}. \tag{3.18}\]
Since \(\gamma^{n},\;\gamma^{n+1},\;\omega^{n}\) and \(\omega_{3}^{n},\;\omega_{3}^{n+1}\) (therefore \(\chi\) also) are already known, this is an equation to be solved for the unknown \(\omega^{n+1}\), such as
\[\left(\omega_{1}^{n+1}+i\omega_{2}^{n+1}\right)^{2}=e^{-i\chi}\left[\left( \omega_{1}^{n}+i\omega_{2}^{n}\right)^{2}-c_{0}\gamma^{n}\right]+c_{0}\gamma^ {n+1}, \tag{3.19}\]
where the right hand side is known and computable. This type of equation, \(w^{2}=z\), is similar to the _Bohlin transformation_ [11], which relates the Kepler problem to the Hooke (harmonic oscillator) problem, both in two-dimensional motion. Let us therefore also call our method Bohlin's method. It should be noted that this Bohlin's method has time reversal symmetry.
Solution of \(w^{2}=z\) is one of \(w=\pm\sqrt{z}\), or
\[w=r\ e^{i\phi},\quad z=R\ e^{i\Phi}\quad\Longrightarrow\quad r=\sqrt{R},\quad \phi=\frac{\Phi}{2}\mod 2\pi, \tag{3.20}\]
where we must carefully choose the correct angle \(\phi\). For such purpose, we solve once
\[\frac{d\omega}{dt}=-\frac{i}{2}\left(\omega_{3}\omega-c_{0}\gamma_{3}\right) \quad\Longrightarrow\quad\omega^{n+1}=\omega^{n}-\frac{ih}{2}\left(\omega_{3} ^{n}\omega^{n}-c_{0}\gamma_{3}^{n}\right), \tag{3.21}\]
only to find the correct branch of the phase \(\phi\) of \(\omega^{n+1}=r\ e^{i\phi}\). To state it precisely, it is sufficient to compare the two arguments of the complex \(\omega^{n+1}\)'s, one from Bohlin's method and the other from (3.21). We need only the sign (plus or minus) of the argument, which is obtained by _e.g._ the atan2 function of the C language, and we change the sign of the former (Bohlin's) if the two are different.
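A minimal sketch of this branch selection (the function name and argument conventions are ours; the sign test mirrors the atan2 comparison described above) reads:

```python
import numpy as np

def bohlin_step(omega, gamma, gamma_new, gamma3, w3, w3_new, h, c0=1.0):
    """Recover omega^{n+1} = omega_1^{n+1} + i*omega_2^{n+1} from (3.18)-(3.21).

    omega = omega_1 + i*omega_2 and gamma = gamma_1 + i*gamma_2 at step n;
    gamma_new is gamma at step n+1; gamma3, w3, w3_new are real and already known.
    """
    chi = 0.5 * h * (w3_new + w3)
    rhs = np.exp(-1j * chi) * (omega**2 - c0 * gamma) + c0 * gamma_new   # (3.19)
    w = np.sqrt(rhs)                                  # one root of w**2 = rhs
    # Euler predictor (3.21), used only to pick the sign of the square root
    pred = omega - 0.5j * h * (w3 * omega - c0 * gamma3)
    if np.sign(np.angle(w)) != np.sign(np.angle(pred)):
        w = -w
    return w
```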
In summary our algorithm to find \(\left(\boldsymbol{\gamma}^{n+1},\;\boldsymbol{\omega}^{n+1}\right)\) from \(\left(\boldsymbol{\gamma}^{n},\;\boldsymbol{\omega}^{n}\right)\) in three steps is as follows:
\[\begin{array}{ll}\mbox{(1)}&\mbox{Find }\boldsymbol{\gamma}^{n+1}\mbox{ by one of methods (A), (B) or (C).}\\ \mbox{(2)}&\mbox{Set }\omega_{3}^{n+1}=\omega_{3}^{n}-\frac{h}{2}c_{0} \left(\gamma_{2}^{n+1}+\gamma_{2}^{n}\right).\\ \mbox{(3)}&\mbox{Find }(\omega_{1}^{n+1},\;\omega_{2}^{n+1})\mbox{ by Bohlin's method.}\end{array}\]
Figure 2 shows the results of Bohlin's method with (A), taking the same initial conditions as Fig.1. Since differences among methods (A) to (C) are indistinguishable, we omit their figures. In other words
our three methods (A) to (C) keeping \(\mathbf{\gamma}^{2}=1\) have almost the same quality.
An obvious but strange difference found by comparing Fig. 2 with Fig. 1 is the magnitude of the period, even though the same initial conditions and parameters are used. This implies
_the time reversal symmetry is extremely important_
for the dynamical behaviors of Kowalevski top.
This is also recognized from Fig. 3, whose left panel shows the conserved quantities 2) \(2\ell=\) const. and 3) \(E=\) const. for Bohlin's method with (A). We observe that both quantities indeed seem to be constant. However, the right panel of Fig. 3 shows that the energy \(E\) actually makes a slow and tiny periodic increase; look at the vertical scale carefully. The behavior of \(2\ell\) is almost the same. These behaviors are surely consequences of the time reversal asymmetry.
Figure 3: Kowalevski top \(2\ell\) and \(E\) by Bohlin with (A) method
Figure 2: Kowalevski top \(\gamma\) (left) and \(\omega\) (right) by Bohlin with (A) method
#### 3.2.3 Hybrid of two schemes
In this way, we have arrived finally at a hybrid scheme. Our proposal is as follows.
1. Find \((\boldsymbol{\gamma}^{n+1},\ \boldsymbol{\omega}^{n+1})\) by the HK scheme once.
2. Solve \(\boldsymbol{\gamma}^{n+1}\) again by the modified BS scheme
\[\boldsymbol{\gamma}^{n+1}-\boldsymbol{\gamma}^{n}=\frac{h}{4}\left(\boldsymbol{\gamma}^{n+1}+\boldsymbol{\gamma}^{n}\right)\times\left(\boldsymbol{\omega}^{n+1}+\boldsymbol{\omega}^{n}\right). \tag{3.22}\]
3. Find \((\omega_{1}^{n+1},\ \omega_{2}^{n+1})\) again by Bohlin's method.
This scheme keeps \(\boldsymbol{\gamma}^{2}=1\) and \(k^{2}=|\omega^{2}-c_{0}\gamma|^{2}=\text{const.}\), and also has time reversal symmetry. The obtained Figure 4 looks almost the same as Figure 1, except that \(\boldsymbol{\gamma}^{2}=1\) and \(k^{2}=|\omega^{2}-c_{0}\gamma|^{2}=\text{const.}\) are satisfied exactly.
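Step 2 of the hybrid scheme is again only linear in \(\boldsymbol{\gamma}^{n+1}\); a minimal sketch (our naming) showing why \(\boldsymbol{\gamma}^{2}=1\) is preserved exactly is:

```python
import numpy as np

def hybrid_gamma_step(gamma, omega, omega_new, h):
    """Solve the modified BS relation (3.22) for gamma^{n+1}.

    With U the skew matrix of u = (h/4)(omega^{n+1} + omega^n), eq. (3.22)
    reads (I + U) gamma^{n+1} = (I - U) gamma^n, a Cayley-type update,
    so |gamma| is conserved exactly.
    """
    u = 0.25 * h * (np.asarray(omega, float) + np.asarray(omega_new, float))
    U = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    I = np.eye(3)
    return np.linalg.solve(I + U, (I - U) @ np.asarray(gamma, float))
```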
## 4 Summary and remarks
General Euler-Poisson equations can be discretized by the Hirota-Kimura method, which can be applied even to the Euler, Lagrange, and Kowalevski tops. One of its defects, however, is that it breaks the conservation laws, although the discrepancies are very small. To remedy such defects several methods are examined with regard to accuracy and efficiency. In the course of this it is found that time reversal symmetry is very important. For the Kowalevski top, a hybrid scheme of the Hirota-Kimura method and Bohlin's method, which is newly found and named, is proposed and tested numerically with success.
Present discretization scheme is, in a sense, merely an approximation to the continuous-time equation. On the contrary, it is desirable
to find a discrete-time equation which is also completely integrable by itself. Many years ago, Haine-Horozov [12] wrote that the Kowalevski top is equivalent to the Clebsch (or Neumann) system, which is also integrable. However, their equivalence unfortunately needs reconsideration, since the relationships they give are _not_ a real-to-real variables correspondence. If such a problem is resolved, the _integrable and discrete_ Kowalevski system will finally be constructed.
|
2309.05733 | Impact of the Galactic bar on tidal streams within the Galactic disc:
The case of the tidal stream of the Hyades | Tidal streams of disrupted clusters are routinely detected in the halo of the
Milky Way. It was recently shown that tidal streams of open clusters can now
also be detected within the disc. In this work, we highlight the fact that
tidal streams provide a powerful new diagnostic of the non-axisymmetric disc
potential and may provide a new constraint on the pattern speed of the Galactic
bar. In particular, we show how the stream-orbit misalignment for an open
cluster on a quasi-circular orbit in the solar vicinity varies as a function of
the position w.r.t the bar resonances. The angular shift rises beyond
corotation, reaching values as high as $30^\circ$ close to the outer Lindblad
resonance (OLR), then dropping again and reversing its sign beyond the OLR. We
applied this mechanism to the recently detected Hyades stream. We note that
the stream would be very similar when taking a potential with no bar or with a
fast pattern speed of 55 km.s$^{-1}$ kpc$^{-1}$. However, we find that the
stream is different than previously detected when adopting a potential with a
bar pattern speed of $39$ km.s$^{-1}$ kpc$^{-1}$. Previously detected Hyades
candidate members would, on the other hand, favour a barless or a fast bar
galaxy. Interestingly, the previously reported asymmetry in star counts within
the leading and trailing tails of the Hyades tidal stream persists in all
cases. Our study conclusively demonstrates that the effect of disc
non-axisymmetries cannot be neglected when searching for tidal streams of open
clusters and that current candidate members of the Hyades stream should not be
trusted beyond a distance of 200 pc from the cluster. Moreover, our study
allows for ideal targets to be provided for high-resolution spectroscopy
follow-ups, which will enable conclusive identifications of the Hyades stream
track and provide novel independent constraints on the bar pattern speed in the
MW. | Guillaume F. Thomas, Benoit Famaey, Giacomo Monari, Chervin F. P. Laporte, Rodrigo Ibata, Patrick de Laverny, Vanessa Hill, Christian Boily | 2023-09-11T18:02:44Z | http://arxiv.org/abs/2309.05733v1 | # Impact of the Galactic bar on tidal streams within the Galactic disc
###### Abstract
Tidal streams of disrupted clusters are powerful probes of the gravitational potential of the Galaxy and they are routinely detected in the stellar halo of the Milky Way. It was recently shown that tidal streams of open clusters can now also be detected within the Milky Way disc. In this work, we highlight the fact that disc tidal streams also provide a powerful new diagnostic of the non-axisymmetric disc potential and may, in principle, provide a new constraint on the pattern speed of the Galactic bar. In particular, we show how the stream-orbit misalignment for an open cluster on a quasi-circular disk orbit in the solar vicinity varies as a function of the position with respect to the bar resonances. The angular shift rises beyond corotation, reaching values as high as 30\({}^{\circ}\) close to the outer Lindblad resonance (OLR), then dropping again and reversing its sign beyond the OLR. We applied this mechanism to the recently detected tidal stream of the Hyades open cluster and we note that the detected stream stars would be very similar when taking a potential a priori with no bar or with a fast pattern speed of 55 km s\({}^{-1}\) kpc\({}^{-1}\) (or lower than 30 km s\({}^{-1}\) kpc\({}^{-1}\)). However, we find that candidate stream stars are different than previously detected ones when adopting a potential with a bar pattern speed of 39 km s\({}^{-1}\) kpc\({}^{-1}\), which is consistent with the most recent determinations of the actual Galactic bar pattern speed. Previously detected Hyades candidate members would, on the other hand, favour a barless galaxy or a fast bar of pattern speed 55 km s\({}^{-1}\) kpc\({}^{-1}\). Interestingly, the previously reported asymmetry in star counts within the leading and trailing tails of the Hyades tidal stream persists in all cases. Our study conclusively demonstrates that the effect of disc non-axisymmetries cannot be neglected when searching for tidal streams of open clusters and that current candidate members of the Hyades stream should not be trusted beyond a distance of 200 pc from the cluster. Moreover, our study allows for ideal targets to be provided for high-resolution spectroscopy follow-ups, which will enable conclusive identifications of the Hyades stream track and provide novel independent constraints on the bar pattern speed in the Milky Way.
## 1 Introduction
Tidal streams from dissolving globular clusters in the halo of the Galaxy have long been known to be an incredibly powerful probe of the gravitational potential of the Galaxy, its dark matter distribution, and the laws of gravitation itself. In principle, such dynamically cold stellar streams offer an opportunity to directly probe the acceleration field over the extent of the detected streams because when the progenitors are of low mass and dissolve slowly, the ejected stars are lost and subsequently characterised by a low relative energy. This leads to streams that tend to closely (although not perfectly) follow the orbits of their progenitors. While the situation may become more complicated for massive progenitors, the picture for low-mass progenitors can also be complicated by perturbations, either from local disturbances (e.g. giant molecular clouds or putative dark matter subhalos Erkal & Belokurov 2015; Erkal et al. 2016; Amorisco et al. 2016) or by global ones such as the effects brought on by the infall of the Large Magellanic Cloud (Erkal et al. 2019; Vasiliev et al. 2021; Koposov et al. 2023; Lilleengen et al. 2023) or of the Sagittarius dwarf galaxy (Dillamore et al. 2022) or the presence of the Galactic bar at the center of the Galaxy (Hattori et al. 2016; Pearson et al. 2017; Thomas et al. 2020; Dillamore et al. 2023).
Recently, detailed analyses of _Gaia_ data have allowed the realm of the study of tidal streams to be extended to those coming from open clusters inside the Galactic disc. Such tidal streams have been detected up to 1 kpc from the Hyades cluster (Oh & Evans 2020; Jerabkova et al. 2021), but also around Praesepe, Coma Berenices, or NGC 752 (Boffin et al. 2022; Kroupa et al. 2022). Interesting asymmetries have been found in these streams, which could potentially be attributed to close encounters with massive dark matter sub-halos, although such asymmetries are also reminiscent of asymmetries in some globular clus
ter streams (Thomas et al., 2018) and could challenge Newtonian gravity (Kroupa et al., 2022); also, they could argue in favour of a type of dark matter that breaks the weak equivalence principle (Kesden & Kamionkowski, 2006; Naik et al., 2020).
However, such detections assume a priori that the progenitor orbits within an axisymmetric potential, while the Milky Way disc is actually known to harbor a massive central bar component. Here, we investigate in detail the effects that the bar could have on the orientation of such disc tidal streams and how this can affect the selections of candidate stream stars. We show that these selections can, in principle, serve as powerful new probes for constraining the bar pattern speed.
The pattern speed of the Galactic bar has historically been known to be difficult to estimate, with some contradictory indications coming from different data. For a long time, the prevalent value was estimated to be around \(\sim 55\) km s\({}^{-1}\) kpc\({}^{-1}\), following the work of Dehnen (1999, 2000) and Fux (2001), which showed that the prominent Hercules moving group (e.g. Dehnen, 1998; Famaey et al., 2005) in local velocity space could be explained if the Sun is placed just outside the 2 : 1 Outer Lindblad Resonance (OLR) of the bar (see also Antoja et al., 2014). This was confirmed by an analysis of the local non-axisymmetric Oort constant (Minchev et al., 2007) as well as various other lines of evidence (Quillen et al., 2011; Fragkoudi et al., 2019). However, subsequent works on the density of red clump stars in the disc (Wegg et al., 2015) and on the gas kinematics (Sormani et al., 2015; Li et al., 2016), followed by dynamical modelling of the stellar kinematics in the inner Galaxy (Portail et al., 2017), as well as by an analysis of proper motion data from the VVV survey (Sanders et al., 2019; Clarke et al., 2019), all point to a pattern speed of \(\sim 40\) km s\({}^{-1}\) kpc\({}^{-1}\). This value was also favoured by Dillamore et al. (2023) to explain the presence of the ridge structure in the energy-angular momentum space of local stars. Monari et al. (2019) then showed that the Galactic model of Portail et al. (2017) could reproduce most of the observed features in local velocity space, including the Hercules moving group with a characteristic dependence on azimuth (Monari et al., 2019) if the Sun is placed a bit outside of the co-rotation resonance of the bar. To this day, many different possible pattern speeds are still considered for the Galactic bar (Trick, 2022) and it is conceivable that this pattern speed actually varies with time (Hilmi et al., 2020). Therefore, gaining access to a new independent probe of the pattern speed of the bar in the form of tidal streams of open clusters in the disc is most desirable.
In the present paper, we first describe the bar potential that we use for our simulations (Sect. 2) before describing the set-up of our simulation of the stream of the Hyades cluster (Sect. 3). In that section, we show that a 'fast' rotating bar with pattern speed \(\sim 55\) km s\({}^{-1}\) kpc\({}^{-1}\) yields very similar results to a potential without bar, whilst a slower pattern speed of \(39\) km s\({}^{-1}\) kpc\({}^{-1}\) presents a very strong stream-orbit misalignment. We then apply those models to observations to re-assess the probability of candidate stream members for different bar pattern speeds (Sect. 4) and we conclude that high-resolution spectroscopy will be needed to disentangle both types of models. Our conclusions are summarised in Sect. 5.
## 2 Bar potential
We start by setting \((R,\theta,z)\) as the Galactocentric cylindrical coordinates. The plane \(z=0\) corresponds to the Galactic plane. The angle \(\theta\) increases in the clockwise sense, and the line \(\theta=0\) passes through the Sun and the Galactic centre. We can define corresponding Cartesian Galactocentric coordinates as \(x=-R\cos\theta\), \(y=R\sin\theta\), so that the Sun lies at \((x,y)=(-\mathrm{R_{\odot}},0)\), if \(\mathrm{R_{\odot}}\) is the cylindrical distance of the Sun from the Galactic centre.
The model of the Milky Way used in this paper consists of a time-dependent Galactic potential (\(\phi_{\mathrm{tot}}\)) composed of a background axisymmetric potential (\(\phi_{0}\)) and of three non-axisymmetric components corresponding to the three first even modes (\(\phi_{m}\)) of the bar potential, such that:
\[\phi_{\mathrm{tot}}(R,\theta,z,t)=\phi_{0}(R,z)+\sum_{m=2,4,6}\phi_{m}(R, \theta,z,t). \tag{1}\]
The contribution to the total potential of each mode \(m\) of the bar can be computed from the amplitude of the mode with respect to the axisymmetric potential (\(A_{m/0}\equiv A_{m}/A_{0}\)), as follows:
\[\phi_{m}(R,\theta,z,t)=\phi_{0}(R,0)\;A_{m/0}(R)\;\frac{\cos\left[m\left( \theta-\Omega_{b}\,t+\alpha_{0}\right)\right]}{1+[z/z_{m,\mathrm{h}}(R)]^{2}}\;, \tag{2}\]
where \(\Omega_{b}\) is the pattern speed of the bar (positive in the clockwise sense), while the scale height \(z_{m,\mathrm{h}}\) is defined as:
\[z_{m,\mathrm{h}}(R)=0.45R+\zeta_{m}, \tag{3}\]
and \(\zeta_{m}\) is a height. The relative amplitude \(A_{m/0}\) is defined as:
\[A_{m/0}(R)=K_{m}\,(R/R_{\mathrm{max}})^{a_{m}-1}(1-R/R_{\mathrm{max}})^{b_{m}-1}, \tag{4}\]
where \(K_{m}\) is a scale factor, \(a_{m}\) and \(b_{m}\) are shape parameters (see Table 1), and \(R_{\mathrm{max}}\) is a scale length that we take to be \(R_{\mathrm{max}}=12\) kpc.
The initial phase \(\alpha_{0}\) at \(t=0\) is the same for all modes composing the bar potential and chosen such that the inclination of the \(m=2\) mode compared to the \(x\)-axis at the present time \(t_{\mathrm{now}}\) is \(28^{\circ}\) (i.e. \(\alpha_{0}-\Omega_{b}t_{\mathrm{now}}=-28^{\circ}\)).
Therefore, the \(\alpha_{0}\) angle will refer to the 'inclination of the bar' at the beginning of the simulation. As it is apparent from this equation, we imposed that the phase of the different modes of the bar do not change with time. It is important to note that in the following, we are working within the Galactic plane and thus we do not use the \(z\)-dependence that introduces a vertical dimming of the contribution of each mode of the bar with \(z\). The parameters used for each potential mode \(\phi_{m}\) are summarised in Table 1 and are chosen to roughly resemble the first three even modes of the bar potential by Portail et al. (2017).
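As an illustration, a short Python sketch of Eqs. (2)-(4) restricted to the Galactic plane is given below. It is not the GalBar implementation itself: the function names are ours, the exponents \(a_{m}\), \(b_{m}\) are those of Table 1 as written in Eq. (4), and \(\phi_{0}(R,0)\) must be supplied by an external axisymmetric model (e.g. GalPot).

```python
import numpy as np

# Table 1 parameters (m: K_m, a_m, b_m); R_max = 12 kpc as in the text
BAR_MODES = {2: (0.25, 1.80, 5.08), 4: (8.40, 4.08, 10.70), 6: (210.41, 5.96, 16.06)}
R_MAX = 12.0  # kpc

def relative_amplitude(R, m):
    """Relative amplitude A_{m/0}(R) of Eq. (4); clipped so it vanishes outside R_max."""
    K, a, b = BAR_MODES[m]
    x = np.clip(R / R_MAX, 0.0, 1.0)
    return K * x**(a - 1.0) * (1.0 - x)**(b - 1.0)

def bar_potential_in_plane(phi0_R, R, theta, t, omega_b, alpha0):
    """Sum of the m = 2, 4, 6 modes of Eq. (2) evaluated in the plane z = 0.

    phi0_R is the value of the axisymmetric potential phi_0(R, 0), supplied externally.
    """
    return sum(phi0_R * relative_amplitude(R, m)
               * np.cos(m * (theta - omega_b * t + alpha0))
               for m in (2, 4, 6))
```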
This model of the Milky Way has been introduced into the gyrfalcON N-body integrator (Dehnen, 2000, 2002, 2014) from the NEMO stellar toolbox (Teuben, 1995) as an external acceleration field by modifying the GalPot program (Dehnen & Binney, 1998; McMillan, 2017). In this modified version, which we call GalBar1, the classical GalPot program is used to compute the background axisymmetric component of the potential. Equation 2 is used to compute the contribution of each non-zero mode to the total Galactic potential, as expressed in Eq. 1. Then, the
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(m\) & \(K_{m}\) & \(a_{m}\) & \(b_{m}\) & \(\zeta_{m}\)(kpc) \\ \hline
2 & 0.25 & 1.80 & 5.08 & 0.05 \\
4 & 8.40 & 4.08 & 10.70 & 0.025 \\
6 & 210.41 & 5.96 & 16.06 & 0.05 \\ \hline \end{tabular}
\end{table}
Table 1: Parameters used in the bar potential
corresponding acceleration field is computed locally by measuring the finite difference of the potential in a Cartesian grid where the points are spaced in intervals of \(10^{-7}\) kpc (\(\simeq 20\) AU)2. We verified that this method gives an acceleration field similar to that of the classical GalPot for different axisymmetric potentials.
Footnote 2: The choice of this value has been made to avoid propagating rounding errors.
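A minimal sketch of this finite-difference evaluation is shown below. It assumes central differences on the \(10^{-7}\) kpc grid quoted above (the text does not specify the stencil actually used in GalBar), and any callable total potential \(\Phi(x,y,z)\) can be plugged in; the function name is ours.

```python
import numpy as np

def acceleration(potential, pos, eps=1e-7):
    """Acceleration a = -grad(Phi) from central finite differences (eps in kpc)."""
    pos = np.asarray(pos, dtype=float)
    acc = np.empty(3)
    for i in range(3):
        step = np.zeros(3)
        step[i] = eps
        acc[i] = -(potential(*(pos + step)) - potential(*(pos - step))) / (2.0 * eps)
    return acc
```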
## 3 Simulation of disc tidal streams: the Hyades
The main objective of this paper is to study the impact of the Galactic bar on the morphology and on the dynamics of stellar streams inhabiting the Galactic disc. One of the best candidates for this analysis is the Hyades tidal stream, due to the proximity of its progenitor to the Sun (45.7 pc, Gaia Collaboration et al. 2018) and because 6D phase-space information is available for many of its stars thanks to the successive _Gaia_ data releases. Using DR2 and eDR3 data, the stream has been measured to be 800 pc long (Roser et al. 2019; Jerabkova et al. 2021), a value that is similar to what was predicted in early simulations (Chumak et al. 2005; Ernst et al. 2011), in which its progenitor, the Hyades open cluster, had a total mass of 1230 M\({}_{\odot}\) at the moment of its formation \(600-700\) Myr ago (Perryman et al. 1998; Lebreton et al. 2001; De Gennaro et al. 2009; Reino et al. 2018; Douglas et al. 2019; Lodieu 2020).
### Simulations setup
The simulations of the Hyades stream were made with the gyrfalcON N-body integrator using the barred potential of the Milky Way implemented in GalBar, as described in the previous section. The adopted axisymmetric component is similar to the Galactic potential used by Ibata et al. (2020) and Thomas et al. (2020) to model the GD-1 and the M92 streams. This potential is composed of the bulge, thin disk, thick disk, and interstellar medium from the first model of Dehnen & Binney (1998) and of a dark matter halo following a Navarro et al. (1997) profile, with a virial radius of 206 kpc (Cautun et al. 2020), a concentration of \(c=12\), an oblateness of \(q=0.82\) (Malhan & Ibata 2019), and a mass of \(9.6\times 10^{11}\) M\({}_{\odot}\). With this model, the circular velocity at the solar radius (R\({}_{\odot}=8.129\) kpc, Gravity Collaboration et al. 2018) is 229 km s\({}^{-1}\), consistent with the value found by Eilers et al. (2019). For the simulations where a bar is present, the Galactic potential is computed using Eqs. 1 and 2, with the ratio of amplitude of the different modes with respect to the axisymmetric potential shown on Figure 1 and with a current bar angle for the bar of 28\({}^{\circ}\) with respect to the solar azimuth (Portail et al. 2017). The face-on view of the potential and of the density of this model at the current time are presented in Fig. 2.
The initial Hyades cluster was modelled with 1230 equal-mass particles, based on the previous work of Jerabkova et al. (2021) (hereafter, J21) and following a Plummer profile (Plummer 1911) of a total mass of \(M_{\rm d,0}=1230\) M\({}_{\odot}\) and a scale length of \(r_{\rm s}=2.62\) pc, found using the mkplum program included in Nemo. We chose to have equal-mass particles, rather than having the individual masses distributed assuming an initial mass function (IMF), as chosen, for example, by J21. This is driven by our choice of the noncollisional gyrfalcON N-body integrator with the goal of studying the general morphology and dynamics of the tidal stream in the presence or absence of a bar, rather than the detailed variations along it, which would require taking into account, for instance, mass segregation in the progenitor (Evans & Oh 2022).
For all the simulations, the initial position of the Hyades cluster was computed by integrating backward a point mass from the current position of the cluster over 655 Myr, following the prescription of J21. The current Galactocentric position and velocity of the cluster are computed from its observed parameters, listed in Table 2, assuming that the Sun is slightly above the Galactic plane (z\({}_{\odot}=25\) pc, Bland-Hawthorn & Gerhard 2016) and with the solar peculiar motion adopted from Schonrich et al. (2010), namely (U\({}_{\odot}\), V\({}_{\odot}\), W\({}_{\odot}\)) = (11.1, 12.24, 7.25) km s\({}^{-1}\) in the local standard of rest coordinates. However, it is important to note here that the oscillations of the cluster around the Galactic plane were removed in order to have a cluster orbiting in the Galactic plane, such that \(z_{\rm cl}=0\) pc and \(v_{z\rm cl}=0\) km s\({}^{-1}\). The reason behind this choice is that the stream formed on non-planar orbit had a strong vertical dispersion, with non-physical borders of the stream around 40 pc above the Galactic plane. Our investigation tends to suggest that this is due to the scale height of the thin disc used in the simulations, but we did not manage to remove this artificial border completely3. Therefore, in order to have a realistic stream morphology, we decided to constrain the orbit of the cluster to be in the plane of the Galaxy. This does not significantly impact our study, as J21 noted that the vertical oscillations of the Hyades cluster do not have a significant impact on the morphology of the stream. While neglecting these vertical oscillations leads to a slight overestimation of the response to the bar, we found that this overestimation is actually marginal, namely, at around \(\simeq 1\%\) of the amount of torque transfer from the bar to the progenitor, regardless of its pattern speed.
Footnote 3: Note: this feature is not caused by our implementation of the acceleration field in GalBar, as the acceleration field computed with GalPot generates the same effect.
Because the cluster is losing stars with time, its orbit is slightly different from that of a point mass, so the position of the cluster at the end of the simulation is very slightly different from its present-day position. Therefore, in order to have simulations that are directly comparable with the observed Hyades stream, we shifted the final simulation snapshot such that the remnant simulated cluster has the same position as the present-day position of the cluster, and we rotated it so that the velocity vector of the simulated cluster was aligned with the observed one. This standard operation ensures that the simulated stream is correctly projected on the sky without changing its internal kinematics.
Figure 1: Ratio of amplitude of the different Fourier modes of the bar (m=2, 4, and 6) to \(m=0\) along its major axis used in the simulations where the bar is included.
All simulations were made with the same setup of gyrfalcON, with a Plummer softening kernel having a smoothing length of 0.004 pc, a maximum level of refinement of \(k_{max}=17\), leading to a maximum time-step of \(2^{-17}=7.6\times 10^{-7}\) yr and a tolerance parameter of \(\theta_{0}=0.4\), which is smaller than the one usually used (see Dehnen, 2002). These parameters were chosen such that an isolated progenitor of the Hyades cluster does not lose more than 20% of its mass over 700 Myr and to have a relative energy error per time step lower than \(10^{-7}\). The choice of parameters used here is also supported by the fact that the simulated cluster has an average mass loss of 0.8 M\({}_{\odot}\) Myr\({}^{-1}\) in the non-barred Milky Way potential, similar to what was found by J21 with a fully collisional simulation.
### Results
The present-time morphology of the simulated Hyades stream embedded in a barred Milky Way with a wide range of pattern speed and in a non-barred MW is shown in Figure 3. The extent and the global morphology of the Hyades stream evolving in a barless MW is similar to previous simulations (Chumak et al., 2005; Ernst et al., 2011; Jerabkova et al., 2021), with a mean inclination of the stream of \(\simeq 5^{\circ}\) and the plane being tangent to the Solar azimuth (i.e. the \(y\)-axis). In this axisymmetric potential, the Hyades cluster is on a near-circular orbit (\(e=0.1\)), with a pericentre of 7.14 kpc and an apocentre of 8.78 kpc. Because the Hyades cluster is currently not near its apses, the stream is globally aligned with the orbit of the cluster, as is commonly the case for dynamically cold streams formed by the disruption of star clusters (e.g. Price-Whelan and Bonaca, 2018; Malhan et al., 2018; Ibata et al., 2020, but see Thomas et al., 2020 for a counter-example). However, when the Hyades cluster is located between the corotation and OLR of the bar, the stream-orbit misalignment becomes highly significant, reaching deflection angle values as high as \(30^{\circ}\). If the Hyades were inside the corotation of the bar, that is, in the case of slow-moving bars with \(\Omega_{\rm b}<29\) km s\({}^{-1}\) kpc\({}^{-1}\), all particles of the stream would approach the bar at a similar time and receive similar torques (Hattori et al., 2016), leading to a present-day Hyades stream that is globally similar to the case of the MW without a bar.
For faster bars, it is striking to see that the position, the morphology, the length and the average density of stars along the stream are strongly impacted by the bar pattern speed. Indeed, the length of the stream reaches a maximum elongation of \(\sim 3.5\) kpc for a pattern speed of \(\Omega_{\rm b}=35\) km s\({}^{-1}\) kpc\({}^{-1}\), decreasing down to a few hundred parsecs around \(\Omega_{\rm b}=50\) km s\({}^{-1}\) kpc\({}^{-1}\), and rising up again for faster bars. This effect, called shepherding by Hattori et al. (2016), results from the different times at which the
\begin{table}
\begin{tabular}{l r} \hline \hline Parameter & Value \\ \hline R.A. & 67.985 deg \\ Decl. & 17.012 deg \\ Distance & 45.7 pc \\ \(\mu_{\alpha}^{*}\) & 101.005 mas.yr\({}^{-1}\) \\ \(\mu_{\delta}\) & -28.490 mas.yr\({}^{-1}\) \\ V\({}_{los}\) & 39.96 km s\({}^{-1}\) \\ \hline \end{tabular}
\end{table}
Table 2: Current dynamical properties of the Hyades cluster from Gaia Collaboration et al. (2018).
Figure 2: **Present-day Galactic potential (left panel) and density distribution (right panel)** in the plane of the disc (\(z=0\) kpc) of the Milky Way model used by the simulations, with the bar orientated of \(28^{\circ}\)compared to the solar azimuth. In both panels, the representation is Galactocentric, with the current location of the Sun represented by the yellow circle.
Figure 3: Galactocentric position of the particles of the Hyades-like stream without Galactic bar (upper left corner) and for different pattern speed ranging from \(\Omega_{b}=0\) km s\({}^{-1}\) to 60 km s\({}^{-1}\). The yellow dots in each panel indicate the position of the Sun and the red line shows the orbit of the cluster.
particles along the stream approach the Galactic bar. As a result, the particles along the stream receive different torques from the bar, and the resulting variation of energy is different for each particle. As depicted in Hattori et al. (2016), the stream's growth rate depends on the location of the pericentre of the cluster with respect to the bar; a pericentre near the major axis of the bar will enhance the differences of energy, resulting in a growing stream, while, on the contrary, a pericentre near the minor axis of the bar will shrink the stream.
As already noted, Fig. 3 clearly shows that the pattern speed of the bar also has an impact on stream-orbit misalignment. The analysis of this specific feature is detailed in the following section.
### Stream-orbit misalignment
Figure 4 presents the deflection angle (\(\theta\)), namely, the angle between the track of the stream in the presence of a bar and the track of the stream embedded in an axisymmetric MW potential, for different pattern speeds between 0 and 60 km s\({}^{-1}\) kpc\({}^{-1}\). The track of the stream was obtained by fitting a straight line to the positions of the particles located within a distance of 800 pc from each side of the remnant cluster, and the quoted systematic uncertainties correspond to the difference of inclination when halving the maximum distance to the cluster on each side. A negative deflection angle indicates that the leading arm of the stream is closer to the Galactic centre (i.e. it has a higher value along the \(x\)-axis) than in the case of an axisymmetric MW, and reciprocally for a positive angle. Because the length of the stream varies widely with the pattern speed of the bar - and because the most extended streams tend to be curved (see the present-day stream for \(\Omega_{\rm b}=35\) km s\({}^{-1}\) kpc\({}^{-1}\)) - the deflection angle is measured using exclusively the particles within a fixed distance of 800 pc from the cluster.
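A compact sketch of this measurement (our own naming; a total-least-squares variant of the straight-line fit described above, without the systematic-uncertainty estimate) is:

```python
import numpy as np

def track_angle(xy, xy_cluster, d_max=0.8):
    """Inclination (deg) of the stream track in the Galactic plane: principal
    direction of the particles within d_max (kpc) of the cluster centre."""
    sel = xy[np.linalg.norm(xy - xy_cluster, axis=1) < d_max]
    _, _, vt = np.linalg.svd(sel - sel.mean(axis=0))
    return np.degrees(np.arctan2(vt[0, 1], vt[0, 0]))

def deflection_angle(xy_bar, xy_nobar, xy_cluster, d_max=0.8):
    """Deflection of the barred track w.r.t. the barless one, wrapped to (-90, 90] deg."""
    dtheta = track_angle(xy_bar, xy_cluster, d_max) - track_angle(xy_nobar, xy_cluster, d_max)
    return (dtheta + 90.0) % 180.0 - 90.0
```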
In this figure, the vertical dashed line corresponds to the pattern speed for which the co-rotation radius is similar to the guiding radius of the centre of mass of the Hyades cluster in the axisymmetric potential (\(\Omega_{\rm b}\simeq 29\) km s\({}^{-1}\) kpc\({}^{-1}\)). Therefore, we can see that when the Hyades cluster is located inside the co-rotation radius (i.e. for \(\Omega_{\rm b}<29\) km s\({}^{-1}\) kpc\({}^{-1}\)), the track of the stream is similar to the case of a stream formed in an axisymmetric MW (\(\theta=0^{\circ}\)).
For a Hyades cluster located outside the co-rotation radius (i.e. \(\Omega_{\rm b}>29\) km s\({}^{-1}\) kpc\({}^{-1}\)), the deflection angle of the stream track decreases slowly until \(\Omega_{\rm b}=43\) km s\({}^{-1}\) kpc\({}^{-1}\). For a faster bar, the deflection angle changes drastically, reaching a maximum deflection with an angle of \(\theta=-30^{\circ}\) for \(\Omega_{\rm b}=47\) km s\({}^{-1}\) kpc\({}^{-1}\). Beyond that value, the deflection angle increases again, reaches \(\theta=0^{\circ}\) for \(\Omega_{\rm b}=52\) km s\({}^{-1}\) kpc\({}^{-1}\), up to a plateau of \(\theta=4^{\circ}\) between \(\Omega_{\rm b}=53\) and \(56\) km s\({}^{-1}\) kpc\({}^{-1}\), before rising up again for even larger values. This plateau corresponds approximately to the pattern speed of the bar if the guiding radius of the Hyades stream were located at the OLR (\(\Omega_{\rm b}\approx 50\) km s\({}^{-1}\) kpc\({}^{-1}\)).
It is important to note that because stellar streams are spatially coherent over several dynamical timescales (Johnston et al., 2008), a deflection of the position of a stream necessarily implies a deflection of the velocity vectors all along it.
As mentioned in the introduction, several recent works (Wegg et al., 2015; Sormani et al., 2015; Li et al., 2016; Portail et al., 2017; Bovy et al., 2019; Sanders et al., 2019; Clarke et al., 2019; Monari et al., 2019; Tepper-Garcia et al., 2021; Leung et al., 2023; Lucey et al., 2023), measured the bar pattern speed in the MW to be in the range of \(\Omega_{\rm b}=39-41\) km s\({}^{-1}\) kpc\({}^{-1}\). For these values, the Hyades stream track will have a non-negligible deflection angle of \(\theta\simeq-11^{\circ}\), which might be in contradiction with the previous detection of candidate stream members, as we discuss in Section 4.
We can see that the Hyades stream has an inclination similar to the case without a bar for a pattern speed around \(\Omega_{\rm b}\simeq 52\) km s\({}^{-1}\) kpc\({}^{-1}\). However, for this specific pattern speed, the stream is less extended and thicker than without a bar, as visible on Figure 3. A slightly faster bar, with a pattern speed of \(\Omega_{\rm b}=55\) km s\({}^{-1}\) kpc\({}^{-1}\), produces a present-day Hyades stream similar in morphology and length to the one without a bar, although the stream track presents a small deflection angle of \(\theta=4^{\circ}\). Nevertheless, this deflection is almost three times less than in the case of a stream formed in a \(\Omega_{\rm b}=39\) km s\({}^{-1}\) kpc\({}^{-1}\) barred Galaxy. As discussed in the introduction, this result is interesting since this pattern speed value is close to the older measurements (see Bland-Hawthorn & Gerhard, 2016, for a review), and in particular to the measurement of Minchev et al. (2007) of \(\Omega_{\rm b}=53\pm 1.5\) km s\({}^{-1}\) kpc\({}^{-1}\) using the variation of the Oort \(C\) constant. A similar pattern speed has also been estimated from the position of the Hercules moving group in the local velocity plane (e.g. Antoja et al., 2014). However, as explained in the introduction, this last measurement is debatable because the Hercules moving group may also be comprised of stars whose orbits are trapped at the co-rotation resonance of a slower bar, which gives the right dependence for its location in velocity space with azimuth (Perez-Villegas et al., 2017; Monari et al., 2019; Monari et al., 2019).
## 4 Comparison with observations
We undertook a comparison of the simulated streams to the observed Hyades stream and, in particular, to the candidate members selected by J21 using astrometric data from the _Gaia_ early Data Release 3 (eDR3).
Thanks to the deflection angle of the Hyades stream in presence of a bar, it is in principle possible to use the position and the kinematics of the observed stream to select the best model and, ultimately, to give a constraint on the pattern speed of the Galactic bar. Here, we are particularly interested in three models of the Hyades stream: the streams formed in a MW with a
Figure 4: Deflection angle of the plane of the Hyades stream for different pattern speeds of the Galactic bar w.r.t the plane of the stream without a bar. The red triangle highlights the deflection angle for a bar having \(\Omega_{\rm b}=39\) km s\({}^{-1}\) kpc\({}^{-1}\), and the green vertical line corresponds to \(\Omega_{\rm b}=55\) km s\({}^{-1}\) kpc\({}^{-1}\). The dashed vertical black line indicates the pattern speed for which the guiding radius of the Hyades cluster will be located at the co-rotation radius (\(\Omega_{\rm b}\approx 29\) km s\({}^{-1}\) kpc\({}^{-1}\)). The orange line shows the angle in the absence of bar (set to \(0^{\circ}\) by definition).
pattern speed of \(\Omega_{\rm b}=39\) and 55 km s\({}^{-1}\) kpc\({}^{-1}\), and the stream formed in a non-barred Galaxy. The reason for focusing on these three models is that the two barred models correspond to the two typical measurements of the pattern speed of the bar in the MW, while for the barless model the justification is that previous state-of-the-art simulations of the Hyades stream were made in an axisymmetric Galaxy model without a bar (Chumak et al., 2005; Ernst et al., 2011; Jerabkova et al., 2021). For clarity, in the rest of the paper, we refer to the barred-Galaxy with a pattern speed of \(\Omega_{\rm b}=39\) km s\({}^{-1}\) kpc\({}^{-1}\) as the 'slow' barred-MW, and to the barred-Galaxy with a pattern speed of \(\Omega_{\rm b}=55\) km s\({}^{-1}\) kpc\({}^{-1}\) as the 'fast' barred-MW, which for our present analysis is actually very similar to the 'barless' case.
### Comparison with the Jerabkova et al. (2021) sample
Figure 5 shows the comparison between the Galactocentric position of the simulated streams and the observed eDR3 sample of J21. As we can clearly see, the candidate members selected in this work are better reproduced by a stream formed in a barless MW, or in a fast barred-Galaxy, than in a slow barred-MW, especially at distances beyond \(\simeq 150\) pc from the Hyades cluster, where the differences between the models are the strongest. As we already noted in the previous Section, the streams formed in a non-barred Galaxy and in a fast barred-MW are relatively similar. Because the existence of the bar in the Milky Way has been known for decades, one would at first be tempted to conclude that the observations of the Hyades stream favour a fast rotating bar with \(\Omega_{\rm b}=55\) km s\({}^{-1}\) kpc\({}^{-1}\), in contradiction with the latest measurements, but in agreement with the older estimations (see Section 3.3). However, it has to be noted here that, to make their selection, J21 used an N-body model that was run in an 'axisymmetric' potential. Therefore, it is not completely surprising that the simulations made in the barless MW or in a fast barred-MW better fit these observations, as it might be possible that some actual stars of the Hyades stream are missing and/or that their sample contains a non-negligible number of contaminant stars
Figure 5: Position of the simulated stream (black points) formed in the case of **top (a)**: a barless Galaxy; **middle (b)**: of a barred MW with a pattern speed of \(\Omega_{\rm b}=55\) km s\({}^{-1}\) kpc\({}^{-1}\); **bottom (c)**: of a barred MW with a pattern speed of \(\Omega_{\rm b}=39\) km s\({}^{-1}\) kpc\({}^{-1}\). The red triangles indicate the position of the candidate members of the Hyades stream from the eDR3 sample of Jerabkova et al. (2021).
from the disc. This is especially relevant in the most distant regions of the stream, where the differences between the different models are the strongest.
Indeed, the presence of contaminant stars from the disc in the J21 sample is clearly attested by the high dispersion of the line-of-sight (LoS) velocities of those stars, especially in the farthest region of the stream, as shown in the upper panel of Figure 6. The LoS velocities used here were measured by the _Radial Velocity Spectrometer_ (RVS) instrument on board the _Gaia_ satellite, and they were obtained by cross-matching the J21 sample with the _Gaia_ third Data Release (DR3, Gaia Collaboration et al., 2021) based on the source_id parameters. Of the 862 stars initially present in the J21 sample, 430 have LoS velocities, most of them contained in the central region of the stream and in the Hyades cluster itself, with a typical individual uncertainty of 2.9 km s\({}^{-1}\). Although the majority of the stars beyond \(\simeq 150\) pc do not have LoS measurements, the high velocity dispersion of several tens of km s\({}^{-1}\) tends to indicate that the J21 sample is dominated by contaminants in that region.
Therefore, due to the method used to select the stars, and due to the presence of a high fraction of contaminants in the furthest parts of the stream, we conclude that the J21 sample is not suited to discriminate which stream models (and, hence, which bar models) are in best agreement with the observations.
### Bayesian membership selection
To circumvent the problem of having to rely on a sample of observed Hyades stars for which the membership selection is biased toward a barless Galaxy, we developed a Bayesian method using the two simulated streams mentioned above to determine the membership probability of each star to belong to the Hyades stream.
The adopted approach is similar to the method described in Thomas and Battaglia (2022), itself inspired by the methods used to assign a membership probability to the stars of dwarf galaxies (Martin et al., 2013; Longeard et al., 2018; Pace and Li, 2019; McConnachie and Venn, 2020, 2020; Battaglia et al., 2022). These methods assume that the observations are well described by a two-component distribution, describing the distribution of the stars of the Milky Way and of the stars from a given structure (here, the Hyades stream). Therefore, the likelihood, \(p(\mathbf{u}|f_{\mathrm{H}})\), of a given star with data \(\mathbf{u}\) given a fraction of stars present in the Hyades stream \(f_{\mathrm{H}}\) can be defined as
\[p(\mathbf{u}|f_{\mathrm{H}})=f_{\mathrm{H}}p_{\mathrm{H}}(\mathbf{u})+(1-f_{ \mathrm{H}})\ p_{\mathrm{MW}}(\mathbf{u})\,, \tag{5}\]
where \(p_{\mathrm{H}}\) and \(p_{\mathrm{MW}}\) are, respectively, the probability distributions of the Hyades stream and of the MW foreground and background.
Both \(p_{\mathrm{H}}\) and \(p_{\mathrm{MW}}\) can be decomposed as a spatial projection density (\(p_{\mathrm{pos}}\)), a kinematic (\(p_{\mathrm{kin}}\)) and a distance (\(p_{\mathrm{dist}}\)) components, assuming that each of them are independent of each other. Using the LoS velocities from the _Gaia_ DR3 it would be possible to add an additional component to the likelihood, but we prefer to use them as an independent validation check of our method. Moreover, not all stars have an LoS velocity, so that with this other component of the likelihood, the spatial coverage of the stream will not be uniform and may potentially bias our selection.
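To make the structure of this two-component model explicit, the short Python sketch below evaluates the per-star likelihood of Eq. (5) under the assumption that each population density factorises into the three independent components listed above; the function and dictionary keys are illustrative names, not those of our actual pipeline.

```python
import numpy as np

def star_likelihood(u, f_H, p_H, p_MW):
    """Per-star mixture likelihood p(u|f_H) of Eq. (5).

    u          : dict with the observables of one star
    f_H        : fraction of stars belonging to the Hyades stream
    p_H, p_MW  : dicts of callables giving the normalised spatial,
                 kinematic and distance densities of each population.
    """
    # Each population density factorises into independent components
    dens_H = (p_H['pos'](u['ra'], u['dec'])
              * p_H['kin'](u['v_alpha'], u['v_delta'])
              * p_H['dist'](u['d_helio']))
    dens_MW = (p_MW['pos'](u['ra'], u['dec'])
               * p_MW['kin'](u['v_alpha'], u['v_delta'])
               * p_MW['dist'](u['d_helio']))
    return f_H * dens_H + (1.0 - f_H) * dens_MW

def total_log_likelihood(stars, f_H, p_H, p_MW):
    """Sum of log p(u_i|f_H) over the whole filtered catalogue."""
    return float(np.sum([np.log(star_likelihood(u, f_H, p_H, p_MW))
                         for u in stars]))
```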
The _Gaia_ data were downloaded from the ESA archive4 using the same ADQL query as J21, except that we relaxed the
Figure 6: LoS velocity (upper panel) and the Galactocentric position (lower panel) of the Hyades stream simulated in a barless galaxy (in blue) and with a bar having a pattern speed of \(\Omega_{b}=39\) km s\({}^{-1}\) kpc\({}^{-1}\) (in green). The candidate members of the Hyades stream from the eDR3 sample of J21 are shown by the red triangles.
parallax cut to be \(\varpi\geq 1\) mas (gs.parallax\(>\)=1.0), instead of \(\varpi\geq 2\) mas, to be able to detect the stream up to a heliocentric distance of 1 kpc. We also included the extinction inferred by GSP-Phot Aeneas from the BP and RP spectra (ag_gspphot and ebpminrp_gspphot, Delchambre et al. 2023). This query includes a cut on the renormalised unit weight error (RUWE) associated to each source, namely RUWE\(<1.4\), to remove potential non-single sources or sources with a problematic astrometric solution. This query results in 38,256,266 objects (hereafter the 'initial' sample), among which \(\sim 32\) million have a value of the extinction.
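For illustration, a minimal version of such a query can be sent to the archive with, for example, the astroquery.gaia module; the ADQL below only reproduces the cuts discussed here (parallax, RUWE and the GSP-Phot extinction columns) and is not the exact J21 query.

```python
from astroquery.gaia import Gaia

# Minimal sketch of the query described in the text; the column list is
# illustrative and the full J21 query contains additional selections.
adql = """
SELECT gs.source_id, gs.ra, gs.dec, gs.parallax, gs.pmra, gs.pmdec,
       gs.phot_g_mean_mag, gs.bp_rp, gs.ruwe,
       gs.ag_gspphot, gs.ebpminrp_gspphot
FROM gaiadr3.gaia_source AS gs
WHERE gs.parallax >= 1.0
  AND gs.ruwe < 1.4
"""

job = Gaia.launch_job_async(adql)
initial_sample = job.get_results()   # astropy Table
print(len(initial_sample), "objects retrieved")
```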
#### 4.2.1 Photometric filtering
In the following section and in the rest of the paper, the _Gaia_ photometry has been corrected from reddening using the extinction values provided on the _Gaia_ ESA archive as mentioned in the previous section.
Given the Hyades make up a stellar cluster, the colour-absolute magnitude diagram (CaMD) track is aptly described by a PARSEC isochrone (Bressan et al. 2012) of 790 Myr and of metallicity \(Z=0.02\) ([Fe/H]=-0.03), as shown by J21. Therefore, it is possible to use the CaMD to pre-filter the stars that are potential members of the Hyades cluster and to exclude the obvious contaminants. However, as is visible on Figure 7, the CaMD of the stars from the J21 sample located in the inner 5 \(r_{\rm h}\) (\(r_{\rm h}\)=2.62 pc) of the Hyades cluster is not 'perfectly' described by the PARSEC isochrone (orange dashed line), especially for magnitudes fainter than M\({}_{G}\sim 10.0\) mag where the isochrone has a steeper CaMD track than the Hyades. For this reason, the isochrone used to filter the initial sample is modified by hand to better fit the CaMD of the Hyades cluster. The modified isochrone is shown by the green line on Figure 7. From the \(\sim 38\) million stars on the initial sample, \(\sim 19.4\) million of them are located within 0.1 mag of these CaMD tracks and pass the pre-filtering criteria. These stars are highlighted in blue on Figure 7. We note that without taking extinction into account, the total number of stars selected after this pre-filtering would be similar, but less crowded in the blue (luminous) part of the track and more crowded in the red part.
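One possible implementation of this photometric pre-filter is sketched below: the extinction-corrected colour and absolute magnitude of each star are compared to the modified isochrone track, and stars further than 0.1 mag from the track (here measured as a vertical offset in absolute magnitude at fixed colour) are rejected; both this definition of the distance to the track and the function names are illustrative assumptions.

```python
import numpy as np

def camd_prefilter(g, bp_rp, parallax_mas, a_g, e_bprp,
                   track_bprp, track_mg, tol=0.1):
    """Keep stars within `tol` mag of the modified CaMD track.

    g, bp_rp      : apparent G magnitude and BP-RP colour
    parallax_mas  : parallax in mas (the sample is restricted to >= 1 mas)
    a_g, e_bprp   : GSP-Phot extinction in G and reddening of BP-RP
    track_bprp/mg : modified isochrone track, sorted in colour
    """
    bp_rp0 = bp_rp - e_bprp                               # de-reddened colour
    m_g = g - a_g + 5.0 * np.log10(parallax_mas) - 10.0   # absolute magnitude
    track_at_star = np.interp(bp_rp0, track_bprp, track_mg)
    return np.abs(m_g - track_at_star) < tol
```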
It is important to note in this case that other extinction maps are available (i.e. Green et al. 2019; Lallement et al. 2019, 2022), each having different extinction estimates, particularly outside of the Galactic disc (see discussions in Andrae et al. 2023). Therefore, our choice of a specific map here might have a non-negligible impact on the selection made by the photometric pre
Figure 7: Colour-absolute magnitude diagram of the _Gaia_ DR3 query in grey. On top of it are shown in the orange dashed line and the green plain line, the original and the modified PARSEC isochrone of a stellar population of metallicity \(Z=0.02\) and age 790 Myr, respectively. The black points correspond to the stars from the J21 selection located in the inner 5 \(r_{\rm h}\) of the Hyades cluster. The blue shaded region highlights the stars from the _Gaia_ DR3 query that are photometrically pre-filtered.
filter, in particular for the redder stars (BP-RP\(\gtrsim 2\)), for which the impact of the extinction is more important. For instance, for the stars from the selection made by Oh and Evans (2020) with a membership probability estimated \(>0.9\), which reach a maximum distance of 150 pc from the Hyades cluster and a heliocentric distance of up to 180 pc, the extinction from GSP-Phot Aeneas ranges from \(A_{G}=0\) to 0.9 mag with a typical uncertainty of 0.08 mag, while the same stars have extinction values of up to only 0.08 mag using the extinction map from Lallement et al. (2022), the difference being most important for the reddest stars. The extinction values from _Gaia_ GSP-Phot chosen here are (on average) similar to the extinction measured from Planck and to the extinction measured by Schlegel et al. (1998) at high galactic latitude (Delchambre et al., 2023); they also seem to better capture the regions of high extinction compared to other extinction maps (see Section 3.6 of Andrae et al., 2023). We defer the exploration of the detailed effect of the choice of different extinction maps to later studies.
#### 4.2.2 Distribution of the Milky Way foreground and background
Because the MW foreground and background distribution largely dominates the signal, it can be empirically determined from the filtered _Gaia_ data.
The spatial density distribution is made by counting the number of stars in each of the 49,152 HEALPix of level 6 (Gorski et al., 2005), which have an average area of 0.92 deg\({}^{2}\). This distribution is smoothed with a Gaussian over 10\({}^{\circ}\) before being normalised.
Along the Hyades stream, the proper motion distribution presents a wide dispersion due to the heliocentric distance gradient of the stream. Therefore, we preferred to use the physical transversal velocities (\(v_{\alpha}^{*}\), \(v_{\delta}\)) rather than the proper motions. The distances used to compute the velocities are obtained by inverting the parallaxes. This is possible since our _Gaia_ sample contains only stars with a relative precision on the parallax better than 10% (Luri et al., 2018). Because the kinematic distribution of the Milky Way is spatially dependent, the kinematic distribution is determined locally, at the centre of each HEALPix. For each location, the local kinematic distribution is made from the stars located in the 121 nearest HEALPix, which at the equator of the equatorial coordinate system typically corresponds to all the stars located within a radius of 3.36\({}^{\circ}\). The local kinematic distributions are binned on a fine grid of 0.5 km s\({}^{-1}\times 0.5\) km s\({}^{-1}\) bins ranging between -70 and 70 km s\({}^{-1}\) in both \(v_{\alpha}^{*}\) and \(v_{\delta}\). The bin size has been chosen to be similar to the velocity dispersion along the stream measured by Oh and Evans (2020), 0.4-0.8 km s\({}^{-1}\), which is similar to the dispersion found in our models. This grid is smoothed over 0.5 km s\({}^{-1}\) to remove the local kinematic inhomogeneities. This method leads to a smooth, spatially dependent, kinematic distribution of the Milky Way, and is largely inspired by previous works that used a similar method to make spatially dependent colour-(colour)-magnitude diagrams (Martin et al., 2013; Thomas et al., 2020).
Similarly, the heliocentric distance distribution of the Milky Way depends on the position on the sky and is built in the same way as the kinematic distribution. The local distribution is made of 100 bins of 10 pc width between 0 and 1 kpc, and smoothed by a 10 pc width Gaussian before being normalised.
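The construction of these empirical Milky Way distributions can be sketched as follows, using for illustration the healpy and scipy libraries; the level-6 HEALPix grid (nside = 64, i.e. 49,152 pixels) and the bin sizes follow the text, while the specific smoothing routines shown here are our illustrative choices.

```python
import numpy as np
import healpy as hp
from scipy.ndimage import gaussian_filter

NSIDE = 64                                # HEALPix level 6 -> 49,152 pixels

def mw_spatial_density(ra_deg, dec_deg, smooth_deg=10.0):
    """Normalised, smoothed on-sky density of the filtered sample."""
    pix = hp.ang2pix(NSIDE, ra_deg, dec_deg, lonlat=True)
    counts = np.bincount(pix, minlength=hp.nside2npix(NSIDE)).astype(float)
    smoothed = hp.smoothing(counts, fwhm=np.radians(smooth_deg))
    smoothed = np.clip(smoothed, 0.0, None)
    return smoothed / smoothed.sum()

def local_kinematic_pdf(v_alpha, v_delta, bin_kms=0.5, vmax=70.0,
                        smooth_kms=0.5):
    """Binned, smoothed (v_alpha*, v_delta) distribution of nearby stars."""
    edges = np.arange(-vmax, vmax + bin_kms, bin_kms)
    hist, _, _ = np.histogram2d(v_alpha, v_delta, bins=[edges, edges])
    hist = gaussian_filter(hist, sigma=smooth_kms / bin_kms)
    return hist / hist.sum(), edges
```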
#### 4.2.3 Distribution of the Hyades stream
For the distribution of the Hyades stream, we explored two scenarios: a Hyades stream formed in a non-barred MW and one formed in a MW with a rotating bar of pattern speed \(\Omega_{\rm b}=39\) km s\({}^{-1}\) kpc\({}^{-1}\). For simplicity, in the rest of the paper, we will refer to these two scenarios as the 'barless' and 'slow bar' models, respectively. As the streams formed in a non-barred MW and in a fast-barred MW of pattern speed \(\Omega_{\rm b}=55\) km s\({}^{-1}\) kpc\({}^{-1}\) are relatively similar, the conclusions that we will draw for the barless MW will also stand for the fast-barred MW. For each model, the spatial, kinematic and distance distributions of the Hyades stream were built from 20 realisations of the stream formation simulations.
For the spatial density distribution, the 20 simulations are used to estimate the track of the stream on the sky. However, we decided not to use the simulations to constrain the density variation along the stream itself for two reasons: 1) Due to the non-collisional nature of the simulations we carried out, the actual density of stars along the simulated stream might not be realistic. 2) The observations are largely dominated by the MW distribution and the density variation along the stream can be neglected to find the stream members. Therefore, as for the Milky Way distribution, the sky is split into HEALPix at level 6, but the probability of the pixels having at least one simulated particle is set to a constant. To avoid having some gaps due to the discretisation of the simulations that lead to a non-smooth distribution, we smoothed the distribution with a Gaussian that is 0.5\({}^{\circ}\) in width.
The kinematic and distance distributions were obtained in the same manner as for the MW, but using the particles from the simulations.
#### 4.2.4 Results
The fraction of stellar members of the Hyades stream (\(f_{\rm H}\)) was found by exploring the parameter space of the posterior distribution with the EMCEE Python package (Foreman-Mackey et al., 2013). According to the Bayes theorem, the posterior probability distribution function for the fraction of stars in the Hyades stream \(f_{\rm H}\), given the filtered data of \(N\) stars \(D=\{\mathbf{u}_{1},...,\mathbf{u}_{N}\}\), is \(p(f_{\rm H}|D)\propto p(D|f_{\rm H})\times P(f_{\rm H})\), where \(p(D|f_{\rm H})\equiv\Pi_{i=1}^{N}p(\mathbf{u}_{i}|f_{\rm H})\) is the total likelihood of the data given \(f_{\rm H}\) and \(P(f_{\rm H})\) is a flat prior on \(f_{\rm H}\) between 0 and 1. We found that the posterior distribution of the fraction of stars in the Hyades stream is well approximated near its mode by a Gaussian of mean and standard deviation \((\mu,\sigma)=(82.81,2.72)\times 10^{-5}\) for the barless model, and \((\mu,\sigma)=(165.77,4.89)\times 10^{-5}\) for the slow bar model. From this distribution, it is possible to compute the membership probability of each star of data, \(\mathbf{u}\), of belonging to the Hyades stream, as follows:
\[P_{\rm mem}(\mathbf{u})=\int_{0}^{1}\frac{f_{\rm H}\,p_{\rm H}(\mathbf{u})}{p( \mathbf{u}|f_{\rm H})}\ p(f_{\rm H}|D)\ \mathrm{d}f_{\rm H}, \tag{6}\]
where \(p(\mathbf{u}|f_{\rm H})\) is given by Eq. (5).
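In practice, the posterior on \(f_{\rm H}\) and the membership probabilities of Eq. (6) only require a few lines of code. The sketch below assumes that the population densities \(p_{\rm H}(\mathbf{u}_i)\) and \(p_{\rm MW}(\mathbf{u}_i)\) have been pre-computed for every star and stored in two arrays, uses the EMCEE package cited above, and approximates the integral of Eq. (6) by an average over the posterior samples; the variable names and sampler settings are illustrative.

```python
import numpy as np
import emcee

def log_posterior(theta, p_h, p_mw):
    """Flat prior on f_H in [0, 1] plus the total log-likelihood."""
    f_h = theta[0]
    if not 0.0 < f_h < 1.0:
        return -np.inf
    like = f_h * p_h + (1.0 - f_h) * p_mw
    if np.any(like <= 0.0):
        return -np.inf
    return np.sum(np.log(like))

def run_membership(p_h, p_mw, nwalkers=32, nsteps=2000, nburn=500):
    ndim = 1
    p0 = 1e-4 * (1.0 + 0.1 * np.abs(np.random.randn(nwalkers, ndim)))
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
                                    args=(p_h, p_mw))
    sampler.run_mcmc(p0, nsteps, progress=False)
    f_h = sampler.get_chain(discard=nburn, thin=20, flat=True)[:, 0]
    # Monte-Carlo estimate of Eq. (6): average over the posterior on f_H
    num = f_h[:, None] * p_h[None, :]
    den = num + (1.0 - f_h[:, None]) * p_mw[None, :]
    p_mem = np.mean(num / den, axis=0)
    return f_h, p_mem
```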
Applying this method to the photometrically pre-filtered sample led to 580 (569) candidate members with \(P_{\rm mem}>0.5\) for the barless (slow bar) model, of which 453 (437) have LoS velocity measurements from _Gaia RVS_. 327 stars are in common between the two selections, the majority of them being within 150 pc of the cluster. This is not surprising as both models have a very similar distribution in that region, as already mentioned in Section 4.1. The number of stars for different thresholds of \(P_{\rm mem}\) for both models are listed in Table 3. The barless and slow
bar selections have 228 (229) stars in common with the sample of \(1,003\) stars selected by Oh & Evans (2020), which also extends up to \(\sim 150\) pc, with a membership probability above 0.9 for almost all of them in both cases (99%). The difference between the number of stars in common with our selection and the total number of stars selected by Oh & Evans (2020) is due to the fact that the stars of the latter sample not present in ours either do not have a Gaia GSP-Phot extinction measurement or are located outside the CaMD selection region.
The Galactic position and the LoS velocity of these two selections with \(P_{\rm mem}>0.5\) are shown in Figure 8. A visual inspection of the LoS velocity distribution from the _RVS_ shows that both selections are far from being devoid of contaminants. We estimated the contamination levels using two methods. For the first method, for each sample, we compared the number of stars confined within a distance of 1 pc along the \(y\)-axis and of 5 km s\({}^{-1}\) in \(v_{\rm los}\) from a particle in the corresponding set of 20 simulations to the total number of stars with a LoS measurement. With this method, we found that the contamination level is at least 34% (28%) for stars located in the stream in the barless (slow-bar) selection. We note that this is a lower limit, since at least one of the two selections must be wrong and should contain more contaminant stars coming from the MW disc, especially far away from the cluster center. In the second method, the contamination fraction is crudely estimated by measuring the fraction of stars of the stream with a velocity compatible at the 3\(\sigma\) level with the velocity dispersion and the velocity gradient measured by Oh & Evans (2020) along the stream. However, this method is valid only up to a distance of 150 pc from the cluster. With this method, we found that the contamination level goes up to 50% (45%) for the barless (slow-bar) samples. In comparison, using the same crude method, we found that the Oh & Evans (2020) sample would have a contamination fraction of 23%.
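The first contamination estimate amounts to a simple matching criterion between observed stars and simulated particles; a minimal numpy version of that criterion, with illustrative array names, is given below.

```python
import numpy as np

def contamination_fraction(y_star_kpc, vlos_star, y_sim_kpc, vlos_sim,
                           dy_max=0.001, dv_max=5.0):
    """Fraction of stars with a LoS velocity that have NO simulated
    particle within 1 pc (0.001 kpc) in y and 5 km/s in v_los."""
    matched = np.array([np.any((np.abs(y_sim_kpc - y) < dy_max) &
                               (np.abs(vlos_sim - v) < dv_max))
                        for y, v in zip(y_star_kpc, vlos_star)])
    return 1.0 - matched.mean()
```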
Both selections present a stream with a typical length of 800 pc, as found by J21. The slow-bar selection is more extended than the barless selection, but this can be explained by the fact that the stream, in the simulations on which the former selection is based, is more extended than in the latter one. It is particularly interesting to see that both selections, while different, are still quite well populated, especially in the regions where the two models are clearly distinct (i.e. \(>\)150 pc from the cluster). Indeed, in these regions one would expect that the selection based on the model that best reproduces the morphology and the dynamics of the real Hyades stream would be substantially more populated than the other one, the latter being mostly, if not entirely, populated by disc contaminants. However, both samples have the same number of candidates, even in that distant region (237 stars), making it impossible to favour one scenario over the other with only this information at hand.
When comparing the morphology and the kinematics of the two selections with the two models they are based on, both possess a significant number of stars that have the expected dynamical characteristics, especially at large distances, where the models are the most different. This is particularly striking in Fig. 9,
Figure 8: LoS velocity (upper panel) and the Galactocentric position (lower panel) of the candidate member stars selected with our Bayesian method for the barless model (blue points) and for the slow bar model (green points). In both panels, the grey points highlight the track of the simulated stream for each model. The positions of the points have been shifted vertically of \(\pm\)0.2 kpc w.r.t to the current position of the Hyades cluster in the lower panel and of \(\pm\)50 km s\({}^{-1}\) in the upper panel to separate both selections.
where the LoS velocities are used to clean each sample from contaminant stars with the first method described above, that is, by selecting only those stars that are at a maximum distance of 5 km s\({}^{-1}\) in \(v_{\rm los}\) and 1 pc along the \(y\)-axis from a particle of the corresponding set of 20 simulations. The stars belonging to these two cleaned samples are highlighted by the black circles in Fig. 9. Even then, we can see that both cleaned selections still possess a significant number of stars up to \(\sim 400\) pc in each arm of the stream, making it impossible to discard or to favour one scenario over the other, especially since both samples have a similar number of stars, even outside the cluster itself.
Our interpretation of the fact that both selections are largely populated by stars that exhibit a kinematic behaviour that is consistent with the two scenarios that we explored is that the number of stars from the disc having similar dynamical properties as the Hyades stream exceeds the number of stars from the stream itself, particularly at large distances from the cluster. Here, it should be additionally noted that the presence of spiral arms, possibly generating the resonant Hyades 'moving group' (e.g. Famaey et al., 2007, 2008; Pompeia et al., 2011; McMillan, 2013), could complicate the picture even further. Many disc stars seem to have a similar metallicity to the cluster, as they passed the photometric pre-filter. This explains why, in both scenarios, we can detect stars with similar dynamical properties as the Hyades tidal stream, even in regions where the stream simulated within the two scenarios has clearly distinct dynamical signatures. Another caveat here is that the simulations the selections are based on do not capture the full complexity of the actual Hyades stream, due to their non-collisional nature or due to the MW models used here (neglecting the spiral arms, as explained above), which remain rather simple with respect to the true complexity of our Galaxy.
To push our analysis a bit further, we used the calibrated atmospheric parameters derived from the _Gaia RVS_ spectra by the _MatisseGauguin_ pipeline to check the consistency with the abundances of the Hyades cluster. Following the guidelines from Recio-Blanco et al. (2023), we selected stars having reliable parameters using the flags_gspspec\({}^{5}\) from the _Gaia_ dataset, in particular:
Footnote 5: See Table 2 from Recio-Blanco et al. (2023).
* vbroadT (impact of rotational broadening on \(T_{\rm eff}\)),
* vbroadG (impact of rotational broadening on \(\log g\)),
* vbroadM (impact of rotational broadening on [M/H]),
* KMgiantPar (quality of the parametrisation for K and M giants).
The first three parameters assume that the impact of rotation is minimal in the temperature, surface gravity, and metallicity estimates, whilst the last one guarantees that the parametrisation for K and M giants is correct.\({}^{6}\) With these criteria, 149 (145) stars with a \(v_{\rm los}\) for the barless (slow bar) selection have a metallicity and \(\alpha\)-abundance estimate. Among these, 117 stars are included in both selections. A total of 77 (75) of these stars are located within 5\(r_{h}\) (\(\sim 13\) pc) of the Hyades center in the barless (slow-bar) sample, of which 74 are in common. These stars have \(\rm[M/H]=+0.14\) with a dispersion of 0.16 and \(\rm[\alpha/Fe]=-0.01\) with a dispersion of 0.14, fully consistent with the values found in the literature for the Hyades cluster itself, with \(\rm[M/H]\simeq+0.14\) and \(\rm[\alpha/Fe]\simeq 0.0\) (Strobel, 1991; Cayrel de Strobel et al., 1997; Perryman et al., 1998; Pompeia et al., 2011). In the barless (slow bar) selection, only 2 (5) of these stars are further than 300 pc from the cluster, namely, where the spatial and kinematic differences between the two selections are the most important. For the barless sample, 1 out of 2 stars has a metallicity and \(\alpha\)-abundance similar to the core of the cluster, while this fraction is 2/5 for the slow bar sample. Obviously, these small number statistics for abundances do not allow us to favour one scenario over the other, but these results nevertheless tend to confirm the hypothesis that many contaminant disc stars with metallicity (and \(\alpha\)-abundance) values similar to the cluster did indeed pass the photometric pre-filter.
Footnote 6: In practice, this last criterion is negligible, as our sample should not, in principle, include such stars.
Our analysis shows that, to be able to favour one scenario over the other, a high-resolution follow-up of the highly probable member stars of the Hyades stream is required to disentangle the stars originating from the Hyades cluster from the contaminants originating from the thin disc. Indeed, the stars of the Hyades cluster present a homogeneous chemical distribution and display individual element abundances that differ from the rest of the thin disc population of similar metallicity (Gebran et al., 2010; de Silva et al., 2011; Pompeia et al., 2011; Cummings et al., 2017). The most promising element to discriminate which scenario is favoured by the data is lithium: it has been shown by Boesgaard & King (2002) and Pompeia et al. (2011) that stars in the Hyades cluster follow a tight Li-T\({}_{eff}\) relation in a narrow temperature range of 5000-6500 K (see Fig. 11 of Pompeia et al., 2011), whereas the field stars present a large dispersion in Li-abundance (\(\sim 2\) dex) at every temperature. This tight correlation between Li and temperature follows from the fact that stars of the Hyades cluster are coeval (Deliyannis, 2000). We note that this tight correlation is only visible for stars in a narrow range of temperature (5000-6500 K) since outside that range, Li gets destroyed by diffusive or convective downward motions (see Sestito & Randich, 2005, and references therein). The Li-abundances have also been used to find likely member candidates for 20 other clusters (Gutierrez Albarran et al., 2020). Therefore, with a spectroscopic high-resolution follow-up of the stars of our selected samples within a range of 5000 \(\leq\)T\({}_{eff}\leq\) 6500 K, it will be possible to check whether the stars selected based on one or the other scenario present a Li-T\({}_{eff}\) relation similar to the one seen in the Hyades cluster, or if they span a wide range of Li-abundance values that are characteristic of field stars. In particular, it will be interesting to focus this analysis on stars of the trailing arm, located at a distance of 250-300 pc from the cluster, since in that region the candidate members are mutually exclusive due to the different trend in \(V_{los}\). Our candidate members in both scenarios will therefore be ideal targets for a WEAVE follow-up study (Jin et al., 2023).
#### 4.2.5 Leading and trailing arm asymmetry
Previous observations of the Hyades stream have found that the trailing arm is less extended and less populated than the leading arm (Roser et al., 2019; Jerabkova et al., 2021), possibly indicating either a disturbance from a putative massive dark matter sub-halo or a departure from the standard gravitational framework (Thomas et al., 2018; Kroupa et al., 2022) in accordance
| Model | \(P_{\rm mem}\) | \(N_{\rm stars}\) | \(N_{\rm stars}\) with LoS |
| --- | --- | --- | --- |
| Barless | \(>0.5\) | 580 | 453 |
| Barless | \(>0.7\) | 415 | 355 |
| Barless | \(>0.9\) | 308 | 260 |
| Slow bar | \(>0.5\) | 569 | 437 |
| Slow bar | \(>0.7\) | 420 | 340 |
| Slow bar | \(>0.9\) | 314 | 259 |

Table 3: Number of candidate members of the Hyades stream for different thresholds of \(P_{\rm mem}\) for the two models explored.
with MOND (Milgrom, 1983; Famaey & McGaugh, 2012). For instance, based on the J21 sample, Kroupa et al. (2022) measured that the ratio between the number of stars located in a distance range of 50-200 pc from the cluster in the leading (\(N_{\rm lead}\)) and in the trailing (\(N_{\rm trail}\)) arms is of \(q_{\rm 50-200pc}\equiv N_{\rm lead}/N_{\rm trail}=2.53\pm 0.37\), while the simulations of this stream made in Newtonian dynamics found a ratio that did not exceed 1 by more than 3%.
Next, we aim to revise the ratio of the number of stars in the leading or trailing arms in the selections of Hyades stream candidates that we established previously. As in Kroupa et al. (2022), we focus on the region in a distance range of 50-200 pc from the cluster in both arms. However, here, we used the distance of the stars along the stream - and not the projected 2D distance - to take into account the fact that the Hyades stream that is formed in the barless scenario is slightly wider than that formed in the slow bar scenario within that distance range. Nevertheless, this small modification does not significantly change the results, since we measured a number ratio of \(q_{\rm 50-200pc}=2.59\pm 0.39\) for the J21 sample, similar to the value reported in Kroupa et al. (2022).
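For reference, the ratios and uncertainties quoted below and in Table 4 are consistent with simple Poisson error propagation on the two counts; the assumption that the uncertainty is Poisson-dominated is ours and is only meant to illustrate the computation.

```python
import numpy as np

def number_ratio(n_lead, n_trail):
    """Leading-to-trailing number ratio with Poisson error propagation."""
    q = n_lead / n_trail
    sigma_q = q * np.sqrt(1.0 / n_lead + 1.0 / n_trail)
    return q, sigma_q

print(number_ratio(86, 57))    # ~ (1.51, 0.26); cf. 1.51 +/- 0.25 in Table 4
print(number_ratio(158, 61))   # J21 sample: ~ (2.59, 0.39)
```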
As we report in Table 4, for both selections, we found a number ratio lower than with the J21 sample, of \(q_{\rm 50-200pc}=1.51\pm 0.25\) and \(1.25\pm 0.22\) for the selection based on the barless-and-fast-bar and on the slow bar scenario, respectively. This difference can be explained by the lower number of stars that our method selected in the leading arm. Even when selecting stars with the highest probability of membership, the number ratio is lower than previously measured. However, this does not change the conclusion drawn by Kroupa et al. (2022) since a value of \(q_{\rm 50-200pc}\) ranging between 1.2-1.6, as we found in our selections, is closer to the number ratio they measured in the simulation of a Hyades-like stream made in the MOND framework (see their Figure 13), and is systematically above the typical Newtonian value. Yet it is interesting to note that the slow bar scenario brings the ratio closer to the Newtonian expectation of 1. We also note that the number ratio measured here is not impacted by the slight difference in heliocentric distance between the two arms of the stream, as the ratios are similar for different ranges of magnitude. However, these results have to be taken with care, as the potential high fraction of contaminant stars and the use of different extinction maps could alter the conclusions drawn in this work.
| Dataset | Criteria | \(N_{\rm lead}\) | \(N_{\rm trail}\) | \(q_{\rm 50-200pc}\) |
| --- | --- | --- | --- | --- |
| Barless | \(P_{\rm mem}>0.5\) | 86 | 57 | \(1.51\pm 0.25\) |
| Barless | \(P_{\rm mem}>0.8\) | 62 | 42 | \(1.48\pm 0.30\) |
| Barless | \(V_{\rm los}\) cleaned | 76 | 55 | \(1.63\pm 0.40\) |
| Slow bar | \(P_{\rm mem}>0.5\) | 71 | 58 | \(1.22\pm 0.22\) |
| Slow bar | \(P_{\rm mem}>0.8\) | 56 | 41 | \(1.37\pm 0.28\) |
| Slow bar | \(V_{\rm los}\) cleaned | 42 | 31 | \(1.35\pm 0.32\) |
| J21 | All | 158 | 61 | \(2.59\pm 0.39\) |
| J21 | \(V_{\rm los}\) cleaned | 40 | 14 | \(2.86\pm 0.89\) |

Table 4: Number of stars of the Hyades stream located in a range of 50-200 pc from the cluster in the leading (\(N_{\rm lead}\)) and trailing (\(N_{\rm trail}\)) arms, and the derived number ratio for the samples made from the two scenarios we explored and for different criteria, as well as for the J21 sample.
Figure 9: Same as Figure 8, but only for the stars that have a LoS velocity measurement in _Gaia RVS_ on the lower panel. In both panels, black circles highlight stars that have a LoS velocity compatible with the model on which their selection is based.
A final interesting point to note here is that the bar does not produce a clear asymmetry in the leading or trailing arm in the case of the Hyades stream, regardless of its pattern speed, since we found a number ratio around unity in the simulations made with a fast or slow bar, contrary to what we measured in real candidate stars based on those models. This is thus different from what had been found in the case of the Palomar 5 stream, whose asymmetry had been tentatively attributed to the bar (Pearson et al., 2017).
## 5 Discussions and conclusions
In this work, we applied a new implementation of a multi-modal Galactic bar in the gyrfalcON \(N\)-body code to the simulation of the tidal stream of the Hyades open cluster. We analysed how the Galactic bar affects the morphology and the dynamics of such a stellar stream inhabiting the solar vicinity on a low-eccentricity disc orbit. The (collisionless) \(N\)-body simulations of the Hyades stream formation show that its length and (more importantly) its orientation with respect to the Galactic frame are largely impacted by the pattern speed of the Galactic bar, particularly when the cluster is located between the co-rotation radius and the OLR of the bar. We have not considered a change in the length of the bar when varying its pattern speed, but the most important quantities for our results to hold are the amplitudes of the different modes close to the Sun's position, not their shape in the inner Galaxy. Also, our model does not include the effect of spiral arms, which could actually have a non-negligible effect, and it does not consider possible rapid changes in the bar's pattern speed. On the other hand, secular changes in the pattern speed related to such effects as dynamical friction with the dark matter halo would happen on timescales that are much longer than the lifetime of typical open clusters such as the Hyades.
From these simulations, we found that the stream is more inclined toward the Galactic centre in the presence of a slow bar having a pattern speed of \(\Omega_{\rm b}=39\) km s\({}^{-1}\) kpc\({}^{-1}\), consistent with the latest direct measurements, than in the absence of a bar. Moreover, the simulations also show that a Hyades stream formed in a fast bar model with \(\Omega_{\rm b}=55\) km s\({}^{-1}\) kpc\({}^{-1}\) is comparable in length and inclination to one formed in a barless Galaxy.
The comparison of the simulations with the sample of Hyades candidates selected by Jerabkova et al. (2021) shows that these selected stars favour a fast bar scenario. However, using radial velocity measurements from the _Gaia RVS_ instrument, we found that the sample of J21 presents a high number of possible disc contaminants, especially in the most distant region of the stream, where the difference of position generated by the pattern speed is the strongest. Moreover, the method used to select these Hyades candidates is model-dependent and biased toward a stream having a very small stream-orbit misalignment, that is, a stream as formed in the presence of a fast bar or in a barless Galaxy.
Therefore, in the second part of the paper, we present a Bayesian method to select member candidates of the Hyades stream using the astrometry and photometry from the _Gaia_ DR3, based on two scenarios: a barless (equivalent to fast-barred) and a slow-barred MW. Both selections present a stream with a typical length of 800 pc, and they are both populated by a similar number of candidates in the regions where the two selections are clearly distinct (i.e. each has 237 candidates at \(d>\)150 pc from the cluster). This, in turn, does not allow us to favour one scenario over the other, a problem related to the presence of a non-negligible number of residual disc contaminants in both cases, which is confirmed by the metallicity and alpha-abundance measurements from _Gaia_ GSP-spec.
Unfortunately, the LoS velocities that are available for a large number of stars cannot remove all disc contaminants, especially beyond distances of \(\sim\) 150 pc from the cluster. Because many of these contaminants have similar photometry and kinematics as the stars of the Hyades tidal stream, we argue that a high-resolution follow-up of our candidate members is required to decrease the contamination from the disc population, to increase the purity of the Hyades stream sample, and to disentangle which stream track is the correct one, as the Hyades cluster has some peculiar individual abundance ratios with respect to the thin disc (Gebran et al., 2010; de Silva et al., 2011; Pompeia et al., 2011). In particular, stars in the Hyades cluster present a tight Li-T\({}_{eff}\) relation in the range of 5000 \(\leq\)T\({}_{eff}\leq\) 6500 K, whereas field stars span a wide range of \(\sim\) 2 dex at every temperature (Pompeia et al., 2011). Our study indicates that measuring this Li-abundance for stars at a distance of 250-300 pc from the cluster within the trailing arm region, with, for instance, a WEAVE follow-up (Jin et al., 2023), will allow for a conclusive determination of which stream track is favoured, thereby providing novel constraints on the Galactic bar at the same time.
Finally, we note that in almost all cases, we found that the observed Hyades stream has a leading arm which is more populated than the trailing arm, as previously reported (Roser et al., 2019; Jerabkova et al., 2021; Kroupa et al., 2022). However, the number ratio is lower than measured by Kroupa et al. (2022) using the J21 sample, and is closer to the value they found in their simulations of a Hyades-like stream formed in MOND dynamics.
While the simulations presented here certainly do not capture the full complexity of the actual Hyades tidal stream (for instance by neglecting the effect of spiral arms on the stream track), they nevertheless demonstrate how complicated the dynamics of streams residing within the Galactic disc can be in comparison to those residing in the stellar halo. Thus, there is a strong argument for chemistry and dynamics going hand in hand to disentangle this complexity. Our study conclusively demonstrates that current candidate members of the Hyades stream should not be trusted beyond 200 pc from the cluster.
## Acknowledgements
The authors thank Alejandra Recio-Blanco and Giuseppina Battaglia for useful discussions, as well as the anonymous referee for comments that increased the scientific quality of the publication. GFT acknowledges support from Agencia Estatal de Investigacion del Ministerio de Ciencia en Innovacion (AEI-MICN) under grant number PID2020-118778GB-100/10.13039/501100011033, the MICIN under grant number FJC2018-037323-I, and the AEI under grant number CEX2019-000920-S. BF, GM and R.I. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 834148) and from the Agence Nationale de la Recherche (ANR projects ANR-18-CE31-0006 and ANR-19-CE31-0017). CL acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 852839)
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This re
search has made use of the SIMBAD database and of the VizieR catalogue access tool, operated at CDS, Strasbourg, France.
The N-body simulations and the selected candidate members of the Hyades stream will be shared via private communication upon reasonable request. The GalBar code is publicly available at [https://github.com/GFThomas/GalBar.git](https://github.com/GFThomas/GalBar.git).
|
2309.03342 | A note on a generalized double series | By employing contour integration the derivation of a generalized double
finite series involving the Hurwitz-Lerch zeta function is used to derive
closed form formulae in terms of special functions. We use this procedure to
find special cases of the summation and product formulae in terms of the
Hurwitz-Lerch zeta function, trigonometric functions and the gamma function. | Robert Reynolds | 2023-09-06T19:54:19Z | http://arxiv.org/abs/2309.03342v1 | # A note on a generalized double series
###### Abstract.
By employing contour integration the derivation of a generalized double finite series involving the Hurwitz-Lerch zeta function is used to derive closed form formulae in terms of special functions. We use this procedure to find special cases of the summation and product formulae in terms of the Hurwitz-Lerch zeta function, trigonometric functions and the gamma function. A short table of quotient gamma functions is produced for easy reading by interested readers.
Key words and phrases:double finite product, contour integration, gamma function, trigonometric function, Euler's constant, Catalan's constant 2020 Mathematics Subject Classification: Primary 30E20, 33-01, 33-03, 33-04
## 1. Introduction
Finite sums of special functions are recorded in chapter 5 in [1]. Equations (5.3.10.1) and (5.3.10.2) in [2] yield finite summations of the Hurwitz-Lerch zeta function \(\Phi(z,s,a)\) in terms of the Hurwitz-Lerch zeta function. In [3] an infinite sum involving the Hurwitz-Lerch zeta function was studied. The underlying formula transformed by contour integration is the infinite series involving the tangent function whose angle is a multiple of 2 raised to an integer power. The work done in [3] was centred around evaluating an infinite sum in terms of special functions, while in the current paper the author studies double finite series of special functions. In the current paper, the underlying function transformed by contour integration has a more generalized angle form which allows for more interesting evaluations in terms of trigonometric and special functions.
The reason behind this research is to expand the literature involving the finite double sum and product of special functions, such as the gamma function. The novelty of this work lies in the derivation and evaluation of the double finite product of quotient gamma functions, which is not present in the current literature to the best of my knowledge. We use a contour integral method and apply it to a generalized double finite summation formula involving the secant function to study the resulting double finite summation of the Hurwitz-Lerch zeta function. Since the Hurwitz-Lerch zeta function is a special function, it has several composite functions which can yield new results by algebraic means. Some of these composite functions are the gamma function and other trigonometric functions. The objectives pursued in this work were the derivation of a double finite sum involving
the Hurwitz-Lerch zeta function, along with double finite products involving the gamma function and a few functional identities for the Hurwitz-Lerch zeta function.
In this work we apply the contour integral method [4] to the generalized form of the finite secant sum on page 90 in [5] to derive a generalized form involving the finite sum of the secant function, resulting in
\[\frac{1}{2\pi i}\int_{C}\sum_{p=0}^{n}a^{w}w^{-1-k}\left(\left(1+(- 1)^{1+z}\right)\sec\left((m+w)(1+2z)^{p}\right)\right.\\ +\sum_{j=1}^{z}2(-1)^{1+j+z}\cos\left(2j(m+w)(1+2z)^{-1+p}\right)\] \[\qquad\qquad\qquad\left.\sec\left((m+w)(1+2z)^{p}\right)\right)dw\] \[=\frac{1}{2\pi i}\int_{C}a^{w}w^{-k-1}\left(\sec\left((m+w)(2z+1) ^{n}\right)-\sec\left(\frac{m+w}{2z+1}\right)\right)dw \tag{1.1}\]
where \(a,m,k\in\mathbb{C},Re(m+w)>0,n\in\mathbb{Z}^{+},z\in\mathbb{Z}^{+}\). Using equation (1.1) the main Theorem to be derived and evaluated is given by
\[\sum_{p=0}^{n}\left((-1)^{z+1}+1\right)\left((2z+1)^{p}\right)^{k }e^{im(2z+1)^{p}}\Phi\left(-e^{2im(2z+1)^{p}},-k,\frac{1}{2}\left(a(2z+1)^{-p} +1\right)\right)\\ +\sum_{p=0}^{n}\sum_{j=1}^{z}(-1)^{j+z+1}\left((2z+1)^{p}\right)^ {k}e^{im(-2j+2z+1)(2z+1)^{p-1}}\\ \left(\Phi\left(-e^{2im(2z+1)^{p}},-k,\frac{1}{2}\left(a(2z+1)^{- p}+1-\frac{2j}{2z+1}\right)\right)\right.\\ \left.+e^{4ijm(2z+1)^{p-1}}\Phi\left(-e^{2im(2z+1)^{p}},-k,\frac{ j}{2z+1}+\frac{1}{2}\left(a(2z+1)^{-p}+1\right)\right)\right)\\ =((2z+1)^{n})^{k}\,e^{im(2z+1)^{n}}\Phi\left(-e^{2im(2z+1)^{n}},- k,\frac{1}{2}\left(a(2z+1)^{-n}+1\right)\right)\\ -\left(\frac{1}{2z+1}\right)^{k}e^{\frac{im}{2z+1}}\Phi\left(-e^{ \frac{2im}{2z+1}},-k,\frac{1}{2}(2za+a+1)\right) \tag{1.2}\]
where the variables \(k,a,m\) are general complex numbers and \(n,z\) are positive integers. The derivations follow the method used by us in [4]. This method involves using a form of the generalized Cauchy's integral formula given by
\[\frac{y^{k}}{\Gamma(k+1)}=\frac{1}{2\pi i}\int_{C}\frac{e^{wy}}{w^{k+1}}dw, \tag{1.3}\]
where \(y,w\in\mathbb{C}\) and \(C\) is in general an open contour in the complex plane where the bilinear concomitant [4] is equal to zero at the end points of the contour. This method involves taking a form of equation (1.3), multiplying both sides by a function, and then taking the double finite sum of both sides. This yields a double finite sum in terms of a contour integral. Then we multiply both sides of equation (1.3) by another function and take the infinite sum of both sides such that the contour integrals of both equations are the same.
## 2. The Hurwitz-Lerch zeta function
We use equation (1.11.3) in [6], where \(\Phi(z,s,v)\) is the Lerch function, which is a generalization of the Hurwitz zeta function \(\zeta(s,v)\) and the Polylogarithm function \(Li_{n}(z)\). The Lerch function appears in many branches of mathematics and physics. It is named after the Czech mathematician Mathias Lerch, who published a paper about the function in 1887. It has uses in numerous areas, including number theory (especially in the investigation of the Riemann zeta function and its generalizations), complex analysis, and theoretical physics. It can be used to express a variety of complex functions and series and is involved in numerous mathematical identities. The Lerch function has a series representation given by
\[\Phi(z,s,v)=\sum_{n=0}^{\infty}(v+n)^{-s}z^{n} \tag{2.1}\]
where \(|z|<1,v\neq 0,-1,-2,-3,..\), and is continued analytically by its integral representation given by
\[\Phi(z,s,v)=\frac{1}{\Gamma(s)}\int_{0}^{\infty}\frac{t^{s-1}e^{-(v-1)t}}{e^{t }-z}dt \tag{2.2}\]
where \(Re(v)>0\), and either \(|z|\leq 1,z\neq 1,Re(s)>0\), or \(z=1,Re(s)>1\).
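As a quick numerical sanity check of these definitions, the series (2.1) can be compared with an independent implementation of \(\Phi(z,s,v)\), for instance the lerchphi routine of the mpmath Python library; the parameter values below are arbitrary test values.

```python
from mpmath import mp, mpc, mpf, lerchphi, nsum, inf

mp.dps = 30                                   # working precision

z, s, v = mpc('0.4', '0.2'), mpf('2.5'), mpf('0.75')   # arbitrary, |z| < 1

series = nsum(lambda n: z**n / (v + n)**s, [0, inf])   # equation (2.1)

print(series)
print(lerchphi(z, s, v))    # built-in Hurwitz-Lerch zeta; should agree
```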
## 3. Contour integral representation for the finite sum of the Hurwitz-Lerch zeta functions
In this section we derive the contour integral representations of the left-hand side and right-hand side of equation (1.1) in terms of the Hurwitz-Lerch zeta and trigonometric functions.
### Derivation of the generalized Hurwitz-Lerch contour integral for the secant function
We use the method in [4]. Using equation (1.3) we first replace \(y\) by \(\log(a)+ib(2y+1)\) and multiply both sides by \(-2i(-1)^{y}e^{ibm(2y+1)}\) then take the infinite sum over \(y\in[0,\infty)\) and simplify in terms of the Hurwitz-Lerch zeta function to get
\[-\frac{i2^{k+1}(ib)^{k}e^{ibm}\Phi\left(-e^{2ibm},-k,\frac{b-i \log(a)}{2b}\right)}{\Gamma(k+1)}\\ =\frac{1}{2\pi i}\sum_{y=0}^{\infty}\int_{C}(-1)^{y}a^{w}w^{-k-1} e^{ib(2y+1)(m+w)}dw\\ =\frac{1}{2\pi i}\int_{C}\sum_{y=0}^{\infty}(-1)^{y}a^{w}w^{-k-1 }e^{ib(2y+1)(m+w)}dw\\ =-\frac{1}{2\pi i}\int_{C}ia^{w}w^{-k-1}\sec(b(m+w))dw \tag{3.1}\]
from equation (1.232.2) in [7] where \(Re(w+m)>0\) and \(Im\left(m+w\right)>0\) in order for the sums to converge. We apply Tonelli's theorem for multiple sums, see page 177 in [8] as the summands are of bounded measure over the space \(\mathbb{C}\times[0,\infty)\).
### Derivation of a generalized Hurwitz-Lerch zeta function in terms of the product of the cosine and secant function contour integral
We use the method in [4]. Using a generalization of Cauchy's integral formula (1.3), we first replace \(y\) by \(\log(a)+ix+y\), then multiply both sides by \(e^{mxi}\), then form a second equation by replacing \(x\) by \(-x\) and add both equations to get
\[\frac{e^{-imx}\left(e^{2imx}(\log(a)+ix+y)^{k}+(\log(a)-ix+y)^{k}\right)}{\Gamma(k+1)}\\ =\frac{1}{2\pi i}\int_{C}2w^{-k-1}e^{w(\log(a)+y)}\cos(x(m+w))dw \tag{3.2}\]
Next we replace \(y\) by \(ib(2y+1)\) and multiply both sides by \((-1)^{y}e^{ibm(2y+1)}\) and take the infinite sum over \(y\in[0,\infty)\) and simplify in terms of the Hurwitz-Lerch zeta function to get
\[\frac{2^{k}(ib)^{k}e^{im(b-x)}\left(\Phi\left(-e^{2ibm},-k,\frac{b-x-i\log(a)}{2b}\right)+e^{2imx}\Phi\left(-e^{2ibm},-k,\frac{b+x-i\log(a)}{2b}\right)\right)}{\Gamma(k+1)}\\ =\frac{1}{2\pi i}\sum_{y=0}^{\infty}\int_{C}2(-1)^{y}a^{w}w^{-k-1}e^{ib(2y+1)(m+w)}\cos(x(m+w))dw\\ =\frac{1}{2\pi i}\int_{C}\sum_{y=0}^{\infty}2(-1)^{y}a^{w}w^{-k-1}e^{ib(2y+1)(m+w)}\cos(x(m+w))dw\\ =\frac{1}{2\pi i}\int_{C}a^{w}w^{-k-1}\sec(b(m+w))\cos(x(m+w))dw \tag{3.3}\]
from equations (1.232.2) and (1.411.3) in [7] where \(Re(w+m)>0\) and \(Im\left(m+w\right)>0\) in order for the sums to converge. We apply Tonelli's theorem for sums and integrals, see page 177 in [8], as the summand and integral are of bounded measure over the space \(\mathbb{C}\times[0,\infty)\).
### Derivation of the contour integrals
In this section we will use equations (3.2) and (3.3) and simple substitutions to derive the contour integrals in equation (1.1).
#### 3.3.1 Left-hand side first contour integral
Use equation (3.2) and replace \(b\) by \((2z+1)^{p}\) then multiply both sides by \((-1)^{z+1}+1\) and take the finite sum over \(p\in[0,n]\) to get;
\[\sum_{p=0}^{n}\frac{1}{k!}2^{k+1}\left((-1)^{z+1}+1\right)\left(i(2z+1)^{p}\right)^{k}e^{im(2z+1)^{p}}\\ \Phi\left(-e^{2im(2z+1)^{p}},-k,\frac{1}{2}(2z+1)^{-p}\left((2z+1)^{p}-i\log(a)\right)\right)\\ =\frac{1}{2\pi i}\int_{C}\sum_{p=0}^{n}\left((-1)^{z+1}+1\right)a^{w}w^{-k-1}\sec\left((m+w)(2z+1)^{p}\right)dw \tag{3.4}\]
#### 3.3.2 Left-hand side second contour integral
Use equation (3.3) and replace \(x\) by \(2j(2z+1)^{p-1}\), and \(b\) by \((2z+1)^{p}\) then multiply both sides by \(2(-1)^{j+z+1}\) and take the finite sums over \(p\in[0,n]\) and \(j\in[1,z]\) to get;
\[\sum_{p=0}^{n}\frac{1}{k!}2^{k+1}(-1)^{j+z+1}\left(i(2z+1)^{p}\right) ^{k}\exp\left(im\left((2z+1)^{p}-2j(2z+1)^{p-1}\right)\right)\\ \sum_{j=1}^{z}\left(\Phi\left(-e^{2im(2z+1)^{p}},-k,\frac{1}{2}(2z+ 1)^{-p}\left(-2j(2z+1)^{p-1}+(2z+1)^{p}-i\log(a)\right)\right)\right.\\ \left.+e^{4ijm(2z+1)^{p-1}}\Phi\left(-e^{2im(2z+1)^{p}},-k,\frac{ 1}{2}(2z+1)^{-p}\left(2j(2z+1)^{p-1}+(2z+1)^{p}-i\log(a)\right)\right)\right)\\ =\frac{1}{2\pi i}\int_{C}\sum_{p=0}^{n}\sum_{j=1}^{z}2a^{w}(-1)^{ j+z+1}w^{-k-1}\sec\left((m+w)(2z+1)^{p}\right)\cos\left(2j(m+w)(2z+1)^{p-1} \right)dw \tag{3.5}\]
#### 3.3.3. Right-hand side first contour integral
Use equation (3.1) and replace \(b\) by \(\frac{1}{2z+1}\) then multiply both sides by \(-1\) to get;
\[-\frac{1}{k!}2^{k+1}\left(\frac{i}{2z+1}\right)^{k}e^{\frac{im}{2 z+1}}\Phi\left(-e^{\frac{2im}{2z+1}},-k,\frac{1}{2}(2z+1)\left(\frac{1}{2z+1}-i \log(a)\right)\right)\\ =-\frac{1}{2\pi i}\int_{C}a^{w}w^{-k-1}\sec\left(\frac{m+w}{2z+1} \right)dw \tag{3.6}\]
#### 3.3.4. Right-hand side second contour integral
Use equation (3.1) and replace \(b\) by \((2z+1)^{n}\) to get;
\[\frac{1}{k!}2^{k+1}\left(i(2z+1)^{n}\right)^{k}e^{im(2z+1)^{n}} \Phi\left(-e^{2im(2z+1)^{n}},-k,\frac{1}{2}(2z+1)^{-n}\left((2z+1)^{n}-i\log( a)\right)\right)\\ =\frac{1}{2\pi i}\int_{C}a^{w}w^{-k-1}\sec\left((m+w)(2z+1)^{n} \right)dw \tag{3.7}\]
## 4. Finite sum of the Hurwitz-Lerch zeta function and evaluations
In this section we will derive and evaluate formulae involving the finite double sum of the Hurwitz-Lerch zeta function in terms other special functions, trigonometric functions and fundamental constants.
**Theorem 4.1**.: _For all \(k,a\in\mathbb{C},-1<Re(m)<1,n\in\mathbb{Z}_{+},z\in\mathbb{Z}_{+}\) then,_
\[\sum_{p=0}^{n}\left((-1)^{z+1}+1\right)\left((2z+1)^{p}\right)^{k }e^{im(2z+1)^{p}}\Phi\left(-e^{2im(2z+1)^{p}},-k,\frac{1}{2}\left(a(2z+1)^{-p} +1\right)\right)\\ +\sum_{p=0}^{n}\sum_{j=1}^{z}(-1)^{j+z+1}\left((2z+1)^{p}\right) ^{k}e^{im(-2j+2z+1)(2z+1)^{p-1}}\\ \left(\Phi\left(-e^{2im(2z+1)^{p}},-k,\frac{1}{2}\left(a(2z+1)^{- p}+1-\frac{2j}{2z+1}\right)\right)\right.\\ \left.+e^{4ijm(2z+1)^{p-1}}\Phi\left(-e^{2im(2z+1)^{p}},-k,\frac{ j}{2z+1}+\frac{1}{2}\left(a(2z+1)^{-p}+1\right)\right)\right)\\ =((2z+1)^{n})^{k}\,e^{im(2z+1)^{n}}\Phi\left(-e^{2im(2z+1)^{n}},-k,\frac{1}{2}\left(a(2z+1)^{-n}+1\right)\right) \tag{4.1}\]
\[-\left(\frac{1}{2z+1}\right)^{k}e^{\frac{im}{2z+1}}\Phi\left(-e^{\frac{2im}{2z+1} },-k,\frac{1}{2}(2za+a+1)\right)\]
Proof.: Observe that the addition of the right-hand sides of equations (3.4) and (3.5) is equal to the addition of the right-hand sides of equations (3.6) and (3.7), so we may equate the left-hand sides and simplify relative to equation (1.1) and the Gamma function to yield the stated result.
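Identities such as (4.1) are straightforward to check numerically before they are applied; the mpmath sketch below evaluates both sides for one arbitrary choice of parameters, taking \(Im(m)>0\) so that every Hurwitz-Lerch argument has modulus smaller than one. It is intended only as a transcription aid, not as a proof.

```python
from mpmath import mp, mpc, mpf, exp, lerchphi

mp.dps = 25
k, a, m = 2, mpf('0.7'), mpc('0.4', '0.3')    # arbitrary test values
n, z = 1, 2
q = 2 * z + 1
I = mpc(0, 1)

lhs = mpf(0)
for p in range(n + 1):
    b = q ** p
    lhs += ((-1) ** (z + 1) + 1) * b ** k * exp(I * m * b) \
        * lerchphi(-exp(2 * I * m * b), -k, (a / b + 1) / 2)
    for j in range(1, z + 1):
        lhs += (-1) ** (j + z + 1) * b ** k \
            * exp(I * m * (-2 * j + q) * q ** (p - 1)) \
            * (lerchphi(-exp(2 * I * m * b), -k, (a / b + 1 - 2 * j / q) / 2)
               + exp(4 * I * j * m * q ** (p - 1))
               * lerchphi(-exp(2 * I * m * b), -k, j / q + (a / b + 1) / 2))

bn = q ** n
rhs = bn ** k * exp(I * m * bn) \
    * lerchphi(-exp(2 * I * m * bn), -k, (a / bn + 1) / 2) \
    - q ** (-k) * exp(I * m / q) \
    * lerchphi(-exp(2 * I * m / q), -k, (2 * z * a + a + 1) / 2)

print(lhs)
print(rhs)    # the two values should agree to working precision
```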
**Example 4.2**.: The degenerate case.
\[\sum_{p=0}^{n}\sec\left(m(2z+1)^{p}\right)\left(\sum_{j=1}^{z}2e^{ i\pi(j+z)}\cos\left(2jm(2z+1)^{p-1}\right)+e^{i\pi z}-1\right)\\ =\sec\left(\frac{m}{2z+1}\right)-\sec\left(m(2z+1)^{n}\right) \tag{4.2}\]
Proof.: Use equation (4.1) and set \(k=0\) and simplify using entry (2) in Table below equation (64:12:7) in [9].
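Because (4.2) involves only elementary functions, it is a convenient test case; the short mpmath sketch below evaluates both sides for arbitrary real \(m\) and small \(n,z\) (the left-hand side carries a vanishing imaginary part coming from the factors \(e^{i\pi(j+z)}\)).

```python
from mpmath import mp, mpf, mpc, pi, cos, sec, exp

mp.dps = 20
m, n, z = mpf('0.37'), 2, 3          # arbitrary test values
q = 2 * z + 1

lhs = mpf(0)
for p in range(n + 1):
    inner = sum(2 * exp(mpc(0, 1) * pi * (j + z)) * cos(2 * j * m * q ** (p - 1))
                for j in range(1, z + 1))
    inner += exp(mpc(0, 1) * pi * z) - 1
    lhs += sec(m * q ** p) * inner

rhs = sec(m / q) - sec(m * q ** n)
print(lhs)    # should equal rhs up to a negligible imaginary part
print(rhs)
```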
**Example 4.3**.: Double finite product of quotient gamma functions, where \(a\in\mathbb{C},n\in\mathbb{Z}_{+}\), \(z\) is even. Multiple finite products involving the gamma function are recorded in [p. 214] in [10] and [p.173] in [11].
\[\prod_{p=0}^{n}\prod_{j=1}^{z}\left(\frac{\Gamma\left(\frac{1}{4}\left(a(2z+1)^{-p}+1-\frac{2j}{2z+1}\right)\right)\Gamma\left(\frac{1}{4}\left(a(2z+1)^{-p}+1+\frac{2j}{2z+1}\right)\right)}{\Gamma\left(\frac{1}{4}\left(a(2z+1)^{-p}+3-\frac{2j}{2z+1}\right)\right)\Gamma\left(\frac{1}{4}\left(a(2z+1)^{-p}+3+\frac{2j}{2z+1}\right)\right)}\right)^{(-1)^{j}}\\ =\frac{(2z+1)^{\frac{n+1}{2}}\Gamma\left(\frac{1}{4}(2za+a+1)\right)\Gamma\left(\frac{1}{4}\left(a(2z+1)^{-n}+3\right)\right)}{\Gamma\left(\frac{1}{4}(2za+a+3)\right)\Gamma\left(\frac{1}{4}\left(a(2z+1)^{-n}+1\right)\right)} \tag{4.3}\]
Proof.: Use equation (4.1) and set \(m=0\) and simplify in terms of the Hurwitz zeta function using entry (4) in Table below (64:12:7) in [9]. Next take the first partial derivative with respect to \(k\) and set \(k=0\) and simplify in terms of the log-gamma function using equation (25.11.18) in [12]. Next we take the exponential of both sides and simplify in terms of the gamma function.
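Gamma-function products like (4.3) are easy to mistype, so a direct numerical check is worthwhile; the mpmath sketch below evaluates both sides of (4.3) for one admissible choice of parameters (with \(z\) even), chosen arbitrarily.

```python
from mpmath import mp, mpf, gamma

mp.dps = 30
a, n, z = mpf('1.3'), 1, 2           # arbitrary test values, z even
q = 2 * z + 1

lhs = mpf(1)
for p in range(n + 1):
    ap = a * q ** (-p)
    for j in range(1, z + 1):
        factor = (gamma((ap + 1 - 2 * j / q) / 4) * gamma((ap + 1 + 2 * j / q) / 4)
                  / (gamma((ap + 3 - 2 * j / q) / 4) * gamma((ap + 3 + 2 * j / q) / 4)))
        lhs *= factor ** ((-1) ** j)

an = a * q ** (-n)
rhs = (mpf(q) ** (mpf(n + 1) / 2) * gamma((2 * z * a + a + 1) / 4)
       * gamma((an + 3) / 4)
       / (gamma((2 * z * a + a + 3) / 4) * gamma((an + 1) / 4)))

print(lhs)
print(rhs)    # the two values should agree
```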
**Example 4.4**.: Product of gamma functions in terms of the square root of odd numbers greater than or equal to 5. We also present a product formula related to the ratio of Elliptic integrals of the first kind; see [p. 1140] in [13].
\[\frac{\Gamma\left(\frac{a+1}{4}\right)\Gamma\left(\frac{1}{4}(2za+a+3)\right)}{\Gamma\left(\frac{a+3}{4}\right)\Gamma\left(\frac{1}{4}(2za+a+1)\right)}\prod_{j=1}^{z}\left(\frac{\Gamma\left(\frac{1}{4}\left(a-\frac{2j}{2z+1}+1\right)\right)\Gamma\left(\frac{1}{4}\left(a+\frac{2j}{2z+1}+1\right)\right)}{\Gamma\left(\frac{1}{4}\left(a-\frac{2j}{2z+1}+3\right)\right)\Gamma\left(\frac{1}{4}\left(a+\frac{2j}{2z+1}+3\right)\right)}\right)^{(-1)^{j}}\\ =\sqrt{2z+1} \tag{4.4}\]
Proof.: Use equation (4.3) and set \(p=n=0\) and simplify.
**Example 4.5**.: A Hurwitz-Lerch zeta functional equation with first parameter involving the cubic root of z.
\[\Phi\left(-(iz)^{2/3},s,a\right)\\ =9^{-s}\left(3^{s}\Phi\left(z^{2},s,\frac{a}{3}\right)+(iz)^{2/3} \left(\Phi\left(z^{6},s,\frac{a+1}{9}\right)-2\times 3^{s}\Phi\left(z^{2},s, \frac{a+1}{3}\right)\right.\right.\\ \left.\left.+3^{s}(iz)^{2/3}\Phi\left(z^{2},s,\frac{a+2}{3}\right) +z^{4}\Phi\left(z^{6},s,\frac{a+7}{9}\right)+z^{2}\Phi\left(z^{6},s,\frac{a+4} {9}\right)\right)\right) \tag{4.5}\]
Proof.: Use equation (4.1) and set \(n=1,z=1,m=-i\log(z),a=e^{ia}\) and simplify. Next replace \(a=\frac{2}{3}\left(a-\frac{1}{2}\right),k=-s,z=zi\) and simplify.
**Example 4.6**.: Double finite sum in terms of trigonometric functions.
\[\sum_{p=0}^{n}\sum_{j=1}^{z}2e^{i\pi(j+z)}(2z+1)^{p-1}\sec\left(x( 2z+1)^{p}\right)\\ \left((2z+1)\tan\left(x(2z+1)^{p}\right)\cos\left(2jx(2z+1)^{p-1 }\right)-2j\sin\left(2jx(2z+1)^{p-1}\right)\right)\\ +\sum_{p=0}^{n}\left(1-e^{i\pi z}\right)(2z+1)^{p}\tan\left(x(2z+ 1)^{p}\right)\sec\left(x(2z+1)^{p}\right)\\ =\frac{1}{8z+4}\left(\sec^{2}\left(\frac{x}{2z+1}\right)\left((2z +1)^{n+1}\left(2\sin\left(x(2z+1)^{n}\right)-\sin\left(x\left(\frac{2}{2z+1}-( 2z+1)^{n}\right)\right)\right.\right.\\ \left.\left.+\sin\left(x\left((2z+1)^{n}+\frac{2}{2z+1}\right) \right)\right)-\sin\left(x\left(\frac{1}{2z+1}-2(2z+1)^{n}\right)\right)\\ \left.-\sin\left(x\left(2(2z+1)^{n}+\frac{1}{2z+1}\right)\right) -2\sin\left(\frac{x}{2z+1}\right)\right)\sec^{2}\left(x(2z+1)^{n})\right) \tag{4.6}\]
Proof.: Use equation (4.1) and set \(k=1,a=1,m=x\) and apply the method in section (8) in [3].
**Example 4.7**.: A functional equation involving the Hurwitz-Lerch zeta function.
\[\Phi\left(-\sqrt[3]{-z},s,a\right)\\ =3^{-s}\left(\Phi\left(z,s,\frac{a}{3}\right)-\sqrt[3]{-z}\Phi \left(z,s,\frac{a+1}{3}\right)+(-z)^{2/3}\Phi\left(z,s,\frac{a+2}{3}\right)\right) \tag{4.7}\]
Proof.: Use equation (4.1) and set \(n=0,z=1,m=\log(z)/(2i),a=e^{ia}\) and simplify. Next replace \(a=\frac{2}{3}\left(a-\frac{1}{2}\right),k=-s\) and simplify.
**Example 4.8**.: Finite product involving the product of trigonometric functions.
\[\prod_{p=0}^{n}\prod_{j=1}^{z}\exp\left(i(-1)^{j+z}(2z+1)^{-p}e^{ im(-2j+2z+1)(2z+1)^{p-1}}\left(\Phi\left(-e^{2im(2z+1)^{p}},1,\frac{1}{2}- \frac{j}{2z+1}\right)\right.\right.\\ \left.\left.+e^{4ijm(2z+1)^{p-1}}\Phi\left(-e^{2im(2z+1)^{p}},1, \frac{j}{2z+1}+\frac{1}{2}\right)\right)\right)\\ \prod_{p=0}^{n}\left(1-ie^{im(2z+1)^{p}}\right)^{\left((-1)^{z+1 }+1\right)(2z+1)^{-p}}\left(1+ie^{im(2z+1)^{p}}\right)^{\left((-1)^{z}-1)(2z+1 )^{-p}} \tag{4.8}\]
\[=\left(i\cot\left(\frac{2m+2\pi z+\pi}{8z+4}\right)\right)^{2z+1}\left(-i\tan \left(\frac{1}{4}\left(2m(2z+1)^{n}+\pi\right)\right)\right)^{(2z+1)^{-n}}\]
Proof.: Use equation (4.1) and set \(k=-1,a=1\) and simplify using entry (3) in Table below (64:12:7) in [9].
## 5. Special cases involving the double finite sum of the Hurwitz-Lerch zeta function
In this section we derive special cases of the Hurwitz-Lerch zeta function in terms of special functions and Catalan's constant \(C\).
**Example 5.1**.: Double finite sum in terms of Catalan's constant \(C\).
\[\sum_{p=0}^{n}\sum_{j=1}^{z}(2z+1)^{-2p}\left(16C\left((-1)^{z}-1 \right)+(-1)^{j+z}\left(\psi^{(1)}\left(\frac{-2j+2z+1}{8z+4}\right)\right.\right.\] \[\left.\left.+\psi^{(1)}\left(\frac{2j+2z+1}{8z+4}\right)-\psi^{( 1)}\left(\frac{-2j+6z+3}{8z+4}\right)-\psi^{(1)}\left(\frac{2j+6z+3}{8z+4} \right)\right)\right)\\ =16C\left((2z+1)^{2}-(2z+1)^{-2n}\right) \tag{5.1}\]
Proof.: Use equation (4.1) and set \(k=-2,a=1,m=0\) and simplify using equation (4) in [14].
**Example 5.2**.: Finite sum in terms of Catalan's constant \(C\) and the trigamma function \(\psi^{(1)}(s)\).
\[\sum_{j=1}^{z}\frac{(2z+1)^{2}(-1)^{j+z}}{4z(z+1)}\left(\psi^{(1) }\left(\frac{-2j+2z+1}{8z+4}\right)+\psi^{(1)}\left(\frac{2j+2z+1}{8z+4}\right)\right. \\ \left.-\psi^{(1)}\left(\frac{-2j+6z+3}{8z+4}\right)-\psi^{(1)} \left(\frac{2j+6z+3}{8z+4}\right)\right)\\ =\frac{4C(2z+1)^{2}\left(4z-(-1)^{z}+5\right)}{z+1} \tag{5.2}\]
Proof.: Use equation (5.1) and take the limit as \(n\to\infty\) and simplify.
**Example 5.3**.: Finite sum in terms of the first partial derivative of the Hurwitz-Lerch zeta function.
\[\sum_{j=1}^{z}(-1)^{j}\left(\Phi^{\prime}\left(-1,-1,\frac{1}{2}- \frac{j}{2z+1}\right)+\Phi^{\prime}\left(-1,-1,\frac{j}{2z+1}+\frac{1}{2} \right)\right)\\ =\frac{C\left(-2z+(-1)^{-z}-1\right)}{\pi(2z+1)} \tag{5.3}\]
Proof.: Use equation (4.1) and take the first partial derivative with respect to \(k\) and set \(k=1,m=0,a=1\) and simplify using equation (20) in [14].
**Example 5.4**.: Double finite sum involving Euler's constant \(\gamma\).
\[\sum_{p=0}^{n}\sum_{j=1}^{z}2ie^{i\pi(j+z)}(2z+1)^{-p}\left(-\Phi^{\prime} \left(-1,1,\frac{1}{2}-\frac{j}{2z+1}\right)-\Phi^{\prime}\left(-1,1,\frac{j}{ 2z+1}+\frac{1}{2}\right)\right. \tag{5.4}\]
\[+\pi\sec\left(\frac{\pi j}{2z+1}\right)\log\left(i(2z+1)^{p}\right)\biggr{)}-i\pi \left((-1)^{z}-1\right)\sum_{p=0}^{n}(2z{+}1)^{-p}\log\left(-\frac{ie^{\gamma} \pi^{3}(2z+1)^{-p}}{32\Gamma\left(\frac{5}{4}\right)^{4}}\right)\]
\[=-i\pi(2z{+}1)^{-n}\left((2z+1)^{n+1}\log\left(-\frac{8ie^{\gamma}\pi^{3}(2z+1) }{\Gamma\left(\frac{1}{4}\right)^{4}}\right)+\log\left(\frac{32ie^{-\gamma} \Gamma\left(\frac{5}{4}\right)^{4}(2z+1)^{n}}{\pi^{3}}\right)\right)\]
Proof.: Use equation (4.1) and take the first partial derivative with respect to \(k\) and set \(k=-1,m=0,a=1\) and simplify using equation (53) in [14].
**Example 5.5**.: The exponential of the Hurwitz-Lerch zeta function in terms of the log-gamma function.
\[\exp\left(-\frac{13\left(\Phi^{\prime}\left(-1,1,\frac{1}{6}\right)+\Phi^{ \prime}\left(-1,1,\frac{5}{6}\right)\right)}{54\pi}\right)\ =\ \frac{2^{11/27}e^{-13\gamma/27}\Gamma\left(\frac{1}{4}\right)\Gamma \left(\frac{5}{4}\right)^{25/27}}{3^{13/36}\pi^{13/9}} \tag{5.5}\]
Proof.: Use equation (5.4) and set \(n=2,z=1\) and simplify.
**Example 5.6**.: A double finite sum involving Catalan's constant \(C\).
\[\sum_{p=0}^{n}4\left((-1)^{z}-1\right)(2z+1)^{-2p}\left(-\Phi^{ \prime}\left(-1,2,\frac{1}{2}\right)+4C\log\left(i(2z+1)^{p}\right)\right)\] \[+\sum_{p=0}^{n}\sum_{j=1}^{z}(-1)^{j+z}(2z{+}1)^{-2p}\left(-4\Phi ^{\prime}\left(-1,2,\frac{1}{2}-\frac{j}{2z+1}\right)-4\Phi^{\prime}\left(-1,2,\frac{j}{2z+1}+\frac{1}{2}\right)\right.\] \[\qquad\qquad\left.+\left(\psi^{(1)}\left(\frac{-2j+2z+1}{8z+4} \right)+\psi^{(1)}\left(\frac{2j+2z+1}{8z+4}\right)-\psi^{(1)}\left(\frac{-2j+ 6z+3}{8z+4}\right)\right.\] \[\qquad\qquad\qquad\left.-\psi^{(1)}\left(\frac{2j+6z+3}{8z+4} \right)\right)\log\left(i(2z+1)^{p}\right)\biggr{)}\] \[=4(2z+1)^{-2n}\left(\Phi^{\prime}\left(-1,2,\frac{1}{2}\right)-4C \log\left(i(2z+1)^{n}\right)\right)\] \[\qquad\qquad\qquad\qquad+4(2z+1)^{2}\left(-\Phi^{\prime}\left(-1, 2,\frac{1}{2}\right)+4C\log\left(\frac{i}{2z+1}\right)\right) \tag{5.6}\]
Proof.: Use equation (4.1) and take the first partial derivative with respect to \(k\) and set \(k=-2,m=0,a=1\) and simplify using equation (4) in [14].
**Example 5.7**.: Double finite product involving tangent and cotangent functions.
\[\prod_{p=0}^{n}\prod_{j=1}^{z}\exp\left(\frac{\pi i(-1)^{j+z}\sec \left(\frac{j\pi}{1+2z}\right)}{(1+2z)^{p}}\right)\prod_{p=0}^{n}\tan^{\frac{- 1+(-1)^{z}}{(1+2z)^{p}}}\left(\frac{1}{4}\pi\left(1+2(1+2z)^{1+p}\right)\right)\] \[=i\left(-(-1)^{3/4}\right)^{\frac{(1+2z)-n\left(-1+(-1)^{z}-(-1+(- 1)^{z})(1+2z)^{1+n}\right)}{z}}(-1)^{z}\left(i\cot\left(\frac{1}{4}\pi\left(1+2 (1+2z)^{1+n}\right)\right)\right)^{(1+2z)^{-n}} \tag{5.7}\]
Proof.: Use equation (4.1) and set \(k=-1,m=\pi(2z+1),a=1\) and simplify using entry (1) in Table below (64:12:7) in [9].
**Example 5.8**.: Double finite product involving the tangent function.
\[\prod_{p=0}^{n}\prod_{j=1}^{z}\exp\left(-\frac{i(-1)^{j+z}}{(1+2z)^{p}} \left(e^{i\pi(1+2z)^{-1-n+p}(1-2j+2z)}\Phi\left(-e^{2i\pi(1+2z)^{-n+p}},1,\frac{ 1}{2}-\frac{j}{1+2z}\right)\right.\right.\] \[\left.\left.+e^{i\pi(1+2z)^{-1-n+p}(1+2j+2z)}\Phi\left(-e^{2i\pi( 1+2z)^{-n+p}},1,\frac{1}{2}+\frac{j}{1+2z}\right)\right)\right)\] \[\prod_{p=0}^{n}\left(-i\tan\left(\frac{1}{4}\pi\left(1+2(1+2z)^{- n+p}\right)\right)\right)^{\frac{-1+(-1)^{z}}{(1+2z)^{p}}}\] \[=-e^{-\frac{\pi i}{2(1+2z)^{n}}}\left(-i\tan\left(\frac{1}{4}\pi \left(1+2(1+2z)^{-1-n}\right)\right)\right)^{2z}\left(i\tan\left(\frac{1}{4} \pi\left(1+2(1+2z)^{-1-n}\right)\right)\right) \tag{5.8}\]
Proof.: Use equation (4.1) and set \(k=-1,m=\pi(2z+1)^{-n},a=1\) and simplify using entry (1) in Table below (64:12:7) in [9].
**Example 5.9**.: A double finite product involving the gamma function and table of special cases.
\[\prod_{p=0}^{n}\prod_{j=1}^{z}\left(i(2z+1)^{p}\right)^{(-1)^{j+z} +\frac{1}{2}((-1)^{z}-1)}\left(\frac{\Gamma\left(\frac{-2j+6z+3}{8z+4}\right) \Gamma\left(\frac{j}{4z+2}+\frac{3}{4}\right)}{\Gamma\left(\frac{-2j+2z+1}{8z+ 4}\right)\Gamma\left(\frac{j}{4z+2}+\frac{1}{4}\right)}\right)^{(-1)^{j+z}}\\ =(2z+1)^{\frac{1}{2}(-n-1)}\left(\frac{3\Gamma\left(-\frac{3}{4} \right)}{\Gamma\left(-\frac{1}{4}\right)}\right)^{(-1)^{z}-1} \tag{5.9}\]
Proof.: Use equation (4.1) and set \(m=0,a=1\) and simplify in terms of the Hurwitz zeta function using entry (4) in Table below (64:12:7) in [9]. Next take the first partial derivative with respect to \(k\) and set \(k=0\) and simplify in terms of the log-gamma function using equation (25.11.18) in [12].
## 6. Table of quotient gamma functions
In this section we look at simplified forms of equation (5.9).
**Example 6.1**.: A Gosper relation for a \(q\)-trigonometric form in terms of quotient gamma functions given on page (80) in [15] and entry (1) in Table 1.
\[\prod_{n=0}^{\infty}\frac{(10n+1)(10n+3)(10n+7)(10n+9)}{(10n+2)(10n+ 4)(10n+6)(10n+8)}=\frac{\sin\left(\frac{3\pi}{10}\right)}{2\sin^{2}\left(\frac {2\pi}{5}\right)}\\ =\frac{\Gamma\left(\frac{3}{20}\right)\Gamma\left(\frac{7}{20} \right)\Gamma\left(\frac{11}{20}\right)\Gamma\left(\frac{19}{20}\right)}{ \Gamma\left(\frac{1}{20}\right)\Gamma\left(\frac{9}{20}\right)\Gamma\left( \frac{13}{20}\right)\Gamma\left(\frac{17}{20}\right)}=\frac{1}{\sqrt{5}} \tag{6.1}\]
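The relation in (6.1) can be checked numerically. The short Python sketch below (using the mpmath library; the truncation depth is an arbitrary illustrative choice) evaluates a partial product, the sine form, and the gamma quotient of entry (1) of Table 1 so that they can be compared against \(1/\sqrt{5}\); it is a sanity check, not part of the proof.

```python
# Numerical sanity check of equation (6.1) with mpmath (not part of the proof).
# The truncation depth N is an arbitrary illustrative choice.
from mpmath import mp, mpf, gamma, sin, pi, sqrt

mp.dps = 30

N = 20000
partial = mpf(1)
for n in range(N):
    partial *= (10*n + 1) * (10*n + 3) * (10*n + 7) * (10*n + 9)
    partial /= (10*n + 2) * (10*n + 4) * (10*n + 6) * (10*n + 8)

sine_form = sin(3*pi/10) / (2*sin(2*pi/5)**2)
gamma_form = (gamma(mpf(3)/20)*gamma(mpf(7)/20)*gamma(mpf(11)/20)*gamma(mpf(19)/20)) / \
             (gamma(mpf(1)/20)*gamma(mpf(9)/20)*gamma(mpf(13)/20)*gamma(mpf(17)/20))

print(partial)      # truncated product, ~0.44721
print(sine_form)    # closed form from (6.1)
print(gamma_form)   # entry (1) of Table 1
print(1/sqrt(5))    # 0.447213595...
```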
**Example 6.2**.: Double finite product involving exponential of the Hurwitz-Lerch zeta function and quotient tangent functions.
\[\prod_{p=0}^{n}\prod_{j=1}^{z}\exp\left(i(-1)^{j+z}(1+2z)^{-p}\left(e^{im(1+2z )^{-1+p}(1-2j+2z)}\Phi\left(-e^{2im(1+2z)^{p}},1,\frac{1}{2}-\frac{j}{1+2z} \right)\right.\right. \tag{6.2}\]
[Table 1. Quotient gamma-function products and their closed-form values; entry (1) is \(\frac{\Gamma\left(\frac{3}{20}\right)\Gamma\left(\frac{7}{20}\right)\Gamma\left(\frac{11}{20}\right)\Gamma\left(\frac{19}{20}\right)}{\Gamma\left(\frac{1}{20}\right)\Gamma\left(\frac{9}{20}\right)\Gamma\left(\frac{13}{20}\right)\Gamma\left(\frac{17}{20}\right)}=\frac{1}{\sqrt{5}}\), and the further entries give squared and related gamma-quotient products equal to \(\frac{1}{5}\) and \(\frac{1}{9}\).]
\[+e^{im(1+2z)^{-1+p}(1+2j+2z)}\Phi\left(-e^{2im(1+2z)^{p}},1,\frac{1}{2}+ \frac{j}{1+2z}\right)\] \[-e^{ir(1+2z)^{-1+p}(1-2j+2z)}\Phi\left(-e^{2ir(1+2z)^{p}},1,\frac{1 }{2}-\frac{j}{1+2z}\right)\] \[-e^{ir(1+2z)^{-1+p}(1+2j+2z)}\Phi\left(-e^{2ir(1+2z)^{p}},1,\frac{ 1}{2}+\frac{j}{1+2z}\right)\right)\] \[\prod_{p=0}^{n}\left(\frac{\tan\left(\frac{1}{4}\left(\pi+2m(1+2z )^{p}\right)\right)}{\tan\left(\frac{1}{4}\left(\pi+2r(1+2z)^{p}\right)\right) }\right)^{-\left(\left(-1+(-1)^{z}\right)(1+2z)^{-p}\right)}\] \[=\left(\frac{\tan\left(\frac{2m+\pi+2\pi z}{4+8z}\right)}{\tan \left(\frac{\pi+2r+2\pi z}{4+8z}\right)}\right)^{-1-2z}\left(\frac{\tan\left( \frac{1}{4}\left(\pi+2m(1+2z)^{n}\right)\right)}{\tan\left(\frac{1}{4}\left( \pi+2r(1+2z)^{n}\right)\right)}\right)^{(1+2z)^{-n}}\]
Proof.: Use equation (4.1) and form a second equation by replacing \(m\to r\); take the difference of these two equations, then set \(k=-1,a=1\) and simplify using entry (3) of Section (64:12) in [9].
**Example 6.3**.: Finite product involving exponential of the Hurwitz-Lerch zeta function and quotient tangent functions.
\[\prod_{j=1}^{z}\exp\left(i(-1)^{j+z}\left(e^{im\left(1-\frac{2j}{1 +2z}\right)}\Phi\left(-e^{2im},1,\frac{1}{2}-\frac{j}{1+2z}\right)\right.\right.\] \[\left.\left.+e^{im\left(1+\frac{2j}{1+2z}\right)}\Phi\left(-e^{2 im},1,\frac{1}{2}+\frac{j}{1+2z}\right)-e^{ir\left(1-\frac{2j}{1+2z}\right)} \Phi\left(-e^{2ir},1,\frac{1}{2}-\frac{j}{1+2z}\right)\right.\right.\] \[\left.\left.-e^{ir\left(1+\frac{2j}{1+2z}\right)}\Phi\left(-e^{2 ir},1,\frac{1}{2}+\frac{j}{1+2z}\right)\right)\right)\] \[=\frac{\tan\left(\frac{1}{4}(2m+\pi)\right)\left(\frac{\tan\left( \frac{2m+\pi+2\pi z}{4+8z}\right)}{\tan\left(\frac{\pi+2r+2\pi z}{4+8z}\right) }\right)^{-1-2z}}{\tan\left(\frac{1}{4}(\pi+2r)\right)\left(\frac{\tan\left( \frac{1}{4}\left(2m+\pi\right)\right)}{\tan\left(\frac{1}{4}(\pi+2r)\right)} \right)^{1-\left(-1\right)^{z}}} \tag{6.3}\]
Proof.: Use equation (6.2) and set \(p=n=0\) and simplify, where \(z\) is any positive integer.
## 7. An example involving the exponential of the hypergeometric function in terms of reciprocal angles
**Example 7.1**.: Exponential of the hypergeometric function involving reciprocal angles.
\[\exp\left(i\left(6e^{\frac{i}{3}/r}\,_{2}F_{1}\left(\frac{1}{6},1 ;\frac{7}{6};-e^{2i/r}\right)-6e^{\frac{ir}{3}}\,_{2}F_{1}\left(\frac{1}{6},1; \frac{7}{6};-e^{2ir}\right)\right.\right.\] \[\left.\left.+\frac{6}{5}e^{\frac{i}{3}/r}\,_{2}F_{1}\left(\frac{5 }{6},1;\frac{11}{6};-e^{2i/r}\right)-\frac{6}{5}e^{\frac{5ir}{3}}\,_{2}F_{1} \left(\frac{5}{6},1;\frac{11}{6};-e^{2ir}\right)\right)\right)\] \[=\tan\left(\frac{1}{4}(2r+\pi)\right)\tan^{3}\left(\frac{1}{12}(2 r+3\pi)\right)\cot\left(\frac{1}{4}\left(\frac{2}{r}+\pi\right)\right)\cot^{3} \left(\frac{1}{12}\left(\frac{2}{r}+3\pi\right)\right) \tag{7.1}\]
Proof.: Use equation (6.3) and replace \(m\) by \(1/r\) and set \(z=1\) and simplify.
## 8. Determining the first partial derivative of the Hurwitz-Lerch zeta function
In this section we derive a finite product expression which can be used to determine the first partial derivative of the Hurwitz-Lerch zeta function with respect to the first parameter when the second parameter is zero.
**Example 8.1**.: Finite product of quotient gamma functions raised to complex power.
\[\prod_{j=1}^{z}\left(\frac{\Gamma\left(\frac{1}{4}\left(a-\frac{2j}{2z +1}+3\right)\right)}{\Gamma\left(\frac{1}{4}\left(a-\frac{2j}{2z+1}+1\right) \right)}\right)^{(-1)^{j+z}e^{-\frac{2i\pi j}{2z+1}}}\left(\frac{\Gamma\left( \frac{1}{4}\left(a+\frac{2j}{2z+1}+3\right)\right)}{\Gamma\left(\frac{1}{4} \left(a+\frac{2j}{2z+1}+1\right)\right)}\right)^{(-1)^{j+z}e^{\frac{2i\pi j}{2 z+1}}}\] \[=(4z+2)^{\frac{1}{2}\sec\left(\frac{\pi}{2z+1}\right)}\left( \frac{\Gamma\left(\frac{a+1}{4}\right)}{\Gamma\left(\frac{a+3}{4}\right)} \right)^{(-1)^{z}}\exp\left(e^{\frac{i\pi}{2z+1}}\Phi^{\prime}\left(-e^{\frac {2i\pi}{2z+1}},0,\frac{1}{2}(2az+a+1)\right)\right) \tag{8.1}\]
Proof.: Use equation (4.1) and set \(p=n=0,m=\pi\) and simplify in terms of the Hurwitz zeta function. Next take the first partial derivative with respect to \(k\) and set \(k=0\) and simplify in terms of the log-gamma function. Finally we take the exponential of both sides and simplify. Here \(z\) is any positive integer greater than or equal to \(1\).
**Example 8.2**.: An example in terms of the log-gamma function.
\[-(-1)^{2/3}\log\left(\frac{\Gamma\left(\frac{3}{8}\right)\left(\frac{\Gamma \left(\frac{5}{24}\right)}{\Gamma\left(\frac{24}{24}\right)}\right)^{\frac{3} {\sqrt{-1}}}\left(\frac{\Gamma\left(\frac{25}{24}\right)}{\Gamma\left(\frac{23 }{24}\right)}\right)^{(-1)^{2/3}}}{6\Gamma\left(\frac{7}{8}\right)}\right)\,= \,\Phi^{\prime}\left(-(-1)^{2/3},0,\frac{5}{4}\right) \tag{8.2}\]
Proof.: Use equation (8.1) and set \(a=1/2,z=1\) and simplify.
## 9. Discussion
In this study, we employed a contour integration method to establish mathematical expressions for double finite sums involving the Hurwitz-Lerch zeta function. While this approach is generally straightforward, its application to the double finite sum of the secant function posed difficulties during evaluation. Our challenges encompassed both the simplification of the secant function's representation within the contour integral and the identification of specific values where the double finite product and sum become simpler. The significance of this research lies in deriving closed-form solutions through these methodologies. Consequently, we have introduced new formulas to the existing body of knowledge, with the hope that they will prove valuable to the academic community.
## 10. Conclusion
In this paper, we have presented a method for deriving double finite sum and product formulae involving the Hurwitz-Lerch zeta function, along with some interesting special cases, using both contour integration and well-known algebraic techniques. We plan to apply these methods to derive other multiple sum and product formulae involving other special functions in future work. |
2309.13151 | A Survey of Brain Computer Interface Using Non-Invasive Methods | Research on Brain-Computer Interface (BCI) began in the 1970s and has
increased in volume and diversified significantly since then. Today BCI is
widely used for applications like assistive devices for physically challenged
users, mental state monitoring, input devices for hands-free applications,
marketing, education, security, games and entertainment. This article explores
the advantages and disadvantages of invasive and non-invasive BCI technologies
and focuses on use cases of several non-invasive technologies, namely
electroencephalogram (EEG), functional Magnetic Resonance Imaging (fMRI), Near
Infrared Spectroscopy (NIRs) and hybrid systems. | Ritam Ghosh | 2023-09-22T19:27:03Z | http://arxiv.org/abs/2309.13151v1 | # A Survey of Brain Computer Interface Using Non-Invasive Methods
###### Abstract
**Research on Brain-Computer Interface (BCI) began in the 1970's and has increased in volume and diversified significantly since then. Today BCI is widely used for applications like assistive devices for physically challenged users, mental state monitoring, input devices for hands-free applications, marketing, education, security, games and entertainment. This article explores the advantages and disadvantages of invasive and non-invasive BCI technologies and focuses on use cases of several non-invasive technologies, namely electroencephalogram (EEG), functional Magnetic Resonance Imaging (fMRI), Near Infrared Spectroscopy (NIRs) and hybrid systems.**
**Keywords: brain computer interface, brain computer interaction, MRI, EEG, NIRs**
## 1 Introduction
The brain is the largest and the most complex of all human organs. It consists of billions of nerve cells called neurons that send information to other nerve cells, muscles or gland cells. The brain has unmatched computational capacity and is capable of parallel processing information streams pouring in from multiple sense organs simultaneously and generating appropriate responses. It is capable of learning and memory, consciousness and emotions. Its extremely complex structure and exceptional computational capabilities have intrigued researchers from the early ages. Various paradigms like neuroscience, cognitive sciences, machine learning and artificial intelligence, brain computer interface etc. have been developed to understand the structure and functioning of the brain in more detail.
A brain-computer interface is defined as a communication system that does not depend on the brain's normal output pathways of peripheral nerves and muscles [1]. The main inspiration behind the development of BCI was to provide alternate means of control to people suffering from neuromuscular disabilities. This led to the design of several assistive devices that facilitate the restoration of lost motor abilities of physically challenged individuals [2]. Mobile robots [3, 4] and BCI-based prosthetic limbs, also called neuro-prosthetic devices [5, 6], are being used to help people regain normal functionality. Research has been conducted to develop methods to help people recover from stroke, and various rehabilitation methods have been suggested. BCI-based virtual reality setups have been used to gather data from stroke patients which was later used to control robotic prosthetics [7]. BCI has also been used to restore communication for people who are completely or partially paralyzed, using different techniques ranging from yes/no binary capabilities [8] to virtual keyboards and spellers [9]. BCIs have also been used to evaluate the mental state of a subject for monitoring performance capability [10]. Other applications include workload monitoring, browsing and other media applications [11], and even as a sole or additional control input to games. The key to all these applications is extracting reliable and meaningful information from the activities of the brain and devising methods and algorithms to extract features from it. Several methods and devices have been developed over time to 'read' the activities of the brain.
Neurons communicate with each other using electrical signals through physical connections or by exchanging chemicals called neurotransmitters. While communicating, the neurons exhibit an increase in the consumption of oxygen and glucose and hence cause an increase in blood flow to the active regions of the brain. Using various brain imaging technologies, it is possible to observe the electrical, chemical, or blood flow changes as the brain processes information or responds to various stimuli. The multichannel measurements from the instruments are then used to create a map of the activity patterns of the brain and from that we can infer the specific cognitive processes occurring in the brain at any given time. The different technologies used to accomplish this are discussed in the following section.
## 2 Types of Brain Imaging Technology: Advantages and Disadvantages
Different techniques have been developed to measure the activity levels of the different regions of the brain. The methods can be broadly classified into three categories: non-invasive, semi-invasive and invasive. In non-invasive BCI, the sensors are placed on the scalp over the skin. In semi-invasive BCI, the sensors are placed under the skin on the surface of the brain and in invasive BCI, the micro-electrodes are placed directly inside the cortex to measure the activity of single neurons [12].
Examples of non-invasive brain-computer interface methods include electroencephalography (EEG), where electrodes measure electric potentials produced by different regions of the brain; magnetoencephalography (MEG), where probes measure the magnetic fields generated by the currents in the brain; functional magnetic resonance imaging (fMRI), where sensors detect the change in magnetic characteristics of haemoglobin when it is oxygenated or deoxygenated; and functional near infrared spectroscopy (fNIRS), which measures the hemodynamic responses associated with neural behavior through the difference in absorption characteristics of oxygenated and deoxygenated haemoglobin. Non-invasive techniques suffer from low spatial and/or temporal resolution and low signal to noise ratio. fMRI and MEG require bulky and expensive equipment and hence experiments are confined to very limited lab setups. On the other hand, unlike the semi-invasive and invasive techniques, these methods do not require any surgical implants and hence the chance of medical complications does not exist. They are relatively cheaper, more convenient to use and safer for the subjects than the other two categories; hence most BCI research uses non-invasive techniques and benefits from wide availability of data and literature [13].
Semi-invasive techniques include electrocorticography (ECoG), which uses electrodes placed on the exposed surface of the brain to measure electrical activity from the cerebral cortex. Though ECoG and EEG record similar types of signals, ECoG exhibits higher spatial resolution and signal fidelity and is relatively resistant to noise. It has lower risk of medical complications than invasive methods and produces signals with higher amplitude than non-invasive methods. But it still requires a craniotomy to implant the electrodes and hence is performed only when a surgery is required for medical reasons. This makes it inconvenient for general use despite having superior signal recording ability [13].
Invasive techniques involve micro electrodes implanted directly into the brain during a neurosurgery. These can be used to collect data from a single area of the brain or multiples areas. The quality of the signal is the highest with the maximum resolution and signal to noise ratio. But with time, the body reacts to the presence of the electrodes by building scar tissues around them which over time reduce the resolution and SNR of the system. The surgery can be risky and hence is performed mainly on paralyzed subjects and often on blind subjects. In fact, one of the most significant applications of the invasive BCI technique was the restoration of partial eyesight to patients with acquired blindness developed by Dr. William Dobelle [14].
In the following sections, a few non-invasive methods of brain computer interface are described in detail.
## 3 Electroencephalography (EEG)
### Data Acquisition Mechanism
It is the most commonly used and the oldest non-invasive technique for brain imaging. It operates by using metal electrodes placed on the scalp to measure the electrical activity of the neurons. A conductive electrolytic gel is applied on the electrodes to increase the conductivity between the electrodes and the skin. The position where the electrodes are placed is governed by some protocols to ensure consistency and repeatability. The most common system for placing the electrodes is the international 10-20 system which provides twenty-one standard positions. With the advent of higher resolution EEG devices with more electrodes, the international 10-10 system with eighty-one standard electrode placement positions was introduced and later the even more tightly packed 10-5 system with more than three hundred electrode placement positions was introduced.
Brain signals recorded using an EEG device are classified based on their frequency, called brain rhythmic activity or EEG rhythms [15]. The main frequencies of EEG signals in humans are:
1. Delta (<3 Hz): These waves have the highest amplitude and lowest frequency. It is the dominant rhythm in infants up to one year of age and in stages 3 and 4 of sleep. It may occur focally with subcortical lesions and in general distribution with diffuse lesions, metabolic encephalopathy, hydrocephalus or deep midline lesions. It is usually most prominent frontally in adults (e.g. FIRDA - Frontal Intermittent Rhythmic Delta) and posteriorly in children (e.g. OIRDA - Occipital Intermittent Rhythmic Delta).
2. Theta (3.5 Hz - 7.5 Hz): It is classified as the slow activity of the brain. It is typical in children up to thirteen years of age and in sleep but abnormal in awake adults. It can be seen as a manifestation of focal subcortical lesions. It can also be seen in generalized distribution in diffuse disorders such as metabolic encephalopathy or some instances of hydrocephalus.
3. Alpha (7.5 Hz - 13 Hz): It can be best observed in the posterior part of either side of the brain and is greater in amplitude on the dominant side. It is the dominant rhythm in normal relaxed humans above thirteen years of age but disappears if the subject is alerted or thinking.
4. Beta (> 14 Hz): It is known as the fast activity of the brain. It is usually seen on both sides in symmetrical distribution and is most evident frontally. It is accentuated by sedative-hypnotic drugs especially the benzodiazepines and the barbiturates. It may be absent or reduced in areas of cortical damage. It is generally regarded as a normal rhythm. It is the dominant rhythm in patients who are alert or anxious or have their eyes open. [16]
### Data Processing
Once the raw data is collected, it is processed through a series of steps before being used for BCI systems. These steps involve pre-processing, feature selection and classification. One of the most common pre-processing methods is Independent Component Analysis (ICA) where the mixed signal can be decomposed into its statistically independent components. The raw signals are first centered i.e. the mean is subtracted from the raw data so that the resultant signal is an oscillatory signal centered around zero. Then this data is ready for feature extraction.
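A minimal sketch of this pre-processing step is given below, assuming a recording stored as a samples-by-channels array and using scikit-learn's FastICA; the array shapes, sampling rate and the choice of which component to discard are illustrative assumptions.

```python
# Centre the multi-channel EEG and decompose it into independent components.
# Shapes, sampling rate and the discarded component are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
eeg = rng.standard_normal((2560, 21))     # (n_samples, n_channels): 10 s at 256 Hz, 21 electrodes

centered = eeg - eeg.mean(axis=0)         # remove the per-channel mean

ica = FastICA(n_components=21, max_iter=500, random_state=0)
sources = ica.fit_transform(centered)     # independent components, (n_samples, 21)

sources[:, 0] = 0                         # e.g. drop a component judged to be an artifact
cleaned = ica.inverse_transform(sources)  # back to channel space without that component
```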
The temporal and frequency information in EEG data is characterized using hybrid features [17]. The common features used for EEG data are the coefficients of an auto-regressive model fit to the data, features related to the power spectral density, and third-order statistics like the sum of logarithmic amplitudes. A method to extract features using the first layers of a multilayer perceptron has also been suggested [18]: once the whole network has been trained with data from each electrode as the input to a node in the input layer, the first hidden layer is examined and the nodes whose sum of weights is close to zero are assumed to have low discriminative power. These features are then used for classification using methods like Linear Discriminant Analysis (LDA), Support Vector Machines (SVM), etc.
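The following sketch illustrates one plausible version of such a pipeline: band powers of the EEG rhythms defined above are computed from the Welch power spectral density and fed to an LDA classifier. The sampling rate, band edges and the synthetic data are assumptions for illustration, not parameters taken from the cited studies.

```python
# Band-power features from the Welch PSD, classified with LDA. Sampling rate,
# band edges and the synthetic trials below are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256
BANDS = {"delta": (0.5, 3.5), "theta": (3.5, 7.5), "alpha": (7.5, 13.0), "beta": (14.0, 30.0)}

def band_power_features(trial):
    """trial: 1-D array of samples from one channel of one trial."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS * 2)
    powers = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        powers.append(np.trapz(psd[mask], freqs[mask]))   # integrated band power
    return np.log(np.asarray(powers))                      # log band power is a common scaling

rng = np.random.default_rng(1)
trials = rng.standard_normal((40, FS * 4))    # 40 four-second trials (synthetic)
labels = rng.integers(0, 2, size=40)          # two imagined-movement classes

X = np.vstack([band_power_features(t) for t in trials])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(clf.score(X, labels))
```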
### BCI Systems and Applications
One of the most significant signals used in EEG-based BCI is the P300 signal [19]. It is an event-related potential generated as a result of decision making. In EEG it shows up as a positive voltage spike that appears when a person encounters a target object among a group of non-target objects; the spike appears roughly 300 ms after the stimulus is presented, from which it gets its name. The P300 is widely used in BCI for assistive devices like spellers. A matrix of letters is shown to the user and each row is highlighted one by one. Once the user selects a row, each column is highlighted one by one, and thus the user can select the desired letter. A combination of ML techniques is also used for word prediction to streamline the process.
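A toy sketch of the selection logic is shown below: epochs locked to each row flash are averaged, and the row whose average shows the strongest positive deflection in a window around 300 ms is selected. The channel, window boundaries and sampling rate are assumptions.

```python
# Toy P300 selection: average the stimulus-locked epochs for each flashed row
# and pick the row with the largest mean amplitude in the 250-450 ms window.
import numpy as np

FS = 256                                         # sampling rate (assumed)
WIN = slice(int(0.25 * FS), int(0.45 * FS))      # P300 window, ~250-450 ms post-stimulus

def select_item(epochs, flashed_item):
    """epochs: (n_flashes, n_samples) single-channel epochs; flashed_item: row index per flash."""
    items = np.unique(flashed_item)
    scores = [epochs[flashed_item == item].mean(axis=0)[WIN].mean() for item in items]
    return items[int(np.argmax(scores))]

# Synthetic demonstration: row 2 carries an added positivity in the P300 window.
rng = np.random.default_rng(2)
epochs = rng.standard_normal((60, FS))
rows = np.repeat(np.arange(6), 10)
epochs[rows == 2, WIN] += 1.0
print(select_item(epochs, rows))                 # -> 2
```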
One of the major challenges in BCI research is subject training. To solve this problem, an online single trial EEG based BCI was proposed to control hand holding and hand grasping through an interactive virtual reality environment [20]. The goal was to test if untrained subjects could achieve satisfactory results online without offline classifier training. It was found that online training with feedback performed at par with the traditional method of data collection without feedback and subsequent offline classifier training. Also, since brain signals vary significantly from person to person and also from day to day, the use of an adaptive probabilistic neural network was introduced that performed well in classification of time varying EEG signals. The experimental evaluation showed an accuracy of 75.4 % after only three minutes of online training and a peak accuracy of 84% after significant online training.
The high temporal resolution of EEG facilitates the development of real-time controllers for mobile robots. In [21] EEG-based BCI was used to control a robotic wheelchair capable of going left or right. In the experimental space a grid was drawn with a left goal and a right goal. The subject was required to navigate to the required goal from the starting point. The subjects were asked to think left and think right a hundred times and the data was collected. Fifty of the trials were used for pattern generation and the remaining were used for pattern validation. To remove the physiological artefacts a bandpass filter was used with a passband of 0.53 Hz - 30 Hz. An iterative recursive training algorithm was used to train the classifier. The experiment was set up such that, to accomplish the target, the number of correct decisions required was three and the maximum number of wrong decisions allowed was one. If the decisions taken by the BCI were random, the chance of still accomplishing the target was 31.2% (the probability of reaching three correct decisions before a second error with fair-coin decisions, \(1/2^{3}+3/2^{4}=5/16\approx 31.2\%\)). On performing the experiment ten times each for the left goal and right goal on each of six subjects, an accuracy of 80% was obtained. This experiment showed the validity of a wheelchair with EEG-based BCI as the sole controller.
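A plausible implementation of the artefact filter mentioned above is a zero-phase Butterworth band-pass with a 0.53-30 Hz passband, as sketched below; the filter order and sampling rate are assumptions.

```python
# Zero-phase Butterworth band-pass (0.53-30 Hz) of the kind described above.
# Sampling rate and filter order are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # sampling rate in Hz (assumed)

def bandpass(signal, low=0.53, high=30.0, fs=FS, order=4):
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)          # forward-backward filtering avoids phase shift

raw = np.random.default_rng(3).standard_normal(FS * 10)   # 10 s of synthetic EEG
clean = bandpass(raw)
```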
Another notable application of EEG based BCI is its use as a game controller for games designed to help subjects improve concentration or relaxation. In [22] two games "Brain Chi" and "Dancing Robot" have been developed to help patients train for concentration and relaxation. In "Brain Chi" the size of the protective shield of the character depends on the concentration of the user i.e. the user has to protect the character with their 'brain power'. In "Dancing Robot" the speed of dancing depends on the concentration of the user. When the user concentrates positive points are added to their score and when the concentration level is detected as low, points are deducted. The same game can be converted as a relaxation game by just reversing the logic. The notable feature of this work is the use of fractal dimension feature to detect concentration level instead of the power of EEG signals. Higher fractal dimensions correspond to more complex and irregular signals and lower fractal dimensions correspond to more regular signals. Higuchi and Box-counting algorithms were used for the calculation of the fractal dimensions.
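A compact implementation of the Higuchi algorithm, which can serve as such a concentration feature, might look as follows; the choice of \(k_{max}\) is an assumption.

```python
# Higuchi fractal dimension of a 1-D signal: higher values indicate a more
# irregular signal, lower values a more regular one. kmax is an assumed tuning parameter.
import numpy as np

def higuchi_fd(x, kmax=10):
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_inv_k, log_L = [], []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)                 # sub-sampled series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)    # Higuchi's normalisation
            Lk.append(length * norm / k)
        log_inv_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(Lk)))
    slope, _ = np.polyfit(log_inv_k, log_L, 1)       # FD is the slope of log L vs log 1/k
    return slope

print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 1024))))          # smooth signal, ~1
print(higuchi_fd(np.random.default_rng(4).standard_normal(1024)))   # white noise, ~2
```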
## 4 Functional Magnetic Resonance Imaging (fMRI)
### Data Acquisition Mechanism
Another commonly used non-invasive technique for BCI applications is functional magnetic resonance imaging (fMRI). The device is equipped with a very powerful electromagnet with a field strength of typically 3 teslas, i.e. about 50,000 times the Earth's magnetic field. The magnetic field generated by this magnet affects the orientation of the atoms in the subject's brain. Normally the atoms are randomly oriented and hence the net magnetic field of the atoms cancels out. The magnetic field from the MRI machine orients the atoms in a particular way such that their own fields add up to be significant enough to be measured. During any neural activity, there is a demand for oxygen and the response is an increase in blood flow towards the region. Oxygen is carried by haemoglobin in blood, and haemoglobin is diamagnetic when oxygenated but paramagnetic when deoxygenated. This difference in magnetic property leads to a difference in the magnetic resonance signal, and the amount of difference depends on the degree of oxygenation. fMRI records this difference and interprets brain activity. Unlike EEG, which measures rapid changes in the cortical activity of the brain, fMRI indirectly measures neural activity by measuring the oxygen flow in blood, i.e. the Blood Oxygen Level Dependent (BOLD) signal. fMRI uses the Echo Planar Imaging (EPI) technique to acquire brain activity slice by slice and creates an activity map. Real-time fMRI-BCIs are important because of their magnetic field strength, good spatial and moderate temporal resolution (in the range of mm and seconds respectively), better echo time, and good magnetic field homogeneity. However, unlike EEG, the BOLD signal reflects vascular effects and only indirectly the neuronal signal, so the hemodynamic response function introduces a physiological delay of 3 to 6 s before signal changes can be observed. Despite the fact that BOLD is an indirect measure, there is growing evidence for a strong correlation between the BOLD signal and electrical brain activity. Studies have characterized the relationship between localized increases in neuronal activity and the corresponding increase in BOLD, making it possible to interpret positive functional responses in terms of neural changes [23]. fMRI creates a 3D volume of the subject's brain consisting of voxels with one intensity value per voxel per scan. This 3D structure is compressed and flattened into a single line representation, and several such volumes from a single session are joined together to create a 4D data structure which represents the entire data for the session. This data is then used for further processing.
### Data Processing
The typical first step in pre-processing MRI data is slice timing correction [24]. Since the image is produced in slices, various parts of the image are taken at different time points. This is difficult to account for in later analysis and hence a timing correction is applied to bring all the slices of a single image to the same time point reference. This is done by interpolating the voxel values for the time points where that particular voxel was not scanned (this assumes that the intensity of the voxels changes smoothly in a continuous curve).
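A simplified version of this correction is sketched below: each slice's voxel time series is linearly interpolated onto a common set of reference times. The repetition time, slice order and volume shape are assumptions.

```python
# Simplified slice-timing correction: resample each slice's voxel time series
# onto a common reference time grid. TR, slice order and shapes are assumptions.
import numpy as np
from scipy.interpolate import interp1d

TR = 2.0                                                             # repetition time (s)
data = np.random.default_rng(5).standard_normal((16, 16, 10, 120))   # x, y, slice, time
n_slices, n_vols = data.shape[2], data.shape[3]
slice_times = np.arange(n_slices) * TR / n_slices                    # ascending acquisition order
vol_times = np.arange(n_vols) * TR                                   # reference time points

corrected = np.empty_like(data)
for s in range(n_slices):
    acquired_at = vol_times + slice_times[s]                         # when this slice was scanned
    f = interp1d(acquired_at, data[:, :, s, :], axis=-1, kind="linear",
                 bounds_error=False, fill_value="extrapolate")
    corrected[:, :, s, :] = f(vol_times)                             # values at the reference times
```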
Another very common pre-processing step is head motion correction [24]. When the head moves relative to the scanner, the 3D coordinates of the scanner remain the same but a different neuron is now present at each coordinate. This results in a voxel representing a neuron that was represented by some other voxel in the past. These distortions are removed by a rigid-body transform, where the entire volume is translated and rotated by the same amount to make the time series of each voxel smooth. The difference between the intensities of each voxel at two consecutive time points is taken as a cost function, and the rotation and translation are found iteratively with the goal of reducing the cost function.
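The sketch below illustrates the idea in a deliberately reduced form: a translation-only realignment of a single 2-D slice, found by minimising the summed squared intensity difference with scipy. Real pipelines estimate six rigid-body parameters (three rotations, three translations) over full 3-D volumes.

```python
# Reduced illustration of motion correction: find the 2-D shift that minimises
# the summed squared intensity difference between a slice and a reference.
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift
from scipy.optimize import minimize

rng = np.random.default_rng(6)
reference = gaussian_filter(rng.standard_normal((64, 64)), 3)   # smooth synthetic slice
moved = nd_shift(reference, (2.5, -1.0))                         # simulated head motion

def cost(params, fixed, moving):
    realigned = nd_shift(moving, params)          # candidate correcting shift
    return np.sum((fixed - realigned) ** 2)       # intensity-difference cost function

result = minimize(cost, x0=[0.0, 0.0], args=(reference, moved), method="Nelder-Mead")
print(result.x)   # close to (-2.5, 1.0), i.e. the shift that undoes the simulated motion
```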
The non-uniformities of the field produced by the scanner cause some distortion in the resultant images. To account for this distortion, a field map is created by acquiring multiple images with multiple echo times. The final image is then rectified using the field map [24].
Another common filtering used in FMRI data is temporal filtering where unwanted frequencies which are of no interest are eliminated. The time series data is converted to the frequency domain by observing the repeating values of the intensities of the voxels (Fourier transform), unwanted frequencies are removed, and the data is converted back to the time series using the inverse Fourier transform.
A common noise removal method is spatial filtering, which means averaging the intensities of neighboring voxels to create a smooth image. The averaging is often done by convolution with a Gaussian kernel. This improves the signal to noise ratio but at the cost of a slight reduction in spatial resolution.
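A one-line version of this smoothing, applied only along the spatial axes of the 4-D data, is shown below; the kernel width is an assumption.

```python
# Spatial smoothing: Gaussian convolution along the three spatial axes only
# (sigma is given in voxels and is an arbitrary illustrative choice).
import numpy as np
from scipy.ndimage import gaussian_filter

data = np.random.default_rng(8).standard_normal((64, 64, 30, 100))  # x, y, z, time
smoothed = gaussian_filter(data, sigma=(1.5, 1.5, 1.5, 0))           # no temporal blurring
```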
After filtering and removing any distortions, the data is ready for statistical analysis. A very common approach is called the Generalized Linear Model (GLM). But this model does not take into account the contribution of relationships between multiple voxels. To rectify this, a newer statistical model called the Multi Voxel Pattern Analysis (MVPA) is used. The data is then used for training neural networks and other classifiers according to the application.
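A minimal GLM in this spirit is sketched below: a boxcar stimulus regressor is convolved with a canonical double-gamma HRF and beta weights are fitted voxel-wise by least squares. The HRF parameters and design follow common conventions and are assumptions here, not details taken from the survey.

```python
# Minimal voxel-wise GLM: convolve a stimulus boxcar with a double-gamma HRF,
# then fit task and intercept betas by least squares. All parameters are assumed.
import numpy as np
from scipy.stats import gamma as gamma_dist

TR, n_vols = 2.0, 120
t = np.arange(0, 32, TR)
hrf = gamma_dist.pdf(t, 6) - gamma_dist.pdf(t, 16) / 6.0   # canonical-style HRF shape
hrf /= hrf.sum()

boxcar = np.zeros(n_vols)
boxcar[::20] = 1                                     # a brief stimulus every 20 volumes
task_regressor = np.convolve(boxcar, hrf)[:n_vols]

X = np.column_stack([task_regressor, np.ones(n_vols)])        # design matrix
Y = np.random.default_rng(9).standard_normal((n_vols, 5000))  # 5000 voxel time series
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)                 # betas has shape (2, 5000)
task_map = betas[0]                                           # task effect per voxel
```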
### BCI Systems and Applications
One of the applications of fMRI based BCI is in emotional processing due to its high spatial resolution. In [25], fMRI based BCI was used to study the voluntary control of the anterior cingulate cortex (ACC) on emotional processing. There are two major subdivisions of the ACC: the dorsal ACC, also called the "cognitive division" (ACcd), and the rostral-ventral ACC, called the "affective division" (ACaD). Due to its involvement in different functional networks, physiological self-regulation was applied to study cognitive and emotional parameters, for example, emotional valence and arousal, dependent on the differential activation of the two subdivisions. In this study, two continuously updated curves were presented to the subject depicting BOLD activity in ACcd and ACaD. During blocks of 60-second duration, subjects were instructed to move both curves upwards (alternating 60 seconds rest and 60 seconds up-regulation). The subject was instructed to use his own strategy for voluntary BOLD regulation. The subject reported the use of mental imagery of winter landscapes, engaging in snowboarding and social interaction as his strategy to increase the BOLD signal. During baseline blocks, he attended to the signal time-course without performing any specific task. The subject rated the valence of his affective state significantly more positive for activation blocks as compared to baseline blocks.
Study of neuroplasticity and functional reorganization for recovery after neurological diseases such as stroke is another area of research for BCI's. Real-time fMRI feedback has been successfully used to successively reactivate affected regions of the brain. In [26], the researchers trained four healthy volunteers to control the BOLD response of the supplementary motor area (SMA). Offline analysis showed significant activation of the SMA with training. Further, with training there was a distinct reduction in activation in the surrounding areas, indicating that volitional control training focuses activity in the region-of-interest.
Another important domain in which fMRI based BCI is extremely effective is in treating chronic pain. Chronic pain can be substantially affected by cognitive and emotional processes. Subregions within the rostral ACC in association with other brain regions are implicated to be involved in the perception of pain. Hence, it is possible that by altering the activity in the rostral ACC, pain perception might be accordingly varied. In [27], researchers reported a substantial decrease of symptoms in chronic pain patients by training patients to self-regulate ACC. A further report from the same group [28], involving 16 healthy volunteers and 12 chronic pain patients, indicates the potential application of real-time fMRI for treating chronic pain. Subjects were able to learn to control activation in the rostral anterior cingulate cortex (rACC), a region involved in pain perception and regulation. The authors reported
that if subjects deliberately induced increases or decreases of rACC fMRI activation, there was a corresponding change in the perception of pain caused by an applied noxious thermal stimulus. Control experiments showed that this effect was not observed after training without real-time fMRI feedback, or using feedback from a different region, or sham feedback derived from a different subject. Chronic pain patients were also trained to control activation in rACC and reported decreases in the ongoing level of chronic pain after training.
## 5 Near Infrared Spectroscopy (fNIRS)
### Data Acquisition Mechanism
The use of Near Infrared Spectroscopy (NIRS) in BCI applications is fairly recent. NIRS also measures the changes in blood flow, as fMRI does, but using a different technique: infrared light (650 nm - 1000 nm) instead of a magnetic field to measure the concentration changes of oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR). NIRS measures the blood flow changes in the local capillary network caused by neuron firings. Since hemoglobin is an oxygen carrier, the changes of HbO and HbR concentration levels after a neuronal activation can be related to the relevant neuronal firings. fNIRS uses near-infrared light emitter-detector pairs operating with two or more wavelengths. The light emitted into the scalp diffuses through the brain tissues, resulting in multiple scattering of photons. Some of these photons exit the head after being partially absorbed and partially reflected by the cortical region of the brain, wherein the chromophores (i.e., HbO and HbR) are changing in time. These exited photons are then detected using strategically positioned detectors. Since HbO and HbR have different absorption coefficients for different wavelengths of near infrared light, the relationship between the exiting-photon intensity and the incident-photon intensity can be used to calculate the changes of the concentrations of HbO and HbR along the path of the photons by applying the modified Beer-Lambert law [29]. The distance between the emitter and detector determines the depth at which the measurement is taken; an increase in emitter-detector distance corresponds to an increase in imaging depth. To measure hemodynamic response signals from the cortical areas, an emitter-detector separation of around 3 cm was suggested in [30]. A separation of less than 1 cm might contain only skin-layer contribution, whereas that of more than 5 cm might result in weak and therefore unusable signals. The number of 'channels' of measurement depends on the number of both emitters and detectors as well as the configuration of their placement relative to each other. A 'channel' of measurement is located at the mid-point of the straight line joining an emitter and detector, and the depth of the channel depends on the distance between the emitter and the detector; e.g. a configuration of ten detectors arranged in an array of two rows and five columns and four emitters each placed at the center of a 'square' formed by the detectors gives rise to sixteen channels. fNIRS typically has better spatial resolution than EEG and better temporal resolution than fMRI, which makes it a good choice for monitoring activities in the motor cortex and prefrontal cortex regions.
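The conversion itself amounts to inverting a small linear system per sample, as sketched below; the extinction coefficients, differential pathlength factors and emitter-detector separation used are placeholder assumptions, not values from the cited works.

```python
# Modified Beer-Lambert law: map optical-density changes at two wavelengths to
# concentration changes of HbO and HbR. The extinction coefficients, pathlength
# factors and separation below are placeholder assumptions, not literature values.
import numpy as np

EXT = np.array([[0.6, 1.5],        # rows: wavelengths (e.g. 760 nm, 850 nm)
                [1.2, 0.8]])       # columns: [HbO, HbR] extinction coefficients
SEP = 3.0                          # emitter-detector separation in cm
DPF = np.array([6.0, 6.0])         # differential pathlength factor per wavelength

def mbll(intensity, baseline):
    """intensity, baseline: (2, n_samples) arrays, one row per wavelength."""
    delta_od = -np.log10(intensity / baseline)       # optical-density change
    A = EXT * (SEP * DPF)[:, None]                    # effective optical path per wavelength
    delta_conc = np.linalg.solve(A, delta_od)         # rows: [delta HbO, delta HbR]
    return delta_conc[0], delta_conc[1]

rng = np.random.default_rng(10)
baseline = np.full((2, 100), 1.0)
measured = baseline * (1 + 0.01 * rng.standard_normal((2, 100)))
d_hbo, d_hbr = mbll(measured, baseline)
```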
### Data Processing
The acquired fNIRS signals contain various types of noise, which can be categorized into instrumental noise, experimental error, and physiological noise. Since the instrumental noise and experimental error are not related to the brain activities, it is better to remove them prior to converting the raw optical density signals to the concentration changes of HbO and HbR through the modified Beer-Lambert law.
Instrumental noise can be caused due to imperfections in the hardware design and assembly, limitations of the analog to digital convertors or due to external factors like other electronic gadgets in the vicinity. This type of noise is usually a constant signal of high frequency and can be removed using a low pass filter with a 3-5 Hz cutoff frequency. Experimental error is typically caused by head motion which causes the optodes to move away from their designated positions. This causes a sudden sharp change in the light intensity resulting in a spike in the resultant waveform. Several methods of motion artifact filtering are used like the Wiener filtering-based method [31], eigenvector-based spatial filtering [32] etc. Physiological noise include heartbeat (1 \(\sim\) 1.5 Hz), respiratory noise (0.2 \(\sim\) 0.5 Hz) and Mayer Waves (\(\sim\)0.1 Hz), which are related to changes in blood pressure. These noises are removed by using a bandpass filter with typically 0.1 Hz - 0.4 Hz cut-off frequency, though filters with other cut-off frequencies have also been employed. Other methods like adaptive filtering, principal component analysis (PCA) and Independent Component Analysis (ICA) have also been used to remove them.
After filtering, the different brain activities are classified based on several features. Some features can be extracted
from the raw intensity data, while most are extracted from the hemodynamic signals (HbO, HbR, total haemoglobin HbT = HbO + HbR and cerebral oxygen exchange COE = HbO - HbR) which are computed from the raw data using the modified Beer-Lambert law. Typical features used for NIRS-based BCI are peak amplitude, mean, variance, slope, skew, kurtosis and zero-crossings. Some studies have also proposed the use of features like filter coefficients from Kalman filtering, recursive least squares estimation and wavelet transforms for classification purposes. Some of the common classification techniques used are Linear Discriminant Analysis (LDA), Support Vector Machines (SVM) and, more recently, Artificial Neural Networks (ANN).
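The sketch below computes the window-based feature set listed above for HbO traces and trains an LDA classifier; the sampling rate, window length and synthetic data are assumptions.

```python
# Window-based fNIRS features (peak, mean, variance, slope, skew, kurtosis,
# zero-crossings) classified with LDA. Sampling rate and data are assumptions.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 10  # fNIRS sampling rate in Hz (assumed)

def window_features(hbo):
    t = np.arange(len(hbo)) / FS
    slope = np.polyfit(t, hbo, 1)[0]
    zero_crossings = np.count_nonzero(np.diff(np.signbit(hbo - hbo.mean())))
    return np.array([hbo.max(), hbo.mean(), hbo.var(), slope,
                     skew(hbo), kurtosis(hbo), zero_crossings])

rng = np.random.default_rng(11)
windows = rng.standard_normal((60, FS * 10))     # 60 ten-second HbO windows (synthetic)
labels = rng.integers(0, 2, size=60)             # e.g. task vs. rest

X = np.vstack([window_features(w) for w in windows])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(clf.score(X, labels))
```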
### BCI Systems and Applications
BCI systems have been shown to be effective in restoring some of the lost motor and/or cognitive functions in individuals with stroke and spinal cord injury. The underlying idea of doing so is the ability of BCI feedback to induce self-regulation of brain activity. EEG has the limitations of imprecise localization and inaccessibility of subcortical areas and fMRI, though more effective, typically involves bulky and expensive machines making them inconvenient. This makes fNIRS, with its good spatial resolution and decent temporal resolution, a very attractive option, in comparison with fMRI, in accessing subcortical brain signals. It is low cost, easy to use, and most of all it is portable. It can be used even in an ambulance. Moreover, NIRS is less sensitive to motion artifacts because it can be fashioned into a cap or headband which can be securely attached to the subjects' head. This makes the potential of use of fNIRS in neurofeedback studies very high. In [33], the researchers demonstrated the possibility of using fNIRS-based neurofeedback to allow the users to willfully regulate their hemodynamic responses. In [34], studies revealed that fNIRS-based neurofeedback can be used for long-term training as well, and such repetitive neurofeedback can induce specific and focused brain activation, while in contrast, sham feedback (synthesized or from another subject) has led to diffuse brain activation patterns over broader brain areas.
Another important application of BCI is to serve as a means of communication for people with motor disorders such as Amyotrophic Lateral Sclerosis (ALS), spinal cord injury or Locked-In syndrome. In [35] researchers developed an fNIRS-BCI system for binary communication based on activations from the prefrontal area. The subjects were required to perform a specific task such as mental arithmetic or music imagery to increase the cognitive load and, thereby, respond "yes", or to remain relaxed and, thus, respond "no" to a given question. The average accuracies obtained with online classification were approximately 82%. In [36], researchers proposed an fNIRS-BCI-based online word speller. Their system involves using right-hand and left-hand motor imagery to move a cursor on a two-dimensional plane to select letters.
Another important application of fNIRS-BCI is the restoration of movement capability for people with motor disabilities. The control commands generated by a BCI system can be used to control a prosthetic limb or a wheelchair. It is desirable to have a portable system for these applications so that the user can move freely. These applications, for safety purposes, cannot afford high error rates, and must be fast enough to provide real-time control. The portability and resilience to motion artefacts and moderate temporal resolution make fNIRS a good choice for this type of application. Several fNIRS-BCI studies [37] have tried to improve classification accuracies and information transfer rates which will make these applications more feasible.
## 6 Hybrid BCI Systems
Each BCI type has its own shortcoming and disadvantages. To utilize the advantages of different types of BCIs, different approaches are combined, called hybrid BCIs [38]. In a hybrid BCI, two or more different types of BCI systems are combined. It is also possible to combine one BCI system with another system which is not BCI-based, for example, combining a BCI system with an electromyogram (EMG)-based system. Different techniques and their combinations are utilized based on the application that the hybrid BCI is going to be used for. The main purpose of combining different systems to form a "hybrid" BCI is to improve accuracy, reduce errors, and overcome disadvantages of each conventional BCI system. In general, in a hybrid BCI, two systems can be combined sequentially or simultaneously [39]. In a simultaneous hybrid BCI, both systems are processed in parallel. Input signals used in simultaneous hybrid BCIs can be two different brain signals or one brain signal and another input. In sequential hybrid BCIs, the output of one system is used as the input of the other system. This approach is mostly used when the first system task is to indicate that the user is trying to communicate to the system, sort of a switch for the second system.
A combination of two types of brain signals can be collected using the same device to increase accuracy, reliability and compatibility for a wide range of users. In [40], a hybrid system utilizing Steady State Visually Evoked Potential (SSVEP) and Event Related Desynchronization (ERD) was proposed using EEG data. The proposed hybrid was evaluated during a task that compared ERD based BCI, SSVEP based BCI, and ERD-SSVEP based hybrid BCI. During the ERD BCI task, two arrows appeared on the screen. When the left arrow appeared, subjects were instructed to imagine opening and closing their left hand, and for the right arrow, subjects imagined opening and closing the right hand. In the SSVEP task, subjects were instructed to gaze at either the left (8 Hz) or right (13 Hz) LED depending on which cue appeared. In the hybrid task, when the left arrow was shown, subjects imagined the left hand opening and closing while gazing at the left LED simultaneously. The task was similar for the right arrow. Results show average accuracies of 74.8% for ERD, 76.9% for SSVEP, and 81.0% for the hybrid.
An example of two types of brain signals being used sequentially can be found in [41] where the very reliable P-300 signal and SSVEP have been used. The P300 and SSVEP combination work well as the stimuli for evoking both patterns can be shown on one screen simultaneously. The P300 paradigm considered in this study is a 6x6 speller matrix based on the original P300 row/column paradigm. Only one frequency is allocated for the SSVEP paradigm. The background color was flashed with a frequency slightly less than 18 Hz. The background color change facilitates the SSVEP detection. During the classification, P300 and SSVEP signals were separated by a band pass filter. The SSVEP was utilized as a control state (CS) detection. When the user was gazing at the screen, the SSVEP was detected and it was assumed that the user intended to send a command. At that point the P300 based speller system was activated. For SSVEP detection, the mean power spectral density (PSD) in the narrow band near the desired frequency and the PSD in the wider range near the desired frequency were utilized in an objective function. These values were subtracted from each other and divided over the PSD value from the wide band and the function value was compared to a specified threshold.
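A sketch of such a PSD-ratio control-state detector is given below: the mean PSD in a narrow band around the stimulation frequency is compared with the mean PSD of a wider surrounding band and thresholded. The band widths and threshold are assumptions; the stimulation frequency (~18 Hz) follows the description above.

```python
# Sketch of the control-state (CS) detector: narrow-band vs wide-band PSD ratio
# around the SSVEP stimulation frequency. Band widths and threshold are assumptions.
import numpy as np
from scipy.signal import welch

FS, F_STIM = 256, 18.0          # sampling rate and stimulation frequency (assumed)

def control_state(eeg, threshold=1.0):
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
    narrow = np.abs(freqs - F_STIM) <= 0.25          # bin(s) at the stimulation frequency
    wide = np.abs(freqs - F_STIM) <= 2.0             # surrounding reference band
    score = (psd[narrow].mean() - psd[wide].mean()) / psd[wide].mean()
    return score > threshold                         # True: the user is attending the screen

rng = np.random.default_rng(12)
t = np.arange(FS * 8) / FS
attending = np.sin(2 * np.pi * F_STIM * t) + rng.standard_normal(t.size)
idle = rng.standard_normal(t.size)
print(control_state(attending), control_state(idle))  # expected: True False
```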
Another possible combination for a hybrid BCI is P300 and motor imagery (MI)-based BCI. The basic concept in this type of hybrid is based on the features of P300 and ERD/ERS in control applications. P300 is a reliable BCI type for selecting one item out of several and can be used for discrete control commands. On the other hand, due to the low degree of freedom offered by MI-based BCIs, this type of BCI is more efficient for continuous control commands. These two types of BCIs can be joined to present more complicated control commands in one task. In [42], a P300-Motor Imagery Hybrid BCI for controlling a wheelchair in a home environment is introduced. The wheelchair control commands were divided into three steps: destination selection, navigation and stopping. For destination selection, the user had to select the destination of the wheelchair motion by selecting one of the items from a list of destinations. To implement this control command, an accurate and reliable interface is needed, and false acceptance rates should be as low as possible. For this task, a P300 BCI presented on a screen was utilized. The experiments on healthy subjects showed a response time of about 20 seconds, a false acceptance rate of 2.5% and an error of less than 3%. The results showed that P300 was an appropriate option for the interface. For the navigation of the wheelchair, an autonomous motion control was introduced. The destination was selected, and the wheelchair started its motion toward the destination following virtual guiding paths. A proximity sensor was considered for stopping the wheelchair when facing obstacles. For the stop command, the interface needs to be fast, reliable and have a low false acceptance rate. Two approaches for a stopping command were presented. The first approach was the fast P300, in which there is only one item on the screen, "Stop", and the task is the detection of the user's intention. Experimental results showed a reduction in response time; however, the increase in false acceptances made this approach inapplicable. The second approach was to use a mu-beta BCI. The position of a cursor was used to present visual feedback for the mu-beta BCI system, and the control of the cursor was based on imagined arm movement. Results showed approximately the same response time as the fast P300 approach, but a false acceptance rate of zero was achieved. Since a low false acceptance rate and a fast response are the most important requirements for this type of BCI, the mu-beta BCI was considered the more reliable system for this application.
Another very effective hybrid BCI system can be composed by using two types of brain imaging devices simultaneously to improve reliability. In [43], EEG and NIRS measurements were utilized simultaneously for ERD-based BCIs. In this study, the experiment consisted of 2 blocks of motor execution and 2 blocks of motor imagery. For all blocks, both EEG and NIRS were measured simultaneously. The increase in the concentration of oxygenated hemoglobin (HbO) and the decrease in the concentration of deoxygenated hemoglobin (HbR) were measured using NIRS. The global peak cross-validation accuracy for each subject was considered for the evaluation of the hybrid BCI. The mean classification accuracies of HbO, HbR, and EEG separately for executed movement tasks were 71.1%, 73.3%, and 90.8%; for motor imagery tasks they were 71.7%, 65.0%, and 78.2%, respectively. The mean classification accuracies of EEG/HbO, EEG/HbR, and EEG/HbO/HbR for executed movement tasks were 92.6%, 93.2%, and 87.4%, and for motor imagery tasks were 83.2%, 80.6%, and 83.1%, respectively. It was shown that the combination of EEG and NIRS improved the classification accuracy in both MI and executed movement tasks.
## 7 Conclusions
In this article, different types of brain-controlled interfaces were reviewed, and their working principles, advantages, disadvantages and use-cases were highlighted. A brain-computer interface is defined as a communication system that does not depend on the brain's normal output pathways of peripheral nerves and muscles. When the workload of any region of the brain increases due to various types of activities, certain observable changes occur. These changes may be in the form of electrical pulses, chemical changes, changes in blood flow and changes in the oxygen level of the blood. Several devices have been developed to monitor these changes using a variety of physical principles. These devices are known as brain imaging devices. They can be invasive, semi-invasive or non-invasive. This article discusses the advantages and disadvantages of each type and elaborates on different types of non-invasive techniques to monitor the activities of the brain. Using these devices to facilitate the communication of the user with a computing system to monitor, enhance or enable certain activities is the core focus of the subject known as BCI.

Emphasis is placed on the non-invasive techniques because they are convenient, can be used outside of a tightly controlled clinical facility, and are much more accessible and affordable than their invasive and semi-invasive counterparts. These are the qualities that have enabled the widespread use of such devices in research and commercial applications and have thus generated a rich collection of data and literature. Each type of imaging device and data collection technique has its own disadvantages and shortcomings, which has led to the concept of hybrid BCI techniques that use different imaging devices, or different modalities from the same device, to improve reliability and accuracy and also increase the functionality of the system.

Research has shown that the operation of a BCI depends on two adaptive controllers: the human brain, which gets better with practice at certain activities like motor imagery, and the brain imaging device controller, which gets better as more data is generated and it is better tuned to recognize certain activities with more precision and reliability. Successful implementations of BCI require the user to learn and maintain new skills which are not normally required in daily life. The subjects can be trained in these new skills using real-time biofeedback from the BCI. This observation has led to a separate line of research on the ways in which BCI can be used for the rehabilitation of patients with sclerosis, stroke and other motor disorders. This idea has also led to research on the ways BCI can be used to augment and enhance the communication capabilities of typical humans with computing devices to increase efficiency and provide more 'channels' of control input. A BCI consists of several components such as the raw data collecting device, filters, feature extraction, classification and user training tools. This makes BCI development a truly interdisciplinary endeavor combining physical principles, statistics, data science, electrical engineering, signal processing, algorithm design, psychology, neuroscience and knowledge of rehabilitation techniques.
|
2303.00604 | 4D-EGB Black Holes in RPS Thermodynamics | In this paper, we study thermodynamics of charged and uncharged 4-Dimensional
Einstein-Gauss-Bonnet (4D-EGB) black holes. The context of this study is the
Visser's holographic thermodynamics with a fixed anti-de Sitter radius and a
variable Newton constant known as restricted phase space thermodynamics (RPST).
Our setup is constructed by using the AdS/CFT correspondence and by introducing
a conjugate quantity of the Gauss-Bonnet parameter. By this ansatz, we conclude
that the Gauss-Bonnet action multiplied by a temperature, behaves as a free
energy. We derive the conjugate quantities corresponding to the first law in
the RPST formalism.
The study of the $T-S$ processes and the effect of the Gauss-Bonnet constant,
$\alpha$, show that thermodynamic properties of charged black holes depend on
the Gauss-Bonnet term and the charge of black holes. For an uncharged black
hole, the effect of Gauss-Bonnet becomes crucial, as it behaves as a charged
black hole with an effective charge. Finally, we find that the Hawking-Page
phase transition occurs between a large black hole and a thermal AdS space. | Y. Ladghami, B. Asfour, A. Bouali, A. Errahmani, T. Ouali | 2023-02-28T11:59:33Z | http://arxiv.org/abs/2303.00604v3 | # 4D-EGB Black Holes in RPS Thermodynamics
###### Abstract
In this paper, we study the thermodynamics of charged and uncharged 4-Dimensional Einstein-Gauss-Bonnet (4D-EGB) black holes. The context of this study is Visser's holographic thermodynamics with a fixed anti-de Sitter radius and a variable Newton constant, known as restricted phase space thermodynamics (RPST). Our setup is constructed by using the AdS/CFT correspondence and by introducing a conjugate quantity of the Gauss-Bonnet parameter. By this ansatz, we conclude that the Gauss-Bonnet action, multiplied by a temperature, behaves as a free energy. We derive the conjugate quantities corresponding to the first law in the RPST formalism. The study of the \(T-S\) processes and of the effect of the Gauss-Bonnet constant, \(\alpha\), shows that the thermodynamic properties of charged black holes depend on the Gauss-Bonnet term and the charge of the black holes. For an uncharged black hole, the effect of Gauss-Bonnet becomes crucial, as it behaves as a charged black hole with an effective charge. Finally, we find that the Hawking-Page phase transition occurs between a large black hole and a thermal AdS space.
## 1 Introduction
Black holes and their thermodynamics provide fertile ground for research, especially after Hawking and Bekenstein's work on the temperature and the entropy of black holes [1, 2], in addition to the four laws of black hole thermodynamics, obtained through a thermodynamic analogy with ordinary systems [3, 4, 5]. This thermodynamics is known as traditional black hole thermodynamics (TBHT), in which the mass of the black hole is considered as its internal energy. One of the most important results of TBHT is the Hawking-Page phase transition for a space-time with a negative cosmological constant (\(\Lambda<0\)), i.e. an anti-de Sitter (AdS) space-time [6]. Another important formalism in the study of black holes is called extended phase space thermodynamics (EPST), initiated for the first time in [7]. This formalism is also called "Black hole chemistry" [8, 9]. A number of works have been published in this regard [10, 11, 12, 13, 14, 15, 16, 17]. In the EPST, the mass of black holes is interpreted as an enthalpy, and thermodynamic parameters such as the pressure, P, and the volume, V, have been interpreted as thermodynamic variables, where the pressure is related to the cosmological constant via \(P=-\Lambda/8\pi G\)[8]. The AdS/CFT correspondence, conjectured by Maldacena [18], has played an important role in understanding the behavior of black holes, and it is used as a framework to introduce a new thermodynamic approach. In this background, the AdS black hole is equivalent to a thermal state in the conformal field theory (CFT) [19]. Recently, within the framework of the AdS/CFT correspondence and based on Visser's thermodynamics [20], a new thermodynamics called restricted phase space thermodynamics (RPST) was initiated by Gao et al. in [21]. In RPST, black hole masses are reinterpreted as internal energy. However, instead of the pressure and the volume characterizing the EPST formalism, a new pair of conjugate thermodynamic variables characterizing the RPST formalism has been introduced. These new conjugate thermodynamic variables are the chemical potential (color susceptibility) \(\mu\) and the central charge C, which
represents the number of degrees of freedom (number of colors) in the CFT [20, 22]. The role of the central charge C is similar to that of the number of particles in the statistical physics of ordinary matter [23]. The central charge is related to Newton's constant G and the AdS curvature radius \(l\) via \(C=l^{d-2}/G\), where \(d\) is the dimension of the AdS space-time [21, 22]. The homogeneity of the Smarr relation, thermodynamic processes and phase transitions were studied for RN-AdS, Kerr-AdS, BTZ and Taub-NUT black holes in the context of RPST [21, 23, 24, 25, 26].
Weyl [27] and Eddington [28] first suggested modifications of General Relativity (GR) in 1918, and the idea has persisted ever since. Current efforts aim to develop alternatives to GR that address cosmological issues, such as the cosmic acceleration in the early and late universe. One of these alternative theories is Gauss-Bonnet gravity [29, 30], in which an arbitrary function \(f(\mathcal{G})\) is included in the gravitational action, where \(\mathcal{G}\) is the Gauss-Bonnet (GB) term. However, gravitational dynamics in 4-dimensional spacetime are unaffected by the GB term. For this reason, D. Glavan and C. Lin have recently proposed a novel 4-dimensional Einstein-Gauss-Bonnet (4D-EGB) gravity [31], which is founded on the idea of multiplying the GB term by the factor \(1/(d-4)\) in order to circumvent the strict requirements of Lovelock's theorem [32] and avoid the Ostrogradsky instability. Additionally, in recent years, this new approach has developed into a useful framework for studying the physics and thermodynamics of black holes, for instance the vacuum solution for charged black holes and its thermodynamic properties [33], the extended thermodynamics of black holes [34, 35], a charged black hole with entangled particles and antiparticles [36], stable circular orbits and the shadow of black holes [37], the power spectra of Hawking radiation in de-Sitter spacetime [38], the stability of black holes [39] and non-linear charged planar black holes [40, 41].
As far as we know, despite the many applications of 4D-EGB gravity, there is so far no study of the RPST formalism in this approach. This motivates us to consider the RPST formalism in the 4D-EGB gravity background. A second motivation is to study the effects of the Gauss-Bonnet constant on the thermodynamic behavior of charged and uncharged black holes in both approaches and to complete the study of black holes in the RPST formalism already carried out in [21, 23, 24, 25, 26].
The paper is organised as follows. In Section 2, we present the 4D-EGB black hole solution in AdS space-time. In Section 3, we construct a holographic thermodynamics with a fixed AdS radius for 4D-EGB gravity. In Section 4, the first law and the thermodynamic variables are deduced in the RPST formalism. In Section 5, we look at the thermodynamic properties of the charged 4D-EGB black hole in RPS thermodynamics through \(T-S\) processes. In Section 6, we study uncharged 4D-EGB black holes. In Section 7, we discuss our main conclusions. Throughout this paper, we adopt units in which \(\hbar=c=k_{B}=1\).
## 2 4D-EGB Black Holes in AdS space-time
The action of the Gauss-Bonnet gravity in a d-dimensional space-time with a negative cosmological constant is given by [33]
\[I=\frac{1}{16\pi G}\int d^{d}x\left(R-2\Lambda-F_{\mu\nu}F^{\mu\nu}+\frac{ \alpha}{d-4}\mathcal{G}\right), \tag{1}\]
where the GB term writes
\[\mathcal{G}=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^ {2}, \tag{2}\]
\(\alpha\) is the Gauss-Bonnet coupling, with dimension \([\text{length}]^{2}\), which represents ultraviolet corrections to the Einstein theory, \(G\) is Newton's constant, and the cosmological constant is given by
\[\Lambda=-\frac{(d-1)(d-2)}{2l^{2}}. \tag{3}\]
The AdS radius is given by \(l\) and \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is the Maxwell strength field. The black hole metric, by adopting the limit \(d\longrightarrow 4\), is given by [31, 42]
\[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d \phi^{2}\right), \tag{4}\]
where the metric function is given by
\[f(r)=1+\frac{r^{2}}{2\alpha}\left(1-\sqrt{1+4\alpha\left(\frac{2MG}{r^{3}}-\frac{ Q^{2}G}{r^{4}}-\frac{1}{l^{2}}\right)}\right), \tag{5}\]
where Q is the electric charge and M is the black hole's mass. Solving the equation \(f(r)=0\), we find two horizons: the event horizon \(r_{+}\) and the Cauchy horizon \(r_{-}\). Using Eq. (5) and the equation for the event horizon, \(f(r_{+})=0\), the black hole's mass is given by
\[M=\frac{r_{+}}{2G}+\frac{\alpha}{2\,r_{+}G}+\frac{r_{+}^{3}}{2\,l^{2}G}+\frac{ Q^{2}}{2\,r_{+}}. \tag{6}\]
The Hawking temperature of the 4D-EGB black hole is given by
\[T=\frac{f^{\prime}(r_{+})}{4\pi}=\frac{3r^{4}l^{-2}+r^{2}-GQ^{2}-\alpha}{4\pi r ^{3}+8\pi\alpha r}, \tag{7}\]
and the electrical potential of the event horizon is defined by
\[\Phi=\frac{Q}{r_{+}}. \tag{8}\]
In the limit \(\alpha\longrightarrow 0\), we recover the mass of the Reissner-Nordstrom black hole in AdS space-time (RN-AdS) [21].
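As a quick consistency check (our own sketch, not part of the original derivation), the mass (6) and the temperature (7) can be verified directly from the metric function (5) with a computer algebra system; the numerical parameter values below are arbitrary and only serve the spot-check.

```python
# Consistency check (our sketch): the mass (6) closes the horizon condition
# f(r_+) = 0 for the metric function (5), and f'(r_+)/(4*pi) reproduces the
# Hawking temperature (7). The numerical values are arbitrary spot-check inputs.
import sympy as sp

r, rp, Q, G, l, alpha = sp.symbols('r r_+ Q G l alpha', positive=True)

M = rp/(2*G) + alpha/(2*rp*G) + rp**3/(2*l**2*G) + Q**2/(2*rp)                           # Eq. (6)
f = 1 + r**2/(2*alpha)*(1 - sp.sqrt(1 + 4*alpha*(2*M*G/r**3 - Q**2*G/r**4 - 1/l**2)))    # Eq. (5)
T_paper = (3*rp**4/l**2 + rp**2 - G*Q**2 - alpha)/(4*sp.pi*rp**3 + 8*sp.pi*alpha*rp)     # Eq. (7)

vals = {rp: 1.3, Q: 0.4, G: 1.0, l: 2.0, alpha: 0.05}
print(float(f.subs(r, rp).subs(vals)))                                    # ~0: f(r_+) = 0
print(float((sp.diff(f, r).subs(r, rp)/(4*sp.pi) - T_paper).subs(vals)))  # ~0: matches Eq. (7)
```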
## 3 Holographic thermodynamics of black holes
In this section, we construct the holographic thermodynamics of the 4D-EGB black hole based on the AdS/CFT correspondence. In this construction, the partition function in the AdS space-time, \(Z_{AdS}\), is equal to that of the dual boundary theory, i.e. \(Z_{CFT}\)[20, 43]
\[Z_{AdS}=Z_{CFT}. \tag{9}\]
At finite temperature, the thermal partition function and the free energy \(W\) are related by
\[W=-T\ln Z_{CFT}, \tag{10}\]
where T is the temperature. On the other hand, the grand canonical ensemble defines the free energy as
\[W=\mu C, \tag{11}\]
where the central charge C determines the number of microscopic degrees of freedom in the CFT, and its conjugate, \(\mu\), is interpreted as a chemical potential.
The partition function in the AdS space-time is given by the Euclidean action, \(I_{E}\), of the gravitational theory [44]
\[I_{\rm E}=-\ln Z_{\rm AdS}. \tag{12}\]
In the 4D-EGB gravity, the Euclidean action \(I_{E}\) is given by Eq. (1) and can be decomposed as
\[I_{E}=I_{EM}+I_{GB}, \tag{13}\]
where
\[I_{EM}=\frac{1}{16\pi G}\int d^{d}x\left(R-2\Lambda-F_{\mu\nu}F^{\mu\nu}\right), \tag{14}\]
and
\[I_{GB}=\frac{1}{16\pi G}\int d^{d}x\frac{\alpha}{d-4}\mathcal{G}, \tag{15}\]
are the Einstein-Maxwell (EM) action and the Gauss-Bonnet action, respectively. The EM action \(I_{EM}\) is given by [44]
\[TI_{EM}=M-TS_{BH}-\Phi Q-\Omega J, \tag{16}\]
where \(S_{BH}=\pi r_{+}^{2}/G\) is the Bekenstein-Hawking entropy, T is its conjugate quantity, \(\Phi\) is the electrical potential of the event horizon, \(\Omega\) is the angular velocity and \(J\) the angular momentum. From now on, we will denote \(S_{BH}\) by \(S\). By analogy to Eq. (16), we set the GB action \(I_{GB}\) as
\[TI_{GB}=-\alpha{\cal A}, \tag{17}\]
where we have introduced a new quantity \({\cal A}\) as the conjugate of the GB constant \(\alpha\). The expression of this new quantity will be given later in terms of thermodynamic and AdS/CFT quantities. The free energy \(W\) is related to the partition function Z through
\[Z_{CFT}=e^{-\beta W}, \tag{18}\]
where \(\beta=1/T\).
From Eqs. (9)-(12), (16) and (17), the classical gravitational partition function can be written as
\[Z_{AdS}=e^{-\beta\,\mu\,C}=e^{-\beta(W_{EM}+W_{GB})}, \tag{19}\]
where
\[W_{EM}=M-TS-\Phi Q-\Omega J, \tag{20}\]
and
\[W_{GB}=-\alpha{\cal A}. \tag{21}\]
are the Einstein-Maxwell and the Gauss-Bonnet free energies, respectively.
Finally, the free energy can be rewritten as
\[\mu C=M-TS-\Phi Q-\Omega J-\alpha{\cal A}. \tag{22}\]
## 4 Charged Black Holes
In this section, we study a charged 4D-EGB black hole, i.e. \(Q\neq 0\) and \(J=0\), in the RPST formalism. As indicated in the introduction, from the RPS thermodynamics standpoint the pressure P and the volume V lose their meaning once the quantity \(C\) and its conjugate parameter \(\mu\) are introduced [20, 21]. The relationship between the central charge, the AdS radius and Newton's constant is given by [21, 23]
\[C=\frac{l^{2}}{G}. \tag{23}\]
The mass of black holes in RPST is interpreted as an internal energy [20, 21], and from Eq. (22), we find the Smarr relation of a charged 4D-EGB black hole in the RPS formalism as
\[M=TS+\tilde{\Phi}\tilde{Q}+\mu C+{\cal A}\alpha, \tag{24}\]
where the re-scaled electric potential, \(\tilde{\Phi}\), and the re-scaled electric charge, \(\tilde{Q}\), are defined by the dual CFT quantities as [21, 45]
\[\tilde{Q}=\frac{Ql}{\sqrt{G}}, \tag{25}\]
and
\[\tilde{\Phi}=\frac{\Phi\sqrt{G}}{l}=\frac{Q\sqrt{G}}{r_{+}l}, \tag{26}\]
respectively. From Smarr's relation, we can say that the mass of a black hole is a function of extensive variables
\[M=M(S,\tilde{Q},C,\alpha). \tag{27}\]
By differentiating Eq. (27), we can formulate the first law in the RPS thermodynamics for a charged 4D-EGB black hole as
\[dM=TdS+\Phi d\tilde{Q}+\mu dC+{\cal A}d\alpha, \tag{28}\]
with the corresponding conjugate quantities
\[T=\left(\frac{\partial M}{\partial S}\right)_{\tilde{Q},C,\alpha},\qquad\tilde {\Phi}=\left(\frac{\partial M}{\partial\tilde{Q}}\right)_{S,C,\alpha},\qquad \mu=\left(\frac{\partial M}{\partial C}\right)_{S,\tilde{Q},\alpha},\quad \mbox{and}\quad{\cal A}=\left(\frac{\partial M}{\partial\alpha}\right)_{S, \tilde{Q},C}. \tag{29}\]
We can also infer the Gibbs-Duhem relation
\[-SdT=\tilde{Q}d\Phi+Cd\mu+\alpha d\mathcal{A}. \tag{30}\]
Furthermore, the event horizon radius \(r_{+}\) in terms of the Bekenstein-Hawking entropy and the AdS/CFT quantities writes
\[r_{+}=l\sqrt{\frac{S}{\pi C}}. \tag{31}\]
In order to compute thermodynamic variables in RPST, we rewrite the mass of the 4D-EGB black hole, Eq. (6), as
\[M=\frac{S^{2}+\pi SC+\pi^{2}\tilde{Q}^{2}+\pi^{2}\alpha\,C^{2}l^{-2}}{2\pi^{3/2 }l\sqrt{SC}}. \tag{32}\]
For \(\alpha=0\), we recover the mass of the RN-AdS black holes in the RPS formalism [21]. The RPST temperature of the 4D-EGB black hole is defined as the partial derivative of Eq. (32) with respect to the entropy, i.e.
\[T=\frac{3S^{2}+\pi SC-\pi^{2}\tilde{Q}^{2}-\alpha C^{2}\pi^{2}l^{-2}}{4\pi^{3/2 }lS\sqrt{SC}}. \tag{33}\]
This temperature corresponds to the Hawking temperature, Eq. (7), for \(A\gg 8\pi\alpha\), where \(A\) is the area of the event horizon. The chemical potential, which is the conjugate of the central charge, is given by Eq. (29)
\[\mu=-\frac{S^{2}-\pi SC+\pi^{2}\tilde{Q}^{2}-3\alpha C^{2}\pi^{2}l^{-2}}{4\pi^ {3/2}lC\sqrt{SC}}. \tag{34}\]
Finally, the expression of the conjugate quantity of \(\alpha\) or the GB potential is given by
\[\mathcal{A}=\frac{C^{2}\,\sqrt{\pi}}{2\,l^{3}\,\sqrt{C\,S}}. \tag{35}\]
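These conjugate quantities follow mechanically from Eq. (29). The SymPy sketch below (ours, not from the paper) differentiates the mass (32) and compares the results with Eqs. (33)-(35).

```python
# SymPy sketch (ours) deriving the conjugate variables of Eq. (29) from the
# mass (32) and comparing them with Eqs. (33)-(35).
import sympy as sp

S, Qt, C, alpha, l = sp.symbols('S Qtilde C alpha l', positive=True)
pi = sp.pi
den = 2*pi**sp.Rational(3, 2)*l*sp.sqrt(S*C)
M = (S**2 + pi*S*C + pi**2*Qt**2 + pi**2*alpha*C**2/l**2)/den            # Eq. (32)

T_paper  = (3*S**2 + pi*S*C - pi**2*Qt**2 - alpha*C**2*pi**2/l**2)/(4*pi**sp.Rational(3, 2)*l*S*sp.sqrt(S*C))   # Eq. (33)
mu_paper = -(S**2 - pi*S*C + pi**2*Qt**2 - 3*alpha*C**2*pi**2/l**2)/(4*pi**sp.Rational(3, 2)*l*C*sp.sqrt(S*C))  # Eq. (34)
A_paper  = C**2*sp.sqrt(pi)/(2*l**3*sp.sqrt(C*S))                                                               # Eq. (35)

for derivative, paper in [(sp.diff(M, S), T_paper),
                          (sp.diff(M, C), mu_paper),
                          (sp.diff(M, alpha), A_paper)]:
    print(sp.simplify(derivative - paper))   # each difference should simplify to 0
```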
We notice that in the RPST formalism, Schwarzschild black holes, i.e. \(\tilde{Q}=0\), are equivalent to RN black holes with an effective charge due to the 4D-EGB coupling. However, the Hawking-Page phase transition in Schwarzschild black holes may occur at a specific entropy and temperature. The electrical and chemical potentials depend on the electric and central charges, while the GB potential \(\mathcal{A}\) depends only on the central charge. Furthermore, the expression of the Gauss-Bonnet action \(I_{GB}\) in terms of thermodynamic and AdS/CFT quantities, obtained by using Eqs. (17), (33) and (35), is given by
\[I_{GB}=-\frac{\alpha\pi^{2}C^{2}S}{3S^{2}l^{2}+\pi CSl^{2}-\pi^{2}\tilde{Q}^{ 2}l^{2}-\alpha C^{2}\pi^{2}}. \tag{36}\]
This action exhibits a divergence at \(T=0\), as can be seen from Eq. (33). Such unphysical divergences were already found in the on-shell action and surface terms of the total action of the 4D Gauss-Bonnet theory [46]. We can easily check that the mass of the 4D-EGB black hole is a first-order homogeneous function, while \(T\), \(\Phi\), \(\mu\) and \(\alpha\) are zeroth-order homogeneous functions, in \(S\), \(\tilde{Q}\), \(C\) and \(\mathcal{A}\). Indeed, under the re-scaling \(S\rightarrow\lambda S\), \(\tilde{Q}\rightarrow\lambda\tilde{Q}\), \(C\rightarrow\lambda C\) and \(\mathcal{A}\rightarrow\lambda\mathcal{A}\), the mass of the 4D-EGB black hole M, Eq. (32), is re-scaled as \(M\rightarrow\lambda M\), while \(T\), \(\Phi\), \(\mu\) and \(\alpha\) remain unchanged. This behavior can also be read off from the Smarr relation, Eq. (24).
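This scaling behavior is easy to verify explicitly. The short SymPy sketch below (ours) checks the mass rescaling for the particular value \(\lambda=2\), with \(\alpha\) held fixed since it is an intensive quantity in the Smarr relation (24); the same identity holds for any positive \(\lambda\).

```python
# Quick SymPy check (ours) of the scaling behaviour: with alpha held fixed and
# S, Q_tilde, C rescaled by a common factor (here lambda = 2), the mass (32)
# rescales by the same factor.
import sympy as sp

S, Qt, C, alpha, l = sp.symbols('S Qtilde C alpha l', positive=True)
pi = sp.pi

def mass(S, Qt, C):
    return (S**2 + pi*S*C + pi**2*Qt**2 + pi**2*alpha*C**2/l**2)/(2*pi**sp.Rational(3, 2)*l*sp.sqrt(S*C))

print(sp.simplify(mass(2*S, 2*Qt, 2*C) - 2*mass(S, Qt, C)))   # expected: 0
```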
## 5 Thermodynamic processes
This section is devoted to the study of the phase transitions and thermodynamic processes of a charged 4D-EGB black hole at fixed re-scaled electric charge, Gauss-Bonnet constant and central charge. Critical points of the \(T-S\) curve are obtained from the inflection points of the temperature with respect to the entropy at fixed \(\tilde{Q}\), \(\alpha\) and C, i.e. by solving the system
\[\left(\frac{\partial T}{\partial S}\right)_{\tilde{Q},C,\alpha}=0,\qquad\left( \frac{\partial^{2}T}{\partial S^{2}}\right)_{\tilde{Q},C,\alpha}=0. \tag{37}\]
Using Eqs. (33) and (37), we obtain the critical parameters as
\[S_{c}=\frac{\pi C}{6},\qquad\mbox{and}\quad\tilde{Q}_{c}=\pm\frac{C}{l}\,\sqrt{ \frac{l^{2}}{36}-\alpha}. \tag{38}\]
Since the charge enters the mass expression, Eq. (32), only through its square, the sign of the charge is irrelevant. The critical charge imposes a bound on the parameter, i.e. \(\alpha l^{-2}<\frac{1}{36}\); for \(\alpha l^{-2}=\frac{1}{36}\), there is no critical charge. By substituting these critical parameters in Eqs. (32) and (33), we obtain
\[M_{c}=\frac{1}{3}\sqrt{\frac{2}{3}}\frac{C}{l}\qquad\mbox{and}\qquad T_{c}= \frac{\sqrt{6}}{3\pi l}. \tag{39}\]
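The critical values quoted in Eqs. (38)-(39) can be spot-checked numerically. The SymPy sketch below is ours, and the parameter values C = l = 1 and \(\alpha=0.02\) are arbitrary choices satisfying the bound \(\alpha<l^{2}/36\).

```python
# Numerical spot-check (ours) of the critical point, Eqs. (38)-(39): both
# inflection conditions of Eq. (37) vanish there, and M and T take the quoted
# critical values.
import sympy as sp

S, Qt, C, alpha, l = sp.symbols('S Qtilde C alpha l', positive=True)
pi = sp.pi
den = 2*pi**sp.Rational(3, 2)*l*sp.sqrt(S*C)
M = (S**2 + pi*S*C + pi**2*Qt**2 + pi**2*alpha*C**2/l**2)/den            # Eq. (32)
T = sp.diff(M, S)                                                        # Eq. (33)

crit = {S: pi*C/6, Qt: C/l*sp.sqrt(l**2/36 - alpha)}                     # Eq. (38)
vals = {C: 1.0, l: 1.0, alpha: 0.02}

checks = [
    sp.diff(T, S).subs(crit),                              # first condition of Eq. (37)
    sp.diff(T, S, 2).subs(crit),                           # second condition of Eq. (37)
    M.subs(crit) - sp.sqrt(sp.Rational(2, 3))*C/(3*l),     # M_c of Eq. (39)
    T.subs(crit) - sp.sqrt(6)/(3*pi*l),                    # T_c of Eq. (39)
]
print([float(expr.subs(vals)) for expr in checks])         # all four entries ~ 0
```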
The equation of state (EoS) in the RPST formalism, by setting \(a=\alpha l^{-2}\), is given by
\[t=\frac{3\,s^{2}+6\,s-q^{2}+36\,a\,\left(q^{2}-1\right)}{8\,s^{3/2}}, \tag{40}\]
where we have normalized the parameters as
\[t=\frac{T}{T_{c}},\quad s=\frac{S}{S_{c}},\qquad\mbox{and}\quad q=\frac{ \tilde{Q}}{\tilde{Q}_{c}}. \tag{41}\]
We can also introduce the Helmholtz free energy F, its critical value and the corresponding normalized quantity by
\[F=M-TS, \tag{42}\]
\[F_{c}=\frac{C}{3\,\sqrt{6}\,l}, \tag{43}\]
and
\[f=\frac{F}{F_{c}}=\frac{-4s^{3/2}t+q^{2}+s^{2}+6s}{4\sqrt{s}}, \tag{44}\]
respectively. We notice that the EoS parameter and the expression of the Helmholtz free energy, Eq. (44), are independent of the central charge C. This means that the central charge does not affect the thermodynamic behavior of black holes. This is analogous to "the law of corresponding states" in standard thermodynamics of ordinary matter, in which the particle number plays the same role.

Figure 1: \(T-S\) curve for different values of the GB constant and the electric charge.
Figs. 1 and 2 show the \(T-S\) and \(F-T\) curves corresponding to processes at fixed electric charge and GB constant, for different values of \(a\) and \(q\). In Fig. 1, we notice the same thermal behavior for \(q=1\), i.e. \(\tilde{Q}=\tilde{Q}_{c}\). This means that the Gauss-Bonnet coupling constant \(\alpha\) has no effect on the thermal evolution of the 4D-EGB black holes in this case, which is accompanied by a second-order phase transition. This result can also be seen from Eq. (40), where the RN-AdS behavior at the critical electric charge is recovered even for 4D-EGB black holes [21]. The same is true for the free energy \(F-T\) curves in Fig. 2. For \(0<\tilde{Q}<\tilde{Q}_{c}\), the \(T-S\) curves are non-monotonic. This represents a first-order phase transition from small black holes to large ones, passing through medium-sized black holes. While the latter are unstable, as they correspond to a transition zone, small and large black holes are stable. The \(F-T\) curves, in this case (\(0<\tilde{Q}<\tilde{Q}_{c}\)), contain a swallow tail with the root located at a temperature \(T>T_{c}\) (supercritical temperature). The Gauss-Bonnet constant affects the evolution of the \(T-S\) and \(F-T\) curves for \(0<\tilde{Q}<\tilde{Q}_{c}\). Indeed, for the same value of \(q\), the transition temperatures \(T_{tran}\) are different for different negative values of \(\alpha\). The transition temperature is, furthermore, more pronounced for a negative value of \(\alpha\) compared to a vanishing or a positive value. For \(\tilde{Q}>\tilde{Q}_{c}\) there are no phase transitions.

Figure 2: \(F-T\) curve for different values of the GB constant and the electric charge.
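Because Eqs. (40) and (44) depend only on the reduced variables, the qualitative behavior of Figs. 1 and 2 is easy to reproduce. The short matplotlib sketch below is ours; the values of \(a\) and \(q\) are illustrative choices (with \(a<1/36\)), and the on-shell free energy is obtained by evaluating Eq. (44) at \(t=t(s)\).

```python
# Matplotlib sketch (ours) of the reduced equation of state (40) and free
# energy (44). The chosen values of q and a are illustrative (a < 1/36).
import numpy as np
import matplotlib.pyplot as plt

def t_of_s(s, q, a):
    return (3*s**2 + 6*s - q**2 + 36*a*(q**2 - 1))/(8*s**1.5)      # Eq. (40)

def f_of_s(s, q, a):
    t = t_of_s(s, q, a)
    return (-4*s**1.5*t + q**2 + s**2 + 6*s)/(4*np.sqrt(s))        # Eq. (44), on shell

s = np.linspace(0.05, 6.0, 600)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
for q in (0.5, 1.0, 1.5):
    for a in (-0.02, 0.0, 0.02):
        label = f"q={q}, a={a}"
        ax1.plot(s, t_of_s(s, q, a), label=label)
        ax2.plot(t_of_s(s, q, a), f_of_s(s, q, a), label=label)
ax1.set_xlabel("s = S/S_c"); ax1.set_ylabel("t = T/T_c")
ax2.set_xlabel("t = T/T_c"); ax2.set_ylabel("f = F/F_c")
ax1.legend(fontsize=7)
plt.tight_layout()
plt.show()
```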
## 6 Uncharged 4D-EGB Black Holes
From Eqs. (33) and (40), we notice that for an uncharged 4D-EGB black hole, the GB constant behaves as an effective charge in the RPST formalism. This means that an uncharged 4D-EGB black hole behaves like an RN-AdS black hole in AdS/CFT. In this section, we study uncharged 4D-EGB black holes in restricted phase space thermodynamics. The first law and the Smarr relation of an uncharged 4D-EGB black hole are written as
\[dM=TdS+\mu dC+\mathcal{A}d\alpha, \tag{45}\]
\[M=TS+\mu C+\mathcal{A}\alpha, \tag{46}\]
respectively. The expression of the mass of the uncharged 4D-EGB black hole becomes from Eq. (32)
\[M=\frac{S^{2}+\pi SC+\pi^{2}\alpha C^{2}l^{-2}}{2\pi^{3/2}l\sqrt{SC}}. \tag{47}\]
The conjugate thermodynamics variables are as follows
\[T=\frac{3S^{2}+\pi SC-\alpha C^{2}\pi^{2}l^{-2}}{4\pi^{3/2}lS\sqrt{SC}}, \tag{48}\]
\[\mu=-\frac{S^{2}-\pi SC-3\alpha C^{2}\pi^{2}l^{-2}}{4\pi^{3/2}lC\sqrt{SC}}, \tag{49}\]
\[\mathcal{A}=\frac{C^{2}\sqrt{\pi}}{2l^{3}\sqrt{CS}}. \tag{50}\]
Given the conjugate thermodynamic variables in the RPST formalism, we proceed by studying the phase transitions and thermodynamic processes (\(T-S\) processes) of an uncharged 4D-EGB black hole at fixed Gauss-Bonnet constant and central charge. The critical points of the \(T-S\) curve at fixed \(\alpha\) and C are
\[S_{c}=\frac{\pi C}{6},\quad\alpha_{c}=\frac{l^{2}}{36},\quad T_{c}=\frac{ \sqrt{6}}{3\pi l},\quad\text{and}\quad F_{c}=\frac{\sqrt{6}\,C}{18\,l}. \tag{51}\]
The equation of state and the Helmholtz free energy in the RPST formalism for the uncharged 4D-EGB black hole are
\[t=\frac{3s^{2}+6s-b}{8s^{3/2}}, \tag{52}\]
and
\[f=\frac{b+s^{2}+6s-4t\,s^{3/2}}{4s^{1/2}}, \tag{53}\]
where we have defined the relative parameter b as \(\frac{\alpha}{\alpha_{c}}\).
In Figs. 3 and 4, the \(T-S\) and \(F-T\) curves correspond to iso-\(\alpha\) processes. Although the black hole is not electrically charged, it behaves like an RN-AdS black hole [21]. This means that, in the RPST formalism, the Gauss-Bonnet constant causes an uncharged black hole to behave like a charged one. That is, \(\alpha\) may be related to an effective electric charge. Indeed, for \(\alpha<\alpha_{c}\) we have a first-order phase transition, for \(\alpha=\alpha_{c}\) we have a second-order phase transition, and for \(\alpha>\alpha_{c}\) there is no phase transition.
Figure 3: \(T-S\) curve for different values of the GB constant.
To make this relationship explicit, we compare the mass of RN-AdS black holes with that of an uncharged 4D-EGB black hole in the RPST formalism. The mass of an uncharged 4D-EGB black hole is
\[M_{4D-EGB}=\frac{S^{2}+\pi SC+\pi^{2}\alpha C^{2}l^{-2}}{2\pi^{3/2}l\sqrt{SC}}, \tag{54}\]
and the one of RN-AdS black holes is
\[M_{RN-AdS}=\frac{S^{2}+\pi SC+\pi^{2}\tilde{Q}^{2}}{2\pi^{3/2}l\sqrt{SC}}. \tag{55}\]
By comparing the expressions of these two masses, the correspondence can be made explicit by introducing a re-scaled effective Gauss-Bonnet charge \(\tilde{Q}_{GB}\) as
\[\tilde{Q}_{GB}=\frac{C\,\sqrt{\alpha}}{l}. \tag{56}\]
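The identification is immediate, since the GB term enters the mass only through the combination \(\pi^{2}\alpha C^{2}/l^{2}\); the short SymPy check below (ours) makes it explicit.

```python
# One-line SymPy check (ours) that the substitution of Eq. (56) maps the
# uncharged 4D-EGB mass (54) onto the RN-AdS mass (55).
import sympy as sp

S, C, alpha, l, Qt = sp.symbols('S C alpha l Qtilde', positive=True)
pi = sp.pi
den = 2*pi**sp.Rational(3, 2)*l*sp.sqrt(S*C)

M_egb = (S**2 + pi*S*C + pi**2*alpha*C**2/l**2)/den     # Eq. (54)
M_rn  = (S**2 + pi*S*C + pi**2*Qt**2)/den               # Eq. (55)

print(sp.simplify(M_egb - M_rn.subs(Qt, C*sp.sqrt(alpha)/l)))   # expected: 0
```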
In the RPST formalism, Schwarzschild black holes in 4D-EGB gravity are thus similar to RN-AdS ones.
One of the well-known transitions between the Schwarzschild black hole and the thermal AdS space is the Hawking-Page phase transition. Such transitions can only take place while respecting the conservation of charge and angular momentum. In our setup, i.e. the RPST formalism for 4D-EGB gravity, the Schwarzschild black hole becomes charged only at the level of its thermodynamic properties. This effective charge has a topological rather than an electromagnetic origin. Thus, we can still consider the Hawking-Page phase transition between 4D-EGB black holes and the thermal AdS space in the RPST formalism without violating the conservation of charge and angular momentum. The Gibbs free energy \(G\) of the thermal AdS space is zero, and the Hawking-Page phase transition requires a vanishing Gibbs free energy.
\[G=M-TS+PV. \tag{57}\]
However, as the pressure and the volume are not thermodynamic variables in the RPST formalism, the Gibbs free energy is none other than the Helmholtz free energy F. Putting G = 0, we get the Hawking-Page entropy and temperature from Eqs. (47) and (48)
\[S_{HP}=\frac{\pi C}{2}\left(\sqrt{\frac{b}{3}+1}+1\right), \tag{58}\]
and
\[T_{HP}=\frac{\sqrt{2}\left(\frac{b}{9}+\sqrt{\frac{b}{3}+1}+1\right)}{\pi\,l \left(\sqrt{\frac{b}{3}+1}+1\right)^{3/2}}, \tag{59}\]
Figure 4: \(F-T\) curve for different values of the GB constant.
respectively. From Figs. 3 and 4 and Eqs. (58)-(59), we infer that \(S_{HP}/S_{c}\leqslant 6\) and that \(T_{HP}/T_{c}\) is approximately equal to 1.25, i.e. the Hawking-Page phase transition is between a large black hole and the thermal AdS space.
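The Hawking-Page values (58)-(59) follow from setting the free energy to zero. The SymPy sketch below is our own numerical spot-check of this statement, with the arbitrary admissible values C = l = 1 and \(\alpha=0.02\).

```python
# Numerical spot-check (ours) of the Hawking-Page point: the Helmholtz free
# energy F = M - T*S vanishes at S_HP of Eq. (58), and the temperature there
# matches Eq. (59).
import sympy as sp

S, C, alpha, l = sp.symbols('S C alpha l', positive=True)
pi = sp.pi
M = (S**2 + pi*S*C + pi**2*alpha*C**2/l**2)/(2*pi**sp.Rational(3, 2)*l*sp.sqrt(S*C))  # Eq. (47)
T = sp.diff(M, S)                                                                     # Eq. (48)
F = M - T*S                                                                            # plays the role of G here

b = 36*alpha/l**2                                                                      # b = alpha/alpha_c
S_HP = pi*C/2*(sp.sqrt(b/3 + 1) + 1)                                                   # Eq. (58)
T_HP = sp.sqrt(2)*(b/9 + sp.sqrt(b/3 + 1) + 1)/(pi*l*(sp.sqrt(b/3 + 1) + 1)**sp.Rational(3, 2))  # Eq. (59)

vals = {C: 1.0, l: 1.0, alpha: 0.02}
print(float(F.subs(S, S_HP).subs(vals)))              # ~0: free energy vanishes
print(float((T.subs(S, S_HP) - T_HP).subs(vals)))     # ~0: matches Eq. (59)
```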
## 7 Discussion and Conclusion
In this paper, we have studied the 4D-EGB black hole in the context of restricted phase space thermodynamics. We started from the black hole space-time solution with a negative cosmological constant in 4D-EGB gravity. From the metric, we deduced the expression of the mass of the black hole in terms of the black hole parameters, the geometry, and the GB constant.
We constructed a holographic thermodynamics of the 4D-EGB black hole based on the AdS/CFT correspondence, by equating the partition function of AdS to that of the CFT (\(Z_{AdS}=Z_{CFT}\)) and by relating the free energy, \(W=\mu C\), to the Euclidean action, \(I_{E}=-\ln Z_{CFT}\). We were able to formulate the Smarr relation, the first law and the Gibbs-Duhem relation for charged and uncharged 4D-EGB black holes. We showed that the Smarr relation is homogeneous of first order. In addition, we derived the thermodynamic variables in the RPST formalism.
We have also studied the thermodynamic properties of a charged black hole, and we found that the equation of state does not depend on the central charge. This result is similar to the law of corresponding states in standard thermodynamics for ordinary matter. We showed that when the charge of the black hole is greater than the critical one, there is no phase transition, whereas for \(Q=Q_{c}\) we find a second-order phase transition. For \(0<Q<Q_{c}\), we find a first-order phase transition. A remarkable result is that not only the charge but also the GB constant influences the thermodynamic properties, precisely when the charge is sub-critical.
For the uncharged black hole, we find that its thermodynamic properties depend on the value of the GB constant. Indeed, \(\alpha=\alpha_{c}\) corresponds to a second-order phase transition and \(0<\alpha<\alpha_{c}\) to a first-order phase transition, while for \(\alpha>\alpha_{c}\) there is no phase transition. Although it is not charged, the black hole in 4D-EGB gravity behaves as if it were a charged one, with an effective charge that depends on the GB constant and the central charge.
Finally, the Hawking-Page phase transition may occur between a large black hole and a thermal AdS space. |
2310.01422 | Service Pet Robot Design: Queer, Feminine and Sexuality Aspects | The integration of robots and AI in society raises concerns about
discrimination and biases mostly affecting underrepresented groups, including
queer and feminine figures. Socially assistive robots (SAR) are being used in a
variety of service and companion roles, following social norms during their
interaction with humans and seem to be beneficial in many roles, such as the
pet therapy robots. To promote inclusion and representation, robot design
should incorporate queer and feminine characteristics. As a response to these
concerns, a pet robot called BB was designed using a multidisciplinary and
inclusive approach. BB was presented in a queer architecture and aesthetics
environment, emphasizing aspects of techno-touch, vulnerability, and sexuality
in human-robot interactions. The audience's perception of both the robot and
the female researcher was evaluated through questionnaires and focus groups.
This study aims to explore how technology and design can better accommodate
diverse perspectives and needs in the field of SAR. | Anna-Maria Velentza, Antigoni Tsagkaropoulou | 2023-09-27T08:54:58Z | http://arxiv.org/abs/2310.01422v1 | # Service Pet Robot Design: Queer, Feminine and Sexuality Aspects
###### Abstract
The integration of robots and AI in society raises concerns about discrimination and biases mostly affecting underrepresented groups, including queer and feminine figures. Socially assistive robots (SAR) are being used in a variety of service and companion roles, following social norms during their interaction with humans and seem to be beneficial in many roles, such as the pet therapy robots. To promote inclusion and representation, robot design should incorporate queer and feminine characteristics. As a response to these concerns, a pet robot called BB was designed using a multidisciplinary and inclusive approach. BB was presented in a queer architecture and aesthetics environment, emphasizing aspects of techno-touch, vulnerability, and sexuality in human-robot interactions. The audience's perception of both the robot and the female researcher was evaluated through questionnaires and focus groups. This study aims to explore how technology and design can better accommodate diverse perspectives and needs in the field of SAR.
human robot interaction, robot design, queer, feminine, evaluation
## I Introduction
The integration of robots into society gives rise to social, ethical, and even legal dilemmas. It is well known that discrimination and bias are inherent challenges in numerous AI applications. This leads us to question how these issues impact underrepresented groups, including queer and feminine figures, and whether they are considered during the development and utilization of robotics and AI [1]. Surprisingly, how machines affect them remains largely unexplored. Only a limited number of works in the literature delve into LGBTQ+ concerns in the design of robots and artificial intelligence (AI) [1]. The categorization of AI applications into clear and distinct groups, often based on binary distinctions, depends on ways of creating knowledge that are ingrained in systemic authority and mythological frameworks [2]. These frameworks promote a narrow perspective of human identity, adhering to heteronormative and liberal ideals, which consequently restricts and confines our understanding of humanity [2]. Apart from the underrepresentation of people interacting with robots, there are still major unsolved problems with gender biases and underrepresented groups in science [3, 4]. Evidence shows that increasing femininity in a person's appearance decreases the likelihood of that person being perceived as a scientist [5].
Socially assistive robots (SAR) are robots that provide aid or support to humans in social interactions. These robots prioritize meaningful and productive interactions by facilitating effective and intimate engagement with human users, enabling assistance and tangible advancements in areas such as recovery, rehabilitation, education, and other domains [6].
The success of human-robot interactions is heavily dependent on the robots' proficiency in communication across various modalities, encompassing both verbal and nonverbal means [7]. Emotion plays a central role in these forms of communication, expressed through facial expressions, speech, colors, and full-body motion [8]. Particularly, the robots' expressive body language significantly influences various aspects of interaction, such as shaping humans' attitudes towards them, evoking emotional arousal, and impacting the perception of the interaction itself [9, 10, 11]. Moreover, the robots' speech, both auditory and semantic, holds a crucial position in expressing emotional behavior during human-robot interactions [12]. By mastering these elements of communication, robots can foster meaningful and effective engagements with humans, leading to more natural and beneficial interactions in various domains.
Studies have shown that individuals tend to ascribe moral norms to robots in a manner analogous to their evaluations of human beings[13]. This phenomenon is indicative of a process wherein the same social and emotional characteristics that influence human-human interactions also impact robot-human interactions[14]. Additionally, over an extended period, pet therapy has been acknowledged for its considerable emotional benefits. Recent investigations have demonstrated that robotic pets yield analogous positive effects, while circumventing the potential drawbacks associated with conventional live pets. Consequently, robotic pet therapy presents a viable and promising alternative to traditional pet therapy approaches[15].
## II Related Work
The design of human-like robots accompanying and serving us in everyday activities demands a deep understanding of the human body and of humanoid and animistic cues, as well as a willingness to go beyond heterosexual gender norms. Treusch suggested performative demonstrations for people to experience and co-design humanoid robot companions [16].
lead, we support the idea that a queer robot can embrace a queer perspective in society. Moreover, Shildrick [18] examines how the unique practices and settings of dementia care can be reshaped and challenged by the introduction of technological and intelligent assistive aids, and particularly zoomorphic robots, which are progressively augmenting traditional human assistance.
## III Present study
In the present study, we focus on the first part of the lecture performance 'BB the pet robot', in which we designed and implemented, from a queer and feminine perspective, a service pet robot prototype costume, and evaluated both the female lecturer presenting it and the robot design with Likert-scale questionnaires and a focus group. The artist and second author wore the robot costume and acted as the robot.
## IV BB the Pet Robot
### a) Robot Design
The robot, called BB, is human-sized, as shown in Fig. 1, with humanoid and animal characteristics. The morphology follows all three major SAR design concepts: the cute companion, the human mimic and the futuristic machine [19]. The robot's fur is fluffy and very soft to the touch. The increasing prevalence of companion and empathy robots in assistive applications introduces the concept of "techno-touch," which encompasses both the physical aspect of touch and its metaphorical representation of emotional connection. In both senses, the act of touching and being touched should be considered in relation to vulnerability, not as a precarious state but as a gateway to an imagined realm of inseparable interconnections, where the distinction between self and other loses significance [18]. Moreover, BB has a light blue color, since blue is linked with stress reduction and positive reactions such as excitement and happiness, while it is considered attractive, drawing people's attention without affecting their memory, and is acceptable across cultures [20, 21].
BB has four paws and a tablet as a face, with expressive facial expressions changing according to the context of the speech. The variations in facial expressions consisted of changes in the size and movements of the eyes and mouth. The voice is both machine-like and humanoid, non-binary, and in accordance with the appearance. Moreover, throughout the presentation the robot made robotic sounds and slight movements even when it was not completing a task, so as to be perceived as alive. The robot's facial expressions, voice and storytelling are cute, following the evidence of the importance of cuteness in human-robot interaction, especially for service robots to be considered more likable [22, 23].
Additionally, we focused on creating a queer-aesthetics indoor lecture environment for the presentation of BB, which differs from the heteronormative space. The strength of an exhibit or installation lies in its transience and apparent lack of practical purpose. The very "uselessness" of these installations leads to unforeseen revelations, compelling individuals to pause, ponder, and reconsider their conventional interactions with architecture [24]. Thus, we combined the useful (a table for the laptop and the monitors, and a curtain to hide the robot before the reveal) with the "useless", covering the table with shiny fabric and using a sequin and spangle curtain, as shown in Fig. 1, C. The act of exposing and sharing what is typically private or concealed, and more appropriately, making it visible (sometimes quite literally), serves as a disruption and subversion of the conventional domestic space. This process can be seen as a form of "queering," challenging traditional norms and expectations associated with private areas [24].
### Participants
The lecture performance was promoted using printed flyers and electronic communication channels, specifically through university e-mail lists. The promotional materials included the names of both authors, along with their respective professional expertise, to establish credibility and context. In adherence to research methodology principles and to prevent potential expectation biases [25], explicit details regarding the robot's functionalities or appearance were deliberately omitted from the promotional content. Instead, the audience was informed solely about the presence of a pet robot, without any specific information on its capabilities or visual attributes. Sixty people from the audience agreed to fill in the questionnaire, and 20 of them participated in the focus group the next day. Their age range was 21-72; 30 identified as she/her (50%), 12 as he/him (20%), 15 as they/them (25%), and the rest preferred not to say.
### Procedure
The lecture performance spanned two days, each held at a distinct venue. At the beginning, the lecturer and first author delivered a comprehensive PowerPoint presentation on SAR, elucidating their interactions with individuals in various tasks and expounding on their communication channels, akin to the Introduction section, while also highlighting papers from her own research. Following the lecture's conclusion, the lecturer revealed BB, the robot, by drawing back the curtain depicted in Fig. 1, C. Subsequently, the robot introduced itself to the audience, employing endearing sounds, robotic gestures, and facial expressions to accompany its storytelling.
During this phase, the lecturer expounded upon the practicality of employing pet robots in service-oriented activities within the realm of human-robot interaction. Additionally, the rationale behind the robot's specific appearance,
encompassing its fluffy texture and color, was elucidated (Fig.1, B). To engage the audience further, the robot approached them, allowing individuals to interact by petting it.
Following this interaction, the lecturer asked the robot to return to the stage and subsequently commanded it to perform a dance by uttering the prompt 'BB, dance.' This demonstration showcased the robot's capability to respond to verbal commands and execute dynamic movements, underscoring its potential as an interactive and entertaining technological entity. In line with feminist critiques of pornographic portrayals of femininity, and of prevailing sex-robotics trends that intensify outdated gender stereotypes to an extreme level, BB performed a sensual dance accompanied by a song. According to Kubes [26], the notion of robot sex is often framed as a sexist representation influenced by toxic masculinity. Advocating from a new materialist, sex-positive, and queer standpoint, freeing robots from the compulsion to imitate the human body with great precision could serve as a remedy to the discomfort and aversion often experienced when encountering humanoid but not entirely human-like replicas [26]. After 2 minutes of dancing, the lecturer commanded the robot to stop and presented a QR code on the main screen, asking the audience to fill in the questionnaires to help them with their research.
#### d) Measurements
To evaluate both the lecturer and the robot, we used a 5-point Likert-scale aesthetic valence questionnaire [27], using the same questions for both [28]. The questions concerned their appearance (e.g., 'lovely', 'attractive'), the uncanny valley (e.g., 'scary', 'creepy'), the perceived personality (e.g., 'cheerful', 'boring') and their functionality (e.g., 'functional', 'trustworthy'). Furthermore, we added two provocative items based on gender stereotypes, 'slutty' and 'affectionate'. In addition to the questionnaires, a focus group/critique session took place the day after the performance.
#### e) Results
As a positive evaluation, we summed the replies scoring 4 and 5 on the Likert scale (agree and strongly agree) and then calculated the percentage out of the total replies per question (N=60). The audience rated the robot and the lecturer highly in terms of trustworthiness (46.66%, N=28 for the robot and 70%, N=42 for the lecturer) and effectiveness (50%, N=30 for the robot, and 80%, N=48 for the lecturer).
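For readers who want to reproduce this kind of scoring on their own data, the pandas sketch below illustrates the rule; it is our illustration only, and the item names and responses in it are invented, not the study's data.

```python
# Illustration only (not the study's analysis code): the share of 4s and 5s
# per questionnaire item. The item names and responses below are invented.
import pandas as pd

responses = pd.DataFrame({
    "attractive":  [5, 4, 3, 5, 2, 4],
    "trustworthy": [4, 3, 4, 5, 3, 2],
    "creepy":      [1, 2, 1, 3, 2, 1],
})

positive_share = (responses >= 4).mean() * 100   # percentage of replies scoring 4 or 5
print(positive_share.round(1))
```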
Additionally, both were perceived as attractive in terms of physical appearance (76.66%, N=46 for the robot, and 86.66%, N=52 for the lecturer), while neither was perceived as uncanny, receiving low scores on the corresponding items, creepy and scary (25% for the robot and 13.33% for the lecturer).
It is worth mentioning that a notable share of participants evaluated the lecturer as 'slutty', with 28.6% scoring 4 or 5 on the Likert scale and 39.3% being unsure, scoring 3 (neither agree nor disagree). Similarly for the robot, 50% evaluated it as 'slutty', while 35.7% scored 3 on the Likert scale. Moreover, the robot received a higher score for 'affectionate', 85.7%, in comparison with the lecturer, 46.4%.
During the focus group, the participants agreed that they could not believe that the lecturer was a real researcher and scientist and that she looked like an actor playing the role of a researcher. Moreover, they criticized the inclusion of the sensual dance: all the men who participated pointed out that it was unnecessary, while female and queer participants mentioned that they understood the point of the dance but suggested that its purpose should be explained verbally to the audience.
## V Discussion & Conclusions
The primary objective of this study is to establish guidelines for the design of SAR that embrace inclusivity and queerness. This advancement builds upon the incorporation of vulnerability and feminine characteristics in SAR design, which contributes to providing support and eliciting positive emotional responses from users. Furthermore, the inclusion of these aspects prompts critical discussions, both at the societal and individual levels, pertaining to the dynamics of human-robot interaction, as well as the evolving relationships with one's own body, behavior, and sexuality in the context of an increasingly technologically driven world.
By infusing SAR with qualities that evoke vulnerability and femininity, the aim is to create robots that offer empathetic and nurturing attributes, promoting a sense of emotional connection and well-being among users. This approach challenges traditional notions of robotic design and explores alternative paradigms of human-robot interaction that are sensitive to individual diversity and needs.
Consequently, the introduction of inclusive and queer SAR engenders contemplation and dialogue on broader societal implications, such as public perceptions of robots, privacy concerns in human-robot interactions, and the influence of smart technologies on shaping human behavior and attitudes towards their own sexuality and identity. Ultimately, it is envisioned that this research will contribute to the development of more empathetic and socially aware SAR that can positively impact various user groups and foster a more inclusive and accepting technological society.
BB displayed social characteristics, and the outcome of the questionnaire showed that the ethical and moral considerations governing human behavior are extended to encompass the behavior of robots as well. The social and emotional factors that influence the dynamics of human interactions play a significant role in shaping how individuals perceive and interact with robots [13, 14], and the audience evaluated BB and the lecturer according to similar moral norms.
Our findings are in line with [5], where feminine characteristics were not perceived as scientific. In our case, the identity and research focus of the lecturer were prominently highlighted across various media, including the advertisement flyer, the introductory segment of her lecture, and even the literature used for the presentation. Despite this comprehensive visibility of her credentials, the audience expressed skepticism regarding her research and authenticity, conjecturing that she might be an actress portraying the role of a scientist. This perception appears to be associated with the lecturer's perceived attractiveness and charm, which garnered significant attention and evaluation from the audience. |
2301.13767 | Multicalibration as Boosting for Regression | We study the connection between multicalibration and boosting for squared
error regression. First we prove a useful characterization of multicalibration
in terms of a ``swap regret'' like condition on squared error. Using this
characterization, we give an exceedingly simple algorithm that can be analyzed
both as a boosting algorithm for regression and as a multicalibration algorithm
for a class H that makes use only of a standard squared error regression oracle
for H. We give a weak learning assumption on H that ensures convergence to
Bayes optimality without the need to make any realizability assumptions --
giving us an agnostic boosting algorithm for regression. We then show that our
weak learning assumption on H is both necessary and sufficient for
multicalibration with respect to H to imply Bayes optimality. We also show that
if H satisfies our weak learning condition relative to another class C then
multicalibration with respect to H implies multicalibration with respect to C.
Finally we investigate the empirical performance of our algorithm
experimentally using an open source implementation that we make available. Our
code repository can be found at
https://github.com/Declancharrison/Level-Set-Boosting. | Ira Globus-Harris, Declan Harrison, Michael Kearns, Aaron Roth, Jessica Sorrell | 2023-01-31T17:05:24Z | http://arxiv.org/abs/2301.13767v1 | # Multicalibration as Boosting for Regression
###### Abstract
We study the connection between multicalibration and boosting for squared error regression. First we prove a useful characterization of multicalibration in terms of a "swap regret" like condition on squared error. Using this characterization, we give an exceedingly simple algorithm that can be analyzed both as a boosting algorithm for regression and as a multicalibration algorithm for a class \(\mathcal{H}\) that makes use only of a standard squared error regression oracle for \(\mathcal{H}\). We give a weak learning assumption on \(\mathcal{H}\) that ensures convergence to Bayes optimality without the need to make any realizability assumptions -- giving us an agnostic boosting algorithm for regression. We then show that our weak learning assumption on \(\mathcal{H}\) is both necessary and sufficient for multicalibration with respect to \(\mathcal{H}\) to imply Bayes optimality. We also show that if \(\mathcal{H}\) satisfies our weak learning condition relative to another class \(\mathcal{C}\) then multicalibration with respect to \(\mathcal{H}\) implies multicalibration with respect to \(\mathcal{C}\). Finally we investigate the empirical performance of our algorithm experimentally using an open source implementation that we make available on GitHub1.
Footnote 1: Our code repository can be found at [https://github.com/Declancharrison/Level-Set-Boosting](https://github.com/Declancharrison/Level-Set-Boosting)
## 1 Introduction
We revisit the problem of boosting for regression, and develop a new agnostic regression boosting algorithm via a connection to multicalibration. In doing so, we shed additional light on multicalibration, a recent learning objective that has emerged from the algorithmic fairness literature (Hebert-Johnson et al., 2018). In particular, we characterize multicalibration in terms of a "swap-regret"-like condition, and use it to answer the question "what property must a collection of functions \(\mathcal{H}\) have so that multicalibration with respect to \(\mathcal{H}\) implies Bayes optimality?", giving a complete answer to the problem asked by Burhanpurkar et al. (2021). Using our swap-regret characterization, we derive an especially simple algorithm for learning a multicalibrated predictor for a class of functions \(\mathcal{H}\) by reduction to a standard squared-error regression algorithm for \(\mathcal{H}\). The same algorithm can also be analyzed as a boosting algorithm for squared error regression that makes calls to a weak learner for squared error regression on subsets of the original data distribution _without the need to relabel examples_ (in contrast to Gradient Boosting as well as existing multicalibration algorithms). This lets us specify a weak learning condition that is sufficient for convergence to the Bayes optimal predictor (even if the Bayes optimal predictor does not have zero error), avoiding the kinds of realizability assumptions that are implicit in analyses of boosting algorithms that converge to zero error. We conclude that ensuring multicalibration with respect to \(\mathcal{H}\) corresponds to boosting for squared error regression in which \(\mathcal{H}\) forms the set of weak learners. Finally, we define a weak learning condition for \(\mathcal{H}\) relative to a constrained class of functions \(\mathcal{C}\) (rather than with respect to the Bayes optimal predictor). We show that multicalibration with respect to \(\mathcal{H}\) implies multicalibration with respect to \(\mathcal{C}\) if \(\mathcal{H}\) satisfies the weak learning condition with respect to \(\mathcal{C}\), which in turn implies accuracy at least that of the best function in \(\mathcal{C}\).
**Multicalibration.** Consider a distribution \(\mathcal{D}\in\Delta\mathcal{Z}\) defined over a domain \(\mathcal{Z}=\mathcal{X}\times\mathbb{R}\) of feature vectors \(x\in\mathcal{X}\) paired with real valued labels \(y\). Informally, a regression function \(f:\mathcal{X}\to\mathbb{R}\) is _calibrated_ if for every \(v\) in the range of \(f\), \(\mathbb{E}_{(x,y)\sim\mathcal{D}}[y|f(x)=v]=v\). In other words, \(f(x)\) must be an unbiased estimator of \(y\)
even conditional on the value of its own prediction. Calibration on its own is a weak condition, because it only asks for \(f\) to be unbiased on _average_ over all points \(x\) such that \(f(x)=v\). For example, the constant predictor that predicts \(f(x)=\mathbb{E}_{(x,y)\sim\mathcal{D}}[y]\) is calibrated. Thus calibration does not imply accuracy--a calibrated predictor need not make predictions with lower squared error than the best constant predictor. Calibration also does not imply that \(f\) is equally representative of the label distribution on different subsets of the feature space \(\mathcal{X}\). For example, given a subset of the feature space \(G\subseteq\mathcal{X}\), even if \(f\) is calibrated, it may be that \(f\) is not calibrated on the distribution conditional on \(x\in G\)--it might be e.g. that \(\mathbb{E}[y|f(x)=v,x\in G]\gg v\), and \(\mathbb{E}[y|f(x)=v,x\notin G]\ll v\). To correct this last deficiency, Hebert-Johnson et al. (2018) defined _multi-calibration_, which is a condition parameterized by a collection of groups \(G\subseteq\mathcal{X}\), each defined by an indicator function \(h:\mathcal{X}\to\{0,1\}\) in some class \(\mathcal{H}\). It asks (informally) that for each such \(h\in\mathcal{H}\) and for each \(v\) in the range of \(f\), \(\mathbb{E}[h(x)(y-v)|f(x)=v]=0\). Since \(h\) is a binary indicator function for some set \(G\), this is equivalent to asking for calibration not just marginally over \(\mathcal{D}\), but simultaneously for calibration over \(\mathcal{D}\) conditional on \(x\in G\). Kim et al. (2019) and Gopalan et al. (2022) generalize multicalibration beyond group indicator functions to arbitrary real valued functions \(h:\mathcal{X}\to\mathbb{R}\). Intuitively, as \(\mathcal{H}\) becomes a richer and richer set of functions, multicalibration becomes an increasingly stringent condition. But if \(\mathcal{H}\) consists of the indicator functions for e.g. even a very large number of _randomly selected_ subsets \(G\subseteq\mathcal{X}\), then the constant predictor \(f(x)=\mathbb{E}_{(x,y)\sim\mathcal{D}}[y]\) will still be approximately multicalibrated with respect to \(\mathcal{H}\). What property of \(\mathcal{H}\) ensures that multicalibration with respect to \(\mathcal{H}\) implies that \(f\) is a Bayes optimal regression function? This question was recently asked by Burhanpurkar et al. (2021) -- and we provide a necessary and sufficient condition.
**Boosting for Regression.** Boosting refers broadly to a collection of learning techniques that reduce the problem of "strong learning" (informally, finding an error optimal model) to a series of "weak learning" tasks (informally, finding a model that has only a small improvement over a trivial model) -- see Schapire and Freund (2013) for a textbook treatment. The vast majority of theoretical work on boosting studies the problem of binary classification, in which a weak learner is a learner that obtains classification error bounded below \(1/2\). Several recent papers Kim et al. (2019); Gopalan et al. (2022) have made connections between algorithms for guaranteeing multicalibration and boosting algorithms for binary classification.
In this paper, we show a direct connection between multicalibration and the much less well-studied problem of boosting for squared error regression (Friedman, 2001; Duffy and Helmbold, 2002). There is not a single established notion for what constitutes a weak learner in the regression setting (Duffy and Helmbold (2002) introduce several different notions), and unlike boosting algorithms for classification problems which often work by calling a weak learner on a reweighting of the data distribution, existing algorithms for boosting for regression typically resort to calling a learning algorithm on _relabelled_ examples. We give a boosting algorithm for regression that only requires calling a squared error regression learning algorithm on subsets of examples from the original distribution (without relabelling), which lets us formulate a weak learning condition that is sufficient to converge to the Bayes optimal predictor, without making the kinds of realizability assumptions implicit in the analysis of boosting algorithms that assume one can drive error to zero.
### Our Results
We focus on classes of real valued functions \(\mathcal{H}\) that are closed under affine transformations -- i.e. classes such that if \(h(x)\in\mathcal{H}\), then for any pair of constants \(a,b\in\mathbb{R}\), \((ah(x)+b)\in\mathcal{H}\) as well. Many natural classes of models satisfy this condition already (e.g. linear and polynomial functions and regression trees), and any neural network architecture that does not already satisfy this condition can be made to satisfy it by adding two additional parameters (\(a\) and \(b\)) while maintaining differentiability. Thus we view closure under affine transformations to be a weak assumption that is enforceable if necessary.
First in Section 3 we prove the following characterization for multicalibration over \(\mathcal{H}\), for any class \(\mathcal{H}\) that is closed under affine transformations. Informally, we show that a model \(f\) is multicalibrated with respect
to \(\mathcal{H}\) if and only if, for every \(v\) in the range of \(f\):
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}|f(x)=v]\leq \min_{h\in\mathcal{H}}\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(h(x)-y )^{2}|f(x)=v]\]
(See Theorem 3.2 for the formal statement). This is a "swap regret"-like condition (as in Foster and Vohra (1999) and Blum and Mansour (2005)), that states that \(f\) must have lower squared error than any model \(h\in\mathcal{H}\), even conditional on its own prediction. Using this characterization, in Section 4 we give an exceedingly simple algorithm for learning a multicalibrated predictor over \(\mathcal{H}\) given a squared error regression oracle for \(\mathcal{H}\). The algorithm simply repeats the following over \(t\) rounds until convergence, maintaining a model \(f:\mathcal{X}\to\{0,1/m,2/m,\ldots,1\}\) with a discrete range with support over multiples of \(1/m\) for some discretization factor \(m\):
1. For each level set \(v\in\{0,1/m,2/m,\ldots,1\}\), run a regression algorithm to find the \(h_{v}^{t}\in\mathcal{H}\) that minimizes squared error on the distribution \(\mathcal{D}|(f_{t-1}(x)=v)\), the distribution conditional on \(f_{t-1}(x)=v\).
2. Replace each level set \(v\) of \(f_{t-1}(x)\) with \(h_{v}^{t}(x)\) to produce a new model \(f_{t}\), and round its output to the discrete range \(\{0,1/m,2/m,\ldots,1\}\).
Each iteration decreases the squared error of \(f_{t}\), ensuring convergence, and our characterization of multicalibration ensures that we are multicalibrated with respect to \(\mathcal{H}\) at convergence. Compared to existing multicalibration algorithms (e.g. the split and merge algorithm of Gopalan et al. (2022)), our algorithm is exceptionally simple and makes use of a standard squared-error regression oracle on subsets of the original distribution, rather than using a classification oracle or requiring example relabelling.
We can also view the same algorithm as a boosting algorithm for squared error regression. Suppose \(\mathcal{H}\) (or equivalently our weak learning algorithm) satisfies the following weak learning assumption: informally, that on any restriction of \(\mathcal{D}\) on which the Bayes optimal predictor is non-constant, there should be some \(h\in\mathcal{H}\) that obtains squared error better than that of the best constant predictor. Then our algorithm converges to the Bayes optimal predictor. In Section A we give uniform convergence bounds which guarantee that the algorithm's accuracy and multicalibration guarantees generalize out of sample, with sample sizes that are linear in the pseudodimension of \(\mathcal{H}\).
We then show in Section 5 that in a strong sense this is the "right" weak learning assumption: Multicalibration with respect to \(\mathcal{H}\) implies Bayes optimality if and only if \(\mathcal{H}\) satisfies this weak learning condition. This gives a complete answer to the question of when multicalibration implies Bayes optimality.
In Section 6, we generalize our weak learning condition to a weak learning condition relative to a constrained class of functions \(\mathcal{C}\) (rather than relative to the Bayes optimal predictor), and show that if \(\mathcal{H}\) satisfies the weak learning condition relative to \(\mathcal{C}\), then multicalibration with respect to \(\mathcal{H}\) implies multicalibration with respect to \(\mathcal{C}\), and hence error that is competitive with the best model in \(\mathcal{C}\).
We give a fast, parallelizable implementation of our algorithm and in Section 7 demonstrate its convergence to Bayes optimality on two-dimensional datasets useful for visualization, as well as evaluate the accuracy and calibration guarantees of our algorithm on real Census-derived data using the Folktables package of Ding et al. (2021).
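For readers who want to set up a small version of this kind of experiment, the sketch below shows one way to load a Census-derived task with the folktables package and split off held-out data. The calls follow the package's documented quick-start, but the specific task (ACSIncome), state, and year are illustrative choices of ours, not the exact configuration used in Section 7.

```
# Minimal data-loading sketch (assumptions: folktables' quick-start interface;
# task, state, and year are illustrative, not the paper's exact experimental setup).
import numpy as np
from folktables import ACSDataSource, ACSIncome
from sklearn.model_selection import train_test_split

data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_data = data_source.get_data(states=["CA"], download=True)

# ACSIncome is a binary task; treating the 0/1 label as a bounded regression
# target matches the paper's setting of y in [0, 1].
features, label, _group = ACSIncome.df_to_numpy(acs_data)
X_train, X_test, y_train, y_test = train_test_split(
    features, label.astype(float), test_size=0.3, random_state=0
)
print(X_train.shape, y_train.mean())
```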
### Additional Related Work
Calibration as a statistical objective dates back at least to Dawid (1982). Foster and Vohra (1999) showed a tight connection between marginal calibration and internal (equivalently swap) regret. We extend this characterization to multicalibration. Multicalibration was introduced by Hebert-Johnson et al. (2018), and variants of the original definition have been studied by a number of works (Kim et al., 2019; Jung et al., 2021; Gopalan et al., 2022; Kim et al., 2022; Roth, 2022). We use the \(\ell_{2}\) variant of multicalibration studied in Roth (2022)--but this definition implies all of the other variants of multicalibration up to a change in parameters. Burhanpurkar et al. (2021) first asked the question "when does multicalibration with respect to \(\mathcal{H}\) imply accuracy", and gave a sufficient condition: when \(\mathcal{H}\) contains (refinements of) the levelsets of the
Bayes optimal regression function, together with techniques for attempting to find these. This can be viewed as a "strong learning" assumption, in contrast to our weak learning assumption on \(\mathcal{H}\).
Boosting for binary classification was introduced by Schapire (1990) and has since become a major topic of both theoretical and empirical study -- see Schapire and Freund (2013) for a textbook overview. Both Kim et al. (2019) and Gopalan et al. (2022) have drawn connections between algorithms for multicalibration and boosting for binary classification. In particular, Gopalan et al. (2022) draw direct connections between their split-and-merge multicalibration algorithm and agnostic boosting algorithms of Kalai (2004); Kanade and Kalai (2009); Kalai et al. (2008). Boosting for squared error regression is much less well studied. Freund and Schapire (1997) give a variant of Adaboost (Adaboost.R) that reduces regression examples to infinite sets of classification examples, and requires a base regressor that optimizes a non-standard loss function. Friedman (2001) introduced the popular gradient boosting method, which for squared error regression corresponds to iteratively fitting the residuals of the current model and then applying an additive update, but did not give a theoretical analysis. Duffy and Helmbold (2002) give a theoretical analysis of several different boosting algorithms for squared error regression under several different weak learning assumptions. Their algorithms require base regression algorithms that can be called (and guaranteed to succeed) on arbitrarily relabelled examples from the training distribution, and given their weak learning assumption, their analysis shows how to drive the error of the final model arbitrarily close to \(0\). Weak learning assumptions in this style implicitly make very strong realizability assumptions (that the Bayes error is close to \(0\)), but because the weak learner is called on relabelled samples, it is difficult to enunciate a weak learning condition that is consistent with obtaining Bayes optimal error, but not better. The boosting algorithm we introduce only requires calling a standard regression algorithm on subsets of the examples from the training distribution, which makes it easy for us to define a weak learning condition that lets us drive error to the Bayes optimal rate without realizability assumptions -- thus our results can be viewed as giving an agnostic boosting algorithm for regression.
## 2 Preliminaries
We study prediction tasks over a domain \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\). Here \(\mathcal{X}\) represents the _feature_ domain and \(\mathcal{Y}\) represents the label domain. We focus on the bounded regression setting where \(\mathcal{Y}=[0,1]\) (the scaling to \([0,1]\) is arbitrary). We write \(\mathcal{D}\in\Delta\mathcal{Z}\) to denote a distribution over labelled examples, \(\mathcal{D}_{\mathcal{X}}\) to denote the induced marginal distribution over features, and write \(D\sim\mathcal{D}^{n}\) to denote a dataset consisting of \(n\) labelled examples sampled i.i.d. from \(\mathcal{D}\). We will be interested in the squared error of a model \(f\) with respect to distribution \(\mathcal{D}\), \(\mathbb{E}_{(x,y)\sim\mathcal{D}}[(y-f(x))^{2}].\) We abuse notation and identify datasets \(D=\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\) with the empirical distribution over the examples they contain, and so we can write the empirical squared error over \(D\) as \(\mathbb{E}_{(x,y)\sim D}[(y-f(x))^{2}]=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-f(x_{i}))^{2}\). When taking expectations over a distribution that is clear from context, we will frequently suppress notation indicating the relevant distribution for readability.
We write \(R(f)\) to denote the range of a function \(f\), and when \(R(f)\) is finite, use \(m\) to denote the cardinality of its range: \(m=|R(f)|\). We are interested in finding models that are _multicalibrated_ with respect to a class of real valued functions \(\mathcal{H}\). We use an \(\ell_{2}\) notion of multicalibration as used in Roth (2022):
**Definition 2.1** (Multicalibration).: _Fix a distribution \(\mathcal{D}\in\Delta\mathcal{Z}\) and a model \(f:\mathcal{X}\rightarrow[0,1]\) that maps onto a countable subset of its range. Let \(\mathcal{H}\) be an arbitrary collection of real valued functions \(h:\mathcal{X}\rightarrow\mathbb{R}\). We say that \(f\) is \(\alpha\)-approximately multicalibrated with respect to \(\mathcal{D}\) and \(\mathcal{H}\) if for every \(h\in\mathcal{H}\):_
\[K_{2}(f,h,\mathcal{D})=\sum_{v\in R(f)}\Pr_{(x,y)\sim\mathcal{D}}[f(x)=v]\left( \operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[h(x)(y-v)|f(x)=v]\right)^{2} \leqslant\alpha.\]
_We say that \(f\) is \(\alpha\)-approximately calibrated if:_
\[K_{2}(f,\mathcal{D})=\sum_{v\in R(f)}\Pr_{(x,y)\sim\mathcal{D}}[f(x)=v]\left( \operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-v)|f(x)=v]\right)^{2} \leqslant\alpha.\]
_If \(\alpha=0\), then we simply say that a model is multicalibrated or calibrated. We will sometimes refer to \(K_{2}(f,\mathcal{D})\) as the mean squared calibration error of a model \(f\)._
**Remark 2.2**.: _When the functions \(h(x)\) have binary range, we can view them as indicator functions for some subset of the data domain \(S\subseteq\mathcal{X}\), in which case multicalibration corresponds to asking for calibration conditional on membership in these subsets \(S\). Allowing the functions \(h\) to have real valued range is only a more general condition. Our notion of approximate multicalibration takes a weighted average over the level sets \(v\) of the predictor \(f\), weighted by the probability that \(f(x)=v\). This is necessary for any kind of out of sample generalization statement -- otherwise we could not even necessarily measure calibration error from a finite sample. Other work on multicalibration uses related measures of multicalibration that we think of as \(\ell_{1}\) or \(\ell_{\infty}\) variants, that we can write as \(K_{1}(f,h,\mathcal{D})=\sum_{v\in R(f)}\Pr_{(x,y)\sim\mathcal{D}}[f(x)=v]\left|\mathbb{E}_{(x,y)\sim\mathcal{D}}[h(x)(y-v)|f(x)=v]\right|\) and \(K_{\infty}(f,h,\mathcal{D})=\max_{v\in R(f)}\Pr_{(x,y)\sim\mathcal{D}}[f(x)=v]\left|\mathbb{E}_{(x,y)\sim\mathcal{D}}[h(x)(y-v)|f(x)=v]\right|\). These notions are related to each other: \(K_{2}(f,h,\mathcal{D})\leq K_{1}(f,h,\mathcal{D})\leq\sqrt{K_{2}(f,h,\mathcal{D})}\) and \(K_{\infty}(f,h,\mathcal{D})\leq K_{1}(f,h,\mathcal{D})\leq mK_{\infty}(f,h,\mathcal{D})\) [Roth, 2022]._
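On a finite sample, all three of these calibration measures can be estimated directly from the level sets of \(f\). The following sketch (the function name is ours) computes the empirical \(K_2\), \(K_1\), and \(K_\infty\) multicalibration errors of a predictor with a discrete range against a single auditor function \(h\); taking a maximum over a finite class \(\mathcal{H}\) gives the corresponding multicalibration errors.

```
# Sketch: empirical K_2, K_1, K_infty multicalibration errors for one auditor h.
import numpy as np

def multicalibration_errors(f_vals, h_vals, y):
    """f_vals, h_vals, y: equal-length 1-d arrays of predictions f(x),
    auditor values h(x), and labels y on a sample."""
    f_vals, h_vals, y = (np.asarray(a, dtype=float) for a in (f_vals, h_vals, y))
    k2 = k1 = kinf = 0.0
    for v in np.unique(f_vals):
        mask = f_vals == v
        p_v = mask.mean()                              # Pr[f(x) = v]
        bias = np.mean(h_vals[mask] * (y[mask] - v))   # E[h(x)(y - v) | f(x) = v]
        k2 += p_v * bias ** 2
        k1 += p_v * abs(bias)
        kinf = max(kinf, p_v * abs(bias))
    return k2, k1, kinf
```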
We will characterize the relationship between multicalibration and Bayes optimality.
**Definition 2.3** (Bayes Optimal Predictor).: _Let \(f^{*}:\mathcal{X}\to[0,1]\). We say that \(f^{*}\) is the Bayes optimal predictor for \(\mathcal{D}\) if:_
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-f^{*}(x))^{2}]\leq\min_{f:\mathcal{X}\to[0,1]}\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-f(x))^{2}]\]
_The Bayes Optimal predictor satisfies: \(f^{*}(x)=\mathbb{E}_{(x^{\prime},y)\sim\mathcal{D}}\left[y|x^{\prime}=x\right]\). We say that a function \(f:\mathcal{X}\to[0,1]\) is \(\gamma\)-approximately Bayes optimal if_
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-f(x))^{2}]\leq\mathop{\mathbb{ E}}_{(x,y)\sim\mathcal{D}}[(y-f^{*}(x))^{2}]+\gamma.\]
Throughout this paper, we will denote the Bayes optimal predictor as \(f^{*}\).
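As a quick illustration of Definition 2.3 on a small discrete domain, the sketch below (entirely our own) estimates \(f^{*}(x)=\mathbb{E}[y\mid x]\) empirically and compares its squared error to that of the best constant predictor, the trivial baseline that reappears in our weak learning condition later.

```
# Sketch: the Bayes optimal predictor on a four-point feature domain.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.integers(0, 4, size=n)                   # finite feature domain {0, 1, 2, 3}
true_mean = np.array([0.1, 0.4, 0.6, 0.9])       # f*(x) = E[y | x]
y = (rng.random(n) < true_mean[x]).astype(float)

f_star_hat = np.array([y[x == v].mean() for v in range(4)])   # empirical E[y | x = v]
bayes_err = np.mean((y - f_star_hat[x]) ** 2)
const_err = np.mean((y - y.mean()) ** 2)
print(f"estimated Bayes error {bayes_err:.4f}  <=  best constant error {const_err:.4f}")
```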
## 3 A Characterization of Multicalibration
In this section we give a simple "swap-regret" like characterization of multicalibration for any class of functions \(\mathcal{H}\) that is closed under affine transformations:
**Definition 3.1**.: _A class of functions \(\mathcal{H}\) is closed under affine transformations if for every \(a,b\in\mathbb{R}\), if \(h(x)\in\mathcal{H}\) then \(h^{\prime}(x):=ah(x)+b\in\mathcal{H}\)._
As already discussed, closure under affine transformation is a mild assumption: it is already satisfied by many classes of functions \(\mathcal{H}\) like linear and polynomial functions and decision trees, and can be enforced for neural network architectures when it is not already satisfied by adding two additional parameters \(a\) and \(b\) without affecting our ability to optimize over the class.
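When a model class does not already satisfy Definition 3.1, the closure can be enforced mechanically by composing each model with two extra scalars. A minimal sketch of such a wrapper (our own helper, not part of the paper's code) is shown below; for differentiable models the two scalars would simply be trained jointly with the rest of the parameters.

```
# Sketch: enforce closure under affine transformations by wrapping a base
# predictor h with a learned map x -> a * h(x) + b. (Helper names are ours.)
import numpy as np

class AffineWrapped:
    def __init__(self, base_predict, a=1.0, b=0.0):
        self.base_predict = base_predict   # callable: X -> 1-d array of predictions
        self.a, self.b = a, b

    def predict(self, X):
        return self.a * self.base_predict(X) + self.b

    def fit_affine(self, X, y):
        # Closed-form least squares for the two scalars (a, b), holding the base fixed.
        h = np.asarray(self.base_predict(X), dtype=float)
        A = np.column_stack([h, np.ones_like(h)])
        (self.a, self.b), *_ = np.linalg.lstsq(A, y, rcond=None)
        return self
```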
The first direction of our characterization states that if \(f\) fails the multicalibration condition for some \(h\in\mathcal{H}\), then there is some other \(h^{\prime}\in\mathcal{H}\) that improves over \(f\) in terms of squared error, _when restricted to a level set of \(f\)_. The second direction states the opposite: if \(f\) is calibrated (but not necessarily multicalibrated), and if there is some level set of \(f\) on which \(h\) improves over \(f\) in terms of squared error, then in fact \(f\) must fail the multicalibration condition for \(h\).
**Theorem 3.2**.: _Suppose \(\mathcal{H}\) is closed under affine transformation. Fix a model \(f:\mathcal{X}\to\mathbb{R}\) and a levelset \(v\in R(f)\) of \(f\). Then:_
1. _If there exists an_ \(h\in\mathcal{H}\) _such that:_ \[\mathbb{E}[h(x)(y-v)|f(x)=v]\geq\alpha,\] _for_ \(\alpha>0\)_, then there exists an_ \(h^{\prime}\in\mathcal{H}\) _such that:_ \[\mathbb{E}[(f(x)-y)^{2}-(h^{\prime}(x)-y)^{2}|f(x)=v]\geq\frac{\alpha^{2}}{ \mathbb{E}[h(x)^{2}|f(x)=v]},\]
2. _If_ \(f\) _is calibrated and there exists an_ \(h\in\mathcal{H}\) _such that_ \[\mathbb{E}[(f(x)-y)^{2}-(h(x)-y)^{2}|f(x)=v]\geq\alpha,\] _then:_ \[\mathbb{E}[h(x)(y-v)|f(x)=v]\geq\frac{\alpha}{2}.\]
Proof.: We prove each direction in turn.
**Lemma 3.3**.: _Fix a model \(f:\mathcal{X}\rightarrow\mathbb{R}\). Suppose for some \(v\in R(f)\) there is an \(h\in\mathcal{H}\) such that:_
\[\mathbb{E}[h(x)(y-v)|f(x)=v]\geq\alpha\]
_Let \(h^{\prime}=v+\eta h(x)\) for \(\eta=\frac{\alpha}{\mathbb{E}[h(x)^{2}|f(x)=v]}\). Then:_
\[\mathbb{E}[(f(x)-y)^{2}-(h^{\prime}(x)-y)^{2}|f(x)=v]\geq\frac{\alpha^{2}}{ \mathbb{E}[h(x)^{2}|f(x)=v]}\]
Proof.: We calculate:
\[\mathbb{E}[(f(x)-y)^{2}-(h^{\prime}(x)-y)^{2}|f(x)=v]\] \[= \mathbb{E}[(v-y)^{2}-(v+\eta h(x)-y)^{2}|f(x)=v]\] \[= \mathbb{E}[v^{2}-2vy+y^{2}-(v+\eta h(x))^{2}+2y(v+\eta h(x))-y^{2 }|f(x)=v]\] \[= \mathbb{E}[2y\eta h(x)-2v\eta h(x)-\eta^{2}h(x)^{2}|f(x)=v]\] \[= \mathbb{E}[2\eta h(x)(y-v)-\eta^{2}h(x)^{2}|f(x)=v]\] \[\geq 2\eta\alpha-\eta^{2}\,\mathbb{E}[h(x)^{2}|f(x)=v]\] \[= \frac{\alpha^{2}}{\mathbb{E}[h(x)^{2}|f(x)=v]}\]
Where the last line follows from the definition of \(\eta\).
The first direction of Theorem 3.2 follows from Lemma 3.3, and the observation that since \(\mathcal{H}\) is closed under affine transformations, the function \(h^{\prime}\) defined in the statement of Lemma 3.3 is in \(\mathcal{H}\). Now for the second direction.
**Lemma 3.4**.: _Fix a model \(f:\mathcal{X}\rightarrow\mathbb{R}\). Suppose for some \(v\in R(f)\) there is an \(h\in\mathcal{H}\) such that:_
\[\mathbb{E}[(\bar{y}_{v}-y)^{2}-(h(x)-y)^{2}|f(x)=v]\geq\alpha,\]
_where \(\bar{y}_{v}=\mathbb{E}[y\mid f(x)=v]\). Then it must be that:_
\[\mathbb{E}[h(x)(y-\bar{y}_{v})|f(x)=v]\geq\frac{\alpha}{2}\]
Proof.: We calculate:
\[\begin{array}{ll}&\mathop{\mathbb{E}}\limits_{(x,y)\sim\mathcal{D}}[h(x)(y-\bar{y}_{v})|f(x)=v]\\ =&\mathop{\mathbb{E}}\limits_{(x,y)\sim\mathcal{D}}[h(x)y|f(x)=v]-\bar{y}_{v}\mathop{\mathbb{E}}\limits_{(x,y)\sim\mathcal{D}}[h(x)|f(x)=v]\\ =&\frac{1}{2}\left(2\mathop{\mathbb{E}}\limits_{(x,y)\sim\mathcal{D}}[h(x)y|f(x)=v]-2\bar{y}_{v}\mathop{\mathbb{E}}\limits_{(x,y)\sim\mathcal{D}}[h(x)|f(x)=v]\right)\\ \geqslant&\frac{1}{2}\left(2\mathop{\mathbb{E}}\limits_{(x,y)\sim\mathcal{D}}[h(x)y|f(x)=v]-2\bar{y}_{v}\mathop{\mathbb{E}}\limits_{(x,y)\sim\mathcal{D}}[h(x)|f(x)=v]-\mathop{\mathbb{E}}\limits_{(x,y)\sim\mathcal{D}}[(h(x)-\bar{y}_{v})^{2}|f(x)=v]\right)\\ =&\frac{1}{2}\left(\mathop{\mathbb{E}}\limits_{(x,y)\sim\mathcal{D}}[2h(x)y-h(x)^{2}-2\bar{y}_{v}y+\bar{y}_{v}^{2}|f(x)=v]\right)\\ =&\frac{1}{2}\left(\mathop{\mathbb{E}}\limits_{(x,y)\sim\mathcal{D}}[(\bar{y}_{v}-y)^{2}-(h(x)-y)^{2}|f(x)=v]\right)\\ \geqslant&\frac{\alpha}{2}\end{array}\]
where the third-to-last line follows by expanding \((h(x)-\bar{y}_{v})^{2}\) and using that \(\mathbb{E}[\bar{y}_{v}y\mid f(x)=v]=\bar{y}_{v}^{2}\) by the definition of \(\bar{y}_{v}\).
For any calibrated \(f\) it follows that \(v=\mathbb{E}[y\mid f(x)=v]=\bar{y}_{v}\), and so for calibrated \(f\) we have that if
\[\mathbb{E}[(v-y)^{2}-(h(x)-y)^{2}|f(x)=v]\geqslant\alpha,\]
then:
\[\mathbb{E}[h(x)(y-v)|f(x)=v]\geqslant\frac{\alpha}{2}.\]
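As a quick numerical sanity check of the first direction of Theorem 3.2, the simulation below (entirely our own) draws samples from a single level set, forms the corrected predictor \(h'(x)=v+\eta h(x)\) of Lemma 3.3, and confirms that the squared error improvement matches the guaranteed bound \(\alpha^{2}/\mathbb{E}[h(x)^{2}\mid f(x)=v]\) at this choice of \(\eta\).

```
# Sketch: numerical check of Lemma 3.3 on one level set f(x) = v.
import numpy as np

rng = np.random.default_rng(1)
n, v = 50_000, 0.5
h = rng.normal(size=n)                                        # auditor values on the level set
y = np.clip(v + 0.3 * h + 0.1 * rng.normal(size=n), 0, 1)     # labels correlated with h

alpha = np.mean(h * (y - v))              # empirical multicalibration violation E[h(x)(y - v)]
eta = alpha / np.mean(h ** 2)
h_prime = v + eta * h                     # the corrected predictor from Lemma 3.3

gain = np.mean((v - y) ** 2) - np.mean((h_prime - y) ** 2)
bound = alpha ** 2 / np.mean(h ** 2)
print(f"squared error improvement {gain:.5f} vs. guaranteed bound {bound:.5f}")
```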
## 4 An Algorithm (For Multicalibration And Regression Boosting)
We now give a single algorithm, and then show how to analyze it both as an algorithm for obtaining a multicalibrated predictor \(f\), and as a boosting algorithm for squared error regression.
Let \(m\in\mathbb{N}^{+}\) be a discretization term, and let \([1/m]:=\{0,\frac{1}{m},\ldots,\frac{m-1}{m},1\}\) denote the set of points in \([0,1]\) that are multiples of \(1/m\). We will learn a model \(f\) whose range is \([1/m]\), which we will enforce by _rounding_ its outputs to this range as necessary using the following operation:
**Definition 4.1** (\(\mathrm{Round}(f;m)\)).: _Let \(\mathcal{F}\) be the family of all functions \(f:\mathcal{X}\rightarrow\mathbb{R}\). Let \(\mathrm{Round}:\mathcal{F}\times\mathbb{N}^{+}\rightarrow\mathcal{F}\) be a function such that \(\mathrm{Round}(f;m)\) outputs the function \(\tilde{f}\) defined by \(\tilde{f}(x)=\operatorname*{arg\,min}_{v\in[1/m]}|f(x)-v|\) (breaking ties arbitrarily)._
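In code, the \(\mathrm{Round}(\cdot;m)\) operation is just nearest-multiple rounding onto the grid \([1/m]\); a short numpy version (our own helper) follows, and is reused in the later sketches.

```
# Sketch: Round(.; m) as nearest-point rounding onto {0, 1/m, ..., 1}.
import numpy as np

def round_to_grid(preds, m):
    # Values outside [0, 1] snap to the nearest endpoint of the grid.
    return np.clip(np.round(np.asarray(preds, dtype=float) * m) / m, 0.0, 1.0)
```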
Unlike other algorithms for multicalibration which make use of _agnostic learning_ oracles for binary classification, our algorithm makes use of an algorithm for solving squared-error regression problems over \(\mathcal{H}\):
**Definition 4.2**.: \(A_{\mathcal{H}}\) _is a squared error regression oracle for a class of real valued functions \(\mathcal{H}\) if for every \(\mathcal{D}\in\Delta\mathcal{Z}\), \(A_{\mathcal{H}}(\mathcal{D})\) outputs a function \(h\in\mathcal{H}\) such that_
\[h\in\arg\min_{h^{\prime}\in\mathcal{H}}\mathop{\mathbb{E}}\limits_{(x,y)\sim \mathcal{D}}[(h^{\prime}(x)-y)^{2}].\]
For example, if \(\mathcal{H}\) is the set of all linear functions, then \(A_{\mathcal{H}}\) simply solves a linear regression problem (which has a closed form solution). Algorithm 1 (LSBoost2) repeats the following operation until it no longer decreases overall squared error: it runs squared error regression on each of the level sets of \(f_{t}\), and then replaces those level sets with the solutions to the regression problems, and rounds the output to \([1/m]\).
Footnote 2: LSBoost can be taken to stand for either “Level Set Boost” or “Least Squares Boost”, at the reader’s discretion.
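For the linear case mentioned above, the regression oracle has a one-line closed form. Here is a minimal sketch (our own helper, not the authors' released code) that returns the least-squares affine predictor fit to whatever subsample it is handed; any other regression routine with the same `(X, y) -> predictor` interface, for example a wrapped scikit-learn regressor, would serve equally well.

```
# Sketch: a squared-error regression oracle A_H for H = affine functions of x.
import numpy as np

def linear_regression_oracle(X, y):
    """Returns a callable minimizing empirical squared error on (X, y) via
    ordinary least squares with an intercept."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda X_new: np.column_stack([np.asarray(X_new, float), np.ones(len(X_new))]) @ coef
```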
We will now analyze the algorithm first as a multicalibration algorithm, and then as a boosting algorithm. For simplicity, in this section we will analyze the algorithm as if it is given direct access to the distribution \(\mathcal{D}\). In practice, the algorithm will be run on the empirical distribution over a dataset \(D\sim\mathcal{D}^{n}\), and the multicalibration guarantees proven in this section will hold for this empirical distribution. In Section A we prove generalization theorems, which allow us to translate our in-sample error and multicalibration guarantees over \(D\) to out-of-sample guarantees over \(\mathcal{D}\).
```
Let \(m=\frac{2B}{\alpha}\).
Let \(f_{0}=\mathrm{Round}(f;m)\), \(\mathrm{err}_{0}=\mathbb{E}_{(x,y)\sim\mathcal{D}}[(f_{0}(x)-y)^{2}]\), \(\mathrm{err}_{-1}=\infty\) and \(t=0\).
while \((\mathrm{err}_{t-1}-\mathrm{err}_{t})\geqslant\frac{\alpha}{2B}\) do
    for each \(v\in[1/m]\) do
        Let \(\mathcal{D}^{t+1}_{v}=\mathcal{D}|(f_{t}(x)=v)\).
        Let \(h^{t+1}_{v}=A_{\mathcal{H}}(\mathcal{D}^{t+1}_{v})\).
    Let \(\tilde{f}_{t+1}(x)=\sum_{v\in[1/m]}\mathbbm{1}[f_{t}(x)=v]\cdot h^{t+1}_{v}(x)\) and \(f_{t+1}=\mathrm{Round}(\tilde{f}_{t+1},m)\).
    Let \(\mathrm{err}_{t+1}=\mathbb{E}_{(x,y)\sim\mathcal{D}}[(f_{t+1}(x)-y)^{2}]\) and \(t=t+1\).
Output \(f_{t-1}\).
```
**Algorithm 1** LSBoost\((f,\alpha,A_{\mathcal{H}},\mathcal{D},B)\)
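Putting the pieces together, below is a compact and deliberately unoptimized Python sketch of Algorithm 1 on a finite sample. It accepts any per-level-set regression routine with the `(X, y) -> predictor` interface of the `linear_regression_oracle` sketch above; the helper names, the constant initialization, and the default parameters are ours, not the authors' released implementation (which is available at the GitHub link in the abstract).

```
# Sketch of LSBoost on a sample: refit each level set with the oracle, round back
# onto the grid [1/m], stop once squared error improves by less than alpha/(2B).
import numpy as np

def round_to_grid(preds, m):
    return np.clip(np.round(np.asarray(preds, dtype=float) * m) / m, 0.0, 1.0)

def ls_boost(X, y, oracle, alpha=0.01, B=1.0, max_rounds=1000):
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    m = int(np.ceil(2 * B / alpha))
    base = float(y.mean())                        # start from the constant predictor
    f_vals = round_to_grid(np.full(len(y), base), m)
    err_curr = np.mean((f_vals - y) ** 2)
    history = []                                  # per round: {level value v -> fitted h_v}

    for _ in range(max_rounds):
        level_models, new_vals = {}, f_vals.copy()
        for v in np.unique(f_vals):
            mask = f_vals == v
            h_v = oracle(X[mask], y[mask])        # squared-error regression on D | f(x) = v
            level_models[v] = h_v
            new_vals[mask] = h_v(X[mask])
        new_vals = round_to_grid(new_vals, m)
        new_err = np.mean((new_vals - y) ** 2)
        if err_curr - new_err < alpha / (2 * B):  # halting rule: output f_{t-1}
            break
        history.append(level_models)
        f_vals, err_curr = new_vals, new_err

    def predict(X_new):
        # Replay the rounds on new points, starting from the same constant predictor.
        X_new = np.asarray(X_new, dtype=float)
        vals = round_to_grid(np.full(len(X_new), base), m)
        for level_models in history:
            new_vals = vals.copy()
            for v, h_v in level_models.items():
                mask = vals == v
                if mask.any():
                    new_vals[mask] = h_v(X_new[mask])
            vals = round_to_grid(new_vals, m)
        return vals

    return predict
```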
### Analysis as a Multicalibration Algorithm
**Theorem 4.3**.: _Fix any distribution \(\mathcal{D}\in\Delta\mathcal{Z}\), any model \(f:\mathcal{X}\rightarrow[0,1]\), any \(\alpha<1\), any class of real valued functions \(\mathcal{H}\) that is closed under affine transformations, and a squared error regression oracle \(A_{\mathcal{H}}\) for \(\mathcal{H}\). For any bound \(B>0\) let:_
\[\mathcal{H}_{B}=\{h\in\mathcal{H}:\max_{x\in\mathcal{X}}h(x)^{2}\leqslant B\}\]
_be the set of functions in \(\mathcal{H}\) with squared magnitude bounded by \(B\). Then LSBoost\((f,\alpha,A_{\mathcal{H}},\mathcal{D},B)\) (Algorithm 1) halts after at most \(T\leqslant\frac{2B}{\alpha}\) many iterations and outputs a model \(f_{T-1}\) such that \(f_{T-1}\) is \(\alpha\)-approximately multicalibrated with respect to \(\mathcal{D}\) and \(\mathcal{H}_{B}\)._
**Remark 4.4**.: _Note the form of this theorem -- we do not promise multicalibration at approximation parameter \(\alpha\) for all of \(\mathcal{H}\), but only for \(\mathcal{H}_{B}\) -- i.e. those functions in \(\mathcal{H}\) satisfying a bound on their squared value. This is necessary, since \(\mathcal{H}\) is closed under affine transformations. To see this, note that if \(\mathbb{E}[h(x)(y-v)]\geqslant\alpha\), then it must be that \(\mathbb{E}[c\cdot h(x)(y-v)]\geqslant c\cdot\alpha\). Since \(h^{\prime}(x)=ch(x)\) is also in \(\mathcal{H}\) by assumption, approximate multicalibration bounds must always also be paired with a bound on the norm of the functions for which we promise those bounds._
Proof.: Since \(f_{0}\) takes values in \([0,1]\) and \(y\in[0,1]\), we have \(\mathrm{err}_{0}\leqslant 1\), and by definition \(\mathrm{err}_{T}\geqslant 0\) for all \(T\). By construction, if the algorithm has not halted at round \(t\) it must be that \(\mathrm{err}_{t}\leqslant\mathrm{err}_{t-1}-\frac{\alpha}{2B}\), and so the algorithm must halt after at most \(T\leqslant\frac{2B}{\alpha}\) many iterations to avoid a contradiction.
It remains to show that when the algorithm halts at round \(T\), the model \(f_{T-1}\) that it outputs is \(\alpha\)-approximately multi-calibrated with respect to \(\mathcal{D}\) and \(\mathcal{H}_{B}\). We will show that if this is not the case, then \(\mathrm{err}_{T-1}-\mathrm{err}_{T}>\frac{\alpha}{2B}\), which will be a contradiction to the halting criterion of the algorithm.
Suppose that \(f_{T-1}\) is not \(\alpha\)-approximately multicalibrated with respect to \({\cal D}\) and \({\cal H}_{B}\). This means there must be some \(h\in{\cal H}_{B}\) such that:
\[\sum_{v\in[1/m]}\Pr_{(x,y)\sim{\cal D}}[f_{T-1}(x)=v]\left(\mathop{\mathbb{E}}_ {(x,y)\sim{\cal D}}[h(x)(y-v)|f_{T-1}(x)=v]\right)^{2}>\alpha\]
For each \(v\in[1/m]\) define
\[\alpha_{v}=\Pr_{(x,y)\sim{\cal D}}[f_{T-1}(x)=v]\left(\mathop{\mathbb{E}}_{(x,y)\sim{\cal D}}[h(x)(y-v)|f_{T-1}(x)=v]\right)^{2}\]
So we have \(\sum_{v\in[1/m]}\alpha_{v}>\alpha\).
Applying the 1st part of Theorem 3.2 we learn that for each \(v\), there must be some \(h_{v}\in{\cal H}\) such that:
\[\mathop{\mathbb{E}}[(f_{T-1}(x)-y)^{2}-(h_{v}(x)-y)^{2}|f_{T-1}(x )=v] > \frac{1}{\mathop{\mathbb{E}}[h(x)^{2}|f_{T-1}(x)=v]}\cdot\frac{ \alpha_{v}}{\Pr_{(x,y)\sim{\cal D}}[f_{T-1}(x)=v]}\] \[\geqslant \frac{1}{B}\frac{\alpha_{v}}{\Pr_{(x,y)\sim{\cal D}}[f_{T-1}(x)=v]}\]
where the last inequality follows from the fact that \(h\in{\cal H}_{B}\). Now we can compute:
\[\mathop{\mathbb{E}}_{(x,y)\sim{\cal D}}[(f_{T-1}(x)-y)^{2}-(\tilde {f}_{T}(x)-y)^{2}]\] \[= \sum_{v\in[1/m]}\Pr_{(x,y)\sim{\cal D}}[f_{T-1}(x)=v]\mathop{ \mathbb{E}}_{(x,y)\sim{\cal D}}[(f_{T-1}(x)-y)^{2}-(\tilde{f}_{T}(x)-y)^{2}|f _{T-1}(x)=v]\] \[= \sum_{v\in[1/m]}\Pr_{(x,y)\sim{\cal D}}[f_{T-1}(x)=v]\mathop{ \mathbb{E}}_{(x,y)\sim{\cal D}}[(f_{T-1}(x)-y)^{2}-(h_{v}^{T}(x)-y)^{2}|f_{T-1} (x)=v]\] \[\geqslant \sum_{v\in[1/m]}\frac{\alpha_{v}}{B}\] \[> \frac{\alpha}{B}\]
Here the third line follows from the definition of \(\tilde{f}_{T}\) and the fourth line follows from the fact that \(h_{v}\in{\cal H}\) and that \(h_{v}^{T}\) minimizes squared error on \({\cal D}_{v}^{T}\) amongst all \(h\in{\cal H}\).
Finally we calculate:
\[\mathop{\rm err}_{T-1}-\mathop{\rm err}_{T}\] \[= \mathop{\mathbb{E}}_{(x,y)\sim{\cal D}}[(f_{T-1}(x)-y)^{2}-(f_{T}( x)-y)^{2}]\] \[= \mathop{\mathbb{E}}_{(x,y)\sim{\cal D}}[(f_{T-1}(x)-y)^{2}-( \tilde{f}_{T}(x)-y)^{2}]+\mathop{\mathbb{E}}_{(x,y)\sim{\cal D}}[(\tilde{f}_{ T}(x)-y)^{2}-(f_{T}(x)-y)^{2}]\] \[> \frac{\alpha}{B}+\mathop{\mathbb{E}}_{(x,y)\sim{\cal D}}[(\tilde{ f}_{T}(x)-y)^{2}-(f_{T}(x)-y)^{2}]\] \[> \frac{\alpha}{B}-\frac{1}{m}\] \[\geqslant \frac{\alpha}{2B}\]
where the last inequality follows from the fact that \(m\geqslant\frac{2B}{\alpha}\).
The 2nd inequality follows from the fact that for every pair \((x,y)\):
\[(\tilde{f}_{T}(x)-y)^{2}-(f_{T}(x)-y)^{2}\geq-\frac{1}{m}\]
To see this we consider two cases. Since \(y\in[0,1]\), if \(\tilde{f}_{T}(x)>1\) or \(\tilde{f}_{T}(x)<0\) then the Round operation decreases squared error and we have \((\tilde{f}_{T}(x)-y)^{2}-(f_{T}(x)-y)^{2}\geq 0\). In the remaining case we have \(f_{T}(x)\in[0,1]\) and \(\Delta=\tilde{f}_{T}(x)-f_{T}(x)\) is such that \(|\Delta|\leq\frac{1}{2m}\). In this case we can compute:
\[(\tilde{f}_{T}(x)-y)^{2}-(f_{T}(x)-y)^{2} = (f_{T}(x)+\Delta-y)^{2}-(f_{T}(x)-y)^{2}\] \[= 2\Delta(f_{T}(x)-y)+\Delta^{2}\] \[\geq -2|\Delta|+\Delta^{2}\] \[\geq -\frac{1}{m}\]
### Analysis as a Boosting Algorithm
We now analyze the same algorithm (Algorithm 1) as a boosting algorithm designed to boost a "weak learning" algorithm \(A_{\mathcal{H}}\) to a strong learning algorithm. Often in the boosting literature, a "strong learning" algorithm is one that can obtain accuracy arbitrarily close to perfect, which is only possible under strong realizability assumptions. In this paper, by "strong learning", we mean that Algorithm 1 should output a model that is close to Bayes optimal, which is a goal we can enunciate for any distribution \(\mathcal{D}\) without needing to make realizability assumptions. (Observe that if the Bayes optimal predictor has zero error, then our meaning of strong learning corresponds to the standard meaning, so our analysis is only more general).
We now turn to our definition of weak learning. Intuitively, a weak learning algorithm should return a hypothesis that makes predictions that are slightly better than trivial whenever doing so is possible. We take "trivial" predictions to be those of the best _constant_ predictor as measured by squared error -- i.e. the squared error obtained by simply returning the label mean. A "weak learning" algorithm for us can be run on any restriction of the data distribution \(\mathcal{D}\) to a subset \(S\subseteq\mathcal{X}\), and must return a hypothesis with squared error slightly better than the squared error of the best constant prediction, whenever the Bayes optimal predictor \(f^{*}\) has squared error slightly better than a constant predictor; on restrictions for which the Bayes optimal predictor also does not improve over constant prediction, our weak learning algorithm is not required to do better either.
Traditionally, "weak learning" assumptions do not distinguish between the optimization ability of the algorithm and the representation ability of the hypothesis class it optimizes over. Since we have defined a squared error regression oracle \(A_{\mathcal{H}}\) as exactly optimizing the squared error over some class \(\mathcal{H}\), we will state our weak learning assumption as an assumption on the representation ability of \(\mathcal{H}\)--but this is not important for our analysis here. To prove Theorem 4.6 we could equally well assume that \(A_{\mathcal{H}}\) returns a hypothesis \(h\) that improves over a constant predictor whenever one exists, without assuming that \(h\) optimizes squared error over all of \(\mathcal{H}\).
**Definition 4.5** (Weak Learning Assumption).: _Fix a distribution \(\mathcal{D}\in\Delta\mathcal{Z}\) and a class of functions \(\mathcal{H}\). Let \(f^{*}(x)=\mathbb{E}_{y\sim\mathcal{D}(x)}[y]\) denote the true conditional label expectation given \(x\). We say that \(\mathcal{H}\) satisfies the \(\gamma\)-weak learning condition relative to \(\mathcal{D}\) if for every \(S\subseteq\mathcal{X}\) with \(\Pr_{x\sim\mathcal{D}_{\mathcal{X}}}[x\in S]>0\), if:_
\[\mathbb{E}[(f^{*}(x)-y)^{2}|x\in S]<\min_{c\in\mathbb{R}}\mathbb{E}[(c-y)^{2}| x\in S]-\gamma\]
_then there exists an \(h\in\mathcal{H}\) such that:_
\[\mathbb{E}[(h(x)-y)^{2}|x\in S]<\min_{c\in\mathbb{R}}\mathbb{E}[(c-y)^{2}|x \in S]-\gamma\]
_When \(\gamma=0\) we simply say that \(\mathcal{H}\) satisfies the weak learning condition relative to \(\mathcal{D}\)._
Observe why our weak learning assumption is "weak": the Bayes optimal predictor \(f^{*}\) may improve arbitrarily over the best constant predictor on some set \(S\) in terms of squared error, but in this case we only require of \(\mathcal{H}\) that it include a hypothesis that improves by some \(\gamma\) which might be very small.
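The weak learning condition is straightforward to probe empirically on any candidate subset \(S\): compare the squared error of the best constant predictor (the conditional label variance) against the best squared error achieved by models fit from \(\mathcal{H}\). A small diagnostic sketch (ours) follows; `candidate_fits` is any collection of `(X, y) -> predictor` routines, such as the linear oracle defined later in this section.

```
# Sketch: measure how much the best candidate from H beats the best constant on S.
import numpy as np

def weak_learning_margin(X_S, y_S, candidate_fits):
    """Positive on every relevant subset S is what the gamma-weak learning
    condition asks for (up to the additive slack gamma)."""
    X_S, y_S = np.asarray(X_S, dtype=float), np.asarray(y_S, dtype=float)
    const_err = np.var(y_S)            # min_c E[(c - y)^2 | S], attained at c = mean(y_S)
    best_h_err = min(np.mean((fit(X_S, y_S)(X_S) - y_S) ** 2) for fit in candidate_fits)
    return const_err - best_h_err
```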
Since the \(\gamma\)-weak learning condition does not make any requirements on \(\mathcal{H}\) on sets for which \(f^{*}(x)\) improves over a constant predictor by less than \(\gamma\), the best we can hope to prove under this assumption is \(\gamma\)-approximate Bayes optimality, which is what we do next.
**Theorem 4.6**.: _Fix any distribution \(\mathcal{D}\in\Delta\mathcal{Z}\), any model \(f:\mathcal{X}\rightarrow[0,1]\), any \(\gamma>0\), any class of real valued functions \(\mathcal{H}\) that satisfies the \(\gamma\)-weak learning condition relative to \(\mathcal{D}\), and a squared error regression oracle \(A_{\mathcal{H}}\) for \(\mathcal{H}\). Let \(\alpha=\gamma\) and \(B=1/\gamma\) (or any pair such that \(\alpha/B=\gamma^{2}\)). Then LSBoost\((f,\alpha,A_{\mathcal{H}},\mathcal{D},B)\) halts after at most \(T\leq\frac{2}{\gamma^{2}}\) many iterations and outputs a model \(f_{T-1}\) such that \(f_{T-1}\) is \(2\gamma\)-approximately Bayes optimal over \(\mathcal{D}\):_
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f_{T-1}(x)-y)^{2}]\leq\mathop{ \mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}(x)-y)^{2}]+2\gamma\]
_where \(f^{*}(x)=\mathop{\mathbb{E}}_{y\sim\mathcal{D}(x)}[y]\) is the function that minimizes squared error over \(\mathcal{D}\)._
Proof.: At each round \(t\) before the algorithm halts, we have by construction that \(\mathrm{err}_{t}\leq\mathrm{err}_{t-1}-\frac{\alpha}{2B}\), and since the squared error of \(f_{0}\) is at most \(1\), and squared error is non-negative, we must have \(T\leq\frac{2B}{\alpha}=\frac{2}{\gamma^{2}}\).
Now suppose the algorithm halts at round \(T\) and outputs \(f_{T-1}\). It must be that \(\mathrm{err}_{T}>\mathrm{err}_{T-1}-\frac{\gamma^{2}}{2}\). Suppose also that \(f_{T-1}\) is not \(2\gamma\)-approximately Bayes optimal:
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f_{T-1}(x)-y)^{2}-(f^{*}(x)-y)^{2 }]>2\gamma\]
We can write this condition as:
\[\sum_{v\in[1/m]}\mathrm{Pr}[f_{T-1}(x)=v]\cdot\mathop{\mathbb{E}}_{(x,y)\sim \mathcal{D}}[(f_{T-1}(x)-y)^{2}-(f^{*}(x)-y)^{2}|f_{T-1}(x)=v]>2\gamma\]
Define the set:
\[S=\{v\in[1/m]:\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f_{T-1}(x)-y)^{2}-( f^{*}(x)-y)^{2}|f_{T-1}(x)=v]\geq\gamma\}\]
to denote the set of values \(v\) in the range of \(f_{T-1}\) such that conditional on \(f_{T-1}(x)=v\), \(f_{T-1}\) is at least \(\gamma\)-sub-optimal. Since we have both \(y\in[0,1]\) and \(f_{T-1}(x)\in[0,1]\), for every \(v\) we must have that \(\mathop{\mathbb{E}}[(f_{T-1}(x)-y)^{2}-(f^{*}(x)-y)^{2}|f_{T-1}(x)=v]\leq 1\). Therefore we can bound:
\[2\gamma < \sum_{v\in[1/m]}\mathrm{Pr}[f_{T-1}(x)=v]\cdot\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f_{T-1}(x)-y)^{2}-(f^{*}(x)-y)^{2}|f_{T-1}(x)=v]\] \[\leq \mathop{\mathrm{Pr}}_{(x,y)\sim\mathcal{D}}[f_{T-1}(x)\in S]+(1-\mathop{\mathrm{Pr}}_{(x,y)\sim\mathcal{D}}[f_{T-1}(x)\in S])\gamma\]
Solving we learn that:
\[\mathop{\mathrm{Pr}}_{(x,y)\sim\mathcal{D}}[f_{T-1}(x)\in S]\geq\frac{2\gamma-\gamma}{1-\gamma}\geq 2\gamma-\gamma=\gamma\]
Now observe that by the fact that \(\mathcal{H}\) is assumed to satisfy the \(\gamma\)-weak learning assumption with respect to \(\mathcal{D}\), at the final round \(T\) of the algorithm, for every \(v\in S\) we have that \(h_{v}^{T}\) satisfies:
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f_{T-1}(x)-y)^{2}-(h_{v}^{T}(x)-y) ^{2}|f_{T-1}(x)=v]\geq\gamma\]
Let \(\widetilde{\mathrm{err}}_{T}=\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(\tilde{f}_{T}(x)-y)^{2}]\). Therefore we have:
\[\widetilde{\mathrm{err}}_{T-1}-\widetilde{\mathrm{err}}_{T}\text{ is bounded as follows: }\mathrm{err}_{T-1}-\widetilde{\mathrm{err}}_{T} = \sum_{v\in[1/m]}\mathop{\mathrm{Pr}}_{(x,y)\sim\mathcal{D}}[f_{T-1}(x)=v]\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f_{T-1}(x)-y)^{2}-(h_{v}^{T}(x)-y)^{2}|f_{T-1}(x)=v]\] \[\geq \mathop{\mathrm{Pr}}_{(x,y)\sim\mathcal{D}}[f_{T-1}(x)\in S]\gamma\] \[\geq \gamma^{2}\]
We recall that \(|\widetilde{\mathrm{err}}_{T}-\mathrm{err}_{T}|\leq 1/m=\frac{\gamma^{2}}{2}\) and so we can conclude that
\[\mathrm{err}_{T-1}-\mathrm{err}_{T}\geq\frac{\gamma^{2}}{2}\]
which contradicts the fact that the algorithm halted at round \(T\), completing the proof.
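To get a feel for this guarantee, the toy experiment below (our own, and assuming the `ls_boost` and `linear_regression_oracle` sketches from earlier in this section are in scope) runs the level-set boosting loop on a noisy nonlinear target whose conditional mean is known by construction (up to rare clipping), and prints the learned model's squared error next to the approximate Bayes optimal rate. It is an illustration of the comparison rather than a replication of the experiments in Section 7.

```
# Sketch: LSBoost with a linear per-level-set oracle on a synthetic regression task.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
X = rng.uniform(-1, 1, size=(n, 2))
f_star = 0.5 + 0.4 * np.sin(3 * X[:, 0]) * X[:, 1]           # approximately E[y | x]
y = np.clip(f_star + 0.1 * rng.normal(size=n), 0, 1)

predict = ls_boost(X, y, linear_regression_oracle, alpha=0.005, B=1.0)
model_err = np.mean((predict(X) - y) ** 2)
bayes_err = np.mean((f_star - y) ** 2)
print(f"LSBoost squared error {model_err:.4f} vs. approx. Bayes optimal {bayes_err:.4f}")
```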
## 5 When Multicalibration Implies Accuracy
We analyzed the same algorithm (Algorithm 1) as both an algorithm for obtaining multicalibration with respect to \(\mathcal{H}\), and, when \(\mathcal{H}\) satisfied the weak learning condition given in Definition 4.5, as a boosting algorithm that converges to the Bayes optimal model. In this section we show that this is no coincidence: multicalibration with respect to \(\mathcal{H}\) implies Bayes optimality if and only if \(\mathcal{H}\) satisfies the weak learning condition from Definition 4.5.
First we define what we mean when we say that multicalibration with respect to \(\mathcal{H}\) implies Bayes optimality. Note that the Bayes optimal model \(f^{\ast}(x)\) is multicalibrated with respect to any set of functions, so it is not enough to require that there _exist_ Bayes optimal functions \(f\) that are multicalibrated with respect to \(\mathcal{H}\). Instead, we have to require that _every_ function that is multicalibrated with respect to \(\mathcal{H}\) is Bayes optimal:
**Definition 5.1**.: _Fix a distribution \(\mathcal{D}\in\Delta\mathcal{Z}\). We say that multicalibration with respect to \(\mathcal{H}\) implies Bayes optimality over \(\mathcal{D}\) if for every \(f:\mathcal{X}\to\mathbb{R}\) that is multicalibrated with respect to \(\mathcal{D}\) and \(\mathcal{H}\), we have:_
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}]=\mathop{\mathbb{E}} _{(x,y)\sim\mathcal{D}}[(f^{\ast}(x)-y)^{2}]\]
_Where \(f^{\ast}(x)=\mathop{\mathbb{E}}_{y\sim\mathcal{D}(x)}[y]\) is the function that has minimum squared error over the set of all functions._
Recall that when the weak learning parameter \(\gamma\) in Definition 4.5 is set to \(0\), we simply call it the "weak learning condition" relative to \(\mathcal{D}\). We first state and prove our characterization for the exact case when \(\gamma=0\), because it leads to an exceptionally simple statement. We subsequently extend this characterization to relate approximate Bayes optimality and approximate multicalibration under quantitative weakenings of the weak learning condition.
**Theorem 5.2**.: _Fix a distribution \(\mathcal{D}\in\Delta\mathcal{Z}\). Let \(\mathcal{H}\) be a class of functions that is closed under affine transformation. Multicalibration with respect to \(\mathcal{H}\) implies Bayes optimality over \(\mathcal{D}\) if and only if \(\mathcal{H}\) satisfies the weak learning condition relative to \(\mathcal{D}\)._
Proof.: To avoid measurability issues we assume that models \(f\) have a countable range (which is true in particular whenever \(\mathcal{X}\) is countable).
First we show that if \(\mathcal{H}\) satisfies the weak learning condition relative to \(\mathcal{D}\), then multicalibration with respect to \(\mathcal{H}\) implies Bayes optimality over \(\mathcal{D}\). Suppose not. Then there exists a function \(f\) that is multicalibrated with respect to \(\mathcal{D}\) and \(\mathcal{H}\), but is such that:
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}]>\mathop{\mathbb{E}} _{(x,y)\sim\mathcal{D}}[(f^{\ast}(x)-y)^{2}]\]
By linearity of expectation we have:
\[\sum_{v\in R(f)}\Pr[f(x)=v]\cdot\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f( x)-y)^{2}-(f^{\ast}(x)-y)^{2}|f(x)=v]>0\]
In particular there must be some \(v\in R(f)\) with \(\Pr_{x\sim\mathcal{D}\mathcal{X}}[f(x)=v]>0\) such that:
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}|f(x)=v]>\mathop{ \mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{\ast}(x)-y)^{2}|f(x)=v]\]
Let \(S=\{x:f(x)=v\}\). Observe that if \(\mathcal{H}\) is closed under affine transformation, the constant function \(h(x)=1\) is in \(\mathcal{H}\), and hence multicalibration with respect to \(\mathcal{H}\) implies calibration. Since \(f\) is calibrated, we know that:
\[\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(v-y)^{2}|x\in S]=\min_{c\in \mathbb{R}}\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(c-y)^{2}|x\in S]\]
Thus by the weak learning assumption there must exist some \(h\in\mathcal{H}\) such that:
\[\mathbb{E}[(v-y)^{2}-(h(x)-y)^{2}|x\in S]=\mathbb{E}[(f(x)-y)^{2}-(h(x)-y)^{2} |f(x)=v]>0\]
By Theorem 3.2, there must therefore exist some \(h^{\prime}\in\mathcal{H}\) such that:
\[\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[h^{\prime}(x)(y-v)|f(x)=v]>0\]
implying that \(f\) is _not_ multicalibrated with respect to \(\mathcal{D}\) and \(\mathcal{H}\), a contradiction.
In the reverse direction, we show that for any \(\mathcal{H}\) that does _not_ satisfy the weak learning condition with respect to \(\mathcal{D}\), then multicalibration with respect to \(\mathcal{H}\) and \(\mathcal{D}\) does not imply Bayes optimality over \(\mathcal{D}\). In particular, we exhibit a function \(f\) such that \(f\) is multicalibrated with respect to \(\mathcal{H}\) and \(\mathcal{D}\), but such that:
\[\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(f(x)-y)^{2}]>\underset{(x,y) \sim\mathcal{D}}{\mathbb{E}}[(f^{\ast}(x)-y)^{2}]\]
Since \(\mathcal{H}\) does not satisfy the weak learning assumption over \(\mathcal{D}\), there must exist some set \(S\subseteq\mathcal{X}\) with \(\Pr[x\in S]>0\) such that
\[\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(f^{\ast}(x)-y)^{2}|x\in S]< \min_{c\in\mathbb{R}}\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(c-y)^{2}|x \in S]\]
but for every \(h\in\mathcal{H}\):
\[\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(h(x)-y)^{2}|x\in S]\geq\min_{c \in\mathbb{R}}\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(c-y)^{2}|x\in S]\]
Let \(c(S)=\mathbb{E}_{(x,y)\sim\mathcal{D}}[y|x\in S]\). We define \(f(x)\) as follows:
\[f(x)=\begin{cases}f^{\ast}(x)&x\notin S\\ c(S)&x\in S\end{cases}\]
We can calculate that:
\[\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(f(x)-y)^{2}]\] \[= \underset{(x,y)\sim\mathcal{D}}{\mathbb{Pr}}[x\in S]\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(c(S)-y)^{2}|x\in S]+\underset{(x,y)\sim \mathcal{D}}{\mathbb{Pr}}[x\notin S]\underset{(x,y)\sim\mathcal{D}}{\mathbb{E }}[(f^{\ast}(x)-y)^{2}|x\notin S]\] \[> \underset{(x,y)\sim\mathcal{D}}{\mathbb{Pr}}[x\in S]\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(f^{\ast}(x)-y)^{2}|x\in S]+\underset{(x,y) \sim\mathcal{D}}{\mathbb{Pr}}[x\notin S]\underset{(x,y)\sim\mathcal{D}}{ \mathbb{E}}[(f^{\ast}(x)-y)^{2}|x\notin S]\] \[= \underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(f^{\ast}(x)-y)^{2}]\]
In other words, \(f\) is not Bayes optimal. So if we can demonstrate that \(f\) is multicalibrated with respect to \(\mathcal{H}\) and \(\mathcal{D}\) we are done. Suppose otherwise. Then there exists some \(h\in\mathcal{H}\) and some \(v\in R(f)\) such that
\[\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[h(x)(y-v)|f(x)=v]>0\]
By Theorem 3.2, there exists some \(h^{\prime}\in\mathcal{H}\) such that:
\[\underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(h^{\prime}(x)-y)^{2}|f(x)=v]< \underset{(x,y)\sim\mathcal{D}}{\mathbb{E}}[(f(x)-y)^{2}|f(x)=v]\]
We first observe that it must be that \(v=c(S)\). If this were not the case, by definition of \(f\) we would have that:
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(h^{\prime}(x)-y)^{2}|f(x)=v]<\mathop{ \mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}(x)-y)^{2}|f(x)=v]\]
which would contradict the Bayes optimality of \(f^{*}\). Having established that \(v=c(S)\) we can calculate:
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(h^{\prime}(x)-y)^{2}| f(x)=c(S)]\] \[= \mathop{\mathrm{Pr}}_{(x,y)\sim\mathcal{D}}[x\in S]\mathop{ \mathbb{E}}_{(x,y)\sim\mathcal{D}}[(h^{\prime}(x)-y)^{2}|x\in S]+\] \[\mathop{\mathrm{Pr}}_{(x,y)\sim\mathcal{D}}[x\notin S,f(x)=c(S)] \mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(h^{\prime}(x)-y)^{2}|x\notin S,f( x)=c(S)]\] \[\geq \mathop{\mathrm{Pr}}_{(x,y)\sim\mathcal{D}}[x\in S]\mathop{ \mathbb{E}}_{(x,y)\sim\mathcal{D}}[(h^{\prime}(x)-y)^{2}|x\in S]+\] \[\mathop{\mathrm{Pr}}_{(x,y)\sim\mathcal{D}}[x\notin S,f(x)=c(S)] \mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}|x\notin S,f(x)=c(S)]\]
where in the last inequality we have used the fact that by definition, \(f(x)=f^{*}(x)\) for all \(x\notin S\), and so is pointwise Bayes optimal for all \(x\notin S\).
Hence the only way we can have \(\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(h^{\prime}(x)-y)^{2}|f(x)=c(S)]< \mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}|f(x)=c(S)]\) is if:
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(h^{\prime}(x)-y)^{2}|x\in S]< \mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(c(S)-y)^{2}|x\in S]\]
But this contradicts our assumption that \(\mathcal{H}\) violates the weak learning condition on \(S\), which completes the proof.
We now turn our attention to deriving a relationship between approximate multicalibration and approximate Bayes optimality. To do so, we'll introduce an even weaker weak learning condition that has one additional parameter \(\rho\), lower bounding the mass of sets \(S\) that we can condition on while still requiring the weak learning condition to hold. We remark that Algorithm 1 can be analyzed as a boosting algorithm under this weaker weak learning assumption as well, with only minor modifications in the analysis.
**Definition 5.3** ( \((\gamma,\rho)\)-weak learning condition).: _Fix a distribution \(\mathcal{D}\in\Delta\mathcal{Z}\) and let \(\mathcal{H}\) be a class of arbitrary real-valued functions. We say that \(\mathcal{H}\) satisfies the \((\gamma,\rho)\)-weak learning condition for \(\mathcal{D}\) if the following holds. For every set \(S\subseteq\mathcal{X}\) such that \(\Pr_{x\sim\mathcal{D}_{X}}[x\in S]>\rho\), if_
\[\mathop{\mathbb{E}}_{(x,y)\sim D}[(f^{*}-y)^{2}\mid x\in S]<\mathop{\mathbb{ E}}_{(x,y)\sim D}[(\bar{y}_{S}-y)^{2}\mid x\in S]-\gamma,\]
_where \(\bar{y}_{S}=\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[y\mid x\in S]\), then there exists \(h\in\mathcal{H}\) such that_
\[\mathop{\mathbb{E}}_{(x,y)\sim D}[(h(x)-y)^{2}\mid x\in S]<\mathop{\mathbb{ E}}_{(x,y)\sim D}[(\bar{y}_{S}-y)^{2}\mid x\in S]-\gamma.\]
We may now prove our theorem showing that approximate multicalibration with respect to a class \(\mathcal{H}\) implies approximate Bayes optimality if and only if \(\mathcal{H}\) satisfies the \((\gamma,\rho)\)-weak learning condition. We recall Remark 4.4, which notes that we must restrict approximate multicalibration to a bounded subset of \(\mathcal{H}\), as we will assume that \(\mathcal{H}\) is closed under affine transformation.
**Theorem 5.4**.: _Fix any distribution \(\mathcal{D}\in\Delta\mathcal{Z}\), any model \(f:\mathcal{X}\rightarrow[0,1]\), and any class of real valued functions \(\mathcal{H}\) that is closed under affine transformation. Let:_
\[\mathcal{H}_{1}=\{h\in\mathcal{H}:\max_{x\in\mathcal{X}}h(x)^{2}\leq 1\}\]
_be the set of functions in \(\mathcal{H}\) upper-bounded by \(1\) on \(\mathcal{X}\). Let \(m=|R(f)|\), \(\gamma>0\), and \(\alpha\leqslant\frac{\gamma^{3}}{16m}\). Then if \(\mathcal{H}\) satisfies the \((\gamma,\gamma/m)\)-weak learning condition and \(f\) is \(\alpha\)-approximately multicalibrated with respect to \(\mathcal{H}_{1}\) on \(\mathcal{D}\), then \(f\) has squared error_
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}]\leqslant \operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}-y)^{2}]+3\gamma.\]
_Conversely, if \(\mathcal{H}\) does not satisfy the \((\gamma,\gamma/m)\)-weak learning condition, there exists a model \(f:\mathcal{X}\to[0,1]\) that is \(\alpha\)-approximately multicalibrated with respect to \(\mathcal{H}_{1}\) on \(\mathcal{D}\), for \(\alpha=\gamma\), and is perfectly calibrated on \(\mathcal{D}\), but \(f\) has squared error_
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}]\geqslant \operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}-y)^{2}]+\gamma^{2}/m.\]
Proof.: We begin by arguing that \(\alpha\)-approximate multicalibration with respect to \(\mathcal{H}_{1}\) on \(\mathcal{D}\) implies approximate Bayes optimality when \(\mathcal{H}\) satisfies the \((\gamma,\gamma/m)\)-weak learning condition. Suppose not, and there exists a function \(f\) that is \(\alpha\)-multicalibrated with respect to \(\mathcal{H}_{1}\), but
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}-y)^{2}]<\operatorname* {\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}]-3\gamma.\]
Then there must exist some \(v\in R(f)\) such that \(\Pr_{(x,y)\sim\mathcal{D}}[f(x)=v]>\gamma/m\) and
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}-y)^{2}\mid f(x)=v]< \operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}\mid f(x)=v]-2\gamma.\]
We observe that since \(\mathcal{H}\) is closed under affine transformation, the constant function \(h(x)=1\) is in \(\mathcal{H}\), and so \(\alpha\)-approximate multicalibration with respect to \(\mathcal{H}_{1}\) implies \(\alpha\)-approximate calibration as well. Thus by definition,
\[\Pr[f(x)=v]\cdot\left(\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[v-y \mid f(x)=v]\right)^{2}\leqslant\alpha.\]
Letting \(\bar{y}_{v}=\operatorname*{\mathbb{E}}[y\mid f(x)=v]\), our lower-bound that \(\Pr[f(x)=v]>\gamma/m\) gives us that \((v-\bar{y}_{v})^{2}<\alpha m/\gamma\leqslant\left(\frac{\gamma}{4}\right)^{2}\). We now use this upper-bound on calibration error in conjunction with our lower-bound on distance from Bayes optimality to show that the squared error of the constant predictor \(\bar{y}_{v}\) must also be far from Bayes optimal.
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}(x)-y)^{2 }\mid f(x)=v] <\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2} \mid f(x)=v]-2\gamma\] \[=\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(v-\bar{y}_{v} +\bar{y}_{v}-y)^{2}\mid f(x)=v]-2\gamma\] \[=\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(\bar{y}_{v}-y )^{2}\mid f(x)=v]+(v-\bar{y}_{v})^{2}-2\gamma\] \[<\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(\bar{y}_{v}-y )^{2}\mid f(x)=v]-\gamma.\]
The \((\gamma,\gamma/m)\)-weak learning condition then guarantees that there exists some \(h\in\mathcal{H}\) such that
\[\operatorname*{\mathbb{E}}_{(x,y)\sim D}[(h-y)^{2}\mid f(x)=v]<\operatorname*{ \mathbb{E}}_{(x,y)\sim D}[(\bar{y}_{v}-y)^{2}\mid f(x)=v]-\gamma.\]
By Lemma 3.4, the fact that \(h\) improves on the squared loss of \(\bar{y}_{v}\) by an additive factor \(\gamma\), on the set of \(x\) such that \(f(x)=v\), implies that \(\operatorname*{\mathbb{E}}[h(x)(y-\bar{y}_{v})\mid f(x)=v]>\gamma/2\). Because \(f\) is \(\alpha\)-approximately calibrated on \(\mathcal{D}\), we can use the existence of such an \(h\) to witness a failure of multicalibration:
\[\operatorname*{\mathbb{E}}[h(y-v) \mid f(x)=v]\] \[=\operatorname*{\mathbb{E}}[h(x)(y-\bar{y}_{v}+\bar{y}_{v}-v) \mid f(x)=v]\] \[=\operatorname*{\mathbb{E}}[h(x)(y-\bar{y}_{v})\mid f(x)=v]+ \operatorname*{\mathbb{E}}[h(x)(\bar{y}_{v}-v)\mid f(x)=v]\] \[>\gamma/2-|\bar{y}_{v}-v|\] \[>\gamma/4.\]
Then
\[\Pr[f(x)=v]\cdot\left(\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[h(x)(y-v) \mid f(x)=v]\right)^{2}>\frac{\gamma^{3}}{16m},\]
contradicting our assumption that \(f\) is \(\alpha\)-approximately multicalibrated with respect to \(\mathcal{H}_{1}\) for \(\alpha<\frac{\gamma^{3}}{16m}\). Therefore approximate multicalibration with respect to \(\mathcal{H}_{1}\) must imply that \(f\) is approximately Bayes optimal.
It remains to show the other direction, that \(\alpha\)-approximate multicalibration with respect to a class \(\mathcal{H}_{1}\) implies approximate Bayes optimality only if \(\mathcal{H}\) satisfies the \((\gamma,\gamma/m)\)-weak learning condition. If this claim were not true for the stated parameters, then there must exist a class \(\mathcal{H}\) such that every predictor \(f\) that:
* is \(\alpha\)-approximately multicalibrated with respect to \(\mathcal{H}_{1}\)
* is perfectly calibrated on \(\mathcal{D}\)
* has range with cardinality \(|R(f)|=m\)
also has squared error within \(\gamma^{2}/m\) of Bayes optimal, but \(\mathcal{H}\) does not satisfy the weak learning condition. We will show that no such class exists by defining, for any class \(\mathcal{H}\) not satisfying the weak learning condition, a predictor \(f\) that is \(\alpha\)-approximately multicalibrated with respect to that class, but has squared error that is not within \(\gamma^{2}/m\) of Bayes optimal.
Recall that if a class \(\mathcal{H}\) does not satisfy the \((\gamma,\gamma/m)\)-weak learning condition, then there must be some set \(S_{\mathcal{H}}\) with \(\Pr[x\in S_{\mathcal{H}}]>\gamma/m\) such that there does not exist an \(h\in\mathcal{H}\) satisfying
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(h-y)^{2}\mid x\in S_{ \mathcal{H}}]<\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(\bar{y}_{S_{ \mathcal{H}}}-y)^{2}\mid x\in S_{\mathcal{H}}]-\gamma,\]
but for the Bayes optimal predictor, it holds that its squared loss satisfies
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}-y)^{2}\mid x\in S_{ \mathcal{H}}]<\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(\bar{y}_{S_{ \mathcal{H}}}-y)^{2}\mid x\in S_{\mathcal{H}}]-\gamma,\]
where \(\bar{y}_{S_{\mathcal{H}}}=\operatorname*{\mathbb{E}}[y\mid x\in S_{\mathcal{H}}]\). For some hypothesis class \(\mathcal{H}\) not satisfying the weak learning condition, and associated set \(S_{\mathcal{H}}\), let \(f_{\mathcal{H}}\) be defined as follows:
\[f_{\mathcal{H}}(x)=\begin{cases}f^{*}(x),&x\notin S_{\mathcal{H}}\\ \bar{y}_{S_{\mathcal{H}}},&x\in S_{\mathcal{H}}.\end{cases}.\]
Note that, because \(f_{\mathcal{H}}\) is constant on \(S_{\mathcal{H}}\), there must be some \(v\in R(f)\) such that the level set \(S_{v}=\{x\in\mathcal{X}:f(x)=v\}\) contains \(S_{\mathcal{H}}\). To see that \(f_{\mathcal{H}}\) is \(\alpha\)-approximately multicalibrated with respect to \(\mathcal{H}_{1}\), we first consider the contribution to multicalibration error from the level sets not containing \(S_{\mathcal{H}}\). For all \(h\in\mathcal{H}\) and \(v\in R(f)\) such that \(v\neq\bar{y}_{S_{\mathcal{H}}}\),
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[h(x)(y-f_{ \mathcal{H}}(x))\mid f_{\mathcal{H}}(x)=v]\] \[=\operatorname*{\mathbb{E}}_{x\sim\mathcal{D}_{x}}\operatorname* {\mathbb{E}}_{y\sim\mathcal{D}_{y}(x)}[h(x)y\mid f_{\mathcal{H}}(x)=v]- \operatorname*{\mathbb{E}}_{x\sim\mathcal{D}_{x}}[h(x)f^{*}(x)\mid f_{ \mathcal{H}}(x)=v]\] \[=\operatorname*{\mathbb{E}}_{x\sim\mathcal{D}_{x}}\operatorname* {\mathbb{E}}_{y\sim\mathcal{D}_{y}(x)}[h(x)y\mid f_{\mathcal{H}}(x)=v]- \operatorname*{\mathbb{E}}_{x\sim\mathcal{D}_{x}}[h(x)y\mid f_{\mathcal{H}}(x )=v]\] \[=0.\]
For the level set \(S_{v}\) for which \(S_{\mathcal{H}}\subseteq S_{v}\), we know from the argument above that the elements \(x\in S_{v}\backslash S_{\mathcal{H}}\) contribute nothing to the multicalibration error, as \(f(x)=f^{*}(x)\) on these elements. So,
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[h(x)(y-f_{ \mathcal{H}}(x))\mid f(x)=v] =\operatorname*{\mathbb{Pr}}_{x\sim\mathcal{D}_{x}}[x\in S_{ \mathcal{H}}]\cdot\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[h(x)(y- \bar{y}_{S_{\mathcal{H}}})\mid x\in S_{\mathcal{H}}]\] \[\qquad\qquad+\operatorname*{\mathbb{Pr}}_{x\sim\mathcal{D}_{x}}[x \notin S_{\mathcal{H}}]\cdot\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[ h(x)(y-f^{*}(x))\mid x\in S_{v}\backslash S_{\mathcal{H}}]\] \[=\operatorname*{\mathbb{Pr}}_{x\sim\mathcal{D}_{x}}[x\in S_{ \mathcal{H}}]\cdot\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[h(x)(y- \bar{y}_{S_{\mathcal{H}}})\mid x\in S_{\mathcal{H}}]\]
Therefore if \(f_{\mathcal{H}}\) is not \(\alpha\)-approximately multicalibrated with respect to \(\mathcal{H}_{1}\) on \(\mathcal{D}\), it must be the case that there exists some \(h\in\mathcal{H}_{1}\) such that \(\mathbb{E}[h(x)(y-\bar{y}_{S_{\mathcal{H}}})\mid x\in S_{\mathcal{H}}]>\sqrt{\alpha}\). Then by Theorem 3.2, there must exist a \(h^{\prime}\in\mathcal{H}\) such that
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(\bar{y}_{S_{\mathcal{H}}}-y)^{2}-( h^{\prime}(x)-y)^{2}\mid x\in S_{\mathcal{H}}]>\alpha=\gamma.\]
But \(S_{\mathcal{H}}\) was defined to be a subset of \(\mathcal{X}\) for which no such \(h^{\prime}\) exists and for which \(\Pr[x\in S_{\mathcal{H}}]>\gamma/m\). This would contradict our assumption that \(\mathcal{H}\) does not satisfy the \((\gamma,\gamma/m)\)-weak learning condition on \(\mathcal{D}\), and therefore \(f_{\mathcal{H}}\) is \(\alpha\)-approximately multicalibrated with respect to \(\mathcal{H}_{1}\) on \(\mathcal{D}\).
It remains to prove that \(f_{\mathcal{H}}\) is far from Bayes optimal.
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f_{\mathcal{H}}(x)-y )^{2}] =\mathop{\mathbb{Pr}}_{x\sim\mathcal{D}_{\mathcal{X}}}[x\in S_{ \mathcal{H}}]\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(\bar{y}_{S_{ \mathcal{H}}}-y)^{2}\mid x\in S_{\mathcal{H}}]+\Pr[x\notin S_{\mathcal{H}}] \mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}(x)-y)^{2}\mid x\notin S_{ \mathcal{H}}]\] \[\geq\mathop{\mathbb{Pr}}_{x\sim\mathcal{D}_{\mathcal{X}}}[x\in S _{\mathcal{H}}]\left(\mathop{\mathbb{E}}_{(x,y)\mathcal{D}}[(f^{*}-y)^{2} \mid x\in S_{\mathcal{H}}]+\gamma\right)+\Pr[x\notin S_{\mathcal{H}}]\mathop{ \mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}(x)-y)^{2}\mid x\notin S_{\mathcal{H }}]\] \[=\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}-y)^{2}]+\gamma \mathop{\mathbb{Pr}}_{x\sim\mathcal{D}_{\mathcal{X}}}[x\in S_{\mathcal{H}}]\] \[\geq\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f^{*}-y)^{2}]+ \gamma^{2}/m.\]
## 6 Weak Learners With Respect to Constrained Classes
Thus far we have studied function classes \(\mathcal{H}\) that satisfy a weak learning condition with respect to the Bayes optimal predictor \(f^{*}\). But we can also study function classes \(\mathcal{H}\) that satisfy a weak learning condition defined with respect to another constrained class of real valued functions.
**Definition 6.1** (Weak Learning Assumption Relative to \(\mathcal{C}\)).: _Fix a distribution \(\mathcal{D}\in\Delta\mathcal{Z}\) and two classes of functions \(\mathcal{H}\) and \(\mathcal{C}\). We say that \(\mathcal{H}\) satisfies the \(\gamma\)-weak learner condition relative to \(\mathcal{C}\) and \(\mathcal{D}\) if for every \(S\subseteq\mathcal{X}\) with \(\Pr_{x\sim\mathcal{D}_{\mathcal{X}}}[x\in S]>0\), if:_
\[\min_{c\in\mathcal{C}}\mathop{\mathbb{E}}_{(x,y)\sim D}[(c(x)-y)^{2}\mid x\in S ]<\mathop{\mathbb{E}}_{(x,y)\sim D}[(\bar{y}_{S}-y)^{2}\mid x\in S]-\gamma,\]
_where \(\bar{y}_{S}=\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[y\mid x\in S]\), then there exists \(h\in\mathcal{H}\) such that_
\[\mathop{\mathbb{E}}_{(x,y)\sim D}[(h(x)-y)^{2}\mid x\in S]<\mathop{\mathbb{E} }_{(x,y)\sim D}[(\bar{y}_{S}-y)^{2}\mid x\in S]-\gamma.\]
_When \(\gamma=0\) we simply say that \(\mathcal{H}\) satisfies the weak learning condition relative to \(\mathcal{C}\) and \(\mathcal{D}\)._
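The condition above can also be probed empirically on a finite sample. The following is a minimal sketch of such a check, assuming \(\mathcal{C}\) is instantiated as linear regression and \(\mathcal{H}\) as depth-one regression trees (the classes used in Section 7); the dataset, the subset \(S\), and the value of \(\gamma\) are illustrative placeholders rather than part of the definition.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def weak_learning_holds(X, y, S_mask, gamma=0.01):
    """Empirical check of the gamma-weak learning condition on one subset S:
    if some c in C beats the constant predictor ybar_S by gamma on S, then
    some h in H must as well. Here C ~ linear regression, H ~ depth-1 trees."""
    X_S, y_S = X[S_mask], y[S_mask]
    base = np.mean((y_S.mean() - y_S) ** 2)                 # squared loss of ybar_S

    c = LinearRegression().fit(X_S, y_S)                    # (approximately) best c in C on S
    loss_c = np.mean((c.predict(X_S) - y_S) ** 2)

    h = DecisionTreeRegressor(max_depth=1).fit(X_S, y_S)    # weak learner from H on S
    loss_h = np.mean((h.predict(X_S) - y_S) ** 2)

    c_improves = loss_c < base - gamma                      # premise of the condition
    h_improves = loss_h < base - gamma                      # required conclusion
    return (not c_improves) or h_improves

# Toy usage on a synthetic subset S = {x : x_2 > 0}.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(5000, 2))
y = X[:, 0] + 0.1 * rng.normal(size=5000)
print(weak_learning_holds(X, y, X[:, 1] > 0))
```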
We will show that if a predictor \(f\) is multicalibrated with respect to \(\mathcal{H}\), and \(\mathcal{H}\) satisfies the weak learning assumption with respect to \(\mathcal{C}\), then in fact:
1. \(f\) is multicalibrated with respect to \(\mathcal{C}\), and
2. \(f\) has squared error at most that of the minimum error predictor in \(\mathcal{C}\).
In fact, Gopalan et al. (2022) show that if \(f\) is multicalibrated with respect to \(\mathcal{C}\), then it is an _omnipredictor_ for \(\mathcal{C}\), which implies that \(f\) has loss no more than the best function \(c(x)\in\mathcal{C}\), where loss can be measured with respect to any Lipschitz convex loss function (not just squared error). Thus our results imply that to obtain an omnipredictor for \(\mathcal{C}\), it is sufficient to be multicalibrated with respect to a class \(\mathcal{H}\) that satisfies our weak learning assumption with respect to \(\mathcal{C}\).
**Theorem 6.2**.: _Fix a distribution \(\mathcal{D}\in\Delta\mathcal{Z}\) and two classes of functions \(\mathcal{H}\) and \(\mathcal{C}\) that are closed under affine transformations. Then if \(f:\mathcal{X}\rightarrow[0,1]\) is multicalibrated with respect to \(\mathcal{D}\) and \(\mathcal{H}\), and if \(\mathcal{H}\) satisfies the weak learning condition relative to \(\mathcal{C}\) and \(\mathcal{D}\), then in fact \(f\) is multicalibrated with respect to \(\mathcal{D}\) and \(\mathcal{C}\) as well._
Proof.: We assume for simplicity that \(f\) has a countable range (which is without loss of generality e.g. whenever \(\mathcal{X}\) is countable). Suppose for contradiction that \(f\) is not multicalibrated with respect to \(\mathcal{C}\) and \(\mathcal{D}\). In this case there must be some \(c\in\mathcal{C}\) such that:
\[\sum_{v\in R(f)}\Pr[f(x)=v]\left(\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{ D}}[c(x)(y-v)|f(x)=v]\right)^{2}>0\]
Since \(\mathcal{C}\) is closed under affine transformations (and so both \(c\) and \(-c\) are in \(\mathcal{C}\)), there must be some \(c^{\prime}\in\mathcal{C}\) and some \(v\in R(f)\) with \(\Pr[f(x)=v]>0\) such that:
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[c^{\prime}(x)(y-v)|f(x)=v]>0\]
Therefore, by the first part of Theorem 3.2, there must be some \(c^{\prime\prime}\in\mathcal{C}\) such that:
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(c^{\prime\prime}(x)-y)^{2} |f(x)=v]<\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(v-y)^{2}|f(x)=v]\]
Since \(\mathcal{H}\) is closed under affine transformations, the function \(h(x)=1\) is in \(\mathcal{H}\) and so multicalibration with respect to \(\mathcal{H}\) implies calibration. Thus \(v=\bar{y}_{S_{v}}\) for \(S_{v}=\{x:f(x)=v\}\). Therefore, the fact that \(\mathcal{H}\) satisfies the weak learning condition relative to \(\mathcal{C}\) and \(\mathcal{D}\) implies that there must be some \(h\in\mathcal{H}\) such that:
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(h(x)-y)^{2}|f(x)=v]< \operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(v-y)^{2}|f(x)=v]\]
Finally, the second part of Theorem 3.2 implies that:
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[h(x)(y-v)|f(x)=v]>0\]
which is a violation of our assumption that \(f\) is multicalibrated with respect to \(\mathcal{H}\) and \(\mathcal{D}\), a contradiction.
**Theorem 6.3**.: _Fix a distribution \(\mathcal{D}\in\Delta\mathcal{Z}\) and two classes of functions \(\mathcal{H}\) and \(\mathcal{C}\). Then if \(f:\mathcal{X}\to[0,1]\) is calibrated and multicalibrated with respect to \(\mathcal{D}\) and \(\mathcal{H}\), and if \(\mathcal{H}\) satisfies the weak learning condition relative to \(\mathcal{C}\) and \(\mathcal{D}\), then:_
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}]\leq\min_{c\in \mathcal{C}}\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(c(x)-y)^{2}]\]
Proof.: We assume for simplicity that \(f\) has a countable range (which is without loss of generality e.g. whenever \(\mathcal{X}\) is countable). Suppose for contradiction that there is some \(c\in\mathcal{C}\) such that:
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(c(x)-y)^{2}]<\operatorname* {\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(f(x)-y)^{2}]\]
Then there must be some \(v\in R(f)\) with \(\Pr[f(x)=v]>0\) and:
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(c(x)-y)^{2}|f(x)=v]< \operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(v-y)^{2}|f(x)=v]\]
Since \(f\) is calibrated, \(v=\bar{y}_{S_{v}}\) for \(S_{v}=\{x:f(x)=v\}\). Therefore, the fact that \(\mathcal{H}\) satisfies the weak learning condition relative to \(\mathcal{C}\) and \(\mathcal{D}\) implies that there must be some \(h\in\mathcal{H}\) such that:
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(h(x)-y)^{2}|f(x)=v]< \operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(v-y)^{2}|f(x)=v]\]
Finally, the second part of Theorem 3.2 implies that:
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[h(x)(y-v)|f(x)=v]>0\]
which is a violation of our assumption that \(f\) is multicalibrated with respect to \(\mathcal{H}\) and \(\mathcal{D}\), a contradiction.
We now turn to approximate versions of these statements. To do so, we need a refined version of one direction of Theorem 3.2 that shows us that if \(f\) witnesses a failure of multicalibration with respect to some \(h\in\mathcal{H}\), then there is another function \(h^{\prime}\in\mathcal{H}\) that can be used to improve on \(f\)'s squared error, _while controlling the norm_ of \(h^{\prime}\).
**Lemma 6.4**.: _Suppose \(\mathcal{H}\) is closed under affine transformation. Fix a model \(f:\mathcal{X}\to[0,1]\), a levelset \(v\in R(f)\), and a bound \(B>0\). Then if there exists an \(h\in\mathcal{H}\) such that \(\max_{x\in\mathcal{X}}h(x)^{2}\leqslant B\) and_
\[\mathbb{E}[h(x)(y-v)|f(x)=v]\geqslant\alpha,\]
_for \(\alpha\geqslant 0\), then there exists an \(h^{\prime}\in\mathcal{H}\) such that \(\max_{x\in\mathcal{X}}h^{\prime}(x)^{2}\leqslant(1+\frac{\sqrt{B}}{\alpha})^{2}\) and:_
\[\mathbb{E}[(f(x)-y)^{2}-(h^{\prime}(x)-y)^{2}|f(x)=v]\geqslant\frac{\alpha^ {2}}{B}.\]
Proof.: Let \(h^{\prime}(x)=v+\eta h(x)\) where \(\eta=\frac{\alpha}{\mathbb{E}[h(x)^{2}|f(x)=v]}\), as in Theorem 3.2. Because \(h(x)^{2}\) is uniformly bounded by \(B\) on \(\mathcal{X}\), it follows that \(\mathbb{E}[h(x)^{2}]\leqslant B\), and we have already shown in the proof of Theorem 3.2 that this implies
\[\mathbb{E}[(f(x)-y)^{2}-(h^{\prime}(x)-y)^{2}|f(x)=v]\geqslant\frac{\alpha^{2 }}{B}.\]
It only remains to bound \(\max_{x\in\mathcal{X}}h^{\prime}(x)^{2}\). We begin by lower-bounding \(\mathbb{E}[h(x)^{2}\mid f(x)=v]\) in terms of \(\alpha\).
\[\mathbb{E}[h(x)^{2}\mid f(x)=v] \geqslant\mathbb{E}[h(x)\mid f(x)=v]^{2}\] \[\geqslant\mathbb{E}[h(x)(y-v)\mid f(x)=v]^{2}\] \[\geqslant\alpha^{2}.\]
It follows that \(\eta\leqslant 1/\alpha\), and so
\[\max_{x\in\mathcal{X}}h^{\prime}(x)^{2} =\max_{x\in\mathcal{X}}(v+\eta h(x))^{2}\] \[\leqslant(1+\eta\sqrt{B})^{2}\] \[\leqslant\left(1+\frac{\sqrt{B}}{\alpha}\right)^{2}.\]
We will also need a parameterized version of our weak learning condition. Recalling Remark 4.4, for approximate multicalibration to be meaningful with respect to a class that is closed under affine transformation, we must specify a bounded subset of that class with respect to which a predictor is approximately multicalibrated. Then to show that approximate multicalibration with respect to one potentially unbounded class implies approximate multicalibration with respect to another, we will need to specify the subsets of each class with respect to which a predictor is claimed to be approximately multicalibrated. This motivates a parameterization of our previous weak learning condition relative to a class \(\mathcal{C}\). We will need to assume that whenever there is a \(B\)-bounded function in \(\mathcal{C}\) that improves over the best constant predictor on a restriction of \(\mathcal{D}\), there also exists a \(B\)-bounded function in \(\mathcal{H}\) that improves on the restriction as well.
**Definition 6.5** (\(B\)-Bounded Weak Learning Assumption Relative to \(\mathcal{C}\)).: _Fix a distribution \(\mathcal{D}\in\Delta\mathcal{Z}\) and two classes of functions \(\mathcal{H}\) and \(\mathcal{C}\). Fix a bound \(B>0\) and let \(\mathcal{H}_{B}\) and \(\mathcal{C}_{B}\) denote the sets_
\[\mathcal{H}_{B}=\{h\in\mathcal{H}:\max_{x\in\mathcal{X}}h(x)^{2}\leqslant B\}\]
_and_
\[\mathcal{C}_{B}=\{c\in\mathcal{C}:\max_{x\in\mathcal{X}}c(x)^{2}\leqslant B\}\]
_respectively. We say that \(\mathcal{H}\) satisfies the \(B\)-bounded \(\gamma\)-weak learning condition relative to \(\mathcal{C}\) and \(\mathcal{D}\) if for every \(S\subseteq\mathcal{X}\) with \(\Pr_{x\sim\mathcal{D}_{\mathcal{X}}}[x\in S]>0\), if:_
\[\min_{c\in\mathcal{C}_{B}}\operatorname*{\mathbb{E}}_{(x,y)\sim D}[(c(x)-y)^{2 }\mid x\in S]<\operatorname*{\mathbb{E}}_{(x,y)\sim D}[(\bar{y}_{S}-y)^{2}\mid x \in S]-\gamma,\]
_where \(\bar{y}_{S}=\operatorname*{\mathbb{E}}[y\mid x\in S]\), then there exists \(h\in\mathcal{H}_{B}\) such that_
\[\operatorname*{\mathbb{E}}_{(x,y)\sim D}[(h(x)-y)^{2}\mid x\in S]< \operatorname*{\mathbb{E}}_{(x,y)\sim D}[(\bar{y}_{S}-y)^{2}\mid x\in S]-\gamma.\]
**Theorem 6.6**.: _Fix a distribution \(\mathcal{D}\in\Delta\mathcal{Z}\) and two classes of functions \(\mathcal{H}\) and \(\mathcal{C}\) that are closed under affine transformations. Fix \(\alpha_{\mathcal{C}},B>0\). Let \(B^{\prime}=(1+\sqrt{\frac{2B}{\alpha_{\mathcal{C}}}})^{2}\) and \(\gamma=\frac{\alpha_{\mathcal{C}}}{4B}\). Fix a function \(f:\mathcal{X}\to[0,1]\) that maps into a countable subset of its range, and let \(m=|R(f)|\), \(\alpha_{\mathcal{H}}<\frac{\alpha_{\mathcal{C}}^{2}}{2^{9}mB^{\prime 2}}\), and \(\alpha<\frac{\alpha_{\mathcal{C}}\gamma^{2}}{32mB^{\prime 2}}\). Then if_
* \(\mathcal{H}\) _satisfies the_ \(B^{\prime}\)_-bounded_ \(\gamma\)_-weak learning condition relative to_ \(\mathcal{C}\) _and_ \(\mathcal{D}\)__
* \(f\) _is_ \(\alpha_{\mathcal{H}}\)_-approximately multicalibrated with respect to_ \(\mathcal{D}\) _and_ \(\mathcal{H}_{B^{\prime}}\)__
* \(f\) _is_ \(\alpha\)_-approximately calibrated on_ \(\mathcal{D}\)_,_
_then \(f\) is \(\alpha_{\mathcal{C}}\)-approximately multicalibrated with respect to \(\mathcal{D}\) and \(\mathcal{C}_{B}\)._
Proof.: Suppose not and there exists some \(c\in\mathcal{C}_{B}\) such that
\[\sum_{v\in R(f)}\operatorname*{\Pr}_{x\sim\mathcal{D}_{x}}[f(x)=v]\cdot\left( \operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[c(x)(y-v)\mid f(x)=v] \right)^{2}>\alpha_{\mathcal{C}}.\]
Then there must exist some \(v\in R(f)\) such that \(\Pr[f(x)=v]>\frac{\alpha_{\mathcal{C}}}{2m}\) and
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[c(x)(y-v)\mid f(x)=v]^{2}> \alpha_{\mathcal{C}}/2.\]
Because \(\mathcal{C}\) is closed under affine transformations, \(\mathcal{C}_{B}\) is closed under negation, so there must also exist some \(c^{\prime}\in\mathcal{C}_{B}\) such that
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[c^{\prime}(x)(y-v)\mid f(x )=v]>\sqrt{\alpha_{\mathcal{C}}/2}.\]
Then Lemma 6.4 shows that there is a \(c^{\prime\prime}\in\mathcal{C}_{(1+\sqrt{\frac{2B}{\alpha_{\mathcal{C}}}})^{2}}=\mathcal{C}_{B^{\prime}}\) such that
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-f(x))^{2}-(y-c^{\prime \prime}(x))^{2}\mid f(x)=v]\geq\frac{\alpha_{\mathcal{C}}}{2B}=2\gamma.\]
Because \(f\) is \(\alpha\)-calibrated on \(\mathcal{D}\), by definition we have
\[\operatorname*{\Pr}_{x\sim\mathcal{D}_{x}}[f(x)=v]\cdot\left(\operatorname*{ \mathbb{E}}_{(x,y)\sim\mathcal{D}}[v-y\mid f(x)=v]\right)^{2}<\alpha.\]
Letting \(\bar{y}_{v}=\operatorname*{\mathbb{E}}[y\mid f(x)=v]\), our lower-bound that \(\Pr[f(x)=v]>\frac{\alpha_{\mathcal{C}}}{2m}\) gives us that \((v-\bar{y}_{v})^{2}<\frac{2\alpha m}{\alpha_{\mathcal{C}}}\leq\frac{\gamma^{2} }{16B^{\prime 2}}<\gamma\). So, because \(v\) is close to \(\bar{y}_{v}\), we can show the squared error of \(f\) must be close to the squared error of \(\bar{y}_{v}\) on this level set.
\[\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-f(x))^{2}\mid f (x)=v] =\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-\bar{y}_{v}+ \bar{y}_{v}-f(x))^{2}\mid f(x)=v]\] \[=\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-\bar{y}_{v}) ^{2}+2(y-\bar{y}_{v})(\bar{y}_{v}-v)\mid f(x)=v]+(\bar{y}_{v}-v)^{2}\] \[=\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-\bar{y}_{v}) ^{2}\mid f(x)=v]+(\bar{y}_{v}-v)^{2}\] \[<\operatorname*{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-\bar{y}_{v}) ^{2}\mid f(x)=v]+\gamma.\]
Then, because the squared error of \(c^{\prime\prime}\) on this level set is much less than the squared error of \(f\), we find that \(c^{\prime\prime}\) must also have squared error less than that of \(\bar{y}_{v}\):
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-\bar{y}_{v})^{2}-(y -c^{\prime\prime}(x))^{2}\mid f(x)=v] >\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-f(x))^{2}-\gamma-(y -c^{\prime\prime}(x))^{2}\mid f(x)=v]\] \[\geq 2\gamma-\gamma\] \[=\gamma\]
We assumed \(\mathcal{H}\) satisfies the \(B^{\prime}\)-bounded \(\gamma\)-weak learning condition relative to \(\mathcal{C}\), so this gives us a function \(h\in\mathcal{H}_{B^{\prime}}\) such that
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-\bar{y}_{v})^{2}-(y-h(x))^{2} \mid f(x)=v]>\gamma.\]
Then Lemma 3.4 shows that
\[\mathbb{E}[h(x)(y-\bar{y}_{v})\mid f(x)=v]>\gamma/2.\]
So \(h\) witnesses a failure of multicalibration of \(f\), since it follows that
\[\mathbb{E}[h(x)(y-v)\mid f(x)=v] =\mathbb{E}[h(x)(y-\bar{y}_{v})\mid f(x)=v]+\mathbb{E}[h(x)(\bar{ y}_{v}-v)\mid f(x)=v]\] \[>\gamma/2-B^{\prime}\left|\bar{y}_{v}-v\right|\] \[\geq\gamma/2-\frac{B^{\prime}\gamma}{4B^{\prime}}\] \[=\gamma/4\]
and so
\[\Pr_{x\sim\mathcal{D}_{x}}[f(x)=v]\left(\mathop{\mathbb{E}}_{(x,y)\sim \mathcal{D}}[h(x)(y-v)\mid f(x)=v]\right)^{2}>\frac{\alpha_{\mathcal{C}}\gamma ^{2}}{32m}>\alpha_{\mathcal{H}},\]
contradicting \(\alpha_{\mathcal{H}}\)-approximate multicalibration of \(f\) on \(\mathcal{H}_{B^{\prime}}\) and \(\mathcal{D}\).
In Gopalan et al. (2022), Gopalan, Kalai, Reingold, Sharan, and Wieder show that any predictor that is approximately multicalibrated for a class \(\mathcal{H}\) and distribution \(\mathcal{D}\) can be efficiently post-processed to approximately minimize any convex, Lipschitz loss function relative to the class \(\mathcal{H}\). The theorem we have just proved can now be used to extend their result to approximate loss minimization over any other class \(\mathcal{C}\), so long as \(\mathcal{H}\) satisfies the \(B\)-bounded \(\gamma\)-weak learning assumption relative to \(\mathcal{C}\). Intuitively, this follows from the fact that if \(f\) is approximately multicalibrated with respect to \(\mathcal{H}\) on \(\mathcal{D}\), it is also approximately multicalibrated with respect to \(\mathcal{C}\). However, the notion of approximate multicalibration adopted in Gopalan et al. (2022) differs from the one in this work. So, to formalize our intuition above, we will first state the covariance-based definition of approximate multicalibration appearing in Gopalan et al. (2022) and prove a lemma relating it to our own. We note that, going forward, we will restrict ourselves to distributions \(\mathcal{D}\) over \(\mathcal{X}\times\{0,1\}\), as in this case the two definitions of approximate multicalibration are straightforwardly connected.
**Definition 6.7** (Approximate Covariance Multicalibration Gopalan et al. (2022)).: _Fix a distribution \(\mathcal{D}\) over \(\mathcal{X}\times\{0,1\}\) and a function \(f:\mathcal{X}\to[0,1]\) that maps onto a countable subset of its range, denoted \(R(f)\). Let \(\mathcal{H}\) be an arbitrary collection of real valued functions \(h:\mathcal{X}\to\mathbb{R}\). Then \(f\) is \(\alpha\)-approximately covariance multicalibrated with respect to \(\mathcal{H}\) on \(\mathcal{D}\) if_
\[\sum_{v\in R(f)}\Pr_{x\sim\mathcal{D}_{\mathcal{X}}}[f(x)=v]\cdot\big{|} \mathbb{E}[(h(x)-\bar{h}_{v})(y-\bar{y}_{v})\mid f(x)=v]\big{|}\leq\alpha,\]
_where \(\bar{h}_{v}=\mathbb{E}[h(x)\mid f(x)=v]\) and \(\bar{y}_{v}=\mathbb{E}[y\mid f(x)=v]\)._
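On a finite sample, the quantity in Definition 6.7 can be estimated directly. The following sketch is one way to do so, assuming level sets are formed by rounding the predictor's outputs to a finite grid (an implementation choice, not part of the definition).

```python
import numpy as np

def covariance_multicalibration_error(f_vals, h_vals, y, decimals=3):
    """Empirical version of Definition 6.7:
    sum_v Pr[f(x)=v] * |E[(h(x) - h_bar_v)(y - y_bar_v) | f(x) = v]|,
    with level sets formed by rounding f's outputs to a finite grid."""
    levels = np.round(f_vals, decimals)
    total = 0.0
    for v in np.unique(levels):
        idx = levels == v
        p_v = idx.mean()
        h_bar, y_bar = h_vals[idx].mean(), y[idx].mean()
        total += p_v * abs(np.mean((h_vals[idx] - h_bar) * (y[idx] - y_bar)))
    return total
```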
**Lemma 6.8**.: _Fix a distribution \(\mathcal{D}\) over \(\mathcal{X}\times\{0,1\}\) and a class of functions on \(\mathcal{X}\), \(\mathcal{H}\). Let \(\mathcal{H}_{B}\) denote the subset_
\[\mathcal{H}_{B}=\{h\in\mathcal{H}:\max_{x\in\mathcal{X}}h(x)^{2}\leq B\}.\]
_Fix a function \(f:\mathcal{X}\to[0,1]\) that maps onto a countable subset of its range, denoted \(R(f)\). Then if \(f\) is \(\alpha\)-approximately multicalibrated with respect to \(\mathcal{H}_{B}\) on \(\mathcal{D}\), then \(f\) is \((\sqrt{\alpha}(1+\sqrt{B}))\)-approximately covariance multicalibrated. That is, for all \(h\in\mathcal{H}_{B}\), \(f\) satisfies_
\[\sum_{v\in R(f)}\Pr[f(x)=v]\cdot\big{|}\mathbb{E}[(h(x)-\bar{h}_{v})(y-\bar{y}_ {v})\mid f(x)=v]\big{|}\leqslant\sqrt{\alpha}(1+\sqrt{B}).\]
Proof.: \[\sum_{v\in R(f)}\Pr[f(x)=v]\cdot\big{|}\mathbb{E}[(h(x)-\bar{h}_{v })(y-\bar{y}_{v})\mid f(x)=v]\big{|}\] \[\qquad\qquad\qquad\qquad=\sum_{v\in R(f)}\Pr[f(x)=v]\cdot\big{|} \mathbb{E}[h(x)y\mid f(x)=v]-\bar{y}_{v}\bar{h}_{v}\big{|}\] \[\qquad\qquad\qquad\qquad=\sum_{v\in R(f)}\Pr[f(x)=v]\cdot\big{|} \mathbb{E}[h(x)(y-v)\mid f(x)=v]+\bar{h}_{v}(v-\bar{y}_{v})\big{|}\] \[\qquad\qquad\qquad\qquad\leqslant\sum_{v\in R(f)}\Pr[f(x)=v] \cdot\big{(}|\mathbb{E}[h(x)(y-v)\mid f(x)=v]|+|\bar{h}_{v}(v-\bar{y}_{v})| \big{)}\] \[\qquad\qquad\qquad\qquad\qquad\leqslant\sqrt{\alpha}+\sqrt{B} \sum_{v\in R(f)}\Pr[f(x)=v]\cdot|v-\bar{y}_{v}|\] \[\qquad\qquad\qquad\qquad\qquad\leqslant\sqrt{\alpha}(1+\sqrt{B}).\]
where the second inequality follows from the fact that \(\mathbb{E}[x]\leqslant\sqrt{\mathbb{E}[x^{2}]}\) and the bound \(\max_{x\in\mathcal{X}}h(x)^{2}\leqslant B\).
We now recall a theorem of Gopalan et al. (2022), showing that approximate covariance multicalibration with respect to a class \(\mathcal{H}\) implies approximate loss minimization relative to \(\mathcal{H}\), for convex, Lipschitz losses.
**Theorem 6.9**.: _Fix a distribution \(\mathcal{D}\) over \(\mathcal{X}\times\{0,1\}\) and a class of real-valued functions on \(\mathcal{X}\), \(\mathcal{H}\). Fix a function \(f:\mathcal{X}\to[0,1]\) that maps onto a countable subset of its range, denoted \(R(f)\). Let \(\mathcal{L}\) be a class of functions on \(\{0,1\}\times\mathbb{R}\) that are convex and \(L\)-Lipschitz in their second argument. If \(f\) is \(\alpha\)-approximately covariance multicalibrated with respect to \(\mathcal{H}_{B}\) on \(\mathcal{D}\), then for every \(\ell\in\mathcal{L}\) there exists an efficient post-processing function \(k_{\ell}\) such that_
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[\ell(y,k_{\ell}(f(x)))]\leqslant \min_{h\in\mathcal{H}_{B}}\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[\ell(y,h (x))]+2\alpha L.\]
**Corollary 6.10**.: _Fix a distribution \(\mathcal{D}\) over \(\mathcal{X}\times\{0,1\}\) and two classes of real-valued functions on \(\mathcal{X}\) that are closed under affine transformation, \(\mathcal{H}\) and \(\mathcal{C}\). Fix a function \(f:\mathcal{X}\to[0,1]\) that maps onto a countable subset of its range, denoted \(R(f)\). Let \(\mathcal{L}\) be a class of functions on \(\{0,1\}\times\mathbb{R}\) that are convex and \(L\)-Lipschitz in their second argument. Fix \(\alpha_{\mathcal{C}},B>0\). Let \(B^{\prime}=(1+\sqrt{\frac{2B}{\alpha_{\mathcal{C}}}})^{2}\) and \(\gamma=\frac{\alpha_{\mathcal{C}}}{4B}\). Let \(\alpha_{\mathcal{H}}<\frac{\alpha_{\mathcal{C}}^{3}}{2^{9}mB^{\prime 2}}\), and \(\alpha<\frac{\alpha_{\mathcal{C}}\gamma^{2}}{32mB^{\prime 2}}\). Then if_
* \(\mathcal{H}\) _satisfies the_ \(B^{\prime}\)_-bounded_ \(\gamma\)_-weak learning condition relative to_ \(\mathcal{C}\) _and_ \(\mathcal{D}\)__
* \(f\) _is_ \(\alpha_{\mathcal{H}}\)_-approximately multicalibrated with respect to_ \(\mathcal{D}\) _and_ \(\mathcal{H}_{B^{\prime}}\)__
* \(f\) _is_ \(\alpha\)_-approximately calibrated on_ \(\mathcal{D}\)_,_
_then for every \(\ell\in\mathcal{L}\) there exists an efficient post-processing function \(k_{\ell}\) such that_
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[\ell(y,k_{\ell}(f(x)))]\leqslant \min_{c\in\mathcal{C}_{B}}\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[\ell(y,c (x))]+2L\sqrt{\alpha_{\mathcal{C}}}(1+\sqrt{B}).\]
Proof.: We have from Theorem 6.6 that given the assumed conditions, \(f\) will be \(\alpha_{\mathcal{C}}\)-approximately multicalibrated with respect to \(\mathcal{C}_{B}\) on \(\mathcal{D}\). It follows from Lemma 6.8 that \(f\) is \(\sqrt{\alpha_{\mathcal{C}}}(1+\sqrt{B})\)-approximately covariance multicalibrated with respect to \(\mathcal{C}_{B}\) on \(\mathcal{D}\). The result of Gopalan et al. [2022] then gives us that for all \(\ell\in\mathcal{L}\), there exists an efficient post-processing function \(k_{\ell}\) such that
\[\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[\ell(y,k_{\ell}(f(x)))]\leqslant \mathop{\mathrm{min}}_{c\in\mathcal{C}_{B}}\mathop{\mathbb{E}}_{(x,y)\sim \mathcal{D}}[\ell(y,c(x))]+2L\sqrt{\alpha_{\mathcal{C}}}(1+\sqrt{B}).\]
## 7 Empirical Evaluation
In this section, we study Algorithm 1 empirically via an efficient, open-source Python implementation of our algorithm on both synthetic and real regression problems. Our code is available here: [https://github.com/Declancharrison/Level-Set-Boosting](https://github.com/Declancharrison/Level-Set-Boosting). An important feature of Algorithm 1 which distinguishes it from traditional boosting algorithms is the ability to parallelize not only during inference, but also during training.
Let \(f_{t}\) be the model maintained by Algorithm 1 at round \(t\) with \(m\) level sets. Given a data set \(X\), \(f_{t}\) creates a partition of \(X\) defined by \(X_{i}^{t+1}=\{x|f_{t}(x)=v_{i}\}\). Since the \(X_{i}\) are disjoint, each call \(h_{i}^{t+1}=A_{\mathcal{H}}(X_{i}^{t+1})\) can be made on a separate worker followed by a combine and round operation to obtain \(\tilde{f}_{t+1}\) and \(f_{t+1}\) respectively, as shown in Figure 1. A parallel inference pass at round \(t\) works nearly identically, but uses the historical weak learners \(h_{i}^{t+1}\) obtained from training and applies them to each set \(X_{i}^{t+1}\).
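The following sketch illustrates one such parallel training round. It is not the released implementation: the combine step shown simply replaces the prediction on each level set with the newly fit weak learner's output, the round step snaps predictions to an evenly spaced \(m\)-point grid in \([0,1]\), and the special handling of small level sets is omitted; these conventions are assumptions made for illustration only.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from sklearn.tree import DecisionTreeRegressor

def _fit_weak_learner(args):
    # Worker: run the weak learning algorithm A_H on one level set (here, a depth-1 tree).
    X_i, y_i = args
    return DecisionTreeRegressor(max_depth=1).fit(X_i, y_i)

def level_set_round(f_t, X, y, m, n_workers=4):
    """One schematic round of level-set boosting: partition by the current
    level sets, fit a weak learner per level set in parallel, combine, round."""
    grid = np.linspace(0.0, 1.0, m)
    preds = f_t(X)                                   # current predictions, assumed to lie on the grid
    masks = [np.isclose(preds, v) for v in grid]     # X_i = {x : f_t(x) = v_i}
    jobs = [(X[mask], y[mask]) for mask in masks if mask.sum() > 0]

    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        learners = list(pool.map(_fit_weak_learner, jobs))  # independent calls, one per worker

    # Combine: each h_i updates predictions on its own level set (an assumed convention).
    new_preds = preds.copy()
    fitted = iter(learners)
    for mask in masks:
        if mask.sum() > 0:
            new_preds[mask] = next(fitted).predict(X[mask])
    # Round: snap back onto the m-point grid to form the level sets of f_{t+1}.
    # A full implementation would also store the h_i to evaluate f_{t+1} on new points.
    return np.clip(np.round(new_preds * (m - 1)) / (m - 1), 0.0, 1.0)
```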
### Prediction on Synthetic Data
From Theorem 5.2, we know that multicalibration with respect to a hypothesis class \(\mathcal{H}\) satisfying our weak learning condition implies Bayes optimality. To visualize the fast convergence of our algorithm to Bayes optimality, we create two synthetic datasets; each dataset contains one million samples with two features. We label these points using two functions, \(C_{0}\) and \(C_{1}\), defined below and pictured in Figure 2. We attempt to learn the underlying function with Algorithm 1.
\[C_{0}(x)=\begin{cases}(x+1)^{2}+(y-1)^{2},&\text{if }x\leqslant 0,y\geqslant 0 \\ (x-1)^{2}+(y-1)^{2},&\text{if }x>0,y\geqslant 0\\ (x+1)^{2}+(y+1)^{2},&\text{if }x\leqslant 0,y<0\\ (x-1)^{2}+(y+1)^{2},&\text{if }x>0,y<0\end{cases}\] ( \[C_{0}\] )
Figure 1: The update process at round \(t\) with \(m\) level sets during training.
\[C_{1}(x)=\begin{cases}x+20xy^{2}\cos(-8x)\sin(8y)\left(\frac{(1.5x+4)(x+1)^{2}}{y+3 }+(y-1)^{2}\right),&\text{if }x\leqslant 0,y\geqslant 0\\ x+20xy^{2}\cos(8x)\sin(8y)\left(\frac{(1.5x+4)(x-1)^{2}}{y+3}+(y-1)^{2} \right),&\text{if }x>0,y\geqslant 0\\ x+20xy^{2}\cos(-8x)\sin(8y)\left(\frac{(1.5x+4)(x+1)^{2}}{y+3}+(y+1)^{2} \right),&\text{if }x\leqslant 0,y<0\\ x+20xy^{2}\cos(8x)\sin(8y)\left(\frac{(1.5x+4)(x-1)^{2}}{y+3}+(y+1)^{2} \right),&\text{if }x>0,y<0\end{cases}\] ( \[C_{1}\] )
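The first dataset can be generated directly from the piecewise definition of \(C_{0}\) above. The sketch below samples one million points from \([-2,2]^{2}\), following the domain in Figure 2; the random seed is an arbitrary choice.

```python
import numpy as np

def C0(x1, x2):
    # Piecewise definition of C_0 from the display above:
    # in each quadrant, the squared distance to that quadrant's center (+-1, +-1).
    out = np.empty_like(x1)
    q = (x1 <= 0) & (x2 >= 0); out[q] = (x1[q] + 1) ** 2 + (x2[q] - 1) ** 2
    q = (x1 > 0) & (x2 >= 0);  out[q] = (x1[q] - 1) ** 2 + (x2[q] - 1) ** 2
    q = (x1 <= 0) & (x2 < 0);  out[q] = (x1[q] + 1) ** 2 + (x2[q] + 1) ** 2
    q = (x1 > 0) & (x2 < 0);   out[q] = (x1[q] - 1) ** 2 + (x2[q] + 1) ** 2
    return out

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(1_000_000, 2))   # one million samples, two features
y = C0(X[:, 0], X[:, 1])
```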
In Figure 3, we show an example of Algorithm 1 learning \(C_{0}\) using a discretization of five hundred level sets and a weak learner hypothesis class of depth-one decision trees. Each image in Figure 3 corresponds to the map produced by Algorithm 1 at the round listed at the top of the image. As the round count increases, the number of non-empty level sets increases until each level set is filled, at which point the updates become more granular. The termination round titled 'final round' occurs at \(T=199\) and paints an approximate map of \(C_{0}\). The image titled 'out of sample' is the map produced on a set of one million points randomly drawn outside of the training sample, and shows that Algorithm 1 is in fact an approximation of the Bayes optimal \(C_{0}\).
Figure 4 plots the same kind of progression as Figure 3, but with a more complicated underlying function \(C_{1}\) using a variety of weak learner classes. We are able to learn this more complex surface out of sample with all base classes except for linear regression, which results in a noisy out-of-sample plot.
### Prediction on Census Data
We evaluate the empirical performance of Algorithm 1 on US Census data compiled using the Python folktables package Ding et al. (2021). In this dataset, the feature space consists of demographic information about individuals (see Table 1), and the labels correspond to the individual's annual income.
Figure 2: \(C_{0}\) maps \(x_{1},x_{2}\in[-2,2]\) to four cylindrical cones symmetric about the origin. \(C_{1}\) maps \(x_{1},x_{2}\in[-1,1]\) to a hilly terrain from a more complex function.
We cap income at $100,000 and then rescale all labels into \([0,1]\). On an 80/20% train-test split with 500,000 total samples, we compare the performance of Algorithm 1 with Gradient Boosting with two performance metrics: mean squared error (MSE), and mean squared calibration error (MSCE). For less expressive weak learner classes (such as DT(1), see Figure 5), Algorithm 1 has superior MSE out of sample compared to Gradient Boosting through one hundred rounds while maintaining significantly lower MSCE, and converges quicker. However, as the weak learning class becomes more expressive (e.g. increasing decision tree depths), Algorithm 1 is more prone to overfitting than gradient boosting (see Figure 6).
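MSCE is not defined explicitly here; the sketch below assumes it is the empirical analogue of the squared calibration error \(\sum_{v}\Pr[f(x)=v]\left(\operatorname*{\mathbb{E}}[y-v\mid f(x)=v]\right)^{2}\) used in the definition of approximate calibration, with level sets formed by rounding predictions to a finite grid.

```python
import numpy as np

def mse(pred, y):
    return np.mean((pred - y) ** 2)

def msce(pred, y, decimals=3):
    """Mean squared calibration error, read as
    sum_v Pr[f(x)=v] * (mean(y | f(x)=v) - v)^2  (an assumed reading of the metric);
    level sets are formed by rounding predictions."""
    levels = np.round(pred, decimals)
    err = 0.0
    for v in np.unique(levels):
        idx = levels == v
        err += idx.mean() * (y[idx].mean() - v) ** 2
    return err
```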
Figure 4: Stages of Algorithm 1 learning \(C_{1}\) with linear regression (LR) and varying depth \(d\) decision trees (DT(\(d\))). In the out of sample plot for linear regression, points are not mapped to their proper position, implying \(C_{1}\) cannot be learned by boosting linear functions. All other hypothesis classes eventually converge to \(C_{1}\).
In Table 2, we compare the time taken to train \(n\) weak learners with Algorithm 1 and with scikit-learn's version of Gradient Boosting on our census data. Recall that our algorithm trains multiple weak learners per round of boosting, and so comparing the two algorithms for a fixed number of calls to the weak learner is distinct from comparing them for a fixed number of rounds. Because models output by Algorithm 1 may be more complex than those produced by Gradient Boosting run for the same number of rounds, we use number of weak learners trained as a proxy for model complexity, and compare the two algorithms holding this measure fixed. We see the trend for Gradient Boosting is linear with respect to number of weak learners, whereas Algorithm 1 does not follow the same linear pattern upfront. This is due to not being able to fully
Figure 5: Comparison of Algorithm 1 (LS) and Gradient Boosting (GB), both using depth 1 regression trees. * indicates termination round of Algorithm 1.
leverage parallelization of weak learner training in the early stages of boosting. At each round, Algorithm 1 calls the weak learner on every large enough level set of the current model, and it is these independent calls that can be easily parallelized. However, in the early rounds of boosting the model may be relatively simple, and so many level sets may be sparsely populated. As the model becomes more expressive over subsequent rounds, the weak learner will be invoked on more sets per round, allowing us to fully utilize parallelization.
In Figure 6, we measure MSE and MSCE for Algorithm 1 and Gradient Boosting over rounds of training on our census data. Again, we note that one round of Algorithm 1 is not equivalent to one round of Gradient Boosting, but we intend to demonstrate error comparisons and rates of convergence. For the linear regression plots, Gradient Boosting does not reduce either error since combinations of linear models are also linear. As the complexity of the underlying model class increases, Gradient Boosting surpasses Algorithm 1 in terms of MSE, though it does not minimize calibration error.
We notice that Algorithm 1, like most machine learning algorithms, is prone to overfitting when allowed. Future performance heuristics we intend to investigate include validating updates, complexity penalties, and weighted mixtures of updates.
| # Weak Learners | DT(1) LS | DT(1) GB | Faster? | DT(2) LS | DT(2) GB | Faster? | DT(3) LS | DT(3) GB | Faster? |
|---|---|---|---|---|---|---|---|---|---|
| **50 level sets** | | | | | | | | | |
| 100 | 9.11 | 11.97 | ✓ | 5.86 | 23.01 | ✓ | 6.88 | 32.92 | ✓ |
| 300 | 18.70 | 35.81 | ✓ | 14.90 | 69.17 | ✓ | 15.64 | 102.14 | ✓ |
| 500 | 27.00 | 58.19 | ✓ | 21.74 | 115.65 | ✓ | 24.77 | 169.90 | ✓ |
| 1000 | 46.73 | 116.49 | ✓ | 42.92 | 231.74 | ✓ | 46.38 | 336.89 | ✓ |
| **100 level sets** | | | | | | | | | |
| 100 | 7.18 | 11.97 | ✓ | 5.29 | 23.01 | ✓ | 5.06 | 32.92 | ✓ |
| 300 | 13.08 | 35.81 | ✓ | 13.55 | 69.17 | ✓ | 14.72 | 102.14 | ✓ |
| 500 | 21.20 | 58.19 | ✓ | 19.57 | 115.65 | ✓ | 21.79 | 169.90 | ✓ |
| 1000 | 41.99 | 116.49 | ✓ | 36.26 | 231.74 | ✓ | 40.92 | 336.89 | ✓ |
| **300 level sets** | | | | | | | | | |
| 100 | 5.87 | 11.97 | ✓ | 9.18 | 23.01 | ✓ | 6.54 | 32.92 | ✓ |
| 300 | 13.21 | 35.81 | ✓ | 17.46 | 69.17 | ✓ | 11.13 | 102.14 | ✓ |
| 500 | 19.05 | 58.19 | ✓ | 22.20 | 115.65 | ✓ | 19.64 | 169.90 | ✓ |
| 1000 | 32.80 | 116.49 | ✓ | 36.61 | 231.74 | ✓ | 27.12 | 336.89 | ✓ |
Table 2: Time (in seconds) comparison of Algorithm 1 (LS) and Gradient Boosting to train given numbers of weak learners, for various weak learner classes and numbers of level sets.
Figure 6: MSE and MSCE comparison of Algorithm 1 (LS) and Gradient Boosting (GB) on linear regression and decision trees of varying depths. * indicates termination round of LS and occurs, from top to bottom, at \(T=41,23,39,20\). |
2309.08203 | Stability of five-dimensional Myers-Perry black holes under massive
scalar perturbation: bound states and quasinormal modes | The stability of five-dimensional singly rotating Myers-Perry Black Holes
against massive scalar perturbations is studied. Both the quasibound states and
quasinormal modes of the massive scalar field are considered. For the
quasibound states, we use an analytical method to discuss the effective
potential felt by the scalar field, and found that there is no potential well
outside the event horizon. Thus, singly rotating Myers-Perry Black Holes are
stable against the perturbation of quasibound states of massive scalar fields.
Then, We use continued fraction method based on solving a seven-term recurrence
relations to compute the spectra of the quasinormal modes. For different values
of the black hole rotation parameter $a$, scalar mass parameter $\mu$ and
angular quantum numbers, all found quasinormal modes are damped. So singly
rotating Myers-Perry Black Holes are also stable against the perturbation of
quasinormal modes of massive scalar fields. Besides, when the scalar mass $\mu$
becomes relatively large, the long-living quasiresonances are also found as in
other rotating black hole models. Our results complement previous arguments on
the stability of five-dimensional singly rotating Myers-Perry black holes
against massive scalar perturbations. | Wenbin Li, Kai-Peng Lu, W. LiMing, Jia-Hui Huang | 2023-09-15T07:12:18Z | http://arxiv.org/abs/2309.08203v2 | Stability of five-dimensional Myers-Perry black holes under massive scalar perturbation: bound states and quasinormal modes
###### Abstract
The stability of five-dimensional singly rotating Myers-Perry Black Holes against massive scalar perturbations is studied. Both the quasibound states and quasinormal modes of the massive scalar field are considered. For the quasibound states, we use an analytical method to discuss the effective potential felt by the scalar field and find that there is no potential well outside the event horizon. Thus, singly rotating Myers-Perry Black Holes are stable against the perturbation of quasibound states of massive scalar fields. Then, we use the continued fraction method, based on solving a seven-term recurrence relation, to compute the spectra of the quasinormal modes. For different values of the black hole rotation parameter \(a\), scalar mass parameter \(\mu\) and angular quantum numbers, all found quasinormal modes are damped. So singly rotating Myers-Perry Black Holes are also stable against the perturbation of quasinormal modes of massive scalar fields. Besides, when the scalar mass \(\mu\) becomes relatively large, long-living quasiresonances are also found, as in other rotating black hole models. Our results complement previous arguments on the stability of five-dimensional singly rotating Myers-Perry black holes against massive scalar perturbations.
## I Introduction
Black holes (BHs) have a long history, and BH physics plays an essential role in modern theoretical and observational physics. According to BH perturbation theory, when a BH is perturbed by any possible field (scalar, electromagnetic, or gravitational), the perturbations are usually governed by a pair of second-order ordinary differential equations. The physical requirements that the perturbation fields be purely ingoing at the event horizon and remain finite near spatial infinity lead to three different classes of boundary conditions, respectively called scattering states, quasinormal modes (QNMs) and quasibound states (QBSs). The well-known QNMs play an important role during the ringdown process. They are independent of the initial conditions, reflect the intrinsic properties of the spacetime itself, and are regarded as the characteristic modes of the oscillations of BHs [1; 2].
It was suggested that particles with negative energy can exist in the ergoregion of a rotating BH, so one can imagine a process through which energy may be extracted from a rotating BH; this is the Penrose process [3; 4]. Similarly, there exists an analogous effect for an incident bosonic wave, called superradiance [5; 6; 7; 8; 9; 10]. When a bosonic wave impinges upon a rotating BH, the reflected wave is amplified by extracting rotational energy from the BH if the frequency \(\omega\) of the wave satisfies \(\omega<m\Omega_{H}\), where \(m\) is the azimuthal number with respect to the BH rotation axis and \(\Omega_{H}\) represents the angular velocity of the BH event horizon. (For a comprehensive review of superradiance, see [11].) It was pointed out that placing an artificial mirror outside the event horizon allows the amplified waves to be reflected back and forth, thus leading to an instability. This is the so-called "black hole bomb" mechanism [7; 12]. This instability was later realized by imposing (charged) massive scalar perturbations on a Kerr (or Kerr-Newman) BH background [13; 14; 15; 16]. In these cases, the mass term of the perturbation field behaves as a natural mirror. Meanwhile, the superradiant (in)stability of asymptotically flat Kerr BHs under massive scalar or vector perturbations has been studied in Refs.[17; 18; 19; 20; 21; 22; 23].
Higher-dimensional spacetimes are also interesting in theoretical physics. On the one hand, higher dimensions are necessary for string theory, compact extra dimensions, brane-world models, gauge/gravity duality, etc. On the other hand, in four-dimensional general relativity, a uniqueness theorem proved by Carter and Robinson states that the only possible stationary, axisymmetric, asymptotically flat BH spacetime is the Kerr solution [24; 25]. This theorem does not hold in higher-dimensional spacetimes, where there exists a variety of black-object solutions such as black strings, branes, rings and so on [26].
Compared to the four-dimensional case, the (in)stability of higher-dimensional BHs is more complicated. The higher-dimensional spherically symmetric Schwarzschild-Tangherlini spacetime and Reissner-Nordstrom (RN) BHs were proven to be stable when perturbed by external fields [27; 28; 29; 30; 31; 32; 33]. However, the black strings and p-branes constructed from the spherically symmetric BHs are generally unstable; this is known as the Gregory-Laflamme instability [34]. The superradiant conditions of higher-dimensional rotating BHs were studied in Refs.[35; 36]. It was shown that Myers-Perry (MP) BHs on an AdS background with equal angular momenta in an odd number of dimensions (greater than five) are superradiantly unstable under tensor perturbations [37]. The scalar QNMs were numerically explored for \(Kerr-AdS_{5}\) with unequal rotations [38]. Superradiance also leads to gravitational instability in other four- and higher-dimensional AdS BHs [39; 40; 41; 42]. It has recently been proven that small MP-AdS BHs in arbitrary dimensions are also superradiantly unstable [43]. In Ref.[44], the author derived a near-extremal QNM formula for higher-dimensional singly rotating MP-dS BHs with non-minimally coupled scalar fields, and singly rotating MP-dS black holes were found to be stable against the QNMs of massive scalar perturbations [45].
Asymptotically flat Myers-Perry Black Holes (MPBHs) with equal angular momenta were found to be stable under gravitational perturbations in five or seven dimensions, but in nine dimensions, for sufficiently rapid rotation, the authors found that perturbations grow exponentially in time [46]. For \(D\geq 7\), the stability of singly rotating MPBHs against tensor-type perturbations was studied and no instability was found [47]. Moreover, the QNMs of massless scalar perturbations on \(D=5\) and \(D=6\) MP spacetimes with a single rotation parameter were well investigated, and the numerical results showed that they are all stable [48; 49]. The QNMs of massive scalar perturbations on the \(D=6\) MPBH with a single rotation parameter have been studied recently and no instability was found [50].
In Ref.[51], the authors considered the stability of singly rotating MPBHs against massive scalar perturbations. Based on the assumption that there are no stable orbits for \(D>4\) MPBHs, and thus the perturbation field can escape to infinity, they argued that singly rotating MPBHs should be stable against massive scalar perturbations. In order to provide direct and complementary evidence for the stability of a \(5D\) singly rotating MPBH against massive scalar perturbations, in this work, we consider the stability of the MPBH against both the QBSs and QNMs of massive scalar perturbations [52]. We adopt an analytical method to discuss the QBSs and use the continued fraction method to study the QNMs.
This paper is organized as follows. In Sec.II, we briefly review the higher-dimensional singly rotating MPBH solution and the Klein-Gordon equation in a \(5D\) singly rotating MPBH. From the radial EOM, we obtain the effective potential and discuss the relevant boundary conditions. In Sec.III, we consider the asymptotic behavior of the effective potential, then analytically study the effective potential and discuss the stability of the \(5D\) MPBH against the QBSs of the massive scalar fields. In Sec.IV, we first introduce the continued fraction method used here, and then compute and discuss the QNMs of the massive scalar fields for different values of rotation parameter and scalar mass. The final section is devoted to the conclusions.
## II The background metric and effective potential
A higher-dimensional generalization of the Kerr metric to an arbitrary number of dimensions \(D\geq 5\) was discovered by Myers and Perry in Ref.[53]. For a \((4+n)\)-dimensional MPBH, there are \(\lfloor\dfrac{n+3}{2}\rfloor\) independent angular momentum components, each of which corresponds to a rotation plane. Throughout this work, we concentrate on the simplest case where the MPBH rotates in just one plane, and we denote the angular momentum (per mass) of the MPBH by \(a\). Thus, the line element of a singly rotating higher-dimensional MPBH in Boyer-Lindquist-type coordinates is given by
\[\mathrm{d}s^{2}= -\frac{\Delta_{n}-a^{2}\sin^{2}\theta}{\Sigma}\mathrm{d}t^{2}- \frac{2aMr^{1-n}\sin^{2}\theta}{\Sigma}\mathrm{d}t\mathrm{d}\varphi\] \[+\frac{(r^{2}+a^{2})^{2}-\Delta_{n}a^{2}\sin^{2}\theta}{\Sigma} \sin^{2}\theta\mathrm{d}\varphi^{2}+\frac{\Delta_{n}}{\Sigma}\mathrm{d}r^{2}\] \[+\Sigma\mathrm{d}\theta^{2}+r^{2}\cos^{2}\theta\mathrm{d}\Omega_ {n}^{2}, \tag{1}\]
where
\[\Sigma =r^{2}+a^{2}\cos^{2}\theta, \tag{2}\] \[\Delta_{n} =r^{2}+a^{2}-Mr^{1-n}. \tag{3}\]
\(\mathrm{d}\Omega_{n}^{2}\) denotes the standard metric of the unit \(n\)-sphere. \(M,a\) are related to the physical mass \(M_{\mathrm{BH}}\) and angular momentum \(J\) of the MPBH as follows[54],
\[M_{\mathrm{BH}}=\frac{(n+2)\mathcal{A}_{n+2}M}{16\pi G},\quad J=\frac{ \mathcal{A}_{n+2}Ma}{8\pi G}, \tag{4}\]
where \(\mathcal{A}_{n+2}=2\pi^{\frac{n+3}{2}}/\Gamma\left(\frac{n+3}{2}\right)\) represents the area of a unit \((n+2)\)-dimensional sphere and \(G\) is the \((4+n)\)-dimensional Newtonian constant of gravitation. The above metric describes an asymptotically flat and rotating vacuum BH solution with spherical topology. Hereafter, without loss of generality, \(M,a>0\) are assumed.
The event horizon of the BH is located at \(r=r_{H}\), which is the largest real root of the equation \(\left.\Delta_{n}\right|_{r=r_{H}}=0\). For \(n=0\), the above metric reduces to the \(4D\) Kerr metric, which is well known to have two horizons. Note that there is only one horizon when \(n\geq 1\). For \(n=1\), which is the case we focus on in this paper, the rotation parameter is bounded above, \(a<\sqrt{M}\).
According to the BH perturbation theory, the dynamical evolution of a massive scalar perturbation field \(\Psi(x)\) with mass \(\mu\) in the background spacetime (1) is governed by the covariant KG equation
\[\Box\Psi(x)=\frac{1}{\sqrt{-g}}\partial_{\alpha}\left[g^{\alpha\beta}\sqrt{-g} \partial_{\beta}\Psi(x)\right]=\mu^{2}\Psi(x), \tag{5}\]
where \(g=\det(g_{\alpha\beta})\) is the determinant of the spacetime metric. Since Eq.(5) is separable in these Boyer-Lindquist-type coordinates \(x^{\mu}=\{t,r,\theta,\varphi,\cdots\}\), we can decompose the eigenfunction with the following ansatz [48; 49; 51]
\[\Psi(x)=e^{-i\omega t}e^{im\varphi}R(r)S(\theta)Y(\Omega), \tag{6}\]
where \(Y(\Omega)\) is the hyperspherical harmonics on the \(n\)-sphere with eigenvalues \(-j(j+n-1)(j=0,1,2,\cdots)\). The quantum number \(m(=0,\pm 1,\pm 2,\cdots)\) describes the dependence of the perturbation field on the azimuthal direction around the rotation axis. Substitute the ansatz into Eq.(5), the scalar \((4+n)\)-dimensional spheroidal harmonics \(S(\theta)\) satisfies the following angular EOM
\[\frac{1}{\sin\theta\cos^{n}\theta}\bigg{(}\frac{\mathrm{d}}{ \mathrm{d}\theta}\sin\theta\cos^{n}\theta\frac{\mathrm{d}S(\theta)}{\mathrm{d }\theta}\bigg{)}+\bigg{[}c^{2}\cos^{2}\theta-\frac{m^{2}}{\sin^{2}\theta}\] \[-\frac{j(j+n-1)}{\cos^{2}\theta}+\lambda_{kjm}\bigg{]}S(\theta)=0, \tag{7}\]
where \(c=a\sqrt{\omega^{2}-\mu^{2}}\). Different from the four-dimensional Kerr BH case, the angular separation constant \(\lambda_{kjm}\) now depends on three indices \(k,m,j\). The parameter \(k(=0,1,2,\cdots)\) labels the discrete eigenvalues of \(S(\theta)\) for fixed values of \(j\) and \(m\). When \(a\omega\ll 1\) and \(a\mu\ll 1\), the separation constant \(\lambda_{kjm}\) can be expanded as a Taylor series as follows
\[\lambda_{kjm}=\ell(\ell+n+1)+\sum_{p=1}^{\infty}\tilde{f}_{p}c^{p}, \tag{8}\]
where \(\ell=2k+j+|m|\) is an integer which satisfies \(\ell\geq j+|m|\), and the first five terms of \(\tilde{f}_{p}\) are shown explicitly in Ref.[55].
From the KG equation (5), we can also obtain the following radial EOM which is obeyed by \(R(r)\),
\[\frac{1}{r^{n}}\frac{\mathrm{d}}{\mathrm{d}r}\left(r^{n}\Delta_{n}\frac{ \mathrm{d}R(r)}{\mathrm{d}r}\right)+U(r)R(r)=0, \tag{9}\]
where
\[U(r)=\frac{\left[\omega(r^{2}+a^{2})-ma\right]^{2}}{\Delta_{n}}-\bigg{[}\frac {j(j+n-1)a^{2}}{r^{2}}+\lambda_{kjm}-2ma\omega+\mu^{2}r^{2}+a^{2}\omega^{2} \bigg{]}. \tag{10}\]
It is easy to check that the above radial EOM reduces to the four-dimensional Teukolsky equation [7; 8; 10] when \(n=0\) (hence \(M=2GM_{\mathrm{BH}}\)).
In order to study the appropriate boundary conditions of the scalar perturbation, it is useful to define a tortoise coordinate by \(\frac{\mathrm{d}r_{*}}{\mathrm{d}r}=\frac{r^{2}+a^{2}}{\Delta_{n}}\) and a new radial function \(\tilde{R}(r)=\sqrt{r^{n}(r^{2}+a^{2})}R(r)\). With these new quantities, the radial EOM is transformed into the following Schrodinger-like equation
\[\frac{\mathrm{d}^{2}\tilde{R}(r)}{\mathrm{d}r_{*}^{2}}+\tilde{U}(r)\tilde{R}( r)=0. \tag{11}\]
Then, the asymptotic limits of \(\tilde{U}(r)\) at the spatial infinity and event horizon are
\[\tilde{U}(r)\sim\begin{cases}(\omega-m\Omega_{H})^{2}\,,&r_{*}\to-\infty\,(r \to r_{H}),\\ \omega^{2}-\mu^{2},&r_{*}\to+\infty\,(r\to+\infty),\end{cases} \tag{12}\]
where \(\Omega_{H}=\frac{a}{r_{H}^{2}+a^{2}}\) is the angular velocity of the BH event horizon. The physically accepted boundary condition of the perturbation field at the classical BH horizon is a purely ingoing wave. Then, the asymptotic solutions of the radial equation (11) at the horizon and spatial infinity are chosen as follows,
\[R(r)\sim\begin{cases}(r-r_{H})^{-i\sigma},&r\to r_{H},\\ r^{-(n+2)/2}\mathrm{e}^{qr},&r\to\infty,\end{cases} \tag{13}\]
where
\[q^{2}=\mu^{2}-\omega^{2},\quad\sigma=\frac{\left[\left(r_{H}^{2}+a^{2}\right) \omega-ma\right]r_{H}}{(n-1)\left(r_{H}^{2}+a^{2}\right)+2r_{H}^{2}}. \tag{14}\]
At spatial infinity, two physical boundary conditions may exist. One is the renowned QNM condition which imposes purely outgoing modes at \(r\to\infty\). In this case, \(\mathrm{Re}(\omega)>\mu\) and \(q=i\sqrt{\omega^{2}-\mu^{2}}\). The other is the quasi-bound state condition which requires decaying modes at \(r\to\infty\). In this case, \(\mathrm{Re}(\omega)<\mu\) and \(q=-\sqrt{\mu^{2}-\omega^{2}}\).
The radial equation (9) and chosen boundary conditions single out a discrete set of complex frequencies \(\{\omega_{n}\}\) (\(\omega_{n}\equiv\omega_{R}-i\omega_{I}\)). In our convention, \(\omega_{I}>0\) means that the perturbation field is a decaying and stable mode, while \(\omega_{I}<0\) implies a growing and unstable mode [1; 2].
## III Stability analysis of the QBSs
In this section, we consider the bound state condition and analytically study the superradiant stability regime
of the \(5D\) rotating MPBH under massive scalar perturbations. The event horizon of the MPBH is the solution of \(\Delta_{1}=0\), i.e. \(r_{H}=\sqrt{M-a^{2}}\). For bound state, the radial function is decaying at spatial infinity, i.e. \(q=-\sqrt{\mu^{2}-\omega^{2}}\) in the asymptotic solution (13). In order for a superradiant scattering occurring, the angular frequency \(\omega\) should satisfy the following inequality
\[\omega<\omega_{c}\equiv m\Omega_{H}=\frac{ma}{M}. \tag{15}\]
Since the massive scalar field is considered as a perturbation of the MPBH, it is required that the mass of the scalar field is in fact much less than the mass of the MPBH. In our \(5D\) MPBH case, this requirement means \(\mu^{2}M\ll 1\).
### Asymptotic Analysis of the effective potential
According to equation (9), the radial EOM of a scalar field in \(5D\) MPBH (\(n=1\)) can be written as
\[\Delta_{1}\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r}\left(r\Delta_{1}\frac{ \mathrm{d}R(r)}{\mathrm{d}r}\right)+W(r)R(r)=0, \tag{16}\]
where
\[W(r)= \left[\omega(r^{2}+a^{2})-ma\right]^{2}\] \[-\Delta_{1}\bigg{(}\frac{j^{2}a^{2}}{r^{2}}+\lambda_{kjm}-2ma \omega+\mu^{2}r^{2}\bigg{)}, \tag{17}\]
Defining a new function \(\psi(r)=\sqrt{r\Delta_{1}}R(r)\)[18], the radial EOM can be rewritten as
\[\frac{\mathrm{d}^{2}\psi(r)}{\mathrm{d}r^{2}}+\left[\omega^{2}-V_{\mathrm{eff} }(r)\right]\psi(r)=0, \tag{18}\]
where the effective potential is
\[V_{\mathrm{eff}}(r)=\omega^{2}+\frac{-4r^{2}W(r)-\Delta_{1}^{2}+4r^{2}\left(2 \Delta_{1}-r^{2}\right)}{4r^{2}\Delta_{1}^{2}}. \tag{19}\]
The superradiant (in)stability can be judged by analyzing whether the effective potential has a potential well outside the event horizon [18; 51]. If there were a potential well, the superradiant QBSs could be trapped and scattered back and forth (the black hole "bomb" mechanism [7; 12]), leading to a superradiant instability of the system. It is worth emphasizing here that the existence of a trapping potential well is a necessary condition (but not a sufficient one) for the instability of the system. On the other hand, if there were no potential well, one could conclude that the system is superradiantly stable.
Note that in principle it's not enough to analyze the existence of the potential well by just considering the asymptotic behaviour of the effective potential at spatial infinity. As discussed in Ref.[56], one may find a potential well sandwiched between two barriers. So one needs to analyze the existence of the potential well in the whole spatial region outside the event horizon.
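Before turning to the analytic treatment, the shape of the effective potential can be spot-checked numerically by scanning Eqs. (17) and (19) over the whole region outside the horizon. In the sketch below, the parameter values (and the approximation \(\lambda_{kjm}\simeq\ell(\ell+2)\) from Eq. (8) with \(n=1\)) are illustrative assumptions; a trapping well would appear as a local minimum of \(V_{\mathrm{eff}}\) between the horizon and spatial infinity.

```python
import numpy as np

# Illustrative parameters (assumptions): M = 1, a = 0.5, mu = 0.1, m = 1, j = 0, l = 1,
# with omega chosen below both the bound-state threshold mu and the superradiant
# bound m*a/M, and lambda_kjm approximated by l(l+2).
M, a, mu = 1.0, 0.5, 0.1
m, j, l = 1, 0, 1
omega = 0.05
lam = l * (l + 2)
r_H = np.sqrt(M - a**2)

def V_eff(r):
    """Effective potential of Eq. (19), with W(r) from Eq. (17) and Delta_1 = r^2 + a^2 - M."""
    Delta1 = r**2 + a**2 - M
    W = (omega * (r**2 + a**2) - m * a) ** 2 \
        - Delta1 * (j**2 * a**2 / r**2 + lam - 2 * m * a * omega + mu**2 * r**2)
    return omega**2 + (-4 * r**2 * W - Delta1**2 + 4 * r**2 * (2 * Delta1 - r**2)) / (4 * r**2 * Delta1**2)

r = np.linspace(r_H * (1 + 1e-4), 50 * r_H, 200_000)
V = V_eff(r)
dV = np.diff(V)
# A trapping well would show up as a local minimum: dV changing sign from - to +.
has_well = np.any((dV[:-1] < 0) & (dV[1:] > 0))
print("local minimum of V_eff found outside the horizon:", has_well)
```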
We first consider the asymptotic behaviors of the effective potential (19) and its derivative at the horizon and spatial infinity, which are
\[V_{\mathrm{eff}}(r\to r_{H}) \longrightarrow-\infty,\quad V_{\mathrm{eff}}^{\prime}(r\to r_{H}) \longrightarrow+\infty, \tag{20}\] \[V_{\mathrm{eff}}(r\rightarrow+\infty) \longrightarrow\mu^{2}+\frac{M\left(\mu^{2}-2\omega^{2}\right)-a^ {2}\mu^{2}+\lambda_{kjm}+\frac{3}{4}}{r^{2}}+\mathcal{O}\left(\frac{1}{r^{3}} \right),\] (21) \[V_{\mathrm{eff}}^{\prime}(r\rightarrow+\infty) \longrightarrow\frac{-2M\left(\mu^{2}-2\omega^{2}\right)+2a^{2} \mu^{2}-2\lambda_{kjm}-\frac{3}{2}}{r^{3}}+\mathcal{O}\left(\frac{1}{r^{4}} \right). \tag{22}\]
It is easy to see that the effective potential approaches to a constant \(\mu^{2}\) at spatial infinity. As we mentioned below equation (15), perturbation analysis requires that \(\mu^{2}M\ll 1\). For bound state, \(\omega^{2}<\mu^{2}\), thus \(\omega^{2}M\ll 1\). Since \(a^{2}<M\), we have \(a^{2}\mu^{2}\ll 1\). Under these conditions, \(\lambda_{kjm}\simeq\ell(\ell+n+1)>0\). Now it's easy to see the numerator of the leading order of \(V_{\mathrm{eff}}^{\prime}\) is negative, i.e. \(-2M\left(\mu^{2}-2\omega^{2}\right)+2a^{2}\mu^{2}-2\lambda_{kjm}-\frac{3}{2}<0\).
Based on the above analysis, we find that \(V_{\mathrm{eff}}^{\prime}\to 0^{-}\) as \(r\rightarrow\infty\), i.e. there is no trapping well for the effective potential near spatial infinity. Then, according to the asymptotic behaviours of the effective potential at the event horizon and spatial infinity, we can infer that there is at least one maximum between the event horizon and spatial infinity, i.e. there is at least one potential barrier between the event horizon and spatial infinity. However, as discussed before, this is not sufficient to conclude that the system is superradiantly stable. We need a further analysis on the shape of the effective potential between the event horizon and the spatial infinity.
In the next subsection, we use an analytic method based on Descartes' rule of signs to show that there is no trapping well for the effective potential outside the event horizon.
### Analysis of the Derivative of the Effective Potential
In this subsection, we analyze the shape of the effective potential by considering the positive real roots of the algebraic equation \(V^{\prime}_{\text{eff}}(r)=0\) in the physically allowed interval \((r_{H},+\infty)\). The derivative of the effective potential is
\[V^{\prime}_{\text{eff}}(r)=-\frac{f(r)}{2r^{3}\Delta_{1}^{3}}, \tag{23}\]
where
\[f(r)= A_{1}r^{6}+B_{1}r^{5}+C_{1}r^{4}+D_{1}r^{3}+E_{1}r^{2}+F_{1}r+G_{1}\] \[= \left[4M\left(\mu^{2}-2\omega^{2}\right)+4\lambda_{kjm}+3\right] r^{6}\] \[+\left[16amM\omega-M\left(4M\mu^{2}+4\lambda_{kjm}+9\right)\right.\] \[\left.\qquad+a^{2}\left(8j^{2}-8m^{2}+4\lambda_{kjm}+9\right) \right]r^{4}\] \[-\left\{3r_{H}^{2}\left[a^{2}\left(4j^{2}-1\right)+M\right] \right\}r^{2}\] \[+r_{H}^{4}\left[a^{2}\left(4j^{2}-1\right)+M\right]. \tag{24}\]
Because we are interested in the real roots of \(V^{\prime}_{\text{eff}}(r)=0\) when \(r>r_{H}\), we can ignore the nonzero denominator of \(V^{\prime}_{\text{eff}}(r)\) and equivalently consider the real roots of \(f(r)=0\). Defining a new variable, \(z\equiv r-r_{H}\), \(f(r)\) can be rewritten as
\[f(r)=f_{1}(z)=Az^{6}+Bz^{5}+Cz^{4}+Dz^{3}+Ez^{2}+Fz+G, \tag{25}\]
where
\[A= \,4M\left(\mu^{2}-2\omega^{2}\right)+4\lambda_{kjm}+3, \tag{26}\] \[B= \,6r_{H}\left[4M\left(\mu^{2}-2\omega^{2}\right)+4\lambda_{kjm}+3 \right],\] (27) \[C= \,16amM\omega+4M\left[14M\left(\mu^{2}-2\omega^{2}\right)-2M \omega^{2}+14\lambda_{kjm}+9\right]\] \[-a^{2}\left(-2j^{2}+2m^{2}+14\lambda_{kjm}+9\right),\] (28) \[D= \,8r_{H}\left\{8amM\omega+M\left[8M\left(\mu^{2}-2\omega^{2} \right)-4M\omega^{2}+8\lambda_{kjm}+3\right]\right.\] \[-a^{2}\left(-4j^{2}+4m^{2}+8\lambda_{kjm}+3\right)\right\},\] (29) \[E= \,12r_{H}^{2}\left\{8amM\omega+M\left[3M(\mu^{2}-2\omega^{2})-4M \omega^{2}+3\lambda_{kjm}-1\right]\right.\] \[-\left.a^{2}\left(-3j^{2}+4m^{2}+3\lambda_{kjm}-1\right)\right\},\] (30) \[F= \,8r_{H}^{3}\left\{8amM\omega+M\left[M\left(\mu^{2}-6\omega^{2} \right)+\lambda_{kjm}-3\right]\right.\] \[-\left.a^{2}\left[-j^{2}+4m^{2}+\lambda_{kjm}-3\right]\right\},\] (31) \[G= -8r_{H}^{4}\left[-2amM\omega+M\left(M\omega^{2}+1\right)+a^{2} \left(m^{2}-1\right)\right]. \tag{32}\]
The real roots of \(f(r)=0\) with \(r>r_{H}\) are in one-to-one correspondence with the positive real roots of \(f_{1}(z)=0\) with \(z>0\).
In order to use the method based on Descartes' rule of signs, we analyze the signs, or sign relations, of the coefficients in \(f_{1}(z)\). First, given the inequalities \(\omega^{2}M\ll 1\) and \(\mu^{2}M\ll 1\), it is easy to obtain the following results
\[A>0,\quad B>0. \tag{33}\]
Then, considering \(G\) as a quadratic function of \(\omega\), it is easy to see that it opens downward with discriminant \(\Delta_{G}=-256M^{2}r_{H}^{10}<0\). Therefore, we have
\[G<0. \tag{34}\]
It is not easy to directly judge the signs of the other coefficients. Next, we study the sign relations between pairs of adjacent coefficients. We introduce the following scaled coefficients
\[C^{\prime}=\frac{C}{2},\quad D^{\prime}=\frac{D}{8r_{H}},\quad E^{\prime}= \frac{E}{12r_{H}^{2}},\quad F^{\prime}=\frac{F}{8r_{H}^{3}}. \tag{35}\]
It is worth noting that all scaling factors in the above equations are positive. Therefore, \(C\) and \(C^{\prime}\) (\(D\) and \(D^{\prime}\), \(\cdots\)) possess the same sign, i.e. they are simultaneously positive or negative. Taking the difference between \(C^{\prime}\) and \(D^{\prime}\), given the inequalities \(\omega^{2}M\ll 1\) and \(\mu^{2}M\ll 1\), we can obtain
\[C^{\prime}-D^{\prime}= \frac{C}{2}-\frac{D}{8r_{H}}\] \[= 5r_{H}^{2}\left[4M\left(\mu^{2}-2\omega^{2}\right)+4\lambda_{kjm}+ 3\right]>0. \tag{36}\]
Next, let's calculate the difference between \(D^{\prime}\) and \(E^{\prime}\),
\[D^{\prime}-E^{\prime}= \frac{D}{8r_{H}}-\frac{E}{12r_{H}^{2}}\] \[= M\left[5M\left(\mu^{2}-2\omega^{2}\right)+5\lambda_{kjm}+4\right]\] \[-a^{2}\left(-j^{2}+5\lambda_{kjm}+4\right). \tag{37}\]
Given the inequalities \(\omega^{2}M\ll 1\) and \(\mu^{2}M\ll 1\), it is obvious that the coefficient of \(M\) is greater than the coefficient of \(a^{2}\). Together with the inequality \(M>a^{2}\), we then have
\[D^{\prime}>E^{\prime}. \tag{38}\]
Similarly, we can also obtain
\[E^{\prime}-F^{\prime}= \frac{E}{12r_{H}^{2}}-\frac{F}{8r_{H}^{3}}\] \[= 2\big{\{}M\big{[}M\big{(}\mu^{2}-2\omega^{2}\big{)}+\lambda_{ kjm}+1\big{]}\] \[-a^{2}\big{(}-j^{2}+\lambda_{kjm}+1\big{)}\big{\}}>0, \tag{39}\]
According to the above three inequalities (36),(38) and (39), we finally obtain
\[C^{\prime}>D^{\prime}>E^{\prime}>F^{\prime}. \tag{40}\]
In mathematics, Descartes' rule of signs provides a practical theorem to determine the possible number of positive real roots of a polynomial equation. It says that if a polynomial with real coefficients is arranged in descending order of powers, then the number of positive real roots of the polynomial is either equal to the number of sign changes between adjacent non-zero coefficients, or is less than it by an even number. For the polynomial equation \(f_{1}(z)=0\), we show all possible signs of its coefficients in Table 1. It is obvious that the number of sign changes of the coefficients \((A,B,C,D,E,F,G)\) is always 1. So there is at most one positive real root for \(f_{1}(z)=0\), equivalently, for \(f(r)=0\) with \(r>r_{H}\). And according to the previous asymptotic analysis, we know that there is at least one maximum for the effective potential outside the event horizon.
We conclude that there is only one maximum point (potential barrier) outside the horizon \(r_{H}\) for the effective potential. As an illustration, we show a typical effective potential in Fig.1. In this situation, the black hole "bomb" mechanism is not triggered, and the system consisting of MPBH and massive scalar perturbation is superradiantly stable.
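As a quick cross-check of this counting argument, the sign-change count in Descartes' rule can be computed mechanically. The Python sketch below is our own illustration, not part of the paper; the example sign pattern stands in for numerical values of \((A,B,C,D,E,F,G)\) from Eqs.(26)-(32), which could be substituted for any admissible choice of parameters.

```python
def descartes_max_positive_roots(coeffs):
    """Number of sign changes between adjacent non-zero coefficients of a
    polynomial written in descending powers; by Descartes' rule of signs,
    this is an upper bound on the number of positive real roots."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Any admissible sign pattern of (A, B, C, D, E, F, G) from Table 1,
# e.g. (+, +, +, +, -, -, -), has exactly one sign change:
print(descartes_max_positive_roots([1, 1, 1, 1, -1, -1, -1]))  # -> 1
```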
## IV Numerical analysis about QNMs
In this section, we consider the QNMs boundary condition, i.e. \(q=i\sqrt{\omega^{2}-\mu^{2}}\) in Eq.(13), and numerically study the QNMs of the system consisting of MPBH and massive scalar perturbation. As mentioned in the reviews [1; 2], there are a series of methods to calculate QNMs. Here we adopt Leaver's continued fraction method [57], which is the most accurate technique to date to compute quasinormal frequencies. For simplicity, we set \(M=1\) in this section, so that \(r\) and \(a\) are measured in units of \(M^{1/2}\), while \(\omega\) and \(\mu\) are scaled with \(M^{-1/2}\).
### Continued Fraction Method
Defining a new variable, \(u\equiv\cos\theta\), the \(5D\) angular EOM (7) can be written as follows
\[\frac{1}{u}\bigg{[}\frac{\partial}{\partial u}u\left(1-u^{2} \right)\frac{\partial}{\partial u}\bigg{]}S(\theta)\\ +\bigg{[}c^{2}u^{2}+\lambda_{kjm}-\frac{m^{2}}{1-u^{2}}-\frac{j^{ 2}}{u^{2}}\bigg{]}S(\theta)=0. \tag{41}\]
Figure 1: Typical shape of the effective potential in the radial EOM. The parameters are chosen as \(M=1,a=0.4,k=j=0,m=1,\omega=0.1,\mu=0.2\).
The angular function \(S(\theta)\) can be assumed to have a series expansion form,
\[S=(1-u^{2})^{\frac{|m|}{2}}u^{j}\sum_{k=0}^{\infty}a_{k}u^{2k}. \tag{42}\]
This series (if convergent) automatically satisfies regular boundary conditions at three singular points \(\theta=0,\pi/2,\pi\)[55].
Substituting Eq.(42) into Eq.(41), we obtain the three-term recurrence relations
\[\alpha_{0}^{\theta}a_{1}+\beta_{0}^{\theta}a_{0}=0,\] \[\alpha_{k}^{\theta}a_{k+1}+\beta_{k}^{\theta}a_{k}+\gamma_{k}^{\theta}a_{k-1}=0,\qquad(k=1,2,\cdots), \tag{43}\]
where the superscript \(\theta\) indicates that these symbols are used to describe the recurrence coefficients of the angular equation. The explicit forms of these coefficients are given by
\[\alpha_{k}^{\theta} =-4(1+k)(j+k+1),\] \[\beta_{k}^{\theta} =(2k+j+|m|)\left(2k+j+|m|+2\right)-\lambda_{kjm},\] \[\gamma_{k}^{\theta} =-c^{2}. \tag{44}\]
Then the continued fraction equation for the separation constant \(\lambda_{kjm}\) has the same form as in the \(4D\) Kerr case [57],
\[\beta_{0}^{\theta}-\frac{\alpha_{0}^{\theta}\gamma_{1}^{\theta}}{\beta_{1}^{\theta}-}\,\frac{\alpha_{1}^{\theta}\gamma_{2}^{\theta}}{\beta_{2}^{\theta}-}\,\frac{\alpha_{2}^{\theta}\gamma_{3}^{\theta}}{\beta_{3}^{\theta}-}\cdots\equiv\beta_{0}^{\theta}-\cfrac{\alpha_{0}^{\theta}\gamma_{1}^{\theta}}{\beta_{1}^{\theta}-\cfrac{\alpha_{1}^{\theta}\gamma_{2}^{\theta}}{\beta_{2}^{\theta}-\cfrac{\alpha_{2}^{\theta}\gamma_{3}^{\theta}}{\beta_{3}^{\theta}-\cdots}}}=0. \tag{45}\]
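The continued fraction above can be evaluated numerically by backward recursion from a finite truncation depth. The Python sketch below is our own illustration (the truncation depth, the root bracket, and the parameter values are assumptions, not taken from the paper, and SciPy is assumed to be available); it uses the recurrence coefficients of Eq.(44), with `c2` denoting \(c^{2}\).

```python
from scipy.optimize import brentq

def angular_cf(lam, c2, j, m, depth=400):
    # Left-hand side of the angular continued-fraction equation, evaluated
    # by backward recursion from a finite depth, with coefficients from Eq.(44).
    am = abs(m)
    alpha = lambda k: -4.0 * (1 + k) * (j + k + 1)
    beta = lambda k: (2 * k + j + am) * (2 * k + j + am + 2) - lam
    gamma = lambda k: -c2
    tail = 0.0
    for k in range(depth, 0, -1):
        tail = alpha(k - 1) * gamma(k) / (beta(k) - tail)
    return beta(0) - tail

# Illustrative parameters: for small c^2 the root is close to l(l+2) with l = j + |m|,
# consistent with lambda_kjm ~ l(l+n+1) for n = 1 quoted earlier.
j, m, c2 = 0, 1, 0.01
lam = brentq(lambda x: angular_cf(x, c2, j, m), 2.0, 4.0)
print(lam)   # close to 3
```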
as the scalar mass \(\mu\) increases. These long-living modes, called quasiresonances, are qualitatively the same as those found in four-dimensional Kerr BH cases [15; 16] and in Schwarzschild BH cases [59; 60; 61]. For larger quantum numbers, it is also found that the imaginary part \(\text{Im}(\omega)\) tends to zero more slowly. To calculate cases with large enough scalar masses \(\mu\), one needs a smaller step \(\Delta\mu\), and the continued fraction method should be improved by the Nollert technique [62].
In Fig.3, the dependence of the QNM frequencies of the lowest state (\(\ell=m=1\)) on the rotation parameter \(a\) is plotted. The four curves correspond to the QNM modes with different scalar masses, \(\mu=0,0.3,0.6\) and \(0.9\). Each curve shows, from left to right, ten points with \(a=0,0.1,0.2,\cdots,0.9\), respectively. First, we see that the QNMs of the massive scalar perturbation are all decaying modes and no instability appears.
Second, the damping rate, which is determined by \(-\text{Im}(\omega)\), is monotonically decreasing for massless QNMs as \(a\) increases. However, the damping rate of the QNMs of the massive scalar perturbation is clearly non-monotonic with respect to \(a\) when the scalar mass \(\mu\) is relatively large. It first increases and then decreases as \(a\) increases, as shown by the black curve in Fig.3. We also find that for a fixed set of quantum numbers \(\{\ell,m,j\}\), this non-monotonic behavior becomes more and more obvious as \(\mu\) increases. In contrast, for a fixed scalar mass \(\mu\), increasing the quantum numbers suppresses this non-monotonic behavior, as shown in Fig.4.
Finally, the real parts of the QNMs of massless and massive scalar perturbation both monotonically increase as \(a\) increases, which is also shown in Fig.4. It is noticeable that the dependence on the rotation parameter \(a\) of the real parts and imaginary parts of the QNMs of massive scalar perturbation is qualitatively the same as that in a \(4D\) Kerr BH case [15; 16].
To search for possible instabilities, we further calculate the fundamental QNM frequencies with another two sets of quantum numbers for different values of the rotation parameter \(a\) and scalar mass \(\mu\). In Table 3, the three quantum numbers are chosen as \(j=0,k=m=1\), and as \(k=j=m=1\) in Table 4. All modes found in the two tables are kept to the first six digits after the decimal point. It is easy to see that the imaginary parts of the QNMs are all negative, so they are all decaying modes and no unstable mode is found.
## V Conclusions
In this paper, we study the stability of a \(5D\) singly rotating MPBH under massive scalar perturbations. It is found that a \(5D\) singly rotating MPBH is stable against both the QBS modes and QNMs of the massive scalar perturbation.
We first consider the QBS modes, which might lead to superradiant instability of the system through the black hole "bomb" mechanism. In the context of BH perturbation theory, we consider the constraints on the parameters of the system, namely several important inequalities given below Eqs.(15) and (22). Given these inequalities, we use an analytic method based on Descartes' rule of signs to show that there is no potential well outside the event horizon of the MPBH. This means the \(5D\) singly rotating MPBH is stable against the QBS modes of the
\begin{table}
\begin{tabular}{c c c c c c c} \(k\) & \(j\) & \(m\) & \(\omega_{\text{Num}}\) & \(\omega_{\text{Sch}}\) & \% Re & \% Im \\ \hline
0 & 0 & 0 & \(0.54126-0.39585i\) & \(0.53384-0.38338i\) & 1.37 & 3.15 \\
0 & 0 & 1 & \(1.01627-0.36352i\) & \(1.01602-0.36233i\) & 0.02 & 0.33 \\
0 & 1 & 1 & \(1.51058-0.35776i\) & \(1.51057-0.35754i\) & 0.00 & 0.06 \\ \end{tabular}
\end{table}
Table 2: Comparison between our numerical results of the fundamental QNMs of massless scalar field on \(5D\) MPBH with \(a=0\) and the results for \(5D\) Schwarzschild BHs obtained in Ref.[61].
Figure 3: The fundamental QNM frequencies with quantum number \(k=j=0,m=1\) are shown as a function of the rotation parameter \(a\) of the MPBH. Four curves correspond to different values of the scalar mass \(\mu\).
Figure 2: The fundamental scalar QNMs for different values of \(\mu\). The points on each curve are plotted with the step of \(\Delta\mu=0.1\), starting from \(\mu=0\). The other parameters are chosen as \(M=1,a=0.4\).
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline \(a\) & \(\mu=0\) & \(\mu=0.3\) & \(\mu=0.6\) & \(\mu=0.9\) \\ \hline
0.0 & \(2.506329-0.354964i\) & \(2.514979-0.353107i\) & \(2.540942-0.347553i\) & \(2.584258-0.338349i\) \\
0.1 & \(2.533676-0.354510i\) & \(2.542141-0.352709i\) & \(2.567551-0.347321i\) & \(2.609954-0.338382i\) \\
0.2 & \(2.565177-0.353047i\) & \(2.573425-0.351318i\) & \(2.598188-0.346140i\) & \(2.639524-0.337539i\) \\
0.3 & \(2.601175-0.350423i\) & \(2.609174-0.348709i\) & \(2.633194-0.343858i\) & \(2.673306-0.335669i\) \\
0.4 & \(2.642123-0.346411i\) & \(2.696021-0.339265i\) & \(2.673016-0.340252i\) & \(2.711739-0.332554i\) \\
0.5 & \(2.688624-0.340684i\) & \(2.748526-0.331472i\) & \(2.718248-0.338002i\) & \(2.755404-0.327881i\) \\
0.6 & \(2.741487-0.332752i\) & \(2.808427-0.320663i\) & \(2.769681-0.327625i\) & \(2.805070-0.321181i\) \\
0.7 & \(2.801796-0.321780i\) & \(2.877230-0.304663i\) & \(2.828366-0.317298i\) & \(2.861745-0.311642i\) \\
0.8 & \(2.871067-0.305600i\) & \(2.877230-0.304663i\) & \(2.895770-0.301832i\) & \(2.926841-0.297043i\) \\
0.9 & \(2.963258-0.276453i\) & \(2.969035-0.275735i\) & \(2.986438-0.273555i\) & \(3.015686-0.269837i\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: The fundamental QNMs with \(k=j=m=1\) for different values of \(a\) and \(\mu\).
Figure 4: The values of \(\text{Re}(\omega)\) and \(\text{Im}(\omega)\) of QNM frequencies against different values of the rotation parameter \(a\). Four sets of results with different quantum numbers \(\{\ell,j,m\}\) are shown. The other parameters are chosen as \(M=1,\mu=0.9\).
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline \(a\) & \(\mu=0\) & \(\mu=0.3\) & \(\mu=0.6\) & \(\mu=0.9\) \\ \hline
0.0 & \(2.007961-0.355712i\) & \(2.018538-0.352855i\) & \(2.050308-0.344314i\) & \(2.103385-0.330189i\) \\
0.1 & \(2.035047-0.355261i\) & \(2.045350-0.352508i\) & \(2.076303-0.344270i\) & \(2.128036-0.330621i\) \\
0.2 & \(2.065353-0.353768i\) & \(2.075349-0.351140i\) & \(2.105385-0.343267i\) & \(2.155608-0.330193i\) \\
0.3 & \(2.099136-0.351055i\) & \(2.108790-0.348572i\) & \(2.137805-0.341127i\) & \(2.186351-0.328733i\) \\
0.4 & \(2.136696-0.346867i\) & \(2.145971-0.344553i\) & \(2.173861-0.337603i\) & \(2.220555-0.325998i\) \\
0.5 & \(2.178379-0.340853i\) & \(2.187241-0.338731i\) & \(2.213895-0.332348i\) & \(2.258557-0.321654i\) \\
0.6 & \(2.224575-0.332514i\) & \(2.232984-0.330612i\) & \(2.258290-0.324880i\) & \(2.300734-0.315236i\) \\
0.7 & \(2.275690-0.321149i\) & \(2.283607-0.319502i\) & \(2.307446-0.314522i\) & \(2.347478-0.306104i\) \\
0.8 & \(2.332014-0.305846i\) & \(2.339397-0.304499i\) & \(2.361642-0.300416i\) & \(2.399048-0.293467i\) \\
0.9 & \(2.392369-0.286127i\) & \(2.399148-0.285159i\) & \(2.419587-0.282204i\) & \(2.453993-0.277111i\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: The fundamental QNMs with \(j=0,k=m=1\) for different values of \(a\) and \(\mu\).
massive scalar perturbations.
Then, we use Leaver's continued fraction method to numerically compute the QNMs of the massive scalar perturbation. We first introduce the specific steps of this method, and then carry out extensive calculations of the fundamental QNMs when the scalar mass \(\mu\) is relatively small. We summarize our numerical results in several tables and figures. It is found that all the obtained fundamental QNMs are decaying modes, i.e. they are all stable.
The damping rates of the QNMs decrease as the scalar mass \(\mu\) increases. The fundamental QNMs become quasiresonances with infinitely long lifetimes when the scalar mass becomes relatively large. The larger the scalar mass \(\mu\) is, the more low-lying modes (modes with small quantum numbers) become quasiresonances. These properties are qualitatively the same as those found in other rotating BH cases [50; 15].
It is also found that the real parts of the fundamental QNMs monotonically increase with the rotation parameter \(a\), while the imaginary parts do not. However, the imaginary parts always remain negative for all considered values of \(a\).
###### Acknowledgements.
The authors are grateful to R. A. Konoplya and A. Zhidenko for their insightful comments. We also would like to thank Zhan-Feng Mai and Lihang Zhou for their valuable discussions. This work is partially supported by Guangdong Major Project of Basic and Applied Basic Research (No.2020B0301030008).
|
2309.06548 | Online Infinite-Dimensional Regression: Learning Linear Operators | We consider the problem of learning linear operators under squared loss
between two infinite-dimensional Hilbert spaces in the online setting. We show
that the class of linear operators with uniformly bounded $p$-Schatten norm is
online learnable for any $p \in [1, \infty)$. On the other hand, we prove an
impossibility result by showing that the class of uniformly bounded linear
operators with respect to the operator norm is \textit{not} online learnable.
Moreover, we show a separation between sequential uniform convergence and
online learnability by identifying a class of bounded linear operators that is
online learnable but uniform convergence does not hold. Finally, we prove that
the impossibility result and the separation between uniform convergence and
learnability also hold in the batch setting. | Vinod Raman, Unique Subedi, Ambuj Tewari | 2023-09-08T21:34:52Z | http://arxiv.org/abs/2309.06548v3 | # Online Infinite-Dimensional Regression: Learning Linear Operators
###### Abstract
We consider the problem of learning linear operators under squared loss between two infinite-dimensional Hilbert spaces in the online setting. We show that the class of linear operators with uniformly bounded \(p\)-Schatten norm is online learnable for any \(p\in[1,\infty)\). On the other hand, we prove an impossibility result by showing that the class of uniformly bounded linear operators with respect to the operator norm is _not_ online learnable. Moreover, we show a separation between online uniform convergence and online learnability by identifying a class of bounded linear operators that is online learnable but for which uniform convergence does not hold. Finally, we prove that the impossibility result and the separation between uniform convergence and learnability also hold in the agnostic PAC setting.
## 1 Introduction
Learning operators between infinite-dimensional spaces is of fundamental importance in many scientific and engineering applications. For instance, the classical inverse problem is often modeled as learning an inverse mapping from a function space of observed data to the function space of underlying latent parameters, both of which are infinite-dimensional spaces (Kirsch, 2011; Tarantola, 2005). Such inverse problems have found widespread applicability in domains ranging from image processing, X-ray tomography, seismic inversion, and so forth (Neto and da Silva Neto, 2012; Uhlmann, 2003). In addition, the solution to a partial differential equation is an operator from a space of functions specifying boundary conditions to the space of solution functions (Kovachki et al., 2021; Li et al., 2020). Moreover, many of the traditional learning settings such as multi-task learning, matrix completion, and collaborative filtering can be modeled by learning operators between infinite-dimensional spaces (Abernethy et al., 2009). Finally, many modern supervised learning applications involve working with datasets, where both the features and labels lie in high-dimensional spaces (Deng et al., 2009; Santhanam et al., 2017). Thus, it is desirable to construct learning algorithms whose guarantees do not scale with the ambient dimensions of the problem.
Most of the existing work in operator learning assumes some stochastic model for the data, which can be unrealistic in many applications. For instance, the majority of applications of operator learning are in the scientific domain where the data often comes from experiments (Lin et al., 2021). Since experiments are costly, the data arrives sequentially and usually with strong temporal dependence, which may not be adequately captured by a stochastic model. Additionally, given the high-dimensional nature of the data, one typically uses pre-processing techniques like PCA to project the data onto a low-dimensional space (Bhattacharya et al., 2021; Lanthaler, 2023). Even if the original data has some stochastic nature, the preprocessing step introduces non-trivial dependencies in the observations that may be difficult to capture by a model. Accordingly, it is desirable to construct algorithms that can handle _arbitrary_ dependencies in the data. In fact, for continuous problems such as scalar-valued regression, one can obtain guarantees similar to that of i.i.d. setting without making any stochastic assumptions on the data (Rakhlin and Sridharan, 2014).
In this paper, we study linear operator learning between two Hilbert spaces \(\mathcal{V}\) and \(\mathcal{W}\) in the _adversarial online setting_, where one makes no assumptions on the data generating process (Cesa-Bianchi and Lugosi, 2006). In this model, a potentially adversarial nature plays a sequential game with the learner over
rounds. In each round \(t\in[T]\), nature selects a pair of vectors \((v_{t},w_{t})\in\mathcal{V}\times\mathcal{W}\) and reveals \(v_{t}\) to the learner. The learner then makes a prediction \(\hat{w}_{t}\in\mathcal{W}.\) Finally, the adversary reveals the target \(w_{t}\), and the learner suffers the loss \(\left\|\hat{w}_{t}-w_{t}\right\|_{\mathcal{W}}^{2}\). A linear operator class \(\mathcal{F}\subset\mathcal{W}^{\mathcal{V}}\) is online learnable if there exists an online learning algorithm such that for any sequence of labeled examples, the difference in cumulative loss between its predictions and the predictions of the best-fixed operator in \(\mathcal{F}\) is small. In this work, we study the online learnability of linear operators and make the following contributions:
1. We show that the class of linear operators with uniformly bounded \(p\)-Schatten norm is online learnable with regret \(O(T^{\max\{\frac{1}{2},1-\frac{1}{p}\}})\). For any \(\varepsilon>0\), we also provide a lower bound of \(\Omega(T^{1-\frac{1}{p}-\varepsilon})\), which essentially matches the upper bound for \(p\geq 2\).
2. We prove that the class of linear operators with uniformly bounded operator norm is not online learnable. Furthermore, we show that this impossibility result also holds in the PAC setting, thus rigorously establishing the claim made by Tabaghi et al. (2019, Remark 1).
3. Recently, there is a growing interest in understanding when uniform convergence and learnability are not equivalent (Montasser et al., 2019, Hanneke et al., 2023). Along this direction, we give a subset of bounded linear operators for which online learnability and uniform convergence are not equivalent. We further show that this separation does not occur when the output space is finite-dimensional.
To prove contribution (1), we upper bound the sequential Rademacher complexity of the loss class to show that online uniform convergence holds for the \(p\)-Schatten class for \(p\in[1,\infty)\). For our hardness result stated in contribution (2), we construct a class with a uniformly bounded operator norm that is not online learnable. Our construction in contribution (3) is inspired by and generalizes the example of Natarajan (1989, Page 22), which shows a gap between uniform convergence and PAC learnability for multiclass classification. The argument showing that uniform convergence does not hold is a simple adaptation of the existing proof (Natarajan, 1989). However, since our loss is real-valued, showing that the class is learnable requires some novel algorithmic ideas, which can be of independent interest.
### Related Works
Regression between two infinite-dimensional function spaces is a classical statistical problem often studied in functional data analysis (FDA) (Wang et al., 2016, Ferraty, 2006). In FDA, one typically considers \(\mathcal{V}\) and \(\mathcal{W}\) to be \(L^{2}[0,1]\), the space of square-integrable functions, and the hypothesis class is usually the class of kernel integral operators. We discuss the implication of our results to learning kernel integral operators in Section 3.1. Recently, de Hoop et al. (2023), Nelsen and Stuart (2021), Mollenhauer et al. (2022) study learning more general classes of linear operators. However, all of these works are in the i.i.d. setting and assume a data-generating process. Additionally, there is a line of work that uses deep neural networks to learn neural operators between function spaces (Kovachki et al., 2021, Li et al., 2020). Unfortunately, there are no known learning guarantees for these neural operators. Closer to the spirit of our work is that of Tabaghi et al. (2019), who consider the agnostic PAC learnability of \(p\)-Schatten operators. They show that \(p\)-Schatten classes are agnostic PAC learnable. In this work, we complement their work by showing that \(p\)-Schatten classes are also _online_ learnable. Going beyond i.i.d., there is also a line of work that focuses on learning specific classes of operators from time series data (Brunton et al., 2016, Klus et al., 2020).
## 2 Preliminaries
### Hilbert Space Basics
Let \(\mathcal{V}\) and \(\mathcal{W}\) be real, separable, and infinite-dimensional Hilbert spaces. Recall that a Hilbert space is separable if it admits a countable orthonormal basis. Throughout the paper, we let \(\{e_{n}\}_{n=1}^{\infty}\) and \(\{\psi_{n}\}_{n=1}^{\infty}\) denote a set of orthonormal basis for \(\mathcal{V}\) and \(\mathcal{W}\) respectively. Then, any element \(v\in\mathcal{V}\) and \(w\in\mathcal{W}\) can be written as \(v=\sum_{n=1}^{\infty}\beta_{n}e_{n}\) and \(w=\sum_{n=1}^{\infty}\alpha_{n}\psi_{n}\) for sequences \(\{\beta_{n}\}_{n\in\mathbb{N}}\) and \(\{\alpha_{n}\}_{n=1}^{\infty}\) that are \(\ell_{2}\) summable.
Consider \(w_{1},w_{2}\in\mathcal{W}\) such that \(w_{1}=\sum_{n=1}^{\infty}\alpha_{n,1}\,\psi_{n}\) and \(w_{2}=\sum_{n=1}^{\infty}\alpha_{n,2}\,\psi_{n}\). Then, the inner product between \(w_{1}\) and \(w_{2}\) is defined as \(\langle w_{1},w_{2}\rangle_{\mathcal{W}}:=\sum_{n=1}^{\infty}\alpha_{n,1}\alpha_{n,2}\), and it induces the norm \(\left\|w_{1}\right\|_{\mathcal{W}}=\sqrt{\left\langle w_{1},w_{1}\right\rangle_{\mathcal{W}}}=\sqrt{\sum_{n=1}^{\infty}\alpha_{n,1}^{2}}\). One can analogously define \(\left\langle\cdot,\cdot\right\rangle_{\mathcal{V}}\) and \(\left\|\cdot\right\|_{\mathcal{V}}\) to be the inner product and the induced norm in the Hilbert space \(\mathcal{V}\). When the context is clear, we drop the subscript and simply write \(\left\langle\cdot,\cdot\right\rangle\) and \(\left\|\cdot\right\|\).
A linear operator \(f:\mathcal{V}\rightarrow\mathcal{W}\) is a mapping that preserves the linear structure of the input. That is, \(f(c_{1}v_{1}+c_{2}v_{2})=c_{1}f(v_{1})+c_{2}f(v_{2})\) for any \(c_{1},c_{2}\in\mathbb{R}\) and \(v_{1},v_{2}\in\mathcal{V}\). Let \(\mathcal{L}(\mathcal{V},\mathcal{W})\) denote the set of all linear operators from \(\mathcal{V}\) to \(\mathcal{W}\). A linear operator \(f:\mathcal{V}\rightarrow\mathcal{W}\) is bounded if there exists a constant \(c>0\) such that \(\left\|f(v)\right\|\leq c\left\|v\right\|\) for all \(v\in\mathcal{V}\). The quantity \(\left\|f\right\|_{\mathrm{op}}:=\inf\{c\geq 0\,:\left\|f(v)\right\|\leq c \left\|v\right\|,\forall v\in\mathcal{V}\}\) is called the operator norm of \(f\). The operator norm induces the set of bounded linear operators, \(\mathcal{B}(\mathcal{V},\mathcal{W})=\{f\in\mathcal{L}(\mathcal{V},\mathcal{ W})\,\,\left|\,\left\|f\right\|_{\mathrm{op}}<\infty\right\}\), which is a Banach space with \(\left\|\cdot\right\|_{\mathrm{op}}\) as the norm.
For an operator \(f\in\mathcal{L}(\mathcal{V},\mathcal{W})\), let \(f^{*}:\mathcal{W}\rightarrow\mathcal{V}\) denote the adjoint of \(f\). We can use \(f\) and \(f^{*}\) to define a self-adjoint, non-negative operator \(f^{*}f:\mathcal{V}\rightarrow\mathcal{V}\). Moreover, the absolute value operator is defined as \(\left|f\right|:=(f^{*}f)^{\frac{1}{2}}\), which is the unique non-negative operator such that \(\left|f\right|\circ\left|f\right|=f^{*}f\). Given any operator \(g:\mathcal{V}\rightarrow\mathcal{V}\), the trace of \(g\) is defined as \(\mathrm{tr}(g)=\sum_{n=1}^{\infty}\left\langle g(e_{n}),e_{n}\right\rangle,\) where \(\{e_{n}\}_{n=1}^{\infty}\) is any orthonormal basis of \(\mathcal{V}\). The notion of trace and absolute value allows us to define the \(p\)-Schatten norm of \(f\),
\[\left\|f\right\|_{p}=\Big{(}\operatorname{tr}(\left|f\right|^{p})\Big{)}^{ \frac{1}{p}},\]
for all \(p\in[1,\infty)\). Accordingly, we can define the \(p\)-Schatten class as
\[S_{p}(\mathcal{V},\mathcal{W})=\{f\in\mathcal{L}(\mathcal{V},\mathcal{W})\, \left|\,\left\|f\right\|_{p}<\infty\}.\]
A linear operator \(f:\mathcal{V}\rightarrow\mathcal{W}\) is compact if the closure of the set \(\{f(v)\,\left|\,v\in\mathcal{V},\left\|v\right\|\leq 1\}\) is compact. For a compact linear operator \(f:\mathcal{V}\rightarrow\mathcal{W}\), there exists a sequence of orthonormal basis \(\{\phi_{n}\}_{n=1}^{\infty}\subset\mathcal{V}\) and \(\{\varphi_{n}\}_{n=1}^{\infty}\subset\mathcal{W}\) such that \(f=\sum_{n=1}^{\infty}s_{n}(f)\;\varphi_{n}\otimes\phi_{n}\), where \(s_{n}(f)\downarrow 0\) and \(\varphi_{n}\otimes\phi_{n}\) denote the tensor product between \(\varphi_{n}\) and \(\phi_{n}\). This is the singular value decomposition of \(f\) and the sequence \(\{s_{n}(f)\}_{n=1}^{\infty}\) are the singular values of \(f\). For \(p\in[1,\infty)\), the \(p\)-Schatten norm of a compact operator is equal to the \(\ell_{p}\) norm of the sequence \(\{s_{n}(f)\}_{n\geq 1}\),
\[\left\|f\right\|_{p}=\left(\sum_{n=1}^{\infty}s_{n}(f)^{p}\right)^{\frac{1}{p}}.\]
Moreover, \(\left\|f\right\|_{p}<\infty\) if and only if \(f\) is a compact operator. Thus, every operator in \(S_{p}(\mathcal{V},\mathcal{W})\) is a compact operator. On the other hand, for a compact operator \(f\), the \(\ell_{\infty}\) norm of its singular values is equal to its operator norm, \(\left\|f\right\|_{\mathrm{op}}=\left\|f\right\|_{\infty}=\sup_{n\geq 1}\left|s_{n} (f)\right|.\) Accordingly, for compact operators, the operator norm is referred to as \(\infty\)-Schatten norm, which induces the class
\[S_{\infty}(\mathcal{V},\mathcal{W})=\{f\in\mathcal{L}(\mathcal{V},\mathcal{W}) \,\left|\,\,\text{$f$ is compact and }\left\|f\right\|_{\infty}<\infty\}.\]
Therefore, \(S_{\infty}(\mathcal{V},\mathcal{W})\subset\mathcal{B}(\mathcal{V},\mathcal{W})\). For a comprehensive treatment of the theory of Hilbert spaces and linear operators, we refer the reader to Conway [1990] and Weidmann [2012].
### Online Learning
Let \(\mathcal{X}\subseteq\mathcal{V}\) denote the instance space, \(\mathcal{Y}\subseteq\mathcal{W}\) denote the target space, and \(\mathcal{F}\subseteq\mathcal{L}(\mathcal{V},\mathcal{W})\) denote the hypothesis class. In online linear operator learning, a potentially adversarial nature plays a sequential game with the learner over \(T\) rounds. In each round \(t\in[T]\), the nature selects a labeled instance \((x_{t},y_{t})\in\mathcal{X}\times\mathcal{Y}\) and reveals \(x_{t}\) to the learner. The learner then makes a prediction \(\hat{y}_{t}\in\mathcal{Y}.\) Finally, the adversary reveals the target \(y_{t}\), and the learner suffers the loss \(\left\|\hat{y}_{t}-y_{t}\right\|_{\mathcal{W}}^{2}\).
We say that the linear operator class \(\mathcal{F}\) is online learnable if there exists an online learning algorithm such that for any sequence of labeled examples, \((x_{1},y_{1}),...,(x_{T},y_{T})\), the difference in expected cumulative loss between its predictions and the predictions of the best-fixed operator in \(\mathcal{F}\) is small. We formalize this notion in the following definition.
**Definition 1** (Online Linear Operator Learnability).: _A linear operator class \(\mathcal{F}\subseteq\mathcal{L}(\mathcal{V},\mathcal{W})\) is online learnable if there exists an algorithm \(\mathcal{A}\) such that for any adaptively chosen sequence of labeled examples
\((x_{t},y_{t})\in\mathcal{X}\times\mathcal{Y}\), the algorithm outputs \(\mathcal{A}(x_{t})\in\mathcal{Y}\) at every iteration \(t\in[T]\) such that its expected regret,_
\[R_{\mathcal{A}}(T,\mathcal{F}):=\mathbb{E}\left[\sum_{t=1}^{T}\left\|\mathcal{A }(x_{t})-y_{t}\right\|^{2}-\inf_{f\in\mathcal{F}}\sum_{t=1}^{T}\left\|f(x_{t}) -y_{t}\right\|^{2}\right]\]
_is a non-decreasing sublinear function of \(T\)._
Unlike when \(\mathcal{V}\) is finite-dimensional, the class \(\mathcal{F}=\mathcal{L}(\mathcal{V},\mathcal{W})\) is not online learnable when \(\mathcal{V}\) is infinite-dimensional. Accordingly, we are interested in understanding for which subsets \(\mathcal{F}\subset\mathcal{L}(\mathcal{V},\mathcal{W})\) online learning is possible. A general complexity measure called the sequential Rademacher complexity characterizes online uniform convergence (Rakhlin et al., 2015a,b), and thus provides a sufficient condition for online learnability.
**Definition 2** (Sequential Rademacher Complexity).: _Let \(\sigma=\{\sigma_{i}\}_{i=1}^{T}\) be a sequence of independent Rademacher random variables and \((x,y)=\{(x_{t},y_{t})\}_{t=1}^{T}\) be a sequence of functions \((x_{t},y_{t}):\{-1,1\}^{t-1}\to\mathcal{X}\times\mathcal{Y}\). Then, the sequential Rademacher complexity of the loss class \(\{(v,w)\mapsto\left\|f(v)-w\right\|^{2}:\,f\in\mathcal{F}\}\) is defined as_
\[\mathrm{Rad}_{T}(\mathcal{F})=\sup_{x,y}\,\mathbb{E}\left[\sup_{f\in\mathcal{ F}}\sum_{t=1}^{T}\sigma_{t}\left\|f(x_{t}(\sigma_{<t}))-y_{t}(\sigma_{<t}) \right\|^{2}\right],\]
_where \(\sigma_{<t}=(\sigma_{1},\ldots,\sigma_{t-1})\)._
If there exists a \(B>0\) such that \(\sup_{f,v,w}\left\|f(v)-w\right\|^{2}\leq B\), then Theorem 1 of Rakhlin et al. (2015b) implies that online uniform convergence holds for the loss class \(\{(v,w)\mapsto\left\|f(v)-w\right\|^{2}:\,f\in\mathcal{F}\}\) if and only if \(\mathrm{Rad}_{T}(\mathcal{F})=o(T)\).
## 3 \(p\)-Schatten Operators are Online Learnable
In this section, we show that the class \(S_{p}(\mathcal{V},\mathcal{W})\) is online learnable. Despite not making any distributional assumptions, the rates in Theorem 1 match the rates for the PAC setting obtained by Tabaghi et al. (2019). This complements the results by Rakhlin and Sridharan (2014), who show that the rates for scalar-valued regression with squared loss are similar for online and PAC learning.
**Theorem 1** (Uniformly Bounded Subsets of \(S_{p}(\mathcal{V},\mathcal{W})\) are Online Learnable).: _Fix \(c>0\). Let \(\mathcal{X}=\{v\in\mathcal{V}\mid\left\|v\right\|\leq 1\}\) denote the instance space, \(\mathcal{Y}=\{w\in\mathcal{W}\mid\left\|w\right\|\leq c\}\) denote the target space, and \(\mathcal{F}_{p}=\{f\in S_{p}(\mathcal{V},\mathcal{W})\mid\left\|f\right\|_{p} \leq c\}\) be the hypothesis class. Then, for any labeled stream, there exists an algorithm \(\mathcal{A}\) such that its expected regret is_
\[\mathbb{E}\left[\sum_{t=1}^{T}\left\|\mathcal{A}(x_{t})-y_{t}\right\|^{2}-\inf _{f\in\mathcal{F}_{p}}\sum_{t=1}^{T}\left\|f(x_{t})-y_{t}\right\|^{2}\right] \leq 2\,\mathrm{Rad}_{T}(\mathcal{F}_{p})\leq 6c^{2}\,T^{\max\{\frac{1}{2},1- \frac{1}{p}\}}.\]
Theorem 1 implies the regret \(O(\sqrt{T})\) for \(p\in[1,2]\) and the regret \(O(T^{1-\frac{1}{p}})\) for \(p>2\). When \(p=\infty\), the regret bound implied by Theorem 1 is vacuous. A similar phase transition was observed in the PAC setting by Tabaghi et al. (2019). They provide intuition on why \(S_{\infty}(\mathcal{V},\mathcal{W})\) may not be PAC learnable, but do not provide a proof. In Section 4, we prove that \(S_{\infty}(\mathcal{V},\mathcal{W})\) is neither PAC nor online learnable, thus rigorously establishing a statement by Tabaghi et al. (2019, Remark 1).
Our proof of Theorem 1 relies on Lemma 2 which shows that the \(q\)-Schatten norm of Rademacher sums of rank-1 operators concentrates. The proof of Lemma 2 is in Appendix A.
**Lemma 2** (Rademacher Sums of Rank-1 Operators).: _Let \(\sigma=\{\sigma_{i}\}_{i=1}^{T}\) be a sequence of independent Rademacher random variables and \(\{(v_{t},w_{t})\}_{t=1}^{T}\) be any sequence of functions \((v_{t},w_{t}):\{-1,1\}^{t-1}\to\{v\in\mathcal{V}:\left\|v\right\|\leq c_{1}\} \times\{w\in\mathcal{W}:\left\|w\right\|\leq c_{2}\}\). Then, for any \(q\geq 1\), we have_
\[\mathbb{E}\left[\left\|\sum_{t=1}^{T}\sigma_{t}\,v_{t}(\sigma_{<t})\otimes w_{t }(\sigma_{<t})\right\|_{q}\right]\leq c_{1}\,c_{2}\,T^{\max\left\{\frac{1}{2}, \frac{1}{q}\right\}}\]
Lemma 2 extends Lemma 1 in (Tabaghi et al., 2019) to the non-i.i.d. setting. In particular, the rank-1 operator indexed by \(t\) can depend on the Rademacher subsequence \(\sigma_{<t}\), whereas Tabaghi et al. (2019) only consider the case when the rank-1 operators are independent of the Rademacher sequence. In addition, Tabaghi et al. (2019) use a non-trivial result from convex analysis, namely the fact that \(F\mapsto\operatorname{tr}(h(F))\) is a convex functional on the set \(\{F\in\mathcal{T}\mid\operatorname{spectrum}(F)\subseteq[\alpha,\beta]\}\) for any convex function \(h\) and the class of finite-rank self-adjoint operators \(\mathcal{T}\). Our proof of Lemma 2, on the other hand, only uses standard inequalities.
Equipped with Lemma 2, our proof of Theorem 1 follows by upper bounding the sequential Rademacher complexity of the loss class. Although this proof of online learnability is non-constructive, we can use Proposition 1 from (Rakhlin et al., 2012) to design an explicit online learner that achieves the matching regret given access to an oracle that computes the sequential Rademacher complexity of the class.
### Examples of \(p\)-Schatten class
In this section, we provide examples of operator classes with uniformly bounded \(p\)-Schatten norm.
Uniformly bounded operators w.r.t. \(\left\lVert\cdot\right\rVert_{\text{op}}\) when either \(\mathcal{V}\) or \(\mathcal{W}\) is finite-dimensional. If either the input space \(\mathcal{V}\) or the output space \(\mathcal{W}\) is finite-dimensional, then the class of bounded linear operators \(\mathcal{B}(\mathcal{V},\mathcal{W})\) is contained in the \(p\)-Schatten class for every \(p\in[1,\infty]\). This is immediate because for every \(f\in\mathcal{B}(\mathcal{V},\mathcal{W})\), either the operator \(f^{\star}f:\mathcal{V}\to\mathcal{V}\) or \(ff^{\star}:\mathcal{W}\to\mathcal{W}\) is a bounded operator that maps between two finite-dimensional spaces. Let \(\left\lVert f\right\rVert_{\text{op}}\leq c\) and \(\min\{\dim(\mathcal{V}),\dim(\mathcal{W})\}=d<\infty\). Since \(f^{\star}f\) and \(ff^{\star}\) have the same singular values and one of them has rank at most \(d\), both of them must have rank at most \(d\), and hence so does \(f\). Let \(s_{1}\geq s_{2}\geq\ldots\geq s_{d}\geq 0\) denote the singular values of \(f\). Then, \(\left\lVert f\right\rVert_{p}=\left(\sum_{i=1}^{d}s_{i}^{p}\right)^{\frac{1}{p}}\leq c\,d^{\frac{1}{p}}<\infty\), where we use the fact that \(s_{i}\leq c\) for all \(i\). Since \(\left\lVert f\right\rVert_{2}\leq c\,\sqrt{d}\), Theorem 1 implies that \(\mathcal{F}=\{f\in\mathcal{B}(\mathcal{V},\mathcal{W})\,\mid\left\lVert f\right\rVert_{\text{op}}\leq c\}\) is online learnable with regret at most \(6c^{2}d\sqrt{T}\).
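As a sanity check of the bound \(\left\lVert f\right\rVert_{p}\leq c\,d^{1/p}\), the short Python sketch below (our own illustration, with a random matrix standing in for a finite-rank operator) computes \(p\)-Schatten norms from singular values:

```python
import numpy as np

def schatten_norm(F, p):
    # p-Schatten norm = l_p norm of the singular values of F.
    s = np.linalg.svd(F, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

d, c = 5, 1.0
rng = np.random.default_rng(0)
F = rng.standard_normal((50, d))                   # maps into a d-dimensional space
F *= c / np.linalg.svd(F, compute_uv=False).max()  # enforce operator norm <= c
for p in (1, 2, 4):
    assert schatten_norm(F, p) <= c * d ** (1.0 / p) + 1e-9
```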
Kernel Integral Operators.Let \(\mathcal{V}\) denote a Hilbert space of functions defined on some domain \(\Omega\). Then, a kernel \(K:\Omega\times\Omega\to\mathbb{R}\) defines an integral operator \(f_{K}:\mathcal{V}\to\mathcal{W}\) such that \(f_{K}(v(r))=\int_{\Omega}K(r,s)\,v(s)\,d\mu(s)\), for some measure space \((\Omega,\mu)\). Now define a class of integral operators,
\[\mathcal{F}=\left\{f_{K}\,:\,\int_{\Omega}\int_{\Omega}\,\left|K(r,s)\right|^{ 2}d\mu(r)\,d\mu(s)\leq c^{2}\right\},\]
induced by all the kernels whose \(L^{2}\) norm is bounded by \(c\). It is well known that \(\left\lVert f\right\rVert_{2}\leq c\) for every \(f\in\mathcal{F}\)(see (Conway, 1990; Page 267) and (Weidmann, 2012, Theorem 6.11) ). Thus, Theorem 1 implies that \(\mathcal{F}\) is online learnable with regret \(6c^{2}\sqrt{T}\).
## 4 Lower Bounds and Hardness Results
In this section, we establish lower bounds for learning various subsets of \(\mathcal{L}(\mathcal{V},\mathcal{W})\). We first establish a generic lower bound for a class \(\mathcal{G}^{\gamma}\subset\mathcal{L}(\mathcal{V},\mathcal{W})\) in terms of a class parameter \(\gamma\). Then, for each \(p\in[1,\infty]\), we show that we can pick a \(\gamma\) such that \(\mathcal{G}^{\gamma}\subseteq S_{p}(\mathcal{V},\mathcal{W})\). Thus, the lower bound for learning \(\mathcal{G}^{\gamma}\) implies the lower bound for learning \(S_{p}(\mathcal{V},\mathcal{W})\).
**Lemma 3** (Lower Bounds for Operator Classes with Fixed Decomposition).: _Let \(\mathcal{V}\) and \(\mathcal{W}\) be Hilbert spaces with orthonormal basis \(\{e_{i}\}_{i\in\mathbb{N}}\) and \(\{\psi_{i}\}_{i\in\mathbb{N}}\) respectively. Let \(\mathcal{X}=\{v\in\mathcal{V}\mid\left\lVert v\right\rVert\leq 1\}\) denote the instance space and \(\mathcal{Y}=\{w\in\mathcal{V}\mid\left\lVert w\right\rVert\leq 1\}\) denote the target space. For a sequence \(\gamma=\{\gamma_{n}\}_{n\in\mathbb{N}}\) such that \(\gamma_{n}\in[0,1]\), define the linear operator class_
\[\mathcal{G}^{\gamma}=\cup_{n\in\mathbb{N}}\,\mathcal{G}_{n}^{\gamma}\quad\text { where }\quad\mathcal{G}_{n}^{\gamma}=\left\{\sum_{i=1}^{n}\sigma_{i}\,\gamma_{i}\,\psi_{i }\otimes e_{i}\mid\sigma\in\{\pm 1\}^{n}\right\}. \tag{1}\]
_Then, for any algorithm, there exists a stream such that its expected regret is at least \(\sum_{t=1}^{T}\gamma_{t}\)._
Operator classes like \(\mathcal{G}^{\gamma}\), where the singular value decomposition is known to the learner apriori, are important from a practical standpoint because learning them reduces to learning the sequence of singular values (de Hoop et al., 2023).
Proof.: Fix an algorithm \(\mathcal{A}\), and consider a labeled stream \(\{(e_{t},\sigma_{t}\psi_{t})\}_{t=1}^{T}\) where \(\sigma_{t}\sim\text{Unif}(\{-1,1\})\). Then, the expected loss of \(\mathcal{A}\) is
\[\mathbb{E}\left[\sum_{t=1}^{T}\left\|\mathcal{A}(e_{t})-\sigma_{t}\psi_{t}\right\|^{2}\right]\geq\sum_{t=1}^{T}\left(\mathbb{E}_{\mathcal{A}}\left[\frac{1}{2}\left\|\mathcal{A}(e_{t})-\psi_{t}\right\|+\frac{1}{2}\left\|\mathcal{A}(e_{t})+\psi_{t}\right\|\right]\right)^{2}\] \[\geq\sum_{t=1}^{T}\left(\frac{1}{2}\left\|\psi_{t}-(-\psi_{t})\right\|\right)^{2}=\sum_{t=1}^{T}\left\|\psi_{t}\right\|^{2}=T.\]
The first inequality above is due to Jensen's, whereas the second inequality is the reverse triangle inequality.
To establish the upper bound on the optimal cumulative loss amongst operators in \(\mathcal{G}^{\gamma}\), consider the operator \(g_{\sigma}^{\gamma}:=\sum_{i=1}^{T}\sigma_{i}\,\gamma_{i}\,\psi_{i}\otimes e_{i}\). Clearly, \(g_{\sigma}^{\gamma}\in\mathcal{G}_{T}^{\gamma}\subset\mathcal{G}^{\gamma}\). Note that \(g_{\sigma}^{\gamma}(e_{t})=\sum_{i=1}^{T}\sigma_{i}\,\gamma_{i}\,\langle e_{i},e_{t}\rangle\,\psi_{i}=\sigma_{t}\gamma_{t}\,\psi_{t}\) because \(\langle e_{i},e_{t}\rangle=0\) for all \(i\neq t\). Thus, we obtain
\[\mathbb{E}\left[\inf_{g\in\mathcal{G}^{\gamma}}\sum_{t=1}^{T}\left\|g(e_{t})- \sigma_{t}\psi_{t}\right\|^{2}\right]\leq\mathbb{E}\left[\sum_{t=1}^{T}\left\| g_{\sigma}^{\gamma}(e_{t})-\sigma_{t}\psi_{t}\right\|^{2}\right]=\mathbb{E} \left[\sum_{t=1}^{T}\left\|\sigma_{t}\,\gamma_{t}\psi_{t}-\sigma_{t}\psi_{t} \right\|^{2}\right]=\sum_{t=1}^{T}\left(\gamma_{t}-1\right)^{2},\]
where the final step follows because \(\left\|\sigma_{t}\,\psi_{t}\right\|^{2}=1\). Therefore, since \(\gamma_{t}^{2}\leq\gamma_{t}\), the regret of \(\mathcal{A}\) is
\[\mathbb{E}\left[\sum_{t=1}^{T}\left\|\mathcal{A}(e_{t})-\sigma_{t}\psi_{t} \right\|^{2}-\inf_{g\in\mathcal{G}^{\gamma}}\sum_{t=1}^{T}\left\|g(e_{t})- \sigma_{t}\psi_{t}\right\|^{2}\right]\geq T-\sum_{t=1}^{T}\left(\gamma_{t}-1 \right)^{2}=\sum_{t=1}^{T}\left(2\gamma_{t}-\gamma_{t}^{2}\right)\geq\sum_{t=1 }^{T}\gamma_{t}.\]
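The quantities in this argument are easy to reproduce numerically. The sketch below (our own illustration with an arbitrary choice of \(\gamma\), not part of the proof) evaluates the cumulative loss of the all-zero predictor and of the comparator \(g_{\sigma}^{\gamma}\) on the stream \(\{(e_{t},\sigma_{t}\psi_{t})\}\); since the construction is coordinate-wise orthonormal, scalars suffice. The proof shows that the same gap holds in expectation for every learner, because \(\sigma_{t}\) is independent of the learner's prediction at round \(t\).

```python
import numpy as np

rng = np.random.default_rng(0)
T, decay = 200, 0.75                                    # gamma_t = t ** (-decay), illustrative
gamma = np.arange(1, T + 1, dtype=float) ** (-decay)
sigma = rng.choice([-1.0, 1.0], size=T)

learner_loss = np.sum((0.0 - sigma) ** 2)               # all-zero predictor: loss T
comparator_loss = np.sum((sigma * gamma - sigma) ** 2)  # g_sigma^gamma: sum of (gamma_t - 1)^2
print(learner_loss - comparator_loss, gamma.sum())      # regret exceeds sum of gamma_t
```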
By carefully picking the sequence \(\gamma\), we use Lemma 3 to give a lower bound for \(S_{p}(\mathcal{V},\mathcal{W})\) and to show that \(S_{\infty}(\mathcal{V},\mathcal{W})\) is not online learnable. Since \(S_{\infty}(\mathcal{V},\mathcal{W})\) is the set of compact operators in \(\mathcal{B}(\mathcal{V},\mathcal{W})\), our result thus implies that the set of bounded linear operators is not online learnable. We defer the proofs of Theorems 4 and 5 to Appendix B.
**Theorem 4** (Lower Bounds for \(S_{p}(\mathcal{V},\mathcal{W})\)).: _Let \(\mathcal{X}=\{v\in\mathcal{V}\mid\left\|v\right\|\leq 1\}\) denote the instance space, \(\mathcal{Y}=\{w\in\mathcal{W}\mid\left\|w\right\|\leq 1\}\) denote the target space, and \(\mathcal{F}_{p}=\{f\in S_{p}(\mathcal{V},\mathcal{W})\mid\left\|f\right\|_{p}\leq 1\}\) be the hypothesis class for \(p\in[1,\infty)\). Then, for any algorithm \(\mathcal{A}\) and any \(\varepsilon>0\), there exists a labeled stream such that the expected regret of \(\mathcal{A}\) is \(\Omega(T^{1-\frac{1}{p}-\varepsilon})\)._
**Theorem 5** (Uniformly Bounded Subsets of \(S_{\infty}(\mathcal{V},\mathcal{W})\) are Not Online Learnable).: _Let \(\mathcal{X}=\{v\in\mathcal{V}\mid\left\|v\right\|\leq 1\}\) denote the instance space, \(\mathcal{Y}=\{w\in\mathcal{W}\mid\left\|w\right\|\leq 1\}\) denote the target space, and \(\mathcal{F}_{\infty}=\{f\in S_{\infty}(\mathcal{V},\mathcal{W})\mid\left\|f \right\|_{\infty}\leq 1\}\) be the hypothesis class. Then, there exists a labeled stream such that the expected regret of any algorithm \(\mathcal{A}\) is at least \(T\)._
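For intuition on how Lemma 3 yields both results (the full proofs are in Appendix B), the following sketch records one natural instantiation; the specific choices of \(\gamma\) are our illustrative assumptions and need not coincide with the appendix arguments. For Theorem 4 with \(\frac{1}{p}+\varepsilon<1\), take \(\gamma_{n}=\kappa\,n^{-\frac{1}{p}-\varepsilon}\), where the constant \(\kappa>0\) is chosen so that every \(g\in\mathcal{G}^{\gamma}\) satisfies \(\left\|g\right\|_{p}\leq 1\); this is possible because

\[\left\|g\right\|_{p}^{p}\leq\sum_{n=1}^{\infty}\gamma_{n}^{p}=\kappa^{p}\sum_{n=1}^{\infty}n^{-1-p\varepsilon}<\infty,\qquad\text{while}\qquad\sum_{t=1}^{T}\gamma_{t}\geq\kappa\int_{1}^{T+1}x^{-\frac{1}{p}-\varepsilon}\,dx=\Omega\left(T^{1-\frac{1}{p}-\varepsilon}\right),\]

so Lemma 3 gives the claimed regret lower bound. For Theorem 5, take \(\gamma_{n}=1\) for all \(n\): every \(g\in\mathcal{G}^{\gamma}\) is a finite-rank (hence compact) operator whose non-zero singular values all equal \(1\), so \(\left\|g\right\|_{\infty}\leq 1\), and Lemma 3 gives expected regret at least \(\sum_{t=1}^{T}\gamma_{t}=T\).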
### Lower Bounds in the Agnostic PAC setting
In this section, we study the lower bounds of learning linear operators in the agnostic PAC framework (see Definition 3). Our result here complements the work of Tabaghi et al. (2019), who study the agnostic PAC upper bounds of learning \(p\)-Schatten class for \(p\in[1,\infty)\).
**Theorem 6** (Agnostic PAC Lower Bounds for \(S_{p}(\mathcal{V},\mathcal{W})\)).: _Let \(\mathcal{X}=\{v\in\mathcal{V}\mid\left\|v\right\|\leq 1\}\) denote the instance space, \(\mathcal{Y}=\{w\in\mathcal{W}\mid\left\|w\right\|\leq 1\}\) denote the target space, and \(\mathcal{F}_{p}=\{f\in S_{p}(\mathcal{V},\mathcal{W})\mid\left\|f\right\|_{p} \leq 1\}\) be the hypothesis class. For any \(n\in\mathbb{N},\varepsilon>0\) and agnostic PAC learning algorithm \(\mathcal{P}\), there exists a distribution \(\mathcal{D}\) on \(\mathcal{X}\times\mathcal{Y}\) such that with probability at least \(\frac{1}{8}\), the excess risk of \(\mathcal{P}\) trained on \(n\) i.i.d. samples from the source distribution \(\mathcal{D}\) is at least \(\Omega(n^{-\frac{1}{p}-\varepsilon})\)._
We note that the lower bound in Theorem 6 essentially matches the upper bound provided by Tabaghi et al. (2019). Next, we show that bounded linear operators are not learnable in the agnostic PAC setting. This shows that the vacuousness of the bound obtained by Tabaghi et al. (2019, Theorem 2) for \(p=\infty\) is not an artifact of their proof technique but an expected phase transition given our impossibility result.
**Theorem 7** (Uniformly Bounded Subsets of \(S_{\infty}(\mathcal{V},\mathcal{W})\) are Not Agnostic PAC Learnable).: _Let \(\mathcal{X}=\{v\in\mathcal{V}\mid\|v\|\leq 1\}\) denote the instance space, \(\mathcal{Y}=\{w\in\mathcal{W}\mid\|w\|\leq 1\}\) denote the target space, and \(\mathcal{F}_{\infty}=\{f\in S_{\infty}(\mathcal{V},\mathcal{W})\mid\|f\|_{ \infty}\leq 1\}\) be the hypothesis class. For any \(n\in\mathbb{N}\) and agnostic PAC learning algorithm \(\mathcal{P}\), there exists a distribution \(\mathcal{D}\) on \(\mathcal{X}\times\mathcal{Y}\) such that with probability at least \(\frac{1}{8}\), the excess risk of \(\mathcal{P}\) trained on \(n\) i.i.d. samples from the source distribution \(\mathcal{D}\) is at least \(\frac{1}{4}\)._
## 5 Online Learnability without Online Uniform Convergence
In learning theory, the uniform law of large numbers is intimately related to the learnability of a hypothesis class. For instance, a binary hypothesis class is PAC learnable if and only if the hypothesis class satisfies the i.i.d. uniform law of large numbers (Shalev-Shwartz and Ben-David, 2014). An online equivalent of this result states that a binary hypothesis class is _online_ learnable if and only if the hypothesis class satisfies the online uniform law of large numbers (Rakhlin et al., 2015). However, in a recent work, Hanneke et al. (2023) show that uniform convergence and learnability are not equivalent for online multiclass classification. A key factor in Hanneke et al. (2023)'s proof is the unboundedness of the size of the label space. This unboundedness is critical as the equivalence between uniform convergence and learnability continues to hold for multiclass classification with a finite number of labels (Daniely et al., 2011). Nevertheless, the number of labels alone cannot imply a separation. This is true because a real-valued function class (say \(\mathcal{G}\subseteq[-1,1]^{\mathcal{X}}\) where the size of label space is uncountably infinite) is online learnable with respect to absolute/squared-loss if and only if the uniform convergence holds (Rakhlin et al., 2015). In this section, we show an analogous separation between uniform convergence and learnability for online linear operator learning. As the unbounded label space was to Hanneke et al. (2023), the infinite-dimensional nature of the target space is critical to our construction exhibiting this separation. Mathematically, a unifying property of Hanneke et al. (2023)'s and our construction is the fact that the target space \(\mathcal{Y}\) is not _totally bounded_ with respect to the pseudometric defined by the loss function.
The following result establishes a separation between uniform convergence and learnability for online linear operator learning. In particular, we show that there exists a class of linear operators \(\mathcal{F}\) such that the online uniform law of large numbers does not hold, but \(\mathcal{F}\) is still online learnable.
**Theorem 8** (Online Uniform Convergence \(\not\equiv\) Online Learnability).: _Let \(\mathcal{X}=\{v\in\mathcal{V}\mid\sum_{n=1}^{\infty}|c_{n}|\leq 1\text{ where }v=\sum_{n=1}^{\infty}c_{n}e_{n}\}\) be the instance space and \(\mathcal{Y}=\{v\in\mathcal{V}\mid\|v\|\leq 1\}\) be the target space. Then, there exists a function class \(\mathcal{F}\subset S_{1}(\mathcal{V},\mathcal{V})\subset S_{\infty}(\mathcal{V },\mathcal{V})\) such that the following holds:_
1. \(\operatorname{Rad}_{T}(\mathcal{F})\geq\frac{T}{2}\)__
2. _There exists an online learner for_ \(\mathcal{F}\) _such that its expected regret on any stream is at most_ \(2+8\sqrt{T\log{(2T)}}\)_, a sublinear function of_ \(T\)_._
Proof.: For a natural number \(k\in\mathbb{N}\), define an operator \(f_{k}:\mathcal{V}\rightarrow\mathcal{V}\) as
\[f_{k}:=\sum_{n=1}^{\infty}b_{k}[n]\;e_{k}\otimes e_{n}=e_{k}\otimes\sum_{n=1} ^{\infty}b_{k}[n]\,e_{n} \tag{2}\]
where \(b_{k}\) is the binary representation of the natural number \(k\) and \(b_{k}[n]\) is its \(n^{th}\) bit. Define \(\mathcal{F}=\{f_{k}\mid k\in\mathbb{N}\}\cup\{f_{0}\}\) where \(f_{0}=0\).
We begin by showing that \(\mathcal{F}\subset S_{1}(\mathcal{V},\mathcal{V})\). For any \(\alpha,\beta\in\mathbb{R}\) and \(v_{1},v_{2}\in\mathcal{V}\), we have
\[f_{k}(\alpha v_{1}+\beta v_{2})=\sum_{n=1}^{\infty}b_{k}[n]\;\left\langle e_{n},\alpha v_{1}+\beta v_{2}\right\rangle e_{k}=\alpha f_{k}(v_{1})+\beta f_{k}(v _{2}).\]
Thus, \(f_{k}\) is a linear operator. Note that \(f_{k}\) is defined in terms of singular value decomposition, and has only one non-zero singular value along the direction of \(e_{k}\). Therefore,
\[\left\|f_{k}\right\|_{1}=\sum_{n=1}^{\infty}b_{k}[n]\leq\log_{2}(k)+1,\]
where we use the fact that there can be at most \(\log_{2}(k)+1\) non-zero bits in the binary representation of \(k\). This further implies that \(\left\|f_{k}\right\|_{p}\leq\left\|f_{k}\right\|_{1}\leq\log_{2}(k)+1<\infty\) for all \(p\in[1,\infty]\). Note that each \(f_{k}\) maps a unit ball in \(\mathcal{V}\) to a subset of \(\{\alpha\,e_{k}\,:\,|\alpha|\leq\log_{2}(k)+1\}\), which is a compact set for every \(k\in\mathbb{N}\). Thus, for every \(k\in\mathbb{N}\), \(f_{k}\) is a compact operator and \(f_{k}\in S_{1}(\mathcal{V},\mathcal{V})\). We trivially have \(f_{0}\in S_{1}(\mathcal{V},\mathcal{V})\).
**Proof of (i)**. Let \(\sigma=\{\sigma_{t}\}_{t=1}^{T}\) be a sequence of i.i.d. Rademacher random variables. Consider a sequence of functions \((x,y)=\{x_{t},y_{t}\}_{t=1}^{T}\) such that \(x_{t}(\sigma_{<t})=e_{t}\) and \(y_{t}(\sigma_{<t})=0\) for all \(t\in[T]\). Note that our sequence \(\{e_{t}\}_{t=1}^{T}\subseteq\mathcal{X}\). Then, the sequential Rademacher complexity of the loss class is
\[\mathrm{Rad}_{T}(\mathcal{F})=\sup_{x,y}\,\mathbb{E}\left[\sup_{f \in\mathcal{F}}\sum_{t=1}^{T}\sigma_{t}\left\|f(x_{t}(\sigma_{<t}))-y_{t}( \sigma_{<t})\right\|^{2}\right] \geq\mathbb{E}\left[\sup_{k\in\mathbb{N}}\sum_{t=1}^{T}\sigma_{t} \left\|f_{k}(e_{t})\right\|^{2}\right]\] \[=\mathbb{E}\left[\sup_{k\in\mathbb{N}}\sum_{t=1}^{T}\sigma_{t}\,b _{k}[t]\right]\] \[\geq\mathbb{E}\left[\sum_{t=1}^{T}\mathbb{1}\{\sigma_{t}=1\} \right]=\frac{T}{2}.\]
Here, we use the fact that \(f_{k}(e_{t})=b_{k}[t]\,e_{k}\) and \(\mathbb{P}[\sigma_{t}=1]=\frac{1}{2}\). As for the inequality \(\sup_{k\in\mathbb{N}}\sum_{t=1}^{T}\sigma_{t}\,b_{k}[t]\geq\sum_{t=1}^{T} \mathbb{1}\{\sigma_{t}=1\}\), note that for any sequence \(\{\sigma_{t}\}_{t=1}^{T}\), there exists a \(k\in\mathbb{N}\) (possibly of the order \(\sim 2^{T}\)) such that \(b_{k}[t]=1\) whenever \(\sigma_{t}=1\) and \(b_{k}[t]=0\) whenever \(\sigma_{t}=-1\).
**Proof of (ii)**. We now construct an online learner for \(\mathcal{F}\). Let \((x_{1},y_{1})\ldots,(x_{T},y_{T})\in\mathcal{X}\times\mathcal{Y}\) denote the data stream. Since \(y_{t}\) is an element of unit ball of \(\mathcal{V}\), we can write \(y_{t}=\sum_{n=1}^{\infty}c_{n}(t)e_{n}\) such that \(\sum_{n=1}^{\infty}c_{n}^{2}(t)\leq 1\). For each \(t\in[T]\), define a set of indices \(S_{t}=\{n\in\mathbb{N}\,:\,|c_{n}(t)|\geq\frac{1}{2\sqrt{T}}\}\). Since
\[1\geq\|y_{t}\|^{2}=\sum_{n=1}^{\infty}c_{n}^{2}(t)\geq\sum_{n\in S_{t}}c_{n}^ {2}(t)\geq\sum_{n\in S_{t}}\frac{1}{4T}=\frac{|S_{t}|}{4T},\]
we have \(|S_{t}|\leq 4T\). Let \(\mathrm{sort}(S_{i})\) denote the ordered list of size \(4T\) that contains elements of \(S_{i}\) in descending order. If \(S_{i}\) does not contain \(4T\) indices, append \(0\)'s to the end of \(\mathrm{sort}(S_{i})\). We let \(\mathrm{sort}(S_{i})[j]\) denote the \(j^{th}\) element of the ordered list \(\mathrm{sort}(S_{i})\).
For each \(i\in[T]\) and \(j\in[4T]\), define an expert \(E_{i}^{j}\) such that
\[E_{i}^{j}(x_{t})=\begin{cases}0,&t\leq i\\ f_{k}(x_{t}),&t>i\end{cases},\qquad\text{ where }k=\mathrm{sort}(S_{i})[j].\]
An online learner \(\mathcal{A}\) for \(\mathcal{F}\) runs the multiplicative weights algorithm over the set of experts \(\mathcal{E}=\{E_{i}^{j}\,\,|\,\,i\in[T],j\in[4T]\}\). It is easy to see that \(\left\|f_{k}(x)\right\|\leq 1\) for all \(x\in\mathcal{X}\). Thus, for any \(\hat{y}_{t},y_{t}\in\mathcal{Y}\), we have \(\left\|\hat{y}_{t}-y_{t}\right\|^{2}\leq 4\). Hence, for an appropriately chosen learning rate, the multiplicative weights algorithm guarantees (see Theorem 21.11 in Shalev-Shwartz and Ben-David (2014)) that the regret of \(\mathcal{A}\) satisfies
\[\mathbb{E}\left[\sum_{t=1}^{T}\left\|\mathcal{A}(x_{t})-y_{t}\right\|^{2} \right]\leq\inf_{E\in\mathcal{E}}\sum_{t=1}^{T}\left\|E(x_{t})-y_{t}\right\|^{ 2}+4\sqrt{2T\ln(|\mathcal{E}|)}.\]
Note that \(|\mathcal{E}|\leq 4T^{2}\), which implies \(4\sqrt{2T\ln(|\mathcal{E}|)}\leq 8\sqrt{T\ln(2T)}\). We now show that
\[\inf_{E\in\mathcal{E}}\sum_{t=1}^{T}\left\|E(x_{t})-y_{t}\right\|^{2}\leq\inf _{f\in\mathcal{F}}\sum_{t=1}^{T}\left\|f(x_{t})-y_{t}\right\|^{2}+2.\]
Together, these two inequalities imply that the expected regret of \(\mathcal{A}\) is \(\leq 2+8\sqrt{T\ln(2T)}\). The rest of the proof is dedicated to proving the latter inequality.
Let \(f_{k^{\star}}\in\arg\min_{f\in\mathcal{F}}\sum_{t=1}^{T}\left\|f(x_{t})-y_{t} \right\|^{2}\). Let \(t^{\star}\in[T]\) be the first time point such that \(k^{\star}\in S_{t^{\star}}\) and suppose it exists. Let \(r^{\star}\in[4T]\) be such that \(k^{\star}=\text{sort}(S_{t^{\star}})[r^{\star}]\). By definition of the experts, we have
\[E_{t^{\star}}^{r^{\star}}(x_{t})=f_{k^{\star}}(x_{t})\quad\text{ for }t>t^{\star},\]
thus implying that \(\sum_{t>t^{\star}}\left\|E_{t^{\star}}^{r^{\star}}(x_{t})-y_{t}\right\|^{2}= \sum_{t>t^{\star}}\left\|f_{k^{\star}}(x_{t})-y_{t}\right\|^{2}\). Therefore, it suffices to show that
\[\sum_{t\leq t^{\star}}\left\|E_{t^{\star}}^{r^{\star}}(x_{t})-y_{t}\right\|^{2 }\leq\sum_{t\leq t^{\star}}\left\|f_{k^{\star}}(x_{t})-y_{t}\right\|^{2}+2.\]
As \(E_{t^{\star}}^{r^{\star}}(x_{t})=0\) for all \(t\leq t^{\star}\), proving the inequality above is equivalent to showing
\[\sum_{t\leq t^{\star}}\left\|y_{t}\right\|^{2}\leq\sum_{t\leq t^{\star}}\left\| f_{k^{\star}}(x_{t})-y_{t}\right\|^{2}+2.\]
Since \(\left\|y_{t^{\star}}\right\|^{2}\leq 1\), we trivially have \(\left\|y_{t^{\star}}\right\|^{2}\leq\left\|f_{k^{\star}}(x_{t^{\star}})-y_{t^{ \star}}\right\|^{2}+1\). Thus, by expanding the squared norm, the problem reduces to showing
\[\sum_{t<t^{\star}}\left(2\left\langle f_{k^{\star}}(x_{t}),y_{t}\right\rangle- \left\|f_{k^{\star}}(x_{t})\right\|^{2}\right)\leq 1.\]
We prove the inequality above by establishing
\[2\left\langle f_{k^{\star}}(x_{t}),y_{t}\right\rangle-\left\|f_{k^{\star}}(x_{ t})\right\|^{2}\leq\frac{1}{T}\quad\text{ for all }t<t^{\star}.\]
Let \(x_{t}=\sum_{n=1}^{\infty}\alpha_{n}(t)e_{n}\). We have \(f_{k^{\star}}(x_{t})=\sum_{n=1}^{\infty}b_{k^{\star}}[n]\left\langle x_{t},e_{ n}\right\rangle e_{k^{\star}}=\left(\sum_{n=1}^{\infty}b_{k^{\star}}[n] \alpha_{n}(t)\right)e_{k^{\star}}\). Defining \(a_{k^{\star}}(t)=\left(\sum_{n=1}^{\infty}b_{k^{\star}}[n]\alpha_{n}(t)\right)\), we can write
\[f_{k^{\star}}(x_{t})=a_{k^{\star}}(t)e_{k^{\star}}\quad\text{ and }\quad\left\|f_{k^{\star}}(x_{t}) \right\|=|a_{k^{\star}}(t)|.\]
So, it suffices to show that \(2\,a_{k^{\star}}(t)\,c_{k^{\star}}(t)-|a_{k^{\star}}(t)|^{2}\leq\frac{1}{T}\) for all \(t<t^{\star}.\) To prove this inequality, we consider the following two cases:
1. Suppose \(|a_{k^{\star}}(t)|>2|c_{k^{\star}}(t)|\). Then, \(2\,a_{k^{\star}}(t)\,c_{k^{\star}}(t)-|a_{k^{\star}}(t)|^{2}<|a_{k^{\star}}(t) |^{2}-|a_{k^{\star}}(t)|^{2}=0.\)
2. Suppose \(|a_{k^{\star}}(t)|\leq 2|c_{k^{\star}}(t)|\). Then, \(2\,a_{k^{\star}}(t)\,c_{k^{\star}}(t)-|a_{k^{\star}}(t)|^{2}\leq 4\,|c_{k^{\star}}(t) |^{2}<4\,\left(\frac{1}{2\sqrt{T}}\right)^{2}=\frac{1}{T}\) because \(k^{\star}\notin S_{t}\) for all \(t<t^{\star}.\)
In either case, \(2\,a_{k^{\star}}(t)\,c_{k^{\star}}(t)-|a_{k^{\star}}(t)|^{2}\leq\frac{1}{T}\quad \text{for all }t<t^{\star}.\)
Finally, suppose that such a \(t^{\star}\) does not exist. Then, our analysis for the case \(t\leq t^{\star}\) above shows that the expert \(E_{T}^{1}\) that predicts \(E_{T}^{1}(x_{t})=0\) for all \(t\leq T\) satisfies \(\sum_{t=1}^{T}\left\|E_{T}^{1}(x_{t})-y_{t}\right\|^{2}\leq\sum_{t=1}^{T} \left\|f_{k^{\star}}(x_{t})-y_{t}\right\|^{2}+2\). \(\blacksquare\)
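Since the learner in part (ii) is fully constructive, it can be simulated. The sketch below is a minimal, non-authoritative illustration of the expert construction and the multiplicative weights aggregation: the horizon `T`, the truncation of the Hilbert space to `D = 2**T` coordinates, the particular stream (generated by a hidden comparator \(f_{k^{\star}}\)), and the learning rate `eta` are our own choices and are not prescribed by the proof.

```python
import numpy as np

T = 6
D = 2 ** T                              # truncate the Hilbert space to D coordinates

def f_k(k, x):
    """f_k(x) = <sum_n b_k[n] e_n, x> e_k, truncated to D coordinates (f_0 = 0)."""
    out = np.zeros(D)
    if k >= 1:
        bits = np.array([(k >> n) & 1 for n in range(D)], dtype=float)
        out[k - 1] = bits @ x
    return out

# Stream generated by a hidden comparator f_{k_star}: x_t = e_t and y_t = f_{k_star}(e_t).
k_star = 45
xs = [np.eye(D)[t] for t in range(T)]
ys = [f_k(k_star, x) for x in xs]

# S_t = {n : |c_n(t)| >= 1/(2 sqrt(T))}, where y_t = sum_n c_n(t) e_n.
S = [set(np.flatnonzero(np.abs(y) >= 1 / (2 * np.sqrt(T))) + 1) for y in ys]

# Experts E_i^j: predict 0 up to round i, then follow f_k with k = sort(S_i)[j] (0 if absent).
experts = []
for i in range(1, T + 1):
    ordered = sorted(S[i - 1], reverse=True) + [0] * (4 * T)
    for j in range(4 * T):
        experts.append((i, int(ordered[j])))

def predict(expert, t, x):
    i, k = expert
    return np.zeros(D) if t <= i else f_k(k, x)

# Multiplicative weights over the experts; squared losses lie in [0, 4].
eta = np.sqrt(np.log(len(experts)) / T) / 4       # a hypothetical learning-rate choice
w = np.ones(len(experts))
learner_loss, expert_losses = 0.0, np.zeros(len(experts))
for t in range(1, T + 1):
    x, y = xs[t - 1], ys[t - 1]
    losses = np.array([np.sum((predict(E, t, x) - y) ** 2) for E in experts])
    learner_loss += float(w @ losses / w.sum())   # expected loss of the randomized learner
    expert_losses += losses
    w *= np.exp(-eta * losses)

print(f"cumulative learner loss          : {learner_loss:.3f}")
print(f"best expert loss                 : {expert_losses.min():.3f}")
print(f"comparator f_k* loss             : {sum(np.sum((f_k(k_star, x) - y) ** 2) for x, y in zip(xs, ys)):.3f}")
print(f"regret bound 2 + 8*sqrt(T ln 2T) : {2 + 8 * np.sqrt(T * np.log(2 * T)):.3f}")
```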
### Agnostic PAC Learnability without Batch Uniform Convergence
Although we state Theorem 8 in the online setting, an analogous result also holds in the agnostic PAC setting. To establish the agnostic PAC analog of Theorem 8, consider \(f_{k}\) defined in (2) and define a class \(\mathcal{F}=\{f_{k}\mid k\in\mathbb{N}\}\cup\{f_{0}\}\) where \(f_{0}=0\). This is the same class considered in the proof of Theorem 8. Recall that in our proof of Theorem 8 (i), we choose a sequence of labeled examples \(\{e_{t},0\}_{t=1}^{T}\) that is independent of the sequence of Rademacher random variables \(\{\sigma_{t}\}_{t=1}^{T}\). Thus, our proof shows that the i.i.d. version of the Rademacher complexity of \(\mathcal{F}\), where the labeled samples are independent of Rademacher variables, is also lower bounded by \(\frac{T}{2}\). This implies that the class \(\mathcal{F}\) does not satisfy the uniform law of large numbers in the i.i.d. setting. However, using the standard online-to-batch conversion technique, we can convert our online learner for \(\mathcal{F}\) to an agnostic PAC learner for \(\mathcal{F}\)[5]. Therefore, a result analogous to Theorem 8 holds in the PAC setting as well. This shows a separation between uniform convergence and PAC learnability of a (sub)-class of linear operators.
## Discussion and Open Questions
In this work, we study the online learnability of linear operators between two infinite-dimensional Hilbert spaces. In Theorems 1 and 4, we showed that
\[\Omega(T^{1-\frac{1}{p}-\varepsilon})\leq\inf_{\mathcal{A}}\,R_{\mathcal{A}}(T, \mathcal{F}_{p})\leq O(T^{\max\{\frac{1}{2},1-\frac{1}{p}\}}),\]
for every \(\varepsilon>0\) and \(p\in[1,\infty)\), where \(\mathcal{F}_{p}:=\{f\in S_{p}(\mathcal{V},\mathcal{W})\,:\,\left\|f\right\|_{p}\leq 1\}\). Note that the lower and upper bounds essentially match for \(p\geq 2\). However, for \(p\in[1,2)\), the upper bound saturates at \(\sqrt{T}\), while the lower bound gets progressively worse as \(p\) decreases. Given this gap, we leave it open to resolve the following question.
What is the optimal regret of learning \(\mathcal{F}_{p}\) for \(p\in[1,2)\)?
We conjecture that our upper bound in Theorem 1 is loose for \(p\in[1,2)\), and one can obtain faster rates using some adaptation of the seminal Vovk-Azoury-Warmuth forecaster (Vovk, 2001; Azoury and Warmuth, 2001).
Section 5 shows a separation between online uniform convergence and online learnability for linear operator learning. The separation is exhibited by a class that lies in \(S_{1}(\mathcal{V},\mathcal{W})\subset S_{\infty}(\mathcal{V},\mathcal{W})\), but is _not_ a uniformly bounded subset. We know that there is no separation between online learnability and online uniform convergence for any subset of \(S_{p}(\mathcal{V},\mathcal{W})\) with uniformly bounded \(p\)-Schatten norm for \(p\in[1,\infty)\). However, it is unknown whether this is also true for \(S_{\infty}(\mathcal{V},\mathcal{W})\). This raises the following natural question.
Are online uniform convergence and online learnability equivalent for every
\[\mathcal{F}\subseteq\{f\in S_{\infty}(\mathcal{V},\mathcal{W})\mid\left\|f \right\|_{\infty}\leq 1\}?\]
In this work, we studied the learnability of uniformly bounded linear operators between two infinite-dimensional Hilbert spaces. For a given class, we show that a uniform bound on the \(p\)-Schatten norm for any \(p\in[1,\infty)\) is sufficient for online learnability. However, the example in Theorem 8 shows that a uniform upper bound on the \(p\)-Schatten norm, for any \(p\in[1,\infty]\), is not necessary for online learnability. It is an interesting future direction to fully characterize the landscape of learnability for bounded linear operators. In addition, it is also of interest to extend these results to nonlinear operators.
## Acknowledgements
We acknowledge the assistance of Judy McDonald in locating a misdelivered package containing (Shalev-Shwartz and Ben-David, 2014). Without the benefit of ideas in (Shalev-Shwartz and Ben-David, 2014), this paper would have never been written. VR acknowledges the support of the NSF Graduate Research Fellowship.
|
2308.16472 | Logical Berkovich Geometry: A Point-free Perspective | Extending our insights from \cite{NVOstrowski}, we apply point-free
techniques to sharpen a foundational result in Berkovich geometry. In our
language, given the ring $\mathcal{A}:=K\{R^{-1}T\}$ of convergent power series
over a suitable non-Archimedean field $K$, the points of its Berkovich Spectrum
$\mathcal{M}(\mathcal{A})$ correspond to $R$-good filters. The surprise is
that, unlike the original result by Berkovich, we do not require the field $K$
to be non-trivially valued. Our investigations into non-Archimedean geometry
can be understood as being framed by the question: what is the relationship
between topology and logic? | Ming Ng | 2023-08-31T05:27:01Z | http://arxiv.org/abs/2308.16472v1 | # Logical Berkovich geometry: a point-free perspective
###### Abstract.
Extending our insights from [23], we apply point-free techniques to sharpen a foundational result in Berkovich geometry. In our language, given the ring \(\mathcal{A}:=K\{R^{-1}T\}\) of convergent power series over a suitable non-Archimedean field \(K\), the points of its Berkovich Spectrum \(\mathcal{M}(\mathcal{A})\) correspond to \(R\)-good filters. The surprise is that, unlike the original result by Berkovich, we do not require the field \(K\) to be non-trivially valued. Our investigations into non-Archimedean geometry can be understood as being framed by the question: what is the relationship between topology and logic?
[MISSING_PAGE_POST]
1. _The space of multiplicative seminorms on_ \(K[T]\) _extending the norm on_ \(K\)_;_
2. _A space whose points are defined by a sequence of nested closed discs_ \(D_{r_{1}}(k_{1})\supseteq D_{r_{2}}(k_{2})\supseteq\dots\) _contained in_ \(K\)_;_
3. _The space of types over_ \(K\)_, concentrating on_ \(\mathbb{A}^{1}_{K}\)_, that are "almost orthogonal to_ \(\Gamma\)_";_3__ Footnote 3: This is a technical definition which the non-logician may wish to treat as a black box — it will not be needed to understand the main results of this thesis. For the model theorist: \(K\) here is taken to be a model of the theory \(\operatorname{ACVF}\), which is a 3-sorted theory comprising \(\operatorname{VF}\) as the value field sort, \(\Gamma\) as the value group sort, and \(\kappa\) as the residue field sort. Let \(\mathbb{U}\) be the monster model. If \(C\subset\mathbb{U}\) and \(p\) is a \(C\)-type, then we say \(p\) is _almost orthogonal to \(\Gamma\)_ if for any realisation \(a\) of \(p\) we have that \(\Gamma(C(a))=\Gamma(C)\).
4. A profinite \(\mathbb{R}\)-tree.
Proof.: (i) is the definition of \(\mathbb{A}^{1}_{\operatorname{Berk}}\). (ii) can be proved similarly to [1, Example 1.4.4]. (iii) is a special case of what was proved in [1, pp. 187-188]. (iv) essentially follows from [1, Theorem 2.20].
For the algebraic geometer, the different characterisations of \(\mathbb{A}^{1}_{\operatorname{Berk}}\) in Summary Theorem 0.1 reflect the variety of methods that have been used when studying Berkovich spaces. In more detail:
* The equivalence of items (i) and (ii), a foundational result in Berkovich geometry, sets up the classification of points of \(\mathbb{A}^{1}_{\operatorname{Berk}}\).
* The language of "almost orthogonal types" reflects the model-theoretic methods pioneered by Hrushovski and Loeser [1]. This perspective was particularly useful for establishing the topological "tameness" of Berkovich spaces under very mild hypotheses (see e.g. Theorem 1.28).
* Viewing \(\mathbb{A}^{1}_{\operatorname{Berk}}\) as a profinite \(\mathbb{R}\)-tree emphasises its semilattice structure: given any \(x,y\in\mathbb{A}^{1}_{\operatorname{Berk}}\) there exists a unique least upper bound \(x\lor y\in\mathbb{A}^{1}_{\operatorname{Berk}}\) [with respect to the partial order that \(x\leq y\) iff \(|f|_{x}\leq|f|_{y}\) for \(f\in K[T]\)]. A key insight of Baker and Rumely [1] was that this structure on \(\mathbb{A}^{1}_{\operatorname{Berk}}\) (in fact, on \(\mathbb{P}^{1}_{\operatorname{Berk}}\)) can be used to define a Laplacian operator, laying the foundations for a non-Archimedean analogue of complex potential theory. Their work later found surprising applications in the analysis of preperiodic points of complex dynamical systems [1].
For the topos theorist, however, Summary Theorem 0.1 is suggestive because it mirrors the different representations of a point-free space: as a universe of (algebraic) models axiomatised by a first-order theory, as a certain space of prime filters, or as a distributive lattice. One may therefore wonder if the listed characterisations of \(\mathbb{A}^{1}_{\operatorname{Berk}}\) reflect a constellation of perspectives on Berkovich spaces that move together in a tightly-connected way.
It is this intuition that will guide us to the main result of this paper, Theorem 2.21, where point-free techniques are used to reformulate the equivalence of items (i) and (ii) in Summary Theorem 0.1.4 The main surprise is that, unlike Berkovich's original result, the point-free approach works equally well for both the trivially and non-trivially valued fields. This indicates that the _algebraic_ hypothesis of \(K\) being non-trivially valued is in fact a _point-set_ hypothesis, and is not essential to the underlying mathematics. Our result thus gives an interesting proof of concept regarding the clarifying potential of the point-free perspective within non-Archimedean geometry.
Footnote 4: Technically, Theorem 2.21 works with multiplicative seminorms on the ring of convergent power series \(K\{R^{-1}T\}\) and not those on \(K[T]\), but in fact the result extends to the latter setting by Remark 1.33.
**How to read this paper.** For the reader primarily interested in non-Archimedean geometry & its topology: no category theory or logic will be needed to understand the proof of the main result. As such, this reader may wish to examine the definition of an upper real (Definition 1.12), review Example 1.32 to recall why the radii of rigid discs fail to be well-defined when the base field is trivially-valued, before proceeding to Section 2. Looming in the background, however, are a series of deep interactions between topology & logic, which we discuss more fully in Section 1.1. The curious non-logician may wish to make note that filters play a role in these interactions, and treat the rest as a black box. The model theorist, however, may be interested to learn that topos theorists work with an infinitary fragment of positive logic known as _geometric logic_, and what shows up as _types_ in the context of model theory sometimes shows up as honest _models_ of a geometric theory (e.g. Dedekind reals). Clarifying this connection in the setting of non-Archimedean geometry motivates a very interesting series of test problems, which we discuss in Section 3.
Finally, for those interested in topos theory: our result in Berkovich geometry can be regarded as an advertisement for point-free topology, in particular, how point-free techniques may resolve a problem by eliminating some of the set-theoretic noise. For those interested in constructive mathematics, we remark
that the full strength of Theorem 2.21 relies on LEM. A weaker version of our result holds (Theorem 2.20) if one wishes to avoid classical assumptions, but there's some fine print - see Summary 2.24.
**Acknowledgements.** This paper benefitted from various mathematical exchanges with Matt Baker, Rob Benedetto, Vladimir Berkovich, Artem Chernikov, Eric Finster, Udi Hrushovski, Francois Loeser, Michael Temkin, Steve Vickers, and Vincent (Jinhe) Ye. Expository aspects of this work also benefitted from presentation at the "Type Theory, Constructive Mathematics and Geometric Logic" Conference at CIRM, as well as at the Logic Working Group hosted by the University of Torino - thank you all.
## 1. Preliminaries
### The Point-free Perspective
As a first approximation, point-free topology can be understood as the study of localic spaces.
**Definition 1.1**.:
1. A _frame_ is a complete lattice \(A\) possessing all small joins \(\bigvee\) and all finite meets \(\wedge\), such that the following distributivity law holds \[a\wedge\bigvee S=\bigvee\{a\wedge b\,|\,b\in S\}\] where \(a\in A,S\subseteq A\).
2. A _frame homomorphism_ is a function between frames that preserves arbitrary joins and finite meets. Frames and frame homomorphisms form the category \(\mathbf{Frm}\). We define the category \(\mathrm{Loc}:=\mathbf{Frm}^{\mathrm{op}}\). We refer to the objects in \(\mathrm{Loc}\) as _localic spaces_.5
Footnote 5: Elsewhere, this is often referred to as a _locale_; our terminology was chosen for suggestiveness.
The main difference between the localic perspective and point-set topology is one of priority: what are the basic units for defining a space? In the case of a point-set space, one starts with a set of elements, before defining the topology on it in the usual way. On the other hand, the localic perspective treats the opens as the basic units for defining a space. Once the topological structure has been defined, we can then recover its points:
**Definition 1.2** (Points).: Let \(\Omega X\) be a frame. Denote \(\Omega:=\Omega\mathbf{1}\) to be the powerset of the singleton.
1. A _global point_ is a frame homomorphism \(\Omega X\to\Omega\).
2. A _generalised point_ is a frame homomorphism \(\Omega X\to\Omega Y\), where \(\Omega Y\) is any frame.
Definition 1.1 states that localic spaces and frames are the same thing, but conceptually we shall prefer to view _frames_ as corresponding to the opens of a space and the localic space as corresponding to the universe of all its (generalised) points. Hereafter, whenever we have a localic space \(X\), we shall denote its corresponding frame of opens as \(\Omega X\).
**Remark 1.3**.: Classically, \(\Omega\) of course corresponds to the two element Boolean algebra \(\{0,1\}\). As such, \(x\colon\Omega X\to\Omega\) can be regarded as a way of sorting out which opens the point \(x\) belongs to and which ones it doesn't. However, the assertion \(\Omega\mathbf{1}\cong\{0,1\}\) relies on the Law of Excluded Middle (LEM); in particular, if we opt to work constructively, then the isomorphism no longer holds.
#### 1.1.1. Geometric Logic
The view from topos theory emphasises the logical nature of this point-free perspective. We start with the notion of a geometric theory:
**Definition 1.4** (Geometric Theories).: Let \(\Sigma\) be a (many-sorted) first-order signature (or vocabulary).
* Let \(\vec{x}\) be a finite vector of variables, each with a given sort. A _geometric formula_ in context \(\vec{x}\) is a formula built up using symbols from \(\Sigma\) via the following logical connectives: \(=\), \(\top\) (true), \(\wedge\) (**finite** conjunction), \(\bigvee\) (**arbitrary** disjunction), \(\exists\).
* A _geometric theory over \(\Sigma\)_ is a set of axioms of the form \[\forall\vec{x}.(\phi\to\psi),\] where \(\phi\) and \(\psi\) are geometric formulae.
**Remark 1.5**.: The geometric syntax already reveals key differences with classical first-order logic, namely:
* The absence of negation \(\neg\)
* Allowing arbitrary (possibly infinite) disjunction
* Disallowing nested implications and universal quantification in the axioms.
This not only affects what we are able to express in this logical language, but also how we understand "truth". In classical logic, one may appeal to the LEM and assert that \(p\vee\neg p\) for any proposition \(p\). In geometric logic, we are unable to even express this statement due to the lack of negation.
**Convention 1.6**.: Hereafter, the unqualified term "theory" shall always mean a geometric theory. If we wish to refer to theories from classical first-order logic, we shall signpost this explicitly.
Of particular interest to us is a special class of geometric theories known as _essentially propositional theories_. Before explaining the qualifier _essentially_, we start with the basic definition:
**Definition 1.7**.: A (geometric) theory \(\mathbb{T}\) is called a _propositional theory_ if its signature \(\Sigma\) has no sorts [so there can be no variables or terms, nor existential quantification]. In particular, its axioms are constructed only from constant symbols in \(\Sigma\), \(\top\) (true), finite \(\wedge\) and arbitrary \(\bigvee\).
The connection between geometric logic and locales now becomes apparent: the propositional formulae of \(\mathbb{T}\) correspond to the opens, the finite \(\wedge\) to the finite intersections of opens and the arbitrary \(\bigvee\) to their arbitrary unions. Further, given any frame \(A\), one can define a theory \(\mathbb{T}_{A}\) such that its models correspond to the points of the corresponding locale space.
**Definition 1.8**.: Let \(A\) be a frame. Define \(\mathbb{T}_{A}\) as follows:
**Signature \(\Sigma\)**: A propositional symbol \(P_{a}\) for each \(a\in A\)
**Axioms**:
1. \(P_{a}\to P_{b}\), for \(a\leq b\) in \(A\)
2. \(P_{a}\wedge P_{b}\to P_{a\wedge b}\), for \(a,b\in A\)
3. \(\top\to P_{1}\)
4. \(P_{\bigvee\,S}\rightarrow\bigvee_{a\in S}P_{a}\), for \(S\subseteq A\)
These axioms ensure that meets and joins of the frame correspond to conjunctions and disjunctions in the logic, in particular that the frame \(A\) corresponds to the Lindenbaum Algebra of \(\mathbb{T}_{A}\). Given two propositional theories \(\mathbb{T}\) and \(\mathbb{T}^{\prime}\), define a \(\mathbb{T}\)_-model in \(\mathbb{T}^{\prime}\)_ to be a homomorphism between their corresponding Lindenbaum algebras \(\Omega_{\mathbb{T}}\rightarrow\Omega_{\mathbb{T}^{\prime}}\). A straightforward check then verifies that a \(\mathbb{T}_{A}\)-model in \(\mathbb{T}_{B}\) corresponds to a frame homomorphism \(A\to B\), for any frames \(A,B\) (for details, see [20, §2]).
Developing Remark 1.3, we can regard the models of \(\mathbb{T}_{A}\) as _completely prime filters_. Recall: a _filter_ is a collection of subsets satisfying certain formal properties also satisfied by the collection of open neighbourhoods of any point \(x\in X\) in a point-set topological space. The hypothesis of _completely prime_ enforces a kind of coherence within the filter with respect to \(\bigvee\). More precisely:
**Definition 1.9** (Filters).: Let \(S\) be an infinite set.
1. A _filter on_\(S\) is a collection of subsets \(\mathcal{F}\subseteq\mathcal{P}(S)\) such that: 1. \(A\subseteq B\subseteq S\) and \(A\in\mathcal{F}\) implies \(B\in\mathcal{F}\); 2. \(A,B\in\mathcal{F}\) implies \(A\cap B\in\mathcal{F}\); and 3. \(S\in\mathcal{F}\)
2. A filter \(\mathcal{F}\) is _prime_ if for every finite index set \(I\): \(\bigcup_{i}A_{i}\in\mathcal{F}\) implies there exists some \(j\in I\) such that \(A_{j}\in\mathcal{F}\). A filter \(\mathcal{F}\) is _completely prime_ if the same holds true for _any_ index set \(I\) (including when \(I\) is infinite).
3. A filter \(\mathcal{F}\) is called an _ultrafilter_ if it has an opinion on all subsets of \(S\): If \(A\subset S\), then either \(A\) or its complement \(S\setminus A\) belongs to \(\mathcal{F}\) (but not both).
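Although Definition 1.9 is stated for an infinite set, the axioms make sense over any set, and a finite toy case already illustrates them. The brute-force sketch below (our own illustration) enumerates every filter on the powerset of \(S=\{0,1,2\}\) and flags the prime filters and ultrafilters; on a finite set every filter is principal, the prime filters are exactly the ultrafilters, and each corresponds to a single point of \(S\), in contrast to the infinite case, where non-principal ultrafilters exist (given enough choice).

```python
from itertools import combinations

S = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(len(S) + 1) for c in combinations(sorted(S), r)]

def is_filter(F):
    upward = all(B in F for A in F for B in subsets if A <= B)     # upward closed
    meets = all((A & B) in F for A in F for B in F)                # closed under finite meets
    return upward and meets and (S in F)                           # contains S

def is_prime(F):
    # Definition 1.9(2) for finite index sets; the empty union forces the empty set out of F.
    if frozenset() in F:
        return False
    return all((A in F) or (B in F)
               for A in subsets for B in subsets if (A | B) in F)

def is_ultra(F):
    return all((A in F) != ((S - A) in F) for A in subsets)        # exactly one of A, S\A

families = [frozenset(fam) for r in range(len(subsets) + 1)
            for fam in combinations(subsets, r)]
filters = [F for F in families if is_filter(F)]
print("filters      :", len(filters))                    # 8, all principal
print("prime        :", sum(map(is_prime, filters)))     # 3, one per point of S
print("ultrafilters :", sum(map(is_ultra, filters)))     # 3
print("prime==ultra :", all(is_prime(F) == is_ultra(F) for F in filters))
```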
The translation of Definition 1.9 to the setting of locales is straightforward (although there's a subtlety when it comes to ultrafilters6). Since every frame \(A\) gives rise to a propositional theory \(\mathbb{T}_{A}\), and the Lindenbaum Algebra of every propositional theory \(\mathbb{T}\) gives rise to a frame \(\Omega_{\mathbb{T}}\), this justifies the view that the
universe of all \(\mathbb{T}\)-models for any propositional \(\mathbb{T}\) can be regarded as a localic space, and that these \(\mathbb{T}\)-models correspond to completely prime filters.7
Footnote 7: More precisely, it is the _standard models_ of \(\mathbb{T}_{A}\) (equivalently, the global points of \(A\)) that correspond to completely prime filters. A standard model of \(\mathbb{T}_{A}\) tells us which propositional symbols \(P_{a}\) are assigned truth value \(\top\), and hence describes a subset \(F\subseteq A\) satisfying the axioms of \(\mathbb{T}_{A}\). Re-examining \(\mathbb{T}_{A}\) in light of Definition 1.9, the first three axioms tell us that \(F\) is a filter, the fourth tells us that \(F\) is completely prime.
How does this picture extend to predicate theories? This is where the topos theory comes in. Here's the basic gist. Given any (geometric) theory \(\mathbb{T}\), one uses categorical logic to define what a \(\mathbb{T}\)-model means in this setting. The topos theory then says: the universe of all \(\mathbb{T}\)-models can be regarded as a kind of generalised space - let us say a _point-free space_ - whose points correspond to the \(\mathbb{T}\)-models. We shall treat this framework as a black box in this paper (for details on the underlying topos theory, see e.g. [23, Ch. 2], or [24, 25]). Let us instead record two guiding remarks which the reader may find illuminating. First, compare this point-free perspective with the model-theoretic perspective. For the model theorist, the theory of groups \(\mathbb{T}_{\mathrm{grp}}\) gives rise to a class of structures satisfying its axioms: the elementary class of groups. By contrast, in topos theory, the theory of groups gives rise to a (pseudo-)functor of models, analogous to the "functor of points" approach in algebraic geometry (see [13, B4.2]). It is in this sense that the universe of all \(\mathbb{T}_{\mathrm{grp}}\)-models may be regarded as a generalised space.
Second, topos theory also tells us when these generalised spaces of models are equivalent.8 We can therefore define a more general class of theories, including the original propositional theories, whose space of models are equivalent to those of a propositional theory. Call such theories _essentially propositional_, and let us also refer to their corresponding space of models as _localic spaces_.9 An important class of essentially propositional theories are those whose sorts are all free algebra constructions -- e.g. the natural numbers \(\mathbb{N}\), the integers \(\mathbb{Z}\), the rationals \(\mathbb{Q}\). A key example would be the localic reals, which we turn to next.
Footnote 9: More precisely, two theories \(\mathbb{T},\mathbb{T}^{\prime}\) define equivalent spaces of models iff their classifying toposes \(\mathcal{S}[\mathbb{T}]\simeq\mathcal{S}[\mathbb{T}^{\prime}]\) are equivalent. For details, see [23, Prop. 2.1.28].
#### 1.1.2. Localic Reals
This paper uses two different types of reals: the usual Dedekind reals, and the upper reals. Both reals are built up from the rationals but in different ways; this results in different topologies and therefore different subtleties in their analysis.
**Definition 1.10**.: Define the propositional theory of Dedekind reals \(\mathbb{T}_{\mathbb{R}}\) as follows:
**Signature \(\Sigma\)**: Propositional symbols \(P_{q,r}\), where \(q,r\in\mathbb{Q}\)
**Axioms**:
1. \(P_{q,r}\wedge P_{q^{\prime},r^{\prime}}\leftrightarrow\bigvee\{P_{s,t}\,|\, \max(q,q^{\prime})<s<t<\min(r,r^{\prime})\}\)
2. \(\top\to\bigvee\{P_{q-\epsilon,q+\epsilon}\,|\,q\in\mathbb{Q}\}\), for \(0<\epsilon\in\mathbb{Q}\)
**Discussion 1.11** (The points of \(\mathbb{T}_{\mathbb{R}}\)).: Notice again the difference with point-set topology. In point-set topology, we start with the underlying set of reals before equipping it with the usual topology; in the point-free setting, we start with a collection of symbols \(\{P_{qr}\}_{q,r\in\mathbb{Q}}\), before applying axioms to ensure that they formally behave like the rational-ended open intervals in \(\mathbb{R}\). The usual Dedekinds can then be obtained as completely prime filters of the Lindenbaum Algebra of \(\mathbb{T}_{\mathbb{R}}\). This illustrates what was mentioned before: the basic units for defining the space \(\mathbb{R}\) are the opens, no longer its underlying set of points.
**Definition 1.12** (Upper & Lower Reals).: Denote \(x\) to be a subset of the rationals. For suggestiveness, write \(x<q\) to mean \(q\in x\). An _upper real_ is a subset \(x\subseteq\mathbb{Q}\) that satisfies:
1. _Upward-closure._ If \(x<q\) and \(q<q^{\prime}\), then \(x<q^{\prime}\).
2. _Roundedness._ If \(x<q\), then there exists \(q^{\prime}<q\) such that \(x<q^{\prime}\).
for all \(q,q^{\prime}\in\mathbb{Q}\). Analogously, a _lower real_ is a subset \(x^{\prime}\subseteq\mathbb{Q}\) that is downward-closed and rounded.
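As a computational analogy (ours, not the paper's), an upper real can be encoded by the predicate \(q\mapsto(x<q)\) on \(\mathbb{Q}\): the encoding only ever confirms rational upper bounds. The sketch below encodes \(\sqrt{2}\), together with \(\infty\) (the empty subset) and \(-\infty\) (all of \(\mathbb{Q}\)), and refines a rational upper bound for \(\sqrt{2}\) from above.

```python
from fractions import Fraction

# An upper real is encoded by its set of strict rational upper bounds,
# i.e. by the predicate q |-> (x < q).
def sqrt2_upper(q: Fraction) -> bool:
    return q > 0 and q * q > 2          # q is a rational upper bound for sqrt(2)

def plus_infinity(q: Fraction) -> bool:
    return False                         # the empty subset of Q

def minus_infinity(q: Fraction) -> bool:
    return True                          # all of Q

assert sqrt2_upper(Fraction(3, 2))       # sqrt(2) < 3/2 is confirmed
assert not sqrt2_upper(Fraction(7, 5))   # 7/5 is not an upper bound, so it is rejected
assert not plus_infinity(Fraction(10**6)) and minus_infinity(Fraction(-10**6))

# Upper bounds can be refined from above; the encoding never certifies "q < sqrt(2)".
lo, hi = Fraction(0), Fraction(2)        # hi is a confirmed upper bound
for _ in range(20):
    mid = (lo + hi) / 2
    if sqrt2_upper(mid):
        hi = mid                         # a strictly smaller confirmed upper bound
    else:
        lo = mid                         # no positive information about sqrt(2) itself
print("a rational upper bound for sqrt(2):", hi, "~", float(hi))
```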
In fact, we can extend Definition 1.12 to give a more explicit description of the Dedekinds.
**Definition 1.13**.: A Dedekind real is a pair \((L,R)\) of subsets of \(\mathbb{Q}\) whereby
* \(R\) is an _inhabited10_ upper real, and \(L\) is an _inhabited_ lower real. Footnote 10: That is, there exists \(r\in\mathbb{Q}\) such that \(R<r\).
* \((L,R)\) are _separated_: \((L,R)\) are disjoint subsets of \(\mathbb{Q}\).
* \((L,R)\) are _located_: for any rationals \(q<r\), either \(q<L\) or \(R<r\).
**Remark 1.14**.: We have suppressed the syntactic details, but it is clear that Definitions 1.12 and 1.13 define predicate theories featuring \(\mathbb{Q}\) as a sort in their signatures. Nonetheless, since \(\mathbb{Q}\) is a free algebra construction, the theories are actually essentially propositional. One can also deduce this explicitly for Definition 1.13 by checking that its space of models is equivalent to those of \(\mathbb{T}_{\mathbb{R}}\) - see [20, SS4.7].
**Convention 1.15**.: We denote the space of Dedekinds as \(\mathbb{R}\). By Remark 1.14, the points of \(\mathbb{R}\) may be regarded as models of \(\mathbb{T}_{\mathbb{R}}\) or the theory described by Definition 1.13. For suggestiveness, whenever we want to work with a generic Dedekind \(x\), we write \(x\in\mathbb{R}\).
We conclude with some basic facts about the upper reals. Notice Definition 1.12 allows the upper real \(x=\emptyset\) or \(x=\mathbb{Q}\) (corresponding to \(\infty\) and \(-\infty\) respectively). Excluding these cases, an upper real can be thought of as approximating a number from above whereas a Dedekind real approximates the number from both above and below. In particular:
**Fact 1.16**.: Denote \(\widehat{[-\infty,\infty)}\) to be the space of upper reals excluding \(\infty\).11
Footnote 11: In other words, the space of inhabited upper reals – see the first bullet point of Definition 1.13.
1. There exists a natural map \[R\colon\mathbb{R}\longrightarrow\widehat{[-\infty,\infty)}\] where given a Dedekind real \(x=(L_{x},R_{x})\), \(R\) sends \(x\mapsto R_{x}\). One can define an analogous map from the Dedekinds to the lower reals by sending \(x\mapsto L_{x}\). We remark that the maps \(x\mapsto R_{x}\) and \(x\mapsto L_{x}\) are monic, but they do not define embeddings: \(\mathbb{R}\) is most certainly not a subspace of \(\widehat{[-\infty,\infty)}\) due to the difference in topology. For details, see [20, Corollary 1.27].
2. Given an indexed set of reals \(\{x_{i}\}_{i\in I}\), we define \[\inf_{i\in I}x_{i}:=\bigcup_{i\in I}\{q\in\mathbb{Q}\,|\,x_{i}<q\}\] \[\sup_{i\in I}x_{i}:=\bigcup_{i\in I}\{q\in\mathbb{Q}\,|\,q<x_{i}\}\] Classically, bounded infima/suprema of Dedekinds are regarded as defining Dedekinds; constructively, however, this is no longer the case. Examining the above definitions, notice that a set-indexed \(\inf\) of reals (Dedekind or upper) defines an upper real whereas a set-indexed \(\sup\) of reals (Dedekind or lower) defines a lower real. For details on why bounded upper/lower reals cannot be (constructively) considered as Dedekinds, see Discussion 2.23.
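Continuing the computational analogy (again ours): a set-indexed infimum of upper reals is implemented by an existential search for a witness \(i\) with \(x_{i}<q\). One can confirm \(\inf_{i}x_{i}<q\) by exhibiting a witness, but the encoding offers no way to certify \(q<\inf_{i}x_{i}\), which is one way to see why such an infimum is an upper real rather than a Dedekind.

```python
from fractions import Fraction

# A family of upper reals x_i = 1 + 1/i, each encoded by its predicate q |-> (x_i < q).
def x(i: int):
    return lambda q: q > 1 + Fraction(1, i)

# inf_i x_i < q  iff  some witness i has x_i < q; the infimum is again an upper real.
def inf_below(q: Fraction, search_bound: int = 10**4) -> bool:
    return any(x(i)(q) for i in range(1, search_bound))

print(inf_below(Fraction(101, 100)))   # True: x_101 = 1 + 1/101 < 101/100
print(inf_below(Fraction(2)))          # True: already x_2 = 3/2 < 2
print(inf_below(Fraction(1)))          # no witness found (the infimum equals 1, not < 1)
```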
We enforce the following conventions when describing subspaces of the upper reals; the reader unfamiliar with Scott-continuity may wish to treat them as a black box (for details, see e.g. [20, §1] or [20]).
**Convention 1.17** (Subspaces of Upper Reals).:
1. The space of upper reals comes equipped with a Scott topology: \(x\sqsubseteq y\) iff \(x\geq y\). Notice that \(\sqsubseteq\) is opposite to the usual numerical order. To reflect this, we use arrows on top of the spaces to show the direction of the refinement under their respective specialisation orders - e.g. \(\widehat{[0,\infty)}\), \(\widehat{[-\infty,\infty)}\) etc.
2. Notice that the previous one-sided intervals were closed at the arrowhead -- e.g. \(0\) was included in \(\widehat{[0,\infty)}\) and \(-\infty\) in \(\widehat{[-\infty,\infty)}\). In fact, this is canonical -- _all_ upper real intervals must be closed at the arrowhead. Why? Answer: the upper reals possess the Scott topology, and so all subspaces of the uppers must be closed under arbitrary directed joins with respect to \(\sqsubseteq\).
#### 1.1.3. Interlude: Classical vs. Point-free Perspective
The claim that Dedekind reals can be characterised as models of a first-order theory will be provocative to the model theorist. In classical model theory, Dedekind reals typically arise not as models but as _types_ over the model \(M=(\mathbb{Q},<)\), i.e. the rationals considered as a dense linear order. We contextualise this via the language of filters:
**Discussion 1.18** (Types vs. Models as Filters).: Informally, a (complete) type \(p\) over a model \(M\) corresponds to an ultrafilter of the Boolean Algebra of definable subsets of \(M\), which we denote as \(\mathcal{B}_{M}\). Analogously, Section 1.1.1 tells us that models of a propositional theory \(\mathbb{T}\) correspond to the completely prime filters of its Lindenbaum Algebra \(\Omega_{\mathbb{T}}\). The appearance of filters in both contexts is suggestive, but there is a subtlety. Since the geometric syntax does not have negation (cf. Remark 1.5), the Lindenbaum Algebra \(\Omega_{\mathbb{T}}\) is generally not Boolean (and so its completely prime filters are generally not ultrafilters either). However, when \(\Omega_{\mathbb{T}}\) is in fact Boolean, then the prime filters of \(\Omega_{\mathbb{T}}\) turn out to be precisely its ultrafilters and so the two notions coincide.12
Footnote 12: Some care, however, needs to be taken regarding the distinction between prime vs. completely prime filters.
We can also phrase this in the language of points (in the sense of Definition 1.2). A complete type over \(M\) may be characterised as a Boolean homomorphism \(\mathcal{B}_{M}\to\{0,1\}\) to the two-element Boolean algebra, whereas a global point of a localic space \([\mathbb{T}]\) corresponds to a frame homomorphism \(\Omega_{\mathbb{T}}\to\Omega\), where \(\Omega\) is the frame of truth values. Classically, these notions more or less coincide since LEM implies \(\Omega\cong\{0,1\}\), but constructively this is no longer the case (cf. Remark 1.3).
**Discussion 1.19**.: Viewed logically, the example of the Dedekind reals brings into focus a challenging connection between the model theorist's type spaces and the topos theorist's point-free spaces. Parsing their similarities and differences reflects the contrasting legacies of Shelah vs. Grothendieck on the development of modern logic.
On the one hand, the model theorist and the topos theorist have developed very different understandings of logical complexity. Whereas model theorists typically tie the logical complexity of the theory to combinatorial questions (e.g. number of types, number of non-isomorphic models etc.), the topos theorist typically ties the logical complexity of a theory to its syntactic expressiveness (e.g. coherent, geometric, propositional vs. predicate etc. -- see [10, Remark D1.4.14]). In the case of Dedekind reals, the model theorist regards them as evidence that the theory \(\operatorname{Th}(\mathbb{Q},<)\) is complex [more precisely, that \(\operatorname{Th}(\mathbb{Q},<)\) is unstable], whereas the topos theorist regards the theory of Dedekind reals as being particularly well-behaved [more precisely, it is an essentially propositional theory].
On the other hand, there appears to be some convergence in attitudes regarding the desired geometry of these logical spaces (compare e.g. the Galois-theoretic ideas in Joyal-Tierney's monograph [13] with Hrushovski's recent preprints [12, 13]). Further development of these structural connections appear important, and will be the subject of future work.
**Warning 1.20**.: One should not jump to conclusions about the space of Dedekinds being equivalent to the Stone space of 1-types \(\operatorname{S}_{1}(\mathbb{Q},<)\). Unlike the latter, the space of Dedekinds does **not** contain infinities or infinitesimals -- its models are just the real numbers belonging to the interval \((-\infty,\infty)\). We can also understand the difference topologically: \(\operatorname{S}_{1}(\mathbb{Q},<)\) is a totally disconnected compact space whereas the space of Dedekind reals \(\mathbb{R}\) is connected and non-compact.
Aside from Warning 1.20, there is a more fundamental reason to be cautious about overextending the informal picture of "Models of a geometric theory \(\approx\) Types arising from model theory". As already mentioned, completely prime filters are not ultrafilters in general. An illuminating example would be the upper reals from before.
**Discussion 1.21**.: Recall that an upper real is a real number that only records the rationals strictly larger than itself and nothing else. This results in some striking differences with the Dedekinds. Certainly the standard upper real does not correspond to a complete type over \((\mathbb{Q},<)\) (unless, of course, the upper real happens to be \(\infty\) or \(-\infty\)). Further, since an upper real is blind to the rationals less than itself, we also cannot say when one upper real is _strictly_ smaller/greater than another. Moreover, recall that an upper real is defined as a well-behaved subset of \(\mathbb{Q}\), and these subsets are collectively ordered by subset inclusion. As such, the upper reals combine to form a non-Hausdorff space whose points 'contain' each other.
**Discussion 1.22**.: This paper is interested in analysing the multiplicative seminorms on certain rings, natural in the context of Berkovich geometry. So why might the upper reals be a natural choice to work with as opposed to the usual Dedekinds? The reason will become clearer in due course. For now, just for orientation, let us note that Summary Theorem 0.1 tells us that multiplicative seminorms on \(K[T]\) correspond to a sequence of nested discs \(D_{r_{1}}(k_{1})\supseteq D_{r_{2}}(k_{2})\dots\), suggesting that these multiplicative seminorms are being approximated "from above". Topologically, this matches more closely with the upper reals than the Dedekinds (cf. our comments before Fact 1.16.)
### The Berkovich Perspective
Let us review in broad strokes the motivation behind Berkovich geometry. In complex algebraic geometry, complex varieties can be regarded as complex manifolds and thus can be studied using powerful tools from complex analysis and differential geometry (see e.g. [10, Appendix B]). The main barrier to playing the same game for non-Archimedean varieties is that the base field \(K\) is totally disconnected. Berkovich's solution to this problem of disconnectedness is to fill in the "missing points", which we now discuss.
#### 1.2.1. \(\dots\)on algebraic varieties
Let \((K,|\cdot|)\) be a non-Archimedean field, and \(X\) be an affine variety over \(K\), i.e. \(X\) is the zero locus in \(K^{n}\) of a finite set of polynomials \(f_{1},\dots,f_{m}\in K[x_{1},\dots,x_{n}]\). Recall that the coordinate ring of \(X\) is defined as
\[K[X]:=K[T_{1},\dots,T_{n}]/(f_{1},\dots,f_{m}) \tag{1}\]
One easily checks that every point \(k\in X(K)\) gives rise to a multiplicative seminorm on \(K[X]\)
\[|\cdot|_{k}\colon K[X] \longrightarrow\mathbb{R}_{\geq 0} \tag{2}\] \[f \longmapsto|f(k)|,\]
otherwise known as the _evaluation seminorm at \(k\)_. Extending this insight, one can then define the analytification of \(X\) in terms of seminorms on its coordinate ring.
**Definition 1.23** (Analytification of Affine Varieties).: Fix a valued field \((K,|\cdot|)\), and let \(X\) be an affine variety over \(K\) with associated coordinate ring \(K[X]\).
1. Given a multiplicative seminorm on \(K[X]\), which we denote \[|\cdot|_{x}\colon K[X]\longrightarrow\mathbb{R}_{\geq 0}\] (3) we say that \(|\cdot|_{x}\)_extends the given norm on \(K\)_ if \(|k|_{x}=|k|\) for all \(k\in K\). Notice that this includes the evaluative seminorms.
2. The _analytification of \(X\)_, denoted \(X^{\mathrm{an}}\), is defined as the following point-set space: * _Underlying set of_ \(X^{\mathrm{an}}\) = the set of all multiplicative seminorms on \(K[X]\) extending the original norm \(|\cdot|\) on the base field \(K\); * _Topology on_ \(X^{\mathrm{an}}\) = the weakest topology such that all maps of the form \[\psi_{f}\colon X^{\mathrm{an}} \longrightarrow\mathbb{R}_{\geq 0}\] (4) \[|\cdot|_{x} \longmapsto|f|_{x}.\] are continuous, for any \(f\in K[X]\), which we shall call the _Berkovich topology_.13 For clarity, we emphasise that \(\psi_{f}\) is a mapping on _all_ multiplicative seminorms on \(K[X]\) extending \(|\cdot|\) -- not just the evaluative seminorms from before. Footnote 13: This is also sometimes called the _Gelfand topology_.
**Example 1.24**.: Given a non-Archimedean field \((K,|\cdot|)\), the underlying set of the Berkovich Affine line \(\mathbb{A}^{1}_{\mathrm{Berk}}\) is the set of multiplicative seminorms
\[|\cdot|_{x}\colon K[T]\longrightarrow\mathbb{R}_{\geq 0} \tag{5}\]
extending the norm on \(K\), as already seen in Summary Theorem 0.1.
**Remark 1.25**.: The expert reader may have noticed Definition 1.23 actually only defines the underlying topological space of the Berkovich analytification. To be correct, let us mention that the full definition of Berkovich analytification also includes a structure sheaf of analytic functions on \(X^{\mathrm{an}}\), yielding a locally ringed space. For details, see [1, Ch. 2-3].
**Discussion 1.26**.: Regarding the topological aspects of Definition 1.23:
1. Despite its point-set formulation, the Berkovich analytification \(X^{\mathrm{an}}\) is suggestive from the point-free perspective since the points of these spaces correspond to models of theories. In particular, let us mention [15], which highlights a subtle interaction between topology and algebra. Namely, if we define absolute values on \(\mathbb{Z}\) using upper reals, then the Scott topology prevents us from defining positive definiteness. Conversely, if we consider absolute values on a field (e.g. \(\mathbb{Q}\)), then the absolute values must be valued in the Dedekinds - this in turn can be shown to imply positive definiteness. This extends Discussion 1.22 on why it may be natural to work with the upper reals.
2. The Berkovich topology can be more explicitly characterised as the weakest topology such that for all \(f\in K[X]\) and for all \(\alpha\in\mathbb{R}\), the sets \[U(f,\alpha) :=\{|\cdot|\in X^{\mathrm{an}}\,\big{|}\,|f|<\alpha\}\] (6) \[V(f,\alpha) :=\{|\cdot|\in X^{\mathrm{an}}\,\big{|}\,|f|>\alpha\}\] are open in \(X^{\mathrm{an}}\). Notice then that Berkovich topology crucially depends on the fact that \(|\cdot|\) is valued in the Dedekinds as opposed to say, the upper reals.14 Footnote 14: Why? Note that \(V(f,\alpha)\) from Equation (6) would no longer be well-defined since an absolute value \(|\cdot|\colon K[X]\to\overleftarrow{[0,\infty)}\) valued in the upper reals is unable to sense which values are smaller than \(|f|\), only those larger than it. See Discussion 1.21.
3. Why should we regard the space of multiplicative seminorms as filling in the "missing points" of the base field? Notice that Definition 1.23 is defined for any valued field \(K\), not necessarily non-Archimedean. By the Gelfand-Mazur Theorem, every multiplicative seminorm on \(\mathbb{C}[T]\) corresponds to an evaluative seminorm, and so one deduces \(\mathbb{A}^{1}_{\mathrm{Berk}}\cong\mathbb{C}\) when \(K=\mathbb{C}\). However, when \(K\) is non-Archimedean, then there exist more multiplicative seminorms than just the evaluative seminorms.
In fact, Definition 1.23 can be extended to the more general case of \(K\)-schemes of (locally) finite type (for details, see [1, Ch. 2 - 3]). One important appeal of the Berkovich analytification is that it constructs well-behaved spaces that are sensitive to the topological character of the original variety.
**Summary Theorem 1.27** ([1, §3.4-3.5]).: _Let \(X\) be a \(K\)-scheme of finite type. Then \(X^{\mathrm{an}}\) is locally compact and locally path-connected. Furthermore, we also have the following GAGA-type results:_
1. \(X\) _is a connected scheme iff_ \(X^{\mathrm{an}}\) _is connected;_
2. \(X\) _is a separated scheme iff_ \(X^{\mathrm{an}}\) _is Hausdorff;_
3. \(X\) _is a proper scheme iff_ \(X^{\mathrm{an}}\) _is compact._
As a beautiful example of the interaction between (classical) logic and Berkovich geometry, let us also mention the following result by Hrushovski and Loeser.
**Theorem 1.28** ([1, Theorem 14.4.1]).: _Let \(X\) be a \(K\)-scheme of finite type. Then, its Berkovich analytification \(X^{\mathrm{an}}\) is locally contractible._
Prior to Theorem 1.28, local contractibility was only known in the case of smooth Berkovich analytic spaces [1]; by contrast, the model-theoretic techniques developed by Hrushovski and Loeser [1] were sufficiently general to handle both the singular and non-singular cases.
#### 1.2.2....on Banach rings
In algebraic geometry, the basic building blocks are affine schemes, which are spectra of commutative rings. In Berkovich geometry, the analogous building blocks are the so-called Berkovich spectra of commutative Banach rings.
**Definition 1.29** (The Berkovich Spectrum).: Let \((\mathcal{A},||\cdot||)\) be a commutative Banach ring15 with identity.
Footnote 15: Recall: a Banach ring \((\mathcal{A},||\cdot||)\) is a normed ring that is complete with respect to \(||\cdot||\).
1. A _bounded_ multiplicative seminorm on \(\mathcal{A}\) is a multiplicative seminorm \[|\cdot|_{x}\colon\mathcal{A}\longrightarrow\mathbb{R}_{\geq 0}\] (7) that satisfies the inequality \(|f|_{x}\leq||f||\) for all \(f\in\mathcal{A}\).
2. The _Berkovich Spectrum_\(\mathcal{M}(\mathcal{A})\) is the set of all bounded multiplicative seminorms on \(\mathcal{A}\), equipped with the weakest topology such that the map \[\psi_{f}\colon\mathcal{M}(\mathcal{A}) \longrightarrow\mathbb{R}_{\geq 0}\] (8) \[|\cdot|_{x} \longmapsto|f|_{x}\] is continuous for all \(f\in\mathcal{A}\).
**Convention 1.30** (On the Berkovich Spectrum).:
1. In this section, all seminorms should be assumed to be multiplicative unless otherwise stated.
2. The given norm on the Banach ring \(\mathcal{A}\) will always be denoted as \(||\cdot||\). By contrast, to emphasise that \(\mathcal{M}(\mathcal{A})\) is a topological space, its points will be represented as \(|\cdot|_{x}\), or even \(x\in\mathcal{M}(\mathcal{A})\) when the context is clear.
We illustrate this construction with examples. A few orienting remarks are in order. First, notice there is nothing specifically non-Archimedean about Definition 1.29 -- in fact, as we shall see in Example 1.31, the Berkovich spectrum of \(\mathbb{Z}\) yields a space that naturally includes both Archimedean and non-Archimedean components. Another notable feature of Berkovich geometry is that the basic setup accommodates both the trivially and non-trivially valued fields -- see Example 1.32. Interestingly, this flexibility appears to be abandoned/lost in many modern approaches to the subject (see e.g. [1, 1]). Finally, the generality of Definition 1.29 also allows us to define the Berkovich spectrum of an important class of Banach rings known as \(K\)-affinoid algebras -- its role in Berkovich geometry will be discussed in Example 1.34.
**Example 1.31**.: The ring of integers \((\mathbb{Z},|\cdot|_{\infty})\) equipped with the usual Euclidean norm is a Banach ring. The characterisation of its Berkovich spectrum \(\mathcal{M}(\mathbb{Z})\) usually proceeds by a series of case-splittings.
* _Stage 1: Identify the seminorms that fail positive definiteness._ Any point \(x\in\mathcal{M}(\mathbb{Z})\) corresponds to a seminorm \(|\cdot|_{x}\) on \(\mathbb{Z}\), which induces a prime ideal \(\mathfrak{p}_{x}=\{n\,|\,|n|_{x}=0\}\subseteq\mathbb{Z}\). In the case where \(\mathfrak{p}_{x}=p\mathbb{Z}\), deduce that \(|\cdot|_{x}\) induces the unique trivial seminorm on \(\mathbb{F}_{p}\), whereby \[|n|_{x}=|n|_{p,\infty}:=\begin{cases}0&\quad\text{if $p|n$}\\ 1&\quad\text{otherwise.}\end{cases}\] (9) Otherwise, note that the induced prime ideal \(\mathfrak{p}_{x}=(0)\) must be the zero ideal.
* _Stage 2: Classify the remaining seminorms._ Suppose \(\mathfrak{p}_{x}=(0)\). By Ostrowski's Theorem, deduce that \(|\cdot|_{x}\) must be one of the following: _Case 2a:_ \(|\cdot|_{x}=|\cdot|_{0}\) is the trivial norm on \(\mathbb{Z}\). _Case 2b:_ \(|\cdot|_{x}=|\cdot|_{\infty}^{\alpha}\) for some \(\alpha\in(0,1]\), where \(|\cdot|_{\infty}\) is the usual Euclidean norm. _Case 2c:_ \(|\cdot|_{x}=|\cdot|_{p}^{\alpha}\) for some \(\alpha\in(0,\infty)\), where \(|\cdot|_{p}\) is the standard \(p\)-adic norm. (A small numerical sanity check of these cases is sketched below.)
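The case analysis above can be checked on small integers. The sketch below (our own illustration; the sample range, the particular exponents, and the use of floating point are incidental) implements a representative of each case and verifies multiplicativity, the triangle inequality, and the boundedness \(|n|_{x}\leq|n|_{\infty}\).

```python
import math
from itertools import product

def v_p(n: int, p: int) -> int:
    """p-adic valuation of a non-zero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def abs_trivial(n):                        # Case 2a: the trivial norm |.|_0
    return 0.0 if n == 0 else 1.0

def abs_arch(n, a=0.5):                    # Case 2b: |.|_infty^alpha with alpha in (0,1]
    return abs(n) ** a

def abs_padic(n, p=3, a=1.0):              # Case 2c: |.|_p^alpha with alpha in (0,infty)
    return 0.0 if n == 0 else float(p) ** (-a * v_p(n, p))

def abs_p_inf(n, p=3):                     # |.|_{p,infty}: the seminorm with kernel pZ
    return 0.0 if n % p == 0 else 1.0

for f in (abs_trivial, abs_arch, abs_padic, abs_p_inf):
    for n, m in product(range(-12, 13), repeat=2):
        assert math.isclose(f(n * m), f(n) * f(m), abs_tol=1e-12)  # multiplicativity
        assert f(n + m) <= f(n) + f(m) + 1e-12                     # triangle inequality
        assert f(n) <= abs(n) + 1e-12                              # boundedness |n|_x <= |n|_infty
print("each sampled point of M(Z) is a bounded multiplicative seminorm on Z")
```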
Assembling this data together, one obtains the structure of \(\mathcal{M}(\mathbb{Z})\) depicted in Figure 1.
**Example 1.32**.: Fix an algebraically-closed non-Archimedean field \((K,|\cdot|)\). We define the Banach ring \((\mathcal{A},||\cdot||)\) whereby:
* \(\mathcal{A}\) is the ring of power series converging in radius \(R\) \[\mathcal{A}=K\{R^{-1}T\}:=\left\{f=\sum_{i=0}^{\infty}c_{i}T^{i}\,\middle|\,c_{i }\in K,\lim_{i\to\infty}|c_{i}|R^{i}=0\right\}\] (10)
* \(||\cdot||\) is the so-called _Gauss norm_ \[||f||:=\sup_{i}|c_{i}|R^{i},\quad\text{where }f\in\mathcal{A}.\] (11)
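As a toy computation (ours, not the paper's), take \(\mathbb{Q}\) with the \(2\)-adic absolute value as a crude stand-in for \(K\) (the actual hypothesis asks for an algebraically closed, complete field) and a polynomial as a finitely-supported element of \(K\{R^{-1}T\}\). The sketch below computes the Gauss norm and checks that it dominates the evaluation seminorm \(f\mapsto|f(k)|\) at points \(k\) with \(|k|\leq R\), an instance of the boundedness condition \(|f|_{x}\leq||f||\) from Definition 1.29.

```python
from fractions import Fraction

p = 2

def abs_p(x: Fraction) -> Fraction:
    """The 2-adic absolute value on Q (a stand-in for the norm on K)."""
    if x == 0:
        return Fraction(0)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(1, p) ** v

R = Fraction(1)                                    # radius of convergence

def gauss_norm(coeffs):
    """||f|| = sup_i |c_i| R^i for f = sum_i c_i T^i (finite support here)."""
    return max(abs_p(c) * R ** i for i, c in enumerate(coeffs))

def evaluate(coeffs, k):
    return sum(c * k ** i for i, c in enumerate(coeffs))

f = [Fraction(5), Fraction(1, 2), Fraction(4), Fraction(3)]   # f = 5 + T/2 + 4T^2 + 3T^3
print("Gauss norm ||f|| =", gauss_norm(f))                    # |1/2|_2 = 2, so ||f|| = 2

for k in [Fraction(0), Fraction(1), Fraction(2), Fraction(6)]:
    assert abs_p(k) <= R                                      # k lies in the disc of radius R
    assert abs_p(evaluate(f, k)) <= gauss_norm(f)             # |f(k)| <= ||f||
print("the evaluation seminorms at these points are bounded by the Gauss norm")
```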
The description of \(\mathcal{M}(\mathcal{A})\) differs depending on whether \(K\) is trivially or non-trivially valued.
* _Case 1:_ \(K\) _is trivially valued._ In which case, \[K\{R^{-1}T\}=\begin{cases}K[[T]]&\quad\text{if }R<1\\ K[T]&\quad\text{if }R\geq 1\end{cases}\] (12) where \(K[[T]]\) is the formal power series ring and \(K[T]\) is the polynomial ring.16 When \(R<1\), one checks that the map \(|\cdot|_{x}\mapsto|T|_{x}\) yields a homeomorphism \(\mathcal{M}(\mathcal{A})\cong[0,R]\); when \(R\geq 1\), a more involved argument shows \(\mathcal{M}(\mathcal{A})\) has the structure of \(\mathcal{M}(\mathbb{Z})\) (see Figure 1). Footnote 16: Why? Notice if \(R<1\), then \(\lim_{i\to\infty}|c_{i}|R^{i}\) is always zero since \(|c_{i}|=0\) or 1; if \(R\geq 1\) instead, then the sequence \(\{c_{i}\}\) must eventually be 0.
* _Case 2:_ \(K\) _is non-trivially valued._ In which case, the characterisation of \(\mathbb{A}^{1}_{\mathrm{Berk}}\) in Summary Theorem 0.1 extends here as well: all points \(x\in\mathcal{M}(\mathcal{A})\) can be realised as \[|\cdot|_{x}=\lim_{i\to\infty}|\cdot|_{D_{r_{i}}(k_{i})}\] (13) for some nested descending sequence of discs \[D_{r_{1}}(k_{1})\supseteq D_{r_{2}}(k_{2})\supseteq\ldots\] (14) where \(|\cdot|_{D_{r}(k)}\) is a multiplicative seminorm canonically associated to the closed disc \[D_{r}(k):=\{b\in K\,|\,|b-k|\leq r\}.\] (15) Discs of this form shall be referred to as _rigid discs_. This description of \(\mathcal{M}(\mathcal{A})\) results in a complicated, infinitely branching tree.
The reader may wonder: where did we use the hypothesis that \(K\) was non-trivially valued? Notice that the rigid discs in Equation (15) are defined as subsets of \(K\). As such, in order for their radii to
be well-defined (i.e. if \(D_{r}(k)=D_{r^{\prime}}(k^{\prime})\) then \(r=r^{\prime}\)), the base field \(K\) is forced to be non-trivially valued.17 Footnote 17: Why? If \(K\) is equipped with a trivial norm, then by definition \(|k|=1\) for all \(k\neq 0\) in \(K\). In which case, \(D_{r}(k)=\{k\}\) for any \(k\in K\) whenever \(r<1\). In fact, since \(|\cdot|_{D_{r}(k)}\) is defined by sending \[|f|_{D_{r}(k)}:=\sup_{z\in D_{r}(k)}|f(z)|,\] (16) these multiplicative seminorms begin to collapse into one another when \(K\) is trivially-valued, causing the original approximation argument to break down.
**Remark 1.33**.: The Berkovich Affine Line \(\mathbb{A}^{1}_{\mathrm{Berk}}\) is defined as the space of multiplicative seminorms on \(K[T]\). Is \(\mathbb{A}^{1}_{\mathrm{Berk}}\) therefore just another example of a Berkovich spectrum? The answer, perhaps surprisingly, is generally no. Recall: Berkovich spectra are defined for _Banach Rings_. When \(K\) is non-trivially valued, one can check that \(K[T]\) fails to be complete with respect to the Gauss norm \(||\sum a_{i}T^{i}||=\max_{i}|a_{i}|\). However, two important caveats:
1. One can also check that \(K\{R^{-1}T\}\) is in fact a Banach ring (with respect to the appropriate Gauss norm), and that \(\mathbb{A}^{1}_{\mathrm{Berk}}\) can be represented as an infinite union of Berkovich spectra \[\mathbb{A}^{1}_{\mathrm{Berk}}\cong\bigcup_{R>0}\mathcal{M}(K\{R^{-1}T\}).\]
2. When \(K\) is trivially valued, then \(K[T]\) turns out to be complete with respect to \(||\cdot||\) and thus defines a Banach ring; in which case, the two constructions do coincide. This gives another way of reading the difference between the trivially vs. non-trivially valued fields in the Berkovich setting.
For details, see e.g. [1, Example 1.4.4] or [1, Ch. 1-2].
**Example 1.34**.: Extending Example 1.32, given \(R_{1},\ldots,R_{n}>0\), define the ring
\[K\{R_{1}^{-1}T_{1},\ldots,R_{n}^{-1}T_{n}\}:=\left\{f=\sum_{\mathsf{v}\in \mathbb{N}^{n}}a_{\mathsf{v}}T^{\mathsf{v}}\,\Bigg{|}\,a_{\mathsf{v}}\in K, \lim_{|\mathsf{v}|\to\infty}|a_{\mathsf{v}}|R^{\mathsf{v}}=0\right\}, \tag{17}\]
where \(|\mathsf{v}|=\mathsf{v}_{1}+\cdots+\mathsf{v}_{n}\) and \(R^{\mathsf{v}}=R_{1}^{\mathsf{v}_{1}}\ldots R_{n}^{\mathsf{v}_{n}}\). To turn \(K\{R_{1}^{-1}T_{1},\ldots,R_{n}^{-1}T_{n}\}\) into a Banach ring, we equip it with the Gauss norm \(||\cdot||\) where
\[||f||:=\sup_{\mathsf{v}}|a_{\mathsf{v}}|R^{\mathsf{v}}. \tag{18}\]
When \(R_{i}=1\) for all \(i\), then \(K\{R_{1}^{-1}T_{1},\ldots,R_{n}^{-1}T_{n}\}\) is called the _Tate Algebra_.
These Banach rings play an important role in Berkovich geometry because they allow us to define _\(K\)-affinoid algebras_, which are the quotients of these power series rings. More precisely, a \(K\)-affinoid algebra \(\mathcal{A}\) is a commutative Banach ring for which there exists an admissible epimorphism
\[K\{R_{1}^{-1}T_{1},\ldots,R_{n}^{-1}T_{n}\}\twoheadrightarrow\mathcal{A}. \tag{19}\]
This should be understood as being the analogues of quotients of polynomial rings in classical scheme theory; in particular, one constructs a Berkovich \(K\)-analytic space by gluing together the Berkovich spectra \(\mathcal{M}(\mathcal{A})\) of these \(K\)-affinoid algebras.
## 2. Berkovich's Classification Theorem, Revisited
An organising theme of this section is the language of filters, which gives a transparent way of understanding how certain key notions in topology, logic and non-Archimedean geometry may interact.
We motivate our study by way of a biased historical overview. On the side of geometry, the fact that the points of a Berkovich spectrum18\(\mathcal{M}(\mathcal{A})\) may be characterised as ultrafilters was already known by the 1990s [1, Remark 2.5.21]; on the side of logic, the fact that the (complete) types over a model may also be characterised as ultrafilters was well understood by the 1960s [20], if not earlier. Yet it was only within the last 10 years that the two perspectives started to converge. Most notably, fixing a valued field \(K\) of rank 1, Hrushovski and Loeser [1, SS14.1] showed that the Berkovich analytification of any
quasi-projective variety \(V\) over \(K\) can be described using the language of definable types (cf. item (iii) of Summary Theorem 0.1). The power of this logical perspective may be measured by the fact that the authors were able to establish many deep results (e.g. Theorem 1.28) that were inaccessible to previous methods (at least, without relying on stronger hypotheses e.g. smoothness).
This sets up our present investigation. The understanding that models of a propositional geometric theory can be characterised as completely prime filters is well known to topos theorists, although the connections with model-theoretic types appear to be undeveloped (but see Discussion 1.18). In this section, we follow the model theorist's cue and use point-free techniques to study the points of the Berkovich spectra \(\mathcal{M}(K\{R^{-1}T\})\) from Example 1.32.
### Berkovich's Disc Theorem
We fix the following hypothesis for the rest of this section.
**Hypothesis 2.1**.:
1. \(K\) _is an algebraically closed field, complete with respect to a non-Archimedean norm_ \(|\cdot|\)_. We emphasise that_ \(|\cdot|\) _need not be non-trivial._
2. _Abusing notation, we use_ \(K\) _to denote its underlying set of points._19__ Footnote 19: This is all standard if one works classically. In point-set topology, one considers the points of a space as a set of elements (in contrast to point-free topology), and thus it is natural to denote the underlying set also as \(K\). However, subtleties arise in point-free topology – see Summary 2.24 for more discussion.
3. \(R\) _denotes any positive Dedekind real_ \(R>0\)_, and_ \(Q_{+}\) _denotes the set of positive rationals._
4. _Define_ \(K_{R}:=\{k\in K\,|\,|k|\leq R\}\)_._
5. _Following Example_ 1.32_: define a Banach ring_ \((\mathcal{A},||\cdot||)\)_, where_ \(\mathcal{A}=K\{R^{-1}T\}\) _denotes the ring of power series converging on radius_ \(R\)_, and_ \(||\cdot||\) _denotes the associated Gauss norm._
In order to classify the bounded multiplicative seminorms on \(\mathcal{A}\), we first reduce our study to something algebraically simpler.
**Definition 2.2** (Bounded \(K\)-Seminorms).:
1. Define the space of linear polynomials \(\mathcal{A}_{\mathrm{Lin}}\) as \[\mathcal{A}_{\mathrm{Lin}}:=\{aT-b\,|\,a,b\in K\}\cong K^{2}.\] (20) In particular, by setting \(a=0\), notice that \(K\subset\mathcal{A}_{\mathrm{Lin}}\).
2. We define a \(K\)_-seminorm_ on \(\mathcal{A}_{\mathrm{Lin}}\) to be an upper-valued map \[|\cdot|_{x}\colon\mathcal{A}_{\mathrm{Lin}}\longrightarrow\overleftarrow{[0,\infty)}\] (21) such that * _(Preserves constants)_ \(|a|_{x}=\) the right Dedekind section of \(|a|\); * _(Semi-multiplicative)_ \(|aT|_{x}=|a|\cdot|T|_{x}\); * _(Ultrametric Inequality)_ \(|f+f^{\prime}|_{x}\leq\max\{|f|_{x},|f^{\prime}|_{x}\}\); for all \(a\in K\), and \(f,f^{\prime}\in\mathcal{A}_{\mathrm{Lin}}\).
3. We define the _Gauss Norm on_ \(\mathcal{A}_{\mathrm{Lin}}\) as \[||aT-b||:=\text{right Dedekind section of }\max\{|a|R,|b|\}\] where \(aT-b\in\mathcal{A}_{\mathrm{Lin}}\). A \(K\)-seminorm \(|\cdot|_{x}\) is called _bounded_ if \(|\cdot|_{x}\leq||\cdot||\).
**Remark 2.3**.: It is well-known [1, Lemma 1.1] that any bounded multiplicative seminorm \(|\cdot|_{x}\) satisfies \(|a|_{x}=|a|\) and \(|f+g|_{x}\leq\max\{|f|_{x},|g|_{x}\}\) for any \(f,g\in\mathcal{A}\) -- this justifies the axioms we chose in Definition 2.2(ii). Notice also that we did not require a \(K\)-seminorm to be multiplicative (only semi-multiplicative), but this is reasonable since \(\mathcal{A}_{\mathrm{Lin}}\) is not closed under multiplication.
**Reminder 2.4**.: To eliminate potential confusion:
* A _multiplicative seminorm_ is defined on the whole ring \(\mathcal{A}\) of convergent power series.
* A \(K\)_-seminorm_, which is not multiplicative, is only defined on the space of linear polynomials \(\mathcal{A}_{\mathrm{Lin}}\).
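Before turning to the Preparation Lemma, a minimal example of Definition 2.2 may help (the example is ours, and will reappear later): fix \(k_{0}\in K_{R}\) and consider "evaluation at \(k_{0}\)",
\[|aT-b|_{x}:=\text{right Dedekind section of }|ak_{0}-b|,\qquad aT-b\in\mathcal{A}_{\mathrm{Lin}}.\]
It preserves constants (set \(a=0\)), is semi-multiplicative with \(|T|_{x}=|k_{0}|\), inherits the ultrametric inequality from \(|\cdot|\), and is bounded since \(|ak_{0}-b|\leq\max\{|a||k_{0}|,|b|\}\leq\max\{|a|R,|b|\}\), which is the Gauss norm of \(aT-b\).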
We justify the reduction to linear polynomials in the following Preparation Lemma.
**Lemma 2.5** (Preparation Lemma).:
1. _Let_ \((\mathcal{B},||\cdot||_{\mathcal{B}})\) _be an arbitrary Banach ring with dense subring_ \(\mathcal{B}^{\prime}\)_. Then, any bounded multiplicative seminorm on_ \(\mathcal{B}\) _is determined by its values on_ \(\mathcal{B}^{\prime}\)_._
2. _Any bounded multiplicative seminorm on_ \(\mathcal{A}\) _is determined by its values on linear polynomials_ \(T-a\)_, where_ \(a\in K_{R}\)_. The same also holds for bounded_ \(K\)_-seminorms on_ \(\mathcal{A}_{\mathrm{Lin}}\)_._
Proof.: (i) Fix a bounded multiplicative seminorm \(|\cdot|_{x}\) on \(\mathcal{B}\), and suppose \(f\in\mathcal{B}\). Next, for any positive rational \(\epsilon>0\) and any \(g\in\mathcal{B}^{\prime}\) such that \(||f-g||_{\mathcal{B}}<\epsilon\), compute:
\[|f|_{x}\leq|g|_{x}+|f-g|_{x}\leq|g|_{x}+\epsilon,\]
\[|g|_{x}\leq|f|_{x}+|f-g|_{x}\leq|f|_{x}+\epsilon.\]
Hence, deduce that if \(g\to f\) with respect to \(||\cdot||_{\mathcal{B}}\), then \(|g|_{x}\rightarrow|f|_{x}\). The claim then follows from \(\mathcal{B}^{\prime}\) being a dense subalgebra in \(\mathcal{B}\).
(ii) The following two basic observations get us almost all the way:
1. The polynomial ring \(K[T]\) is a dense subalgebra in \(\mathcal{A}\), since any \(f\in\mathcal{A}\) can be expressed as \[f=\sum_{i=0}^{\infty}a_{i}T^{i}=\lim_{n\rightarrow\infty}\sum_{i=0}^{n}a_{i}T ^{i}.\]
2. Since \(K\) is algebraically closed, any polynomial \(g\in K[T]\) can be expressed as \[g=c\cdot\prod_{j=1}^{m}(T-b_{j}),\] where \(c,b_{j}\in K\), for all \(1\leq j\leq m\).
Applying item (i) of the Lemma, Observations (a) and (b) together imply that any bounded multiplicative seminorm \(|\cdot|_{x}\) on \(\mathcal{A}\) is determined by its values on linear polynomials \(T-b\) with \(b\in K\).
In fact, a simple argument shows that we can restrict to just the linear polynomials \(T-a\) where \(a\in K_{R}\). Denote \(h:=T-b\) to be a linear polynomial where \(b\in K\). By polynomial division, we can represent the polynomial \(T\) as
\[T=qh+z.\]
Taking the Gauss norm \(||\cdot||\) on both sides of the equation, we obtain:
\[R=||T||=\max\{||qh||,\,||z||\},\]
and so deduce \(z\in K_{R}\). Since it is clear that \(q\) must be a unit in \(K\), we get
\[q^{-1}\cdot(T-z)=h=T-b,\]
and so our claim follows. The argument for the \(K\)-seminorm case is virtually identical.
**Remark 2.6**.: For the reader familiar with non-Archimedean geometry: the proof of Preparation Lemma 2.5 gives a prototype argument that can be generalised to prove the well-known Weierstrass Preparation Theorem [Jona, Prop. 1.9.10], which informally says: convergent power series often look like polynomials when restricted to a closed disc. Stated more precisely: all non-zero \(f\in\mathcal{A}\) with finite order \(m\geq 0\) can be represented as
\[f=W\cdot u(T), \tag{22}\]
where \(u(T)\) is a unit power series on the closed disc \(K_{R}\), and \(W\) is a monic polynomial (the "Weierstrass polynomial") with both degree and order \(m\). However, unlike e.g. [1], we shall avoid invoking the Weierstrass Preparation Theorem since it only applies to non-zero \(f\in\mathcal{A}\) with _finite order_. This will create issues if we wish to consider \(K\{R^{-1}T\}\) with irrational radius \(R\), which we can avoid with our approach.
We now introduce a special class of filters that highlights the topological structure of \(\mathcal{M}(\mathcal{A})\).
**Definition 2.7** (Formal Non-Archimedean Balls).: A _formal non-Archimedean ball_ is an element \((k,q)\in K_{R}\times Q_{+}\). We represent this using the more suggestive notation \(B_{q}(k)\), to emphasise that we should view this pair as denoting a disc of radius \(q\) centred at \(k\). In particular, we write:
\[B_{q^{\prime}}(k^{\prime})\subseteq B_{q}(k)\ \text{just in case}\ |k-k^{\prime}|<q \wedge q^{\prime}\leq q.\]
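As a small numerical illustration of Definition 2.7 (the field is our choice; any \(R\geq 1\), say, works here): take \(K=\mathbb{C}_{3}\), the completed algebraic closure of \(\mathbb{Q}_{3}\). Then \(B_{1/2}(0)\subseteq B_{1}(3)\), since \(|3-0|=\tfrac{1}{3}<1\) and \(\tfrac{1}{2}\leq 1\); by item (i) of Observation 2.8 below, we even have \(B_{1}(3)=B_{1}(0)\).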
**Observation 2.8**.: By decidability20 of \(<\) on \(Q_{+}\), one obtains the following sequents from the definition of \(\subseteq\) from Definition 2.7:
Footnote 20: That is, for any \(q,r\in Q_{+}\), we (constructively) know that \(q<r\), \(q=r\) or \(q>r\).
1. \(|k-k^{\prime}|<q\longrightarrow B_{q}(k^{\prime})=B_{q}(k)\).
2. \(B_{q}(k)=B_{q^{\prime}}(k^{\prime})\longrightarrow q=q^{\prime}\)
**Discussion 2.9** (Rigid discs vs. Formal balls).: Notice item (ii) of Observation 2.8 says that the radii of the formal balls are well-defined (essentially by construction), even when the norm \(|\cdot|\) on \(K\) is trivial. This should be contrasted with the classical rigid discs from Example 1.32
\[D_{r}(k):=\{b\in K\,\big{|}\,|b-k|\leq r\}, \tag{23}\]
whose radii are well-defined only if \(|\cdot|\) is non-trivial due to the point-set formulation.
**Remark 2.10**.: The language of formal balls may strike the classical reader as a peculiar abstraction, but in fact they are sufficiently expressive to provide point-free accounts of standard completions of metric spaces [23, 24]. Once the connection between the Berkovich construction and the completion of a space is made precise, these techniques can be adjusted accordingly to our present context.21
Footnote 21: One important adjustment is that the _non-strict_ order defined in Definition 2.7 is quite different from the _strict_ order defined in [24, 23], but this reflects the fact that closed discs in non-Archimedean topology are also open.
**Definition 2.11** (Filters of Formal Non-Archimedean Balls).: A _filter_ of formal non-Archimedean balls \(\mathcal{F}\) is an inhabited22 subset of \(K_{R}\times Q_{+}\) satisfying the following conditions:
Footnote 22: The classical reader may substitute mentions of “inhabited” with “non-empty” without too much trouble.
* _(Upward closed with respect to_ \(\subseteq\)_) If \(B_{q^{\prime}}(k^{\prime})\subseteq B_{q}(k)\) and \(B_{q^{\prime}}(k^{\prime})\in\mathcal{F}\), then \(B_{q}(k)\in\mathcal{F}\).
* _(Closed under pairwise intersections)_ If \(B_{q}(k),B_{q^{\prime}}(k^{\prime})\in\mathcal{F}\), then there exists some \(B_{r}(j)\in\mathcal{F}\) such that \(B_{r}(j)\subseteq B_{q}(k)\) and \(B_{r}(j)\subseteq B_{q^{\prime}}(k^{\prime})\).
Further, we call \(\mathcal{F}\) an \(R\)-_good_ filter if it also satisfies the following two conditions:
* For any \(k\in K_{R}\), and \(q\in Q_{+}\) such that \(R<q\), \(B_{q}(k)\in\mathcal{F}\).
* If \(B_{q}(k)\in\mathcal{F}\), there exists \(B_{q^{\prime}}(k^{\prime})\in\mathcal{F}\) such that \(q^{\prime}<q\).
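To illustrate the definition (this example is ours, and ties in with the evaluation seminorm sketched after Reminder 2.4): fix \(k_{0}\in K_{R}\) and set
\[\mathcal{F}_{k_{0}}:=\{B_{q}(k)\,|\,k\in K_{R},\;|k_{0}-k|<q\}.\]
Upward closure follows from the ultrametric inequality; for pairwise intersections, note that \(B_{\min\{q,q^{\prime}\}}(k_{0})\) lies in \(\mathcal{F}_{k_{0}}\) and inside both given balls. The first \(R\)-goodness condition holds because \(|k_{0}-k|\leq R<q\) whenever \(k\in K_{R}\) and \(q>R\), and the second because \(|k_{0}-k|<q\) always admits a rational \(q^{\prime}\) with \(|k_{0}-k|<q^{\prime}<q\). One checks that its radius, in the sense of Observation 2.13 below, is \(0\).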
**Convention 2.12**.: In this section, unless stated otherwise:
* A ball \(B_{q}(k)\) will always mean a formal non-Archimedean ball (Definition 2.7).
* A filter \(\mathcal{F}\) will always mean a filter of formal non-Archimedean balls (Definition 2.11).
**Observation 2.13** (Radius of \(R\)-Good Filters).: Let \(\mathcal{F}\) be an \(R\)-good filter. Define the _radius_\(\mathsf{rad}_{\mathcal{F}}\) of \(\mathcal{F}\) as:
\[\mathsf{rad}_{\mathcal{F}}:=\{q\in Q_{+}\,|\,B_{q}(k)\in\mathcal{F}\text{ for some }k\in K_{R}\}.\]
Then, \(\mathsf{rad}_{\mathcal{F}}\) defines an upper real in \(\overleftarrow{[0,R]}\).

Proof.: Roundedness and upward closure are immediate from the definition of \(\mathcal{F}\) being an \(R\)-good filter, and so \(\mathsf{rad}_{\mathcal{F}}\) defines an upper real. The fact that \(\mathsf{rad}_{\mathcal{F}}\in\overleftarrow{[0,R]}\) follows from additionally noting:
* \(\mathsf{rad}_{\mathcal{F}}\) is a subset of positive rationals. Hence, \(0\leq\mathsf{rad}_{\mathcal{F}}\).
* By \(\mathcal{F}\) being \(R\)-good, we know \(q\in\mathsf{rad}_{\mathcal{F}}\) for all rationals \(q>R\). Hence, \(\mathsf{rad}_{\mathcal{F}}\leq R\).
**Construction 2.14**.: Suppose we have a bounded \(K\)-seminorm \(|\cdot|_{x}\) on \(\mathcal{A}_{\mathrm{Lin}}\). We then define the following collection of formal balls:
\[\mathcal{F}_{x}:=\{B_{q}(k)\,|\,k\in K_{R}\,\text{and}\,|T-k|_{x}<q\}\]
**Claim 2.15**.: \(\mathcal{F}_{x}\) _is an \(R\)-good filter._
Proof.: By Definition 2.11, we need to check that \(\mathcal{F}_{x}\) is...
* _...Upward closed._ Suppose \(B_{q^{\prime}}(k^{\prime})\subseteq B_{q}(k)\) and \(B_{q^{\prime}}(k^{\prime})\in\mathcal{F}_{x}\). Unpacking definitions, this means \(|k-k^{\prime}|<q\) and \(q^{\prime}\leq q\), as well as \(|T-k^{\prime}|_{x}<q^{\prime}\). But since \[|T-k|_{x}=|(T-k^{\prime})+(k^{\prime}-k)|_{x}\leq\max\{|T-k^{\prime}|_{x},|k-k^ {\prime}|\}<\max\{q^{\prime},q\}=q,\] this implies \(B_{q}(k)\in\mathcal{F}_{x}\).
* _...Closed under Pairwise Intersection._ We first claim \(\mathcal{F}_{x}\) is totally ordered by \(\subseteq\). Why? Given any \(B_{q}(k),B_{q^{\prime}}(k^{\prime})\in\mathcal{F}_{x}\), we get \(|T-k|_{x}<q\) and \(|T-k^{\prime}|_{x}<q^{\prime}\), and so \[|k-k^{\prime}|=|(T-k^{\prime})-(T-k)|_{x}\leq\max\{|T-k^{\prime}|_{x},|T-k|_{x }\}<\max\{q^{\prime},q\}.\] By decidability of \(<\) on \(Q_{+}\), this means either \(B_{q}(k)\subseteq B_{q^{\prime}}(k^{\prime})\) or \(B_{q^{\prime}}(k^{\prime})\subseteq B_{q}(k)\), as claimed. The fact that \(\mathcal{F}_{x}\) is closed under pairwise intersection follows immediately.
* _...\(R\)-good (second condition)._ Suppose \(B_{q}(k)\in\mathcal{F}_{x}\), and so \(|T-k|_{x}<q\) by definition. Since \(|\cdot|_{x}\) defines an upper real, there exists \(q^{\prime}\in Q_{+}\) such that \(|T-k|_{x}<q^{\prime}<q\), and so \(B_{q^{\prime}}(k)\in\mathcal{F}_{x}\).
* _...\(R\)-good (first condition)._ Suppose \(k\in K_{R}\) and \(q\in Q_{+}\) such that \(R<q\). This gives \[|T-k|_{x}\leq\max\{|T|_{x},|k|\}\leq\max\{||T||,|k|\}=R<q,\] since \(k\in K_{R}\) implies \(|k|\leq R\) by definition, and \(|T|_{x}\leq||T||=R\). Hence, \(B_{q}(k)\in\mathcal{F}_{x}\).
* _...Inhabited._ Immediate from \(R\)-goodness.
In the converse direction, we define the following:
**Construction 2.16**.: For any formal non-Archimedean ball \(B_{q}(k)\), we define \(|\cdot|_{B_{q}(k)}\) as follows:
\[|T-a|_{B_{q}(k)}:=\max\{|k-a|,q\},\qquad\qquad\text{ where }T-a\in\mathcal{A}_{\rm Lin}.\]
More generally, given an \(R\)-good filter \(\mathcal{F}\), we define \(|\cdot|_{\mathcal{F}}\) as
\[|T-a|_{\mathcal{F}}:=\inf_{B_{q}(k)\in\mathcal{F}}|T-a|_{B_{q}(k)}=\inf_{B_{q }(k)\in\mathcal{F}}\max\{|k-a|,q\}\]
for any linear polynomial \(T-a\in\mathcal{A}_{\rm Lin}\).
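For a feel of the numbers (again over \(K=\mathbb{C}_{3}\), our running illustrative field): \(|T-3|_{B_{1/2}(0)}=\max\{|0-3|,\tfrac{1}{2}\}=\max\{\tfrac{1}{3},\tfrac{1}{2}\}=\tfrac{1}{2}\), whereas \(|T-1|_{B_{1/2}(0)}=\max\{|0-1|,\tfrac{1}{2}\}=1\). For the filter \(\mathcal{F}_{k_{0}}\) sketched after Definition 2.11 (with \(k_{0}=0\)), taking the infimum over the balls \(B_{q}(0)\) with \(q\to 0\) gives \(|T-a|_{\mathcal{F}_{0}}=|a|\), i.e. evaluation at \(0\), matching the earlier example of a bounded \(K\)-seminorm.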
**Remark 2.17**.: Let \(f\in\mathcal{A}\) such that \(f\) converges on a rigid disc \(D_{r}(k)\). By the Maximum Modulus Principle in Non-Archimedean Analysis (see e.g. [10, p. 3]), one can express \(|\cdot|_{D_{r}(k)}\) as
\[|f|_{D_{r}(k)}=\sup_{i}|c_{i}|r^{i}\qquad\qquad\text{ where }f=\sum_{i=0}^{\infty}c_{i}(T-k)^{i}. \tag{24}\]
Notice that \(|T-a|_{D_{r}(k)}\) is classically equivalent to \(|T-a|_{B_{q}(k)}\) just in case \(r=q\). Nonetheless, the definition of \(|\cdot|_{D_{r}(k)}\) as stated is problematic in our setting for two reasons.
* First, the use of \(\sup\) presents a constructive issue. A \(\sup\) of Dedekinds yields a _lower real_ (see Definition 1.12), whereas our \(K\)-seminorms are valued in upper reals, and there is no constructive way to switch between lower and upper reals.23 On the other hand, \(\max\) and \(\inf\) _are_ well-defined on the upper reals, which explains the formulation of Construction 2.16. Footnote 23: Why? See Discussion 2.23. A possible objection: since \(f\in\mathcal{A}\), \(\sup_{i}\) is really a \(\max_{i}\) because \(|c_{i}|r^{i}\to 0\). Now since we can take finite max's of upper reals, does this not circumvent the constructive issue? The answer is no: we do not know _a priori_ at which index \(i\) the supremum \(\sup_{i}|c_{i}|r^{i}\) is attained. As such, we are still forced to quantify over all indices.
* The rigid discs featured in Equation (24) are required to have bounded radius \(0<r\leq R\), which ensures the convergence of \(|f|_{D_{r}(k)}\) for any \(f\in\mathcal{A}\). On the other hand, the radii of the formal balls \(B_{q}(k)\) have no upper bound. Nonetheless, we avoid convergence issues since \(|\cdot|_{B_{q}(k)}\) is restricted to just the linear polynomials.
Having established Construction 2.16, we perform the obligatory check:
**Claim 2.18**.: \(|\cdot|_{\mathscr{F}}\) _determines a bounded \(K\)-seminorm on \(\mathcal{A}_{\mathrm{Lin}}\). More explicitly, for any \(B_{q}(k)\in\mathscr{F}\), define_
\[|\cdot|_{B_{q}(k)} :\mathcal{A}_{\mathrm{Lin}}\longrightarrow\mathbb{R}_{\geq 0} \tag{25}\] \[aT-b \longmapsto\max\{|ak-b|,|a|\cdot q\}.\]
_Then, the map_
\[|aT-b|_{\mathscr{F}}:=\inf_{B_{q}(k)\in\mathscr{F}}\lvert aT-b|_{B_{q}(k)}. \tag{26}\]
_defines a bounded \(K\)-seminorm on \(\mathcal{A}_{\mathrm{Lin}}\)._
**Remark 2.19**.: It is easy to see that Equation (25) recovers the original \(|\cdot|_{B_{q}(k)}\) in Construction 2.16 when we restrict to \(T-b\). In fact, if we work classically, then this extension of \(|\cdot|_{B_{q}(k)}\) is canonical in that it is essentially forced upon us by Definition 2.2.24
Footnote 24: Why? Classically, for any field \(K\), it is decidable if \(a\in K\) is a unit or \(a=0\). (Geometrically, however, this is not true – one has to impose this property as an extra axiom, see [12].) Nonetheless, suppose \(|\cdot|_{x}\) is a \(K\)-seminorm such that \(|T-c|_{x}=\) "the right Dedekind section of \(|T-c|_{B_{q}(k)}\)", for any \(c\in K\). If \(a\) is a unit, then semi-multiplicativity gives \(|aT-b|_{x}=|a\cdot(T-b\cdot a^{-1})|_{x}=|a|\cdot|T-b\cdot a^{-1}|_{B_{q}(k)}=\max\{|ak-b|,|a|\cdot q\}\), whereas if \(a=0\), then \(|aT-b|_{x}=|b|=\max\{|ak-b|,|a|\cdot q\}\), coinciding with (the right Dedekind section of) the extended \(|\cdot|_{B_{q}(k)}\).
Proof of Claim.: Verifying the claim involves checking that Equation (26) satisfies the required properties.
* \(|\cdot|_{\mathscr{F}}\) is valued in the upper reals, since \(|f|_{\mathscr{F}}\) takes the infimum of an arbitrary set of Dedekinds.
* To verify \(|\cdot|_{\mathscr{F}}\) satisfies the properties listed in Definition 2.2(ii), one first verifies their obvious analogues for \(|\cdot|_{B_{q}(k)}\), before observing that they are preserved by taking \(\inf\)'s. Most are obvious except perhaps the ultrametric inequality. Suppose we have \(aT-b,a^{\prime}T-b^{\prime}\in\mathcal{A}_{\mathrm{Lin}}\). Then, given any \(B_{q}(k)\in\mathscr{F}\), compute: \[|aT-b+a^{\prime}T-b^{\prime}|_{B_{q}(k)} =\max\{|(a+a^{\prime})k-(b+b^{\prime})|\,,\,|a+a^{\prime}|\cdot q\}\] \[\leq\max\left\{\max\{|ak-b|,|a^{\prime}k-b^{\prime}|\},\max\{|a| \cdot q,|a^{\prime}|\cdot q\}\right\}\] \[=\max\{|aT-b|_{B_{q}(k)},|a^{\prime}T-b^{\prime}|_{B_{q}(k)}\}\] (27) where the middle inequality is by the ultrametric inequality satisfied by the original norm \(|\cdot|\) on \(K\). Since this inequality holds for all \(B_{q}(k)\in\mathscr{F}\), taking the infimum on both sides of the Inequality (27) over all \(B_{q}(k)\in\mathscr{F}\) gives the desired ultrametric inequality for \(|\cdot|_{\mathscr{F}}\) \[|aT-b+a^{\prime}T-b^{\prime}|_{\mathscr{F}}\leq\max\{|aT-b|_{\mathscr{F}}\,,\, |a^{\prime}T-b^{\prime}|_{\mathscr{F}}\}.\] (28)
* For any \(R\)-good filter \(\mathscr{F}\), \[|T-a|_{\mathscr{F}}=\inf_{B_{q}(k)\in\mathscr{F}}\max\{|k-a|,q\}\leq\inf_{B_{ q}(k)\in\mathscr{F}}\max\{|a|,|k|,q\}\leq\max\{|a|,R\},\] (29) where the final inequality is by \(R\)-goodness of \(\mathscr{F}\) plus the fact that \(k\in K_{R}\). The same argument extends to show that \(|aT-b|_{\mathscr{F}}\leq\max\{|a|\cdot R,|b|\}\). Hence, conclude that \(|\cdot|_{\mathscr{F}}\) is bounded by the Gauss norm \(||\cdot||\).25
Footnote 25: Technically, \(\max\{|a|,R\}\) defines a Dedekind and not an upper real, so we really mean its corresponding right Dedekind section.
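To see the extended formula (25) on a concrete input (again an illustration of ours, over \(\mathbb{C}_{3}\)): \(|3T-1|_{B_{1/2}(0)}=\max\{|3\cdot 0-1|,|3|\cdot\tfrac{1}{2}\}=\max\{1,\tfrac{1}{6}\}=1\), which indeed does not exceed the Gauss norm \(||3T-1||=\max\{|3|\cdot R,|1|\}\geq 1\).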
As the reader may have anticipated, the algebraic constructions (i.e. bounded multiplicative seminorms and \(K\)-seminorms) and the topological constructions (i.e. the \(R\)-good filters) defined in this section have a close interaction. This is made precise in the following two theorems.
**Theorem 2.20**.: _The space of bounded \(K\)-seminorms on \(\mathcal{A}_{\mathrm{Lin}}\) is equivalent to the space of \(R\)-good filters._
Proof.: It suffices to show that Constructions 2.14 and 2.16 are inverse to each other. This amounts to checking:
_First Direction:_\(|\cdot|_{x}=|\cdot|_{\mathcal{F}_{x}}\). Fix a bounded \(K\)-seminorm \(|\cdot|_{x}\). By Preparation Lemma 2.5, it suffices to check that \(|\cdot|_{x}\) and \(|\cdot|_{\mathcal{F}_{x}}\) agree on linear polynomials \(T-a\) such that \(a\in K_{R}\).
Suppose \(|T-a|_{x}<q\) for some \(q\in Q_{+}\). Then, \(B_{q}(a)\in\mathcal{F}_{x}\) by construction. In particular, there exists \(q^{\prime}\in Q_{+}\) such that
\[|T-a|_{x}<q^{\prime}<q \tag{30}\]
since \(|T-a|_{x}\) defines an upper real. Further, since
\[|T-a|_{B_{q^{\prime}}(a)}=\max\{|a-a|,q^{\prime}\}=q^{\prime}, \tag{31}\]
and since Equation (30) implies \(B_{q^{\prime}}(a)\in\mathcal{F}_{x}\), deduce that
\[|T-a|_{\mathcal{F}_{x}}=\inf_{B_{q}(k)\in\mathcal{F}_{x}}|T-a|_{B_{q}(k)}\leq |T-a|_{B_{q^{\prime}}(a)}<q,\]
and so
\[|T-a|_{\mathcal{F}_{x}}\leq|T-a|_{x}. \tag{32}\]
Conversely, suppose \(|T-a|_{\mathcal{F}_{x}}<q\) for some \(q\in Q_{+}\). Again, since \(|T-a|_{\mathcal{F}_{x}}\) defines an upper real, deduce there exists \(B_{q^{\prime}}(k)\in\mathcal{F}_{x}\) such that
\[|T-a|_{\mathcal{F}_{x}}\leq|T-a|_{B_{q^{\prime}}(k)}=\max\{|k-a|,q^{\prime}\} <q.\]
By definition, \(B_{q^{\prime}}(k)\in\mathcal{F}_{x}\) implies \(|T-k|_{x}<q^{\prime}\), and so
\[|T-a|_{x}=|(T-k)+(k-a)|_{x}\leq\max\{|T-k|_{x},|k-a|\}<q,\]
which in turn implies
\[|T-a|_{x}\leq|T-a|_{\mathcal{F}_{x}}. \tag{33}\]
Put together, Equations (32) and (33) give \(|\cdot|_{x}=|\cdot|_{\mathcal{F}_{x}}\), as claimed.
_Second Direction:_\(\mathcal{F}=\mathcal{F}_{|\cdot|_{\mathcal{F}}}\). Fix an \(R\)-good filter \(\mathcal{F}\). Suppose \(B_{q}(k)\in\mathcal{F}\). Since the radius \(\mathsf{rad}_{\mathcal{F}}\) of \(\mathcal{F}\) defines an upper real, find a \(q^{\prime}\in Q_{+}\) such that \(B_{q^{\prime}}(k^{\prime})\in\mathcal{F}\) and \(\mathsf{rad}_{\mathcal{F}}<q^{\prime}<q\). Without loss of generality, we may assume \(k=k^{\prime}\).26 Since \(|T-k|_{B_{q^{\prime}}(k)}=q^{\prime}\), deduce that
Footnote 26: Why? Since \(\mathcal{F}\) is closed under pairwise intersection, there exists \(B_{r}(j)\subseteq B_{q}(k)\cap B_{q^{\prime}}(k^{\prime})\), and so by Observation 2.8(i) \(B_{q^{\prime}}(k^{\prime})=B_{q^{\prime}}(j)\subseteq B_{q}(j)=B_{q}(k)\).
\[|T-k|_{\mathcal{F}}\leq|T-k|_{B_{q^{\prime}}(k)}=q^{\prime}<q.\]
In particular, this implies \(B_{q}(k)\in\mathcal{F}_{|\cdot|_{\mathcal{F}}}\), and so
\[\mathcal{F}\subset\mathcal{F}_{|\cdot|_{\mathcal{F}}}. \tag{34}\]
Conversely, suppose \(B_{q}(k)\in\mathcal{F}_{|\cdot|_{\mathcal{F}}}\). Unpacking definitions, deduce there exists \(B_{q^{\prime}}(j)\in\mathcal{F}\) such that
\[|T-k|_{\mathcal{F}}\leq|T-k|_{B_{q^{\prime}}(j)}=\max\{|k-j|,q^{\prime}\}<q. \tag{35}\]
To show that \(B_{q^{\prime}}(j)\subseteq B_{q}(k)\), we need to show that \(|k-j|<q\) and \(q^{\prime}\leq q\). But this is clear from Equation (35). Since \(B_{q^{\prime}}(j)\in\mathcal{F}\) and \(\mathcal{F}\) is upward closed, conclude that \(B_{q}(k)\in\mathcal{F}\), and so
\[\mathcal{F}_{|\cdot|_{\mathcal{F}}}\subset\mathcal{F}. \tag{36}\]
Combining Equations (34) and (36) gives \(\mathcal{F}=\mathcal{F}_{|\cdot|_{\mathcal{F}}}\), finishing the proof.
**Theorem 2.21**.: _As our setup, denote:_
* \(\mathcal{M}(\mathcal{A})\) _as the classical Berkovich spectrum (of Dedekind-valued multiplicative seminorms);_
* \(\overleftarrow{\mathcal{M}(\mathcal{A}_{\mathrm{Lin}})}\) _as the space of bounded_ \(K\)_-seminorms on_ \(\mathcal{A}_{\mathrm{Lin}}\)_._
_Then,_
1. _Each_ \(|\cdot|_{x}\in\mathcal{M}(\mathcal{A})\) _is uniquely represented by an_ \(R\)_-good filter._
2. \(\mathcal{M}(\mathcal{A})\) _is classically equivalent to_ \(\overleftarrow{\mathcal{M}(\mathcal{A}_{\mathrm{Lin}})}\)_. In particular, it is classically equivalent to the space of_ \(R\)_-good filters._
Proof.: It is clear \(|\cdot|_{x}\in\mathcal{M}(\mathcal{A})\) restricts to a bounded \(K\)-seminorm, which we denote
\[|\cdot|_{x}\big{|}_{\mathcal{A}_{\mathrm{Lin}}}\colon\mathcal{A}_{\mathrm{Lin}}\longrightarrow\overleftarrow{[0,\infty)}. \tag{37}\]
By Fact 1.16, the map \(x\mapsto R_{x}\) sending a Dedekind to its right Dedekind section is monic. As such, (i) follows immediately from Preparation Lemma 2.5 and Theorem 2.20.
For (ii), we start by declaring the following classical assumption, which we will use freely in the remainder of the proof:
* Any upper real \(\gamma_{U}\) besides \(\infty\) or \(-\infty\) can be canonically associated to a Dedekind real \(\gamma\) whose right Dedekind section corresponds to \(\gamma_{U}\). Similarly, any bounded lower real can also be canonically associated to its corresponding Dedekind real.
The argument proceeds (again) by way of construction. By Equation (37), we already know how to send \(|\cdot|_{x}\in\mathcal{M}(\mathcal{A})\) to a bounded \(K\)-seminorm in \(\overleftarrow{\mathcal{M}(\mathcal{A}_{\mathrm{Lin}})}\). In the converse direction, notice Theorem 2.20 helpfully allows us to explicitly characterise the generic bounded \(K\)-seminorm as \(|\cdot|_{\mathcal{F}}\in\overleftarrow{\mathcal{M}(\mathcal{A}_{\mathrm{Lin}})}\), where \(\mathcal{F}\) is the generic \(R\)-good filter. We extend this to a multiplicative seminorm on \(\mathcal{A}\) in two steps. First, since any polynomial \(f\in K[T]\) can be represented as
\[f=c\cdot\prod_{j=1}^{m}(T-b_{j}),\]
\(|\cdot|_{\mathcal{F}}\) naturally extends to a map on \(K[T]\), which (abusing notation) we also represent as:
\[|\cdot|_{\mathcal{F}}\colon K[T] \longrightarrow[0,\infty) \tag{38}\] \[f \longmapsto|c|\cdot\prod_{j=1}^{m}|T-b_{j}|_{\mathcal{F}}.\]
Second, since any power series \(f\in\mathcal{A}\) can be represented as limit of polynomials
\[f=\sum_{i=0}^{\infty}a_{i}T^{i}=\lim_{n\to\infty}\sum_{i=0}^{n}a_{i}T^{i},\]
we define the obvious extension of \(|\cdot|_{\mathcal{F}}\) to \(\mathcal{A}\):
\[\widetilde{|\cdot|_{\mathcal{F}}}\colon\mathcal{A} \longrightarrow[0,\infty) \tag{39}\] \[f \longmapsto\lim_{n\to\infty}|\sum_{i=0}^{n}a_{i}T^{i}|_{\mathcal{ F}}\]
A few orienting remarks:
1. Notice the implicit use of Assumption \((\star)\) throughout, e.g. the original definition of \(|\cdot|_{\mathcal{F}}\) was valued in the upper reals, but we extended it to a Dedekind-valued map on \(K[T]\) in Equation (38).
2. Assumption \((\star)\) also strengthens item (i): we can now view \(\mathcal{M}(\mathcal{A})\) as a subspace of \(\overleftarrow{\mathcal{M}(\mathcal{A}_{\mathrm{Lin}})}\).
3. Suppose the construction \(\widetilde{|\cdot|_{\mathcal{F}}}\) defines a bounded multiplicative seminorm on \(\mathcal{A}\). Since Assumption \((\star)\) allows us to treat \(K\)-seminorms as if they were valued in Dedekinds, it becomes straightforward to check that the constructions \(|\cdot|_{x}|_{\mathcal{A}_{\mathrm{Lin}}}\) and \(|\cdot|_{\mathcal{F}}\) define maps which are inverse to each other. In particular, the identity \[|\cdot|_{\mathcal{F}}=\widetilde{|\cdot|_{\mathcal{F}}}\big{|}_{\mathcal{A}_{ \mathrm{Lin}}}\] (40) is immediate by construction, whereas the identity \[|\cdot|_{x}=\widetilde{|\cdot|_{x}|_{\mathcal{A}_{\mathrm{Lin}}}}\] (41) follows from Preparation Lemma 2.5 and checking the values on the linear polynomials.
As such, in order to prove the (classical) equivalence stated in the Theorem, it remains to verify that \(\widetilde{|\cdot|_{\mathcal{F}}}\) is in fact a bounded multiplicative seminorm on \(\mathcal{A}\). The check relies on standard arguments from non-Archimedean analysis; for details, see Appendix A.
We conclude with some discussions on various aspects of the proof.
**Discussion 2.22**.: Let us sketch the original argument from classical Berkovich geometry.
* Given any rigid disc \(D_{r}(k)\) such that \(0<r\leq R\), define a multiplicative seminorm \(|\cdot|_{D_{r}(k)}\) on \(\mathcal{A}\).
* Next, given any nested sequence of discs \(\mathsf{D}:=D_{r_{1}}(k_{1})\supseteq D_{r_{2}}(k_{2})\supseteq\dots\), define the multiplicative seminorm \(|\cdot|_{\mathsf{D}}:=\inf_{\mathsf{D}}|\cdot|_{D_{r_{i}}(k_{i})}\).
* Finally, given \(|\cdot|_{x}\in\mathcal{M}(\mathcal{A})\), define the nested sequence \(\mathsf{D}_{x}:=\{D_{|T-k|_{x}}(k)\,|\,k\in K\text{ and }|k|\leq R\}\). Check that \(|\cdot|_{x}\) and \(|\cdot|_{\mathsf{D}_{x}}\) agree on linear polynomials, and conclude \(|\cdot|_{x}=|\cdot|_{\mathsf{D}_{x}}\).
The parallels with the proof for Theorem 2.20 are clear. However, the argument must be adjusted and finitised appropriately in order to work in our context. Some important differences:
1. _On "rational" discs._ Both \(R\)-good filters and the nested sequence of discs \(\mathsf{D}_{x}\) give rise to approximation arguments, but their approximants differ in important ways. In particular, whereas the radius of a formal ball \(B_{q}(k)\) is rational in the usual sense that \(q\in Q_{+}\) is a positive rational number, in non-Archimedean geometry a rigid disc \(D_{r}(k)\in\mathsf{D}_{x}\) with rational radius means that \[r\in\Gamma:=\{|k|\in[0,\infty)\,|\,k\in K\},\] i.e. \(r\) belongs to the value group \(\Gamma\) of \(K\). The suggestive terminology ("rational") indicates an analogy between \(Q_{+}\) and \(\Gamma\), but it is important to remember that they are not the same -- particularly when \(K\) is trivially-valued.
2. _On \(K\)-seminorms_. Whereas the original argument starts by defining a multiplicative seminorm on \(\mathcal{A}\), before restricting it to the linear polynomials to perform certain checks, we instead defined a new algebraic structure (which we called \(K\)-seminorms) on the space of linear polynomials \(\mathcal{A}_{\mathrm{Lin}}\).
3. _On the use of filters._ While Berkovich's original argument shows that every \(|\cdot|_{x}\in\mathcal{M}(\mathcal{A})\) corresponds to a nested descending sequence of discs, this representation is not unique. In particular, two different sequences of discs may define the same multiplicative seminorm on \(\mathcal{A}\). We resolve this issue by appealing to the more natural language of filters, which allows us to obtain a representation result: every \(|\cdot|_{x}\in\mathcal{M}(\mathcal{A})\) is uniquely associated to an \(R\)-good filter \(\mathcal{F}_{x}\).
4. _On the use of formal balls._ As already pointed out in Discussion 2.9, our result holds for both trivially and non-trivially valued \(K\). This is in contrast to the original argument, which only works for non-trivially valued \(K\).
Items (i) and (ii) reflect our decision to work with the upper reals as opposed to the Dedekinds, and strike a careful balance: whilst the upper reals are particularly suited to analysing the filters of formal balls (cf. Observation 2.13), they also impose strong (constructive) restrictions on the algebra (cf. Remark 2.17, but see also Discussion 2.23). Items (iii) and (iv) give evidence that filters (as opposed to nested sequences of discs) and formal balls (as opposed to the classical rigid discs) are the correct language for studying \(\mathcal{M}(\mathcal{A})\).
Finally, for the reader interested in constructive mathematics, we sort out and summarise the classical vs. constructive aspects of our result.
**Discussion 2.23** (On Assumption (\(\star\))).: Recall that Theorem 2.21 (ii) is a classical result because the proof relies on Assumption (\(\star\)), which essentially says: any bounded upper (or lower) real can be canonically associated to a Dedekind real. Some natural questions (and answers):
1. _Why is Assumption (\(\star\)) classical?_ Consider the obvious argument: given an upper real \(R\) defined as the set of rationals strictly greater than 1, we define a lower real \(L\) as the set of rationals strictly less than 1. Hence, conclude that \((L,R)\) is the Dedekind real canonically associated to \(R\). This argument will strike most as reasonable, so where does it fail constructively? Discussion 1.21 reminds us that an upper real is blind to the rationals less than itself, suggesting it may only have the same knowledge as a Dedekind real by way of classical reasoning. Reformulated more precisely, we claim that if any upper real \(R\) can be associated to a lower real \(L\) such that \((L,R)\) defines a Dedekind real, then this implies that every proposition \(p\) has a Boolean complement \(p^{\prime}\) -- which holds classically, but _not_ constructively (cf. Remark 1.3). To prove our claim, let \(p\) be any proposition, and define the subset of rationals: \[R:=\{q\in\mathbb{Q}|\text{ either ``$q>1$'' or ``$p$ holds and $q>0$''}\}\]
In other words, \(R\) is a kind of schizophrenic upper real: since \(p\) holds iff \(R<1\), \(R\) may define the upper real \(1\) or the upper real \(0\), depending on the truth value of \(p\). Now, suppose we have some lower real \(L\) such that \((L,R)\) is Dedekind. Define a new proposition \(p^{\prime}\) to hold iff \(\frac{1}{2}<L\). If \((L,R)\) indeed defines a Dedekind, one deduces:
1. \(\top\to p\lor p^{\prime}\). [Why? Locatedness of \((L,R)\) gives \(\frac{1}{2}<L\lor R<1\).]
2. \(p\wedge p^{\prime}\to\bot\). [Why? Since \(p\to R<\frac{1}{2}\), this gives \(p\wedge p^{\prime}\to(\frac{1}{2}<L)\wedge(R<\frac{1}{2})\), contradicting separatedness of \((L,R)\).]
This shows \(p^{\prime}\) is a Boolean complement of \(p\), as claimed.
2. _Is Assumption \((\star)\) necessary?_ To constructivise Theorem 2.21, one might look to prove an equivalence between upper-valued multiplicative seminorms on \(\mathcal{A}\) and \(K\)-seminorms on \(\mathcal{A}_{\mathrm{Lin}}\) instead. If such a result does hold, we will need a different proof strategy. This seems challenging: Assumption \((\star)\) allows us to assume that bounded suprema/infima of Dedekinds are still Dedekinds, which plays a key (if implicit) role in our proof (see Claim A.2). Notice that we will need Assumption \((\star)\) anyway if we are to obtain the final equivalence with the usual Berkovich spectra.
**Summary 2.24** (Classical vs. Geometric Mathematics).: _Geometric mathematics_ is a specific regime of constructive mathematics, natural in point-free topology, which essentially deals with constructions arising from colimits and finite limits (for details, see [23, 24]). Since our work here is influenced by point-free techniques, a natural question to ask is: to what extent do the results of this section adhere to these constraints?
1. Hypothesis 2.1 views \(K\) as a set of elements, equipped with structure of a field. Taken at face value, these are unwelcome restrictions. If one wished to be consistently geometric throughout, this means \(K\) has to be a discrete field and so many (topological) fields of interest, e.g. the \(p\)-adic complex numbers \(\mathbb{C}_{p}\), would be excluded. However, if the reader is happy to work with point-set topology, then this is no longer a problem.
2. Although the polynomial ring \(K[T]\) and \(\mathcal{A}_{\mathrm{Lin}}\) can be defined geometrically (assuming Hypothesis 2.1), it is presently unclear how to do the same for the convergent power series ring \(K\{R^{-1}T\}\) -- in particular, how to formulate the condition "\(f\in\mathcal{A}\) converges on a radius \(R\)" geometrically.
3. So long as Hypothesis 2.1 holds (in fact, we can eliminate the assumption that \(K\) is complete here), Theorem 2.20 is a fully geometric result. On the other hand, as pointed out in Discussion 2.23, the full strength of Theorem 2.21 is a classical result due to the use of Assumption \((\star)\).
### Applications to the Trivial Case
As a slick application of Theorem 2.21, we recover the familiar characterisations of \(\mathcal{M}(\mathcal{A})\) when \(K\) is trivially valued (see Example 1.32). What's new here? For one, the proofs given here appear quite different from the standard arguments [22], and are also shorter. More fundamentally, they give a very interesting indication of how Berkovich's characterisation of \(\mathcal{M}(\mathcal{A})\) (via nested sequences of discs) is in fact more robust than previously thought.
**Example 2.25** (Case: \(R<1\)).: If \(R<1\) and \(K\) is trivially valued, then \(K_{R}=\{0\}\). The \(R\)-good filters are thus entirely determined by their radii, and so the space of \(R\)-good filters is equivalent to \(\overleftarrow{[0,R]}\) (see Observation 2.13). Applying Theorem 2.21, one deduces that \(\mathcal{M}(\mathcal{A})\) is classically equivalent to \(\overleftarrow{[0,R]}\), essentially for free.
**Example 2.26** (Case: \(R\geq 1\)).: If \(R\geq 1\) and \(K\) is trivially valued, then notice \(K_{R}=K\). Consider the following two subcases:
**Subcase 1:**\(\mathcal{F}\) _is an \(R\)-good filter with radius \(\mathsf{rad}_{\mathcal{F}}\geq 1\)._ In which case, \(B_{q}(k)\in\mathcal{F}\) for any \(k\in K\) and any \(q>\mathsf{rad}_{\mathcal{F}}\). [Why? Since \(\mathcal{F}\) is \(R\)-good, there must exist some \(B_{q}(k^{\prime})\in\mathcal{F}\) for any \(q>\mathsf{rad}_{\mathcal{F}}\). Since \(K\) is trivially valued, we get \(|k-k^{\prime}|\leq 1\leq\mathsf{rad}_{\mathcal{F}}<q\) for any \(k\in K\), and so \(B_{q}(k^{\prime})=B_{q}(k)\).] Hence, an \(R\)-good filter (with radius \(\mathsf{rad}_{\mathcal{F}}\geq 1\)) is entirely determined by its radius, and so the space of such \(R\)-good filters forms an interval \([1,R]\).
**Subcase 2:**\(\mathcal{F}\) _is an \(R\)-good filter with radius \(\mathsf{rad}_{\mathcal{F}}<1\)._ In which case:
1. If \(q>1\), then \(B_{q}(k)\in\mathcal{F}\) for any \(k\in K\). [Why? Since \(\mathsf{rad}_{\mathcal{F}}<1\), there must exist \(B_{q^{\prime}}(k^{\prime})\in\mathcal{F}\) such that \(q^{\prime}\leq 1<q\). Since \(K\) is trivially valued, deduce that \(B_{q^{\prime}}(k^{\prime})\subseteq B_{q}(k)\) and so \(B_{q}(k)\in\mathcal{F}\) as \(\mathcal{F}\) is upward closed.]
2. If \(B_{q}(k)\in\mathcal{F}\) and \(q\leq 1\), then \(k=k^{\prime}\) for any \(B_{q^{\prime}}(k^{\prime})\in\mathcal{F}\) such that \(q^{\prime}\leq 1\). [Why? Take the "pairwise intersection" of \(B_{q}(k),B_{q^{\prime}}(k^{\prime})\in\mathcal{F}\) to get \(B_{q^{\prime\prime}}(k^{\prime\prime})\in\mathcal{F}\). Since \(B_{q^{\prime\prime}}(k^{\prime\prime})\subseteq B_{q}(k)\), this forces \(k^{\prime\prime}=k\) since otherwise \(1=|k^{\prime\prime}-k|<q\leq 1\), contradiction. The same argument shows that \(k^{\prime\prime}=k^{\prime}\), and so we conclude \(k^{\prime}=k\).]
Summarising, an \(R\)-good filter \(\mathcal{F}\) with radius \(\mathsf{rad}_{\mathcal{F}}\geq 1\) is entirely determined by its radius (Subcase 1), whereas an \(R\)-good filter \(\mathcal{F}\) with \(\mathsf{rad}_{\mathcal{F}}<1\) is determined by its radius plus its unique choice of \(k\in K\) (Subcase 2). Applying Theorem 2.21 once more, this gives the following diagram on the left:
\(\mathcal{M}(\mathcal{A})\) should be compared with \([\overleftarrow{av}]\), the space of multiplicative seminorms on \(\mathbb{Z}\) defined in [13], pictured on the right. In particular, notice the key branching points of the spaces are identical up to a \(\log_{R}\) (--) transformation.
**Remark 2.27**.: Of course, Examples 2.25 and 2.26 present \(\mathcal{M}(\mathcal{A})\) using upper reals instead of Dedekinds (in particular, they are not Hausdorff), but this can be resolved by applying Assumption \((\star)\) once more.
## 3. An Algebraic Fork in the Road
Let us review our work. In principle, the extension of Berkovich's original result to Theorem 2.21 could have been discovered much earlier. And yet it was not -- to our knowledge, the idea that one could modify the language of rigid discs to classify the points of \(\mathcal{M}(K\{R^{-1}T\})\)_without_ requiring \(K\) to be non-trivially valued was not suspected by the experts.27 The reason for this seems to be that Theorem 2.21, both in its formulation and proof, belongs to the point-free perspective in an essential way. Of course, one certainly does not need to be a topos theorist in order to e.g. understand what a formal ball is, but there are specific intuitions from the point-free perspective that guided us to our result:
1. The topos theorist is trained to recognise how the same idea may be expressed in different settings, and to ask about the connections.28 In particular, recall: any essentially propositional theory corresponds to a space of completely prime filters. Work in [23] showed that the theory of multiplicative seminorms on \(\mathbb{Z}\) is essentially propositional. Given the obvious parallels between \(\mathcal{M}(\mathbb{Z})\) and \(\mathcal{M}(K\{R^{-1}T\})\) when \(K\) is trivially normed and \(R\geq 1\) (cf. Example 2.26), the topos theorist may guess that \(\mathcal{M}(K\{R^{-1}T\})\) can also be described via completely prime filters.

Footnote 28: For the insider: the topos theorist knows that there are many sketches of the same elephant [11, 12].

2. Once we defined the formal ball \(B_{q}(k)\) and the correct inclusion relation \(B_{q^{\prime}}(k^{\prime})\subseteq B_{q}(k)\), the rest of the argument began to fall into place. But notice: the decision to use formal balls (as opposed to the classical rigid discs) reflects the localic perspective that it is the _opens_ that are the basic units for defining a space, and **not** the underlying _set of points_. Compare, for instance, our use of formal balls with the symbols \(P_{qr}\) in the propositional theory of Dedekinds \(\mathbb{T}_{\mathbb{R}}\).
Though a relatively simple result, the surprising aspects of Theorem 2.21 hints at the clarifying potential of using point-free ideas to investigate foundational issues in non-Archimedean geometry. Motivated by this, we conclude with a list of inter-related problems, geared towards testing this idea.
### Trivially vs. Non-Trivially valued Fields
First, an obvious piece of mathematical due diligence. Many results in Berkovich geometry are sensitive to the case-split between trivially vs. non-trivially valued (non-Archimedean) fields. This motivates the following general exercise:
**Problem 3.1**.: Pick an interesting result in Berkovich geometry that appears to rely on the base field \(K\) being non-trivially valued. Examine why. Just as in Theorem 2.21, can we eliminate this hypothesis by applying point-free techniques? If yes, what applications does this generalised result give us?
**Discussion 3.2**.: Here's one place to start looking. In non-Archimedean geometry, a common strategy for proving results on an irrational (or open) disc \(\mathsf{D}\) is to first prove the result for _rational closed discs_, before extending the result to \(\mathsf{D}\) by expressing it as a nested union of rational closed discs (see e.g. [1]). This strategy obviously breaks down when \(K\) is trivially valued, but the language of upper reals may offer a workaround (cf. Discussion 2.22).
### Overconvergent Lattices and Rigid Geometry
We now discuss a more solid lead. In unpublished work of Dudzik [13] as well as Baker's Berkeley Lecture Notes [1], the following notion was defined:
**Definition 3.3**.: Consider a lower-bounded distributive lattice \(L\), with finite \(\wedge\) and \(\vee\) and a minimal element denoted \(\bot\).
1. For \(x,x^{\prime}\in L\), we say \(x\)_is inner in_\(x^{\prime}\), written \(x\triangleleft x^{\prime}\), if for all \(z\geq x^{\prime}\), there exists \(w\) with \(x\wedge w=\bot\) and \(x^{\prime}\lor w=z\).
2. We call \(L\) an _overconvergent lattice_ if for all \(x,y\in L\) where \(x\wedge y=\bot\), there exists \(x^{\prime}\in L\) such that \(x\triangleleft x^{\prime}\) and \(x^{\prime}\wedge y=\bot\).
It was then shown that these so-called overconvergent lattices capture some key topological features of rigid analytic geometry. We summarise some of their key results in the following theorem:
**Theorem 3.4**.: _As our setup,_
* _Let_ \(\mathcal{A}\) _be a (strict) affinoid algebra over a suitable_29 _field_ \(K\)_;_ Footnote 29: By which we mean: complete, non-trivially valued and non-Archimedean. Compare this with [1, Remark 2.5.21], which we briefly discussed at the start of Section 2.
* _Let_ \(X=\operatorname{Sp}(\mathcal{A})\)_, i.e. the set of maximal ideals of_ \(\mathcal{A}\) _equipped with the canonical topology;_
* _Let_ \(L\) _be the lattice of special subdomains of_ \(X\)_;_
* _Let_ \(\mathsf{P}(L)\) _be the set of prime filters and_ \(\mathsf{M}(L)\) _the set of maximal filters._
_Then:_
1. _The lattice_ \(L\) _is overconvergent and its elements form a neighbourhood base._
2. _There exists a canonical surjective map_ \(\mathsf{P}(L)\to\mathsf{M}(L)\) _sending a prime filter to the unique maximal filter containing it. When equipped with the quotient topology,_ \(\mathsf{M}(L)\) _is equivalent to the Berkovich analytification_ \(X^{\mathrm{an}}\)_._
_(iii)_\(\operatorname{\mathsf{P}}(L)\) _is equivalent to Huber's adic space of continuous semivaluations on_ \(\mathcal{A}\)_._
Closer examination of the mechanics underlying the proof of Theorem 3.4 seems warranted. Although the theorem is essentially a reworking of classical facts about rigid geometry [22, 23], the lattice-theoretic perspective brings into focus the key topological ingredients. In particular, locale theorists may recognise the family resemblance between overconvergent lattices and _normal_ lattices, which suggests that the hypothesis of overconvergence was chosen precisely to guarantee that each prime filter is contained in a unique maximal filter30 (see item (ii) of the Theorem). Some natural test problems and questions:
Footnote 30: Normality and overconvergence appear to be essentially dual notions. Recall: for a (bounded) distributive lattice \(L\) and \(a,a^{\prime}\in L\), we say \(a^{\prime}\) _is well inside_ \(a\), written \(a^{\prime}\lessdot a\), if there exists \(y\) such that \(a\lor y=\top\) and \(a^{\prime}\wedge y=\bot\). Such a lattice \(L\) is said to be _normal_ if whenever \(a\lor b=\top\), there exists \(a^{\prime}\lessdot a\) with \(a^{\prime}\lor b=\top\). In particular, a (bounded) distributive lattice \(L\) is normal iff each prime ideal in \(L\) is contained in a unique maximal ideal [12, §3.6-3.7]. Of course, this is a characterisation of normality for _bounded_ distributive lattices, but a similar (dual) result appears under the weaker hypothesis of only being lower-bounded [21].
**Problem 3.5**.: In his note [12], Dudzik left unfinished the problem of applying overconvergent lattices to the classification of the points of \(\mathbb{A}^{1}_{\operatorname{Berk}}\). A good exercise: finish this. In particular, our proof of Theorem 2.21 should be relevant. However, what do \(R\)-good filters have to do with the filters of overconvergent lattices? In addition, do point-free techniques allow us (once more) to eliminate the requirement that \(K\) be non-trivially valued in Theorem 3.4?
### Model-theoretic vs. Point-free perspectives
Interspersed throughout this chapter were various mentions of Hrushovski-Loeser's groundbreaking work [10], which applied model-theoretic tools to Berkovich geometry. For the topos theorist, a natural question is the following:
**Problem 3.6**.: Leveraging point-free techniques, simplify and/or extend the framework of Hrushovski-Loeser spaces.
**Discussion 3.7**.: A good place to start: where do overconvergent lattices feature in their framework? Relatedly, the technology of pro-definable sets bears a strong resemblance to \(R\)-structures and rounded ideal completions (which implicitly featured in our use of upper reals). It would be interesting to see if this connection can be developed to indicate some natural simplifications. (Discussion 3.2 may be relevant.)
**Discussion 3.8** (The Role of Stability).: Recall from Discussion 1.19 that the topos theorist and model theorist hold different attitudes towards the presence of strict order in a theory. For the model theorist, strict order is a sign that the theory is complex [i.e. unstable], and so a great deal of work is needed in order to account for e.g. the instability in \(\operatorname{ACVF}\) via the language of stably-dominated types [13, 10]. On the other hand, strict order is _not_ a sign of complexity for the topos theorist -- the theory of the rationals and the Dedekinds both have strict order and are considered well-behaved [i.e. they are essentially propositional]. One is therefore led to ask: are (model-theoretic) notions such as stable/meta-stable/unstable truly essential to understanding the topology of non-Archimedean spaces?
One way to parse this question: perhaps the way to reduce our sensitivity to strict order is not by passing to stably-dominated types but by passing to a different fragment of logic. Of course, given the context of this paper, we have geometric logic in mind but one may also wish to consider e.g. continuous logic, as done in [1].31 Another approach is to ask: is the notion of stability itself fundamental to understanding how the residue field and value group interact? We make no firm assertions at this juncture, but recent work on residue field domination [1] indicates some interesting evidence to the contrary.
Footnote 31: Notice, however, stability still plays a role in [1], albeit in the continuous logic setting. Notice also that the paper restricts itself to _metric valued fields_ and not valued fields whose value group \(\Gamma\) is an arbitrary ordered abelian group (with minimal element \(0\)). This will become relevant when we start thinking about extending our logical methods to adic spaces – see also Discussion 3.9.
**Discussion 3.9** (Adic Spaces).: There have been recent efforts to extend Hrushovski-Loeser's work to the setting of adic spaces (see e.g. [10]). However, Theorem 3.4 alerts us to one obvious issue. At least in the setting of \(K\)-affinoid algebras, the distinction between Berkovich spaces vs. adic spaces is analogous to the distinction between maximal filters vs. prime filters. Recalling Discussion 1.18, this raises questions about whether the model theorist's language of definable types is suited for analysing adic spaces since types properly correspond to ultrafilters and not prime filters. Given what we know about locale spaces and geometric logic, Theorem 3.4 leads us to wonder if the point-free approach would be better suited (although some care should be taken regarding the distinction between prime vs. completely prime filters).
## Appendix A Constructing multiplicative seminorms from \(K\)-seminorms
We now provide the rest of the details of the proof of Theorem 2.21. For clarity, we review the key steps of the argument. Recall the main goal: we wish to show that \(\mathcal{M}(\mathcal{A})\) is classically equivalent to \(\overleftarrow{\mathcal{M}(\mathcal{A}_{\mathrm{Lin}})}\). After declaring our reliance on Assumption \((\star)\), the argument proceeds by construction: given \(|\cdot|_{x}\in\mathcal{M}(\mathcal{A})\), one defines \(|\cdot|_{x}\big{|}_{\mathcal{A}_{\mathrm{Lin}}}\) on \(\mathcal{A}_{\mathrm{Lin}}\) by taking the obvious restriction, whereas given \(|\cdot|_{\mathcal{F}}\in\overleftarrow{\mathcal{M}(\mathcal{A}_{\mathrm{Lin}})}\), we start by first extending \(|\cdot|_{\mathcal{F}}\) to a bounded multiplicative seminorm on \(K[T]\), before finally defining \(\widetilde{|\cdot|_{\mathcal{F}}}\) on \(\mathcal{A}\). It is fairly obvious that if both constructions are well-defined, then they are inverse to each other because they are both determined by their values on the linear polynomials (which would complete the proof). In addition, it is clear that the restriction \(|\cdot|_{x}\big{|}_{\mathcal{A}_{\mathrm{Lin}}}\) defines a \(K\)-seminorm, but more effort is needed to check that \(\widetilde{|\cdot|_{\mathcal{F}}}\) satisfies the required properties.
We organise the argument into the following two claims.
**Claim A.1**.: _The extension map \(|\cdot|_{\mathcal{F}}\colon K[T]\to[0,\infty)\) (given by Equation (38)) defines a bounded multiplicative seminorm on \(K[T]\), satisfying the ultrametric inequality._
Proof of Claim.: Our argument relies on the explicit characterisation of \(|\cdot|_{\mathcal{F}}\) to perform the required checks.
_Step 1: Working "level-wise"._ Fix a ball \(B_{q}(k)\in\mathcal{F}\), and define the obvious extension of \(|\cdot|_{B_{q}(k)}\) from \(\mathcal{A}_{\mathrm{Lin}}\) to \(K[T]\):
\[|\cdot|_{B_{q}(k)}\colon K[T] \longrightarrow[0,\infty) \tag{42}\] \[f \longmapsto|c|\cdot\prod_{j=1}^{m}|T-b_{j}|_{B_{q}(k)}\]
Following the cue of Remark 2.17, we may also express \(f\) as a finite power series centred at \(k\):
\[f=\sum_{i=0}^{m}c_{i}(T-k)^{i}, \tag{43}\]
and define a new map
\[\widehat{|\cdot|_{B_{q}(k)}}\colon K[T] \longrightarrow[0,\infty) \tag{44}\] \[f \longmapsto\max_{i}|c_{i}|q^{i}\]
We claim that \(\widehat{|\cdot|_{B_{q}(k)}}\) defines a multiplicative seminorm (though not necessarily bounded, since \(q\) is an arbitrary positive rational). This follows from noting:
* \(\widehat{|0|_{B_{q}(k)}}=0\) _and_ \(\widehat{|1|_{B_{q}(k)}}=1\)_. Obvious.
* \(\widehat{|\cdot|_{B_{q}(k)}}\) _satisfies the ultrametric inequality._ Straightforward, but we elaborate for completeness. Given \(f,f^{\prime}\in K[T]\), assume WLOG that \(\deg(f)=m\geq m^{\prime}=\deg(f^{\prime})\). Then, add their corresponding finite power series (as in Equation (43)) and compute:
\[\begin{aligned}\widehat{|f+f^{\prime}|_{B_{q}(k)}}&=\widehat{\Big|\sum_{i=0}^{m}c_{i}(T-k)^{i}+\sum_{i=0}^{m^{\prime}}c_{i}^{\prime}(T-k)^{i}\Big|_{B_{q}(k)}}\\ &=\widehat{\Big|\sum_{i=0}^{m}(c_{i}+c_{i}^{\prime})(T-k)^{i}\Big|_{B_{q}(k)}}&&\text{[writing $c_{i}^{\prime}=0$ for all $i>m^{\prime}$]}\\ &=\max_{i}|c_{i}+c_{i}^{\prime}|q^{i}&&\text{[by definition of $\widehat{|\cdot|_{B_{q}(k)}}$]}\\ &\leq\max_{i}\big\{\max\{|c_{i}|,|c_{i}^{\prime}|\}\cdot q^{i}\big\}&&\text{[since $|c_{i}+c_{i}^{\prime}|\leq\max\{|c_{i}|,|c_{i}^{\prime}|\}$]}\\ &=\max\{\widehat{|f|_{B_{q}(k)}},\widehat{|f^{\prime}|_{B_{q}(k)}}\}&&\text{[since the $\max$'s commute]}.\end{aligned}\]
* \(\widehat{|\cdot|_{B_{q}(k)}}\) _is multiplicative._ Since \(f\), \(f^{\prime}\) are both polynomials, writing \(f=\sum_{i}c_{i}(T-k)^{i}\) and \(f^{\prime}=\sum_{i}d_{i}(T-k)^{i}\), there exist \(v,w\in\mathbb{N}\) such that \[\widehat{|f|_{B_{q}(k)}}=\max_{i}|c_{i}|q^{i}=|c_{v}|q^{v},\qquad\widehat{|f^{\prime}|_{B_{q}(k)}}=\max_{i}|d_{i}|q^{i}=|d_{w}|q^{w}.\] Pick \(v,w\) to be the first (i.e. smallest) maximising indices.32 For explicitness, we write: \[\widehat{|f|_{B_{q}(k)}}\cdot\widehat{|f^{\prime}|_{B_{q}(k)}}=|c_{v}||d_{w}|q^{v+w},\] (45) \[\widehat{|f\cdot f^{\prime}|_{B_{q}(k)}}=\widehat{\Big|\sum_{n}\Big(\sum_{i}c_{i}d_{n-i}\Big)(T-k)^{n}\Big|_{B_{q}(k)}}=\max_{n}\Big|\sum_{i}c_{i}d_{n-i}\Big|q^{n}.\] (46) The ultrametric inequality on \(K\) almost immediately gives \(\widehat{|f\cdot f^{\prime}|_{B_{q}(k)}}\leq\widehat{|f|_{B_{q}(k)}}\cdot\widehat{|f^{\prime}|_{B_{q}(k)}}\). For the reverse inequality, denote \(u:=v+w\). Note that \(\sum_{i}c_{i}d_{u-i}\) equals \(c_{v}d_{w}\) plus other terms of the form \(c_{i^{\prime}}d_{u-i^{\prime}}\) with \(i^{\prime}\neq v\), and so one of these two cases must occur: **Case 1:** \(i^{\prime}\) _precedes \(v\)._ In which case \(|d_{u-i^{\prime}}|q^{u-i^{\prime}}\leq|d_{w}|q^{w}\) and \(|c_{i^{\prime}}|q^{i^{\prime}}<|c_{v}|q^{v}\); **Case 2:** \(u-i^{\prime}\) _precedes \(w\)._ In which case \(|d_{u-i^{\prime}}|q^{u-i^{\prime}}<|d_{w}|q^{w}\) and \(|c_{i^{\prime}}|q^{i^{\prime}}\leq|c_{v}|q^{v}\). Either way, the ultrametric inequality once again gives \(|\sum_{i}c_{i}d_{u-i}-c_{v}d_{w}|<|c_{v}d_{w}|\), and so \(|\sum_{i}c_{i}d_{u-i}|=|c_{v}d_{w}|\).33 Hence, conclude that \(\widehat{|f|_{B_{q}(k)}}\cdot\widehat{|f^{\prime}|_{B_{q}(k)}}\leq\widehat{|f\cdot f^{\prime}|_{B_{q}(k)}}\), verifying multiplicativity of \(\widehat{|\cdot|_{B_{q}(k)}}\). Finally, we claim that:
Footnote 32: To illustrate: if \(\widehat{|f|_{B_{q}(k)}}=|c_{i}|q^{i}=|c_{i+1}|q^{i+1}\) and \(|c_{j}|q^{j}<|c_{i}|q^{i}\) for all \(j<i\), then we set \(v:=i\) instead of \(i+1\).
Footnote 33: Why? This follows from the fact that \(|x+y|=\max\{|x|,|y|\}\) whenever \(|x|<|y|\), provided \(|\cdot|\) is ultrametric. To prove the claim, notice \(|y|\leq\max\{|x+y|,|-x|\}\), and so \(|x|<|y|\) implies \(|y|\leq|x+y|\). But since \(|x+y|\leq\max\{|x|,|y|\}=|y|\), conclude \(|y|=|x+y|\).
\[|T-a|_{B_{q}(k)}=\max\{|k-a|,q\}=\widehat{|T-a|_{B_{q}(k)}},\qquad\quad\text{for any }T-a. \tag{48}\]
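For the second equality, a short verification: expanding \(T-a=(T-k)+(k-a)\) as in (43) gives coefficients \(c_{1}=1\) and \(c_{0}=k-a\), so by definition \[\widehat{|T-a|_{B_{q}(k)}}=\max\{|c_{0}|q^{0},\,|c_{1}|q^{1}\}=\max\{|k-a|,\,q\}.\]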
Hence, conclude that \(|\cdot|_{B_{q}(k)}\) is in fact a multiplicative seminorm satisfying the ultrametric inequality.
_Step 2: Lifting to \(\mathcal{F}\)._ The argument is similar to the proof of Claim 2.18. Showing that \(|\cdot|_{\mathcal{F}}\) is a multiplicative seminorm on \(K[T]\) satisfying the ultrametric inequality amounts to checking a list of properties. But by Step 1, we know that these properties already hold for \(|\cdot|_{B_{q}(k)}\), for all \(B_{q}(k)\in\mathcal{F}\). Hence, since \(|\cdot|_{\mathcal{F}}=\inf_{B_{q}(k)\in\mathcal{F}}|\cdot|_{B_{q}(k)}\), observe that these properties are respected by taking \(\inf\)'s, and conclude that they hold for \(|\cdot|_{\mathcal{F}}\) as well.
_Step 3:_\(|\cdot|_{\mathcal{F}}\) _is bounded._ We do not get boundedness of \(|\cdot|_{\mathcal{F}}\) from Step 1, so this must be checked separately. But since \(\mathcal{F}\) is \(R\)-good, this implies
\[|T-a|_{\mathcal{F}}\leq\max\{|a|,R\}=||T-a||, \tag{49}\]
where \(||\cdot||\) is the Gauss norm restricted to \(K[T]\).34 Since all polynomials factor into linear polynomials, and since both \(||\cdot||\) and \(|\cdot|_{\mathcal{F}}\) are multiplicative seminorms (by Step 2), deduce that
Footnote 34: Why? See proof of Claim 2.18.
\[|f|_{\mathcal{F}}\leq||f||,\qquad\qquad f\in K[T]. \tag{50}\]
This finishes the proof of our claim.
**Claim A.2**.: _The construction \(\widehat{|\cdot|_{\mathcal{F}}}\) defines a bounded multiplicative seminorm on \(\mathcal{A}\)._
Proof of Claim.: The fact that \(\widehat{|0|_{\mathcal{F}}}=0\) and \(\widehat{|1|_{\mathcal{F}}}=1\) is obvious by construction. As for the other properties:
* \(\widehat{|\cdot|_{\mathcal{F}}}\) _is a well-defined map._ For \(\widehat{|\cdot|_{\mathcal{F}}}\) to take values in \([0,\infty)\), we need to show that the limit of Equation (39) exists for \(f\in\mathcal{A}\). For explicitness, let \(f=\sum_{i=0}^{\infty}a_{i}T^{i}\). By Claim A.1, we know that \(|\cdot|_{\mathcal{F}}\) is bounded, and also satisfies the ultrametric inequality on \(K[T]\). Hence, for any natural numbers \(M<N\), a telescoping series argument yields \[\big{|}|\sum_{i=0}^{N}a_{i}T^{i}|_{\mathcal{F}}-|\sum_{i=0}^{M}a_{i}T^{i}|_{\mathcal{F}}\big{|}\leq\max_{M+1\leq i\leq N}\{|a_{i}|R^{i}\}.\] (51) Since \(f\in\mathcal{A}\), we know \(|a_{i}|R^{i}\to 0\) by definition. Hence, \(\{|\sum_{i=0}^{n}a_{i}T^{i}|_{\mathcal{F}}\}_{n\in\mathbb{N}}\) is a Cauchy sequence and thus converges to a limit in \([0,\infty)\).
* _Bounded._ Let \(f\in\mathcal{A}\) where \(f=\sum_{i=0}^{\infty}a_{i}T^{i}\). Since \[\lim_{n\to\infty}||\sum_{i=0}^{n}a_{i}T^{i}||=||f||,\] (52) and since \[|\sum_{i=0}^{n}a_{i}T^{i}|_{\mathcal{F}}\leq||\sum_{i=0}^{n}a_{i}T^{i}||, \qquad\text{for all $n$,}\] (53) by Claim A.1, conclude that \(\widehat{|\cdot|_{\mathcal{F}}}\leq||\cdot||\).
* _Ultrametric Inequality._ This also follows from \(|\cdot|_{\mathcal{F}}\) satisfying the ultrametric inequality. Indeed, given \(f,f^{\prime}\in\mathcal{A}\), compute: \[\widehat{|f+f^{\prime}|_{\mathcal{F}}} =\lim_{n\to\infty}|\sum_{i=0}^{n}a_{i}T^{i}+\sum_{i=0}^{n}b_{i}T^{i}|_{\mathcal{F}}\] \[\leq\lim_{n\to\infty}\max\{|\sum_{i=0}^{n}a_{i}T^{i}|_{\mathcal{F}},|\sum_{i=0}^{n}b_{i}T^{i}|_{\mathcal{F}}\}\] \[=\max\{\widehat{|f|_{\mathcal{F}}},\widehat{|f^{\prime}|_{\mathcal{F}}}\}\] with representations \(f=\sum_{i=0}^{\infty}a_{i}T^{i}\) and \(f^{\prime}=\sum_{i=0}^{\infty}b_{i}T^{i}\).
* _Multiplicativity._ For orientation, note \(f\in\mathcal{A}\) converges absolutely with respect to \(||\cdot||\) since \[||f||=\left\|\sum_{i=0}^{\infty}a_{i}T^{i}\right\|=\max_{i}|a_{i}|R^{i}=\left\|\sum_{i=0}^{\infty}|a_{i}|T^{i}\right\|\] (54) and \(|a_{i}|R^{i}\to 0\). Extending this observation, since \(\widehat{|\cdot|_{\mathcal{F}}}\leq||\cdot||\) on \(\mathcal{A}\), one easily adapts the proof of Mertens' Theorem for Cauchy products (for infinite series) to prove multiplicativity of \(\widehat{|\cdot|_{\mathcal{F}}}\).
This completes the proof of the claim.
**Remark A.3**.: Readers familiar with Berkovich geometry may recognise parallels between our proof of Claim A.2 and the standard proof of the homeomorphism
\[\mathbb{A}_{\mathrm{Berk}}^{1}\cong\bigcup_{R>0}\mathcal{M}(K\{R^{-1}T\}).\] |
2309.16983 | Tuning excitation transport in a dissipative Rydberg ring | We demonstrate the flexible tunability of excitation transport in Rydberg
atoms, under the interplay of controlled dissipation and interaction-induced
synthetic flux. Considering a minimum four-site setup -- a triangular
configuration with an additional output site -- we study the transport of a
single excitation, injected into a vertex of the triangle, through the
structure. While the long-range dipole-dipole interactions between the Rydberg
atoms lead to geometry-dependent Peierls phases in the hopping amplitudes of
excitations, we further introduce on-site dissipation to a vertex of the
triangle. As a result, both the chirality and destination of the transport can
be manipulated through the flux and dissipation. In particular, we illustrate a
parameter regime where our Rydberg-ring structure may serve as a switch for
transporting the injected excitation through to the output site. The underlying
mechanism is then analyzed by studying the chiral trajectory of the excitation
and the time-dependent dissipation. | Yiwen Han, Wei Yi | 2023-09-29T05:03:03Z | http://arxiv.org/abs/2309.16983v1 | # Tuning excitation transport in a dissipative Rydberg ring
###### Abstract
We demonstrate the flexible tunability of excitation transport in Rydberg atoms, under the interplay of controlled dissipation and interaction-induced synthetic flux. Considering a minimum four-site setup\(-\)a triangular configuration with an additional output site\(-\)we study the transport of a single excitation, injected into a vertex of the triangle, through the structure. While the long-range dipole-dipole interactions between the Rydberg atoms lead to geometry-dependent Peierls phases in the hopping amplitudes of excitations, we further introduce on-site dissipation to a vertex of the triangle. As a result, both the chirality and destination of the transport can be manipulated through the flux and dissipation. In particular, we illustrate a parameter regime where our Rydberg-ring structure may serve as a switch for transporting the injected excitation through to the output site. The underlying mechanism is then analyzed by studying the chiral trajectory of the excitation and the time-dependent dissipation.
## I Introduction
In recent years, Rydberg atoms have proved to be an ideal candidate for exploring intriguing synthetic quantum matter [1; 2; 3; 4]. The highly flexible geometry facilitated by individual optical-tweezer trapping [5; 6; 7; 8], together with the long-range dipole-dipole interactions [9; 10], offer abundant control degrees of freedom for simulating a variety of many-body quantum models and strongly correlated phenomena, including quantum spin models [11; 12; 13; 14; 15; 16], the topological order [17; 18; 19], dynamic gauge fields [20; 21; 22; 23; 24], and quantum spin liquids [25; 26; 27; 28; 29]. Particularly, under dipolar exchange interactions, Rydberg atoms can be modeled as few-level systems in the \(nS\) and \(nP\) manifolds, with excitations in the \(nP\) states hopping between different atoms that are often spatially separated [30]. When more than one excited states per atom are involved, the dipole interaction couples the internal degrees of freedom with the center-of-mass motion of an excitation, leading to synthetic gauge fields (in the form of either spin-orbit coupling or the Peierls phase) that are responsible for the topological bands [31; 32; 33] and chiral motion [34; 30; 35] of the Rydberg excitations.
While the Rydberg excitations are known for their remarkable life time, they are also amenable to engineered dissipation, for instance, by coupling the \(nP\) state to some short-lived intermediate states. This would open up fresh possibilities for emulating quasiparticles in solid-state materials, where the quasiparticle excitations naturally acquire a finite life time due to their interactions with the background environment. The introduction of dissipation also makes connection with the ongoing study of non-Hermitian physics [36; 37], where the interplay of synthetic gauge field and dissipation often gives rise to intriguing dynamics.
In this work, we study the transport of a single Rydberg excitation through a triangular ring structure, where the dipole-induced synthetic flux and dissipation offer flexible control over the transport dynamics. In a recent experiment, the chiral motion of a single excitation was observed in a minimum triangular setup of Rydberg atoms [30]. The chiral motion relies on the spin-orbit coupling intrinsic to the dipole interactions, as well as external magnetic and electric fields that break the time-reversal symmetry and the degeneracy of the \(nP\) states. Building upon the experiment, we consider a four-site configuration, where a triangular ring is attached to an output site. Under the long-range dipolar exchange interaction, the hopping amplitude of the excitation between any two sites acquires a Peierls phase that is sensitive to the relative position of the sites [30]. Since the four-site configuration we consider contains three triangles, each threaded by a geometry-dependent synthetic magnetic flux, the transport dynamics of the excitation is a consequence of the competition between the various fluxes. Specifically, we consider the dynamics of a single excitation injected into a vertex of the triangle, transported through the triangular ring structure to the output site. In the absence of dissipation, the flux-dependent chiral motion in the simple triangular case becomes more complicated, and shows a counter-clockwise (CCW) tendency, regardless of the flux parameters. We then introduce a local on-site dissipation to one of the sites (other than the output site), and show that, through the simultaneous tuning of the flux and dissipation, the chiral motion of the excitation can be restricted to within different triangular sites, such that the transport to the output site can be switched on demand. We characterize the transport dynamics through the chiral trajectory of the excitation, as well as the total loss rate of the excitation, which offer a transparent picture for the tunable transport dynamics. Our findings suggest the interplay between the interaction-induced synthetic gauge field and dissipation provides unique opportunities for quantum control and simulation in Rydberg atoms.
Our work is organized as follows. In Sec. II, we introduce the four-site configuration as well as the resulting model Hamiltonian. We present the main results
in Sec. III and Sec. IV, where we study the transport dynamics either in the presence or absence of dissipation. We also show the ideal parameter regime to control the chiral motion and transport dynamics in Sec. IV. In Sec. V, we analyze the transport dynamics using chiral trajectory and decay rate, with increasing dissipation. We summarize in Sec. VI.
## II Configuration and Model
Our system comprises four \({}^{87}\)Rb atoms confined in optical tweezers. For each atom, the three relevant Rydberg states in the \(60S_{1/2}\) and \(60P_{3/2}\) manifolds are marked by bold black lines in Fig. 1(a). Following Ref. [30], we label the states as \(\ket{+}=\ket{60P_{3/2},m_{j}=3/2}\), \(\ket{-}=\ket{60P_{3/2},m_{j}=-1/2}\), and \(\ket{0}=\ket{60S_{1/2},m_{j}=1/2}\), where states \(\ket{\pm}\) correspond to the two internal states of the Rydberg excitation, and \(\ket{0}\) indicates absence of excitation. We assume the states \(\ket{\pm}\) are isolated from others in the \(60P_{3/2}\) manifold by static magnetic and electric fields (in the \(z\) direction), and the degeneracy of the two states is broken, with an energy difference \(\Delta=E_{-}-E_{+}\).
We consider a four-site configuration in the \(x\)-\(y\) plane, as illustrated in Fig. 1(b). Atoms on sites 1, 2 and 3 form an equilateral triangle (each side of length \(r_{0}\)), while a fourth atom is located on the output site (labeled 4). The distance between atoms 3 and 4 is also \(r_{0}\).
The excitation transport between any two sites \(i\) and \(j\) is driven by the dipole-dipole interaction [38]
\[\hat{V}_{ij} = \frac{1}{4\pi\epsilon_{0}r_{ij}^{3}}\left[\hat{d}_{i}^{z}\hat{d}_ {j}^{z}+\frac{1}{2}\left(\hat{d}_{i}^{+}\hat{d}_{j}^{-}+\hat{d}_{i}^{-}\hat{d} _{j}^{+}\right)\right. \tag{1}\] \[\left.-\frac{3}{2}\left(\hat{d}_{i}^{+}\hat{d}_{j}^{+}e^{-i2 \varphi_{ij}}+\hat{d}_{i}^{-}\hat{d}_{j}^{-}e^{i2\varphi_{ij}}\right)\right],\]
where the relative position of the atoms \(r_{ij}=\ket{\mathbf{r}_{j}-\mathbf{r}_{i}}\), and the position-dependent phase \(\varphi_{ij}=\arg\left(\mathbf{r}_{j}-\mathbf{r}_{i}\right)\). Components of the dipole operator are given by \(\hat{d}_{i}^{x}\), \(\hat{d}_{i}^{y}\), \(\hat{d}_{i}^{z}\), and \(\hat{d}_{i}^{\pm}:=\mp\left(\hat{d}_{i}^{x}\pm i\hat{d}_{i}^{y}\right)/\sqrt{2}\). Of particular interest here are the last terms in the brackets, namely \(\hat{d}_{i}^{+}\hat{d}_{j}^{+}e^{-i2\varphi_{ij}}\) and its conjugate, which formally represent the dipole-induced spin-orbit coupling where the internal-state degrees of freedom and the center-of-mass motion (represented by the phase) are coupled.
We then introduce light-induced dissipation to the excited \(\ket{\pm}\) states of atom 1, characterized by a decay rate \(\gamma\). This can be engineered by resonantly coupling these long-lived excited states to short-lived states, for instance, in the \(6S\) manifold. Dynamics of atoms in the \(60S_{1/2}\) and \(60P_{3/2}\) manifolds are hence driven by an effective non-Hermitian Hamiltonian denoted as \(H_{\text{full}}\), which resides in the single-excitation subspace spanned by the states \(\{\ket{\pm 000},\ket{0\pm 00},\ket{00\pm 0},\ket{000\pm}\}\) (see Appendix). For the dynamic transport, we initialize the system in the state \(\ket{0-00}\), where atom 2 is in the \(\ket{-}\) excited state, while the other atoms are in the ground \(\ket{0}\) state. Assuming a large energy difference \(\Delta\) between the \(\ket{+}\) and \(\ket{-}\) excitations, the system should primarily remain in the subspace with the basis states \(\{\ket{-000},\ket{0-00},\ket{00-0},\ket{000-}\}\). We then derive the effective Hamiltonian by eliminating the \(\ket{+}\) excitations, which, in a second-quantized form, reads
\[H_{\text{eff}}=\sum_{i}\varepsilon_{i}a_{i}^{\dagger}a_{i}+\sum_{ij}t_{ij}e^{i \phi_{ij}}a_{i}^{\dagger}a_{j}. \tag{2}\]
Here \(a_{i}^{\dagger}\) (\(a_{i}\)) creates (annihilates) an excitation on site \(i\) in the state \(\ket{-}\). The derivation and expressions for the self-energy terms \(\varepsilon_{i}\) and the hopping terms \(t_{ij}e^{i\phi_{ij}}\) are given in the Appendix.
Importantly, the hopping amplitudes of the excitations carry Peierls phases \(\phi_{ij}\), which contribute to the synthetic fluxes \(\Phi_{ijk}\) shown in Fig. 1(b). More explicitly, we have the expression \(\Phi_{ijk}\equiv\phi_{ij}+\phi_{jk}+\phi_{ki}\), defined within the range of \((-\pi,\pi]\). Since we take a second-order perturbation and eliminate the \(\ket{+}\) excitations, the phase \(\phi_{ij}\) includes contributions from both the direct hopping of the \(\ket{-}\) excitation from site \(i\) to site \(j\) (that is, \(\ket{-}_{i}\rightarrow\ket{-}_{j}\)), and those mediated by a \(\ket{+}\) excitation on a third site \(k\) (that is, \(\ket{-}_{i}\rightarrow\ket{+}_{k}\rightarrow\ket{-}_{j}\)). For an explicit example regarding the transition from \(\ket{0-00}\) to \(\ket{00-0}\), see Fig. 1(c). As a result, \(\phi_{14}\neq\phi_{13}+\phi_{34}\) as they represent distinct physical processes. Likewise, \(\Phi_{124}\neq\Phi_{123}+\Phi_{324}\), unlike the Raman-induced Peierls phases for neutral atoms.
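A minimal numerical sketch of this effective model may help fix ideas (Python/NumPy with SciPy; the hopping magnitudes, Peierls phases and decay rate below are placeholder values rather than the ones derived in the Appendix). It assembles the \(4\times 4\) non-Hermitian matrix of Eq. (2), computes a loop flux, evolves the initial state \(\ket{0-00}\), and records the site occupations \(\rho_{i}(\tau)\):

```python
import numpy as np
from scipy.linalg import expm

# Placeholder parameters in units of t_-; the actual values follow from the
# Appendix expressions once s_phi and gamma are chosen.
t_mag = np.array([[0.0,   1.0,   1.0,   0.125],
                  [1.0,   0.0,   1.0,   0.192],
                  [1.0,   1.0,   0.0,   1.0  ],
                  [0.125, 0.192, 1.0,   0.0  ]])      # hopping magnitudes t_ij
phi = np.array([[ 0.0,  0.3, -0.2,  0.1 ],
                [-0.3,  0.0,  0.4, -0.1 ],
                [ 0.2, -0.4,  0.0,  0.25],
                [-0.1,  0.1, -0.25, 0.0 ]])           # Peierls phases, phi_ji = -phi_ij
gamma = 1.0
eps = np.array([-1j * gamma, 0.0, 0.0, 0.0])          # self-energies; loss on site 1

H_eff = np.diag(eps) + t_mag * np.exp(1j * phi)       # Eq. (2) in matrix form

def loop_flux(i, j, k):
    """Phi_ijk = phi_ij + phi_jk + phi_ki, wrapped into (-pi, pi]."""
    total = phi[i, j] + phi[j, k] + phi[k, i]
    return -((-total + np.pi) % (2 * np.pi) - np.pi)

psi0 = np.array([0, 1, 0, 0], dtype=complex)          # excitation injected on site 2
taus = np.linspace(0.0, 10.0, 201)                    # time in units of hbar/t_-
rho = np.array([np.abs(expm(-1j * H_eff * tau) @ psi0) ** 2 for tau in taus])
loss = 1.0 - rho.sum(axis=1)                          # total population loss P_l(tau)
```

The chirality of the resulting motion can then be read off by comparing, e.g., `loop_flux(0, 1, 2)` and `loop_flux(2, 1, 3)` (the triangles formed by sites 1, 2, 3 and 3, 2, 4 of Fig. 1(b), in zero-based indexing).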
Figure 1: Synthetic flux induced by dipolar exchange interaction. (a) Schematic Zeeman structure in the relevant Rydberg manifolds, \(60S_{1/2}\) and \(60P_{3/2}\). The sub-figures from left to right respectively illustrate the original level scheme considering the quantum-defect effect, the level scheme under the Zeeman shifts, and that affected the dc stark effect additionally. (b) Four atoms are confined in the \(x\)-\(y\) plane. Excitations are injected into atom 2, and an on-site dissipation is introduced to atom 1. The interaction-induced synthetic fluxes \(\Phi_{ijk}\) are defined in the main text. (c) The second-order processes described by the colored dashed lines are included in the self-energy of the \(\ket{0-00}\) state. The transition from \(\ket{0-00}\) to \(\ket{00-0}\) consists of two processes: a direct hopping with an amplitude \(t_{-}\) (black solid arrow), and the virtual hoppings through \(\ket{+000}\) or \(\ket{000+}\) (black dashed lines). Here the notation \(\ket{\pm 000}\) indicates the first atom in an excited \(\ket{\pm}\) state.
In Fig. 2(a), we show the variation of the synthetic fluxes as functions of a dimensionless control parameter \(s_{\varphi}=\tilde{\Delta^{2}}/\tilde{w}\), where \(\tilde{\Delta}:=\Delta/t_{-}\) and \(\tilde{w}:=w/t_{-}\) (see Appendix for the definition and detailed derivation of \(t_{-}\) and \(w\)). The discontinuity near \(s_{\varphi}=-0.15\) in \(\Phi_{124}\) originates from a vanishing \(t_{41}\), due to the destructive interference of different processes [see Figs. 2(b)(c)]. Note that while we take \(\gamma=0\) in Fig. 2, the dependence of the fluxes on \(s_{\varphi}\) does not change dramatically under moderate \(\gamma\). In a simple triangular configuration, the sign of the flux determines the direction of chiral motion of the excitation. Likewise, in our setup, multiple synthetic fluxes compete with one another to influence the chiral motion of the injected excitation. As we illustrate below, the dominant impact comes from the competition between \(\Phi_{123}\) and \(\Phi_{324}\), because of the relatively larger hopping rates involved.
## III Dissipationless dynamics under the synthetic fluxes
We first study the transport of the injected excitation in the absence of dissipation, with \(\gamma=0\). In Figs. 3(a)(c), we show the time-evolved probabilities \(\rho_{i}\) for the excitation to occupy the \(i\)th site. Correspondingly, in Figs. 3(b)(d), we show the normalized trajectory of the excitation, which is defined as
\[\bar{x} = \frac{\sum_{i=1}^{4}x_{i}\rho_{i}}{r_{0}\sum_{i=1}^{4}\rho_{i}}, \tag{3}\] \[\bar{y} = \frac{\sum_{i=1}^{4}y_{i}\rho_{i}}{r_{0}\sum_{i=1}^{4}\rho_{i}}, \tag{4}\]
where \((x_{i},\ y_{i})\) are the coordinates of site \(i\) (in units of \(r_{0}\)), \(\tau\) represents the time of evolution, and \((\bar{x},\ \bar{y})\) are the dimensionless coordinates of the normalized trajectory shown in Figs. 3(b)(d).
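Continuing the numerical sketch above (again Python, reusing the `rho` array; the site coordinates are placed by hand, in a geometry consistent with the distance factors used in the Appendix), Eqs. (3)-(4) amount to an occupation-weighted, renormalized mean position:

```python
import numpy as np

# Site coordinates in units of r_0: sites 1, 2, 3 form an equilateral triangle and
# site 4 lies a distance r_0 beyond site 3 (so r_14 = 2 r_0 and r_24 = sqrt(3) r_0).
# Since the coordinates are already dimensionless, the r_0 factor of Eq. (3) is absorbed.
coords = np.array([[0.0, 0.0],                 # site 1
                   [1.0, 0.0],                 # site 2
                   [0.5, np.sqrt(3) / 2],      # site 3
                   [1.0, np.sqrt(3)]])         # site 4 (output)

def normalized_trajectory(rho):
    """Eqs. (3)-(4): divide out the lost population, then average the positions."""
    weights = rho / rho.sum(axis=1, keepdims=True)
    return weights @ coords                    # rows are (x_bar, y_bar) versus tau

# Example: xy = normalized_trajectory(rho), with rho from the previous sketch.
```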
Notably, in Fig. 3(d), the trajectory is mostly within the triangle formed by sites \(123\) (denoted as triangle \(\Delta_{123}\)), meaning the dominance of the occupation in these sites at all times. This is in contrast to Fig. 3(b), where the trajectory is dominantly within the triangle \(\Delta_{234}\), though the occupation of site \(1\) can be dominant at intermediate times. In both cases, the trajectories exhibit a CCW tendency.
Under the parameters of Figs. 3(a)(b), \(\Phi_{123}<0\), \(\Phi_{324}>0\), and \(\Phi_{124}>0\). Notice that \(\Phi_{ijk}>0\) (\(\Phi_{ijk}<0\)) should lead to CCW (CW) chiral motion within the triangle \(\Delta_{ijk}\), with the strongest CCW or CW tendency under \(\Phi_{ijk}=\pm\pi/2\)[30; 39]. We therefore attribute the transport phenomena to the competition between \(\Phi_{123}\) and \(\Phi_{324}\). For instance, in Figs. 3(a)(b), the excitation motion tends to be CW within the triangle \(\Delta_{123}\), and CCW within the triangle \(\Delta_{324}\). As a result, the excitation injected from site \(2\) would predominantly flow to site \(3\), and subsequently to site \(4\), since \(|\Phi_{324}|\) is closer to \(\pi/2\) than \(|\Phi_{123}|\). This is further facilitated by \(\Phi_{124}>0\), which also favors a CCW transport.
On the other hand, in Figs. 3(c)(d), \(\Phi_{123}>0\), \(\Phi_{324}<0\), and \(\Phi_{124}>0\). The excitation motion tends to be CCW within the triangle \(\Delta_{123}\), and CW within the triangle \(\Delta_{324}\). Since the hopping amplitude between sites \(2\) and \(4\) is smaller than that between sites \(2\) and \(1\), and since \(\Phi_{124}>0\), the excitation injected from site 2 would predominantly flow to site 1. Further, once the excitation moves to site 3, both \(\Phi_{123}\) and \(\Phi_{324}\) favor the subsequent transport to site 2. These factors lead to a CCW transport with very small occupation of site 4.
Figure 2: Manipulating the synthetic flux. (a) Variations of the synthetic fluxes under the control of \(s_{\varphi}\). Here we take a vanishing decay rate \(\gamma=0\). The asterisks indicate the values of synthetic fluxes for calculations in Fig. 3. (b) An exemplary illustration for constructing the hopping amplitude (up to second-order processes) from atom \(4\) to \(1\), where all components lie on the real axis. (c) Variation of hopping amplitude from site \(4\) to \(1\) under the control of \(s_{\varphi}\). When \(t_{41}\) crosses zero, the phase \(\phi_{41}\) undergoes a discontinuous jump from \(\pi\) to \(0\), leading to the discontinuity observed in \(\Phi_{124}\), as shown in (a).
Figure 3: The transport dynamics of a single excitation in the absence of dissipation. (a) Time evolution of the occupation probabilities of states \(|-000\rangle\), \(|0-00\rangle\), \(|00-0\rangle\), and \(|000-\rangle\), with \(s_{\varphi}=0.3\). Solid lines represent simulation results under the effective Hamiltonian, while dashed lines correspond to results under the full Hamiltonian (see Appendix), which additionally includes the population of all relevant \(|+\rangle\) excitations. (b) The corresponding normalized trajectory of the excitation for the evolution in (a), where the change in color from dark to light represents the passage of time. (c) Time evolution of the probabilities with \(s_{\varphi}=-0.3\). (d) The corresponding normalized trajectory of the excitation for the evolution in (c). All relevant synthetic fluxes are indicated by asterisks in Fig. 2(a), and we take \(\hbar/t_{-}\) as the unit of time.
## IV Dynamics with dissipation
We now switch on the on-site light-induced dissipation at site 1. When the decay rate \(\gamma\) is small, the trajectories remain qualitatively similar to the case of \(\gamma=0\), as shown in Figs. 4(a)(c). However, the two trajectories are now predominantly limited within the triangles \(\Delta_{324}\) (for \(s_{\varphi}=0.3\)) and \(\Delta_{123}\) (for \(s_{\varphi}=-0.3\)), respectively. This gives rise to a considerable difference in the time-averaged population of site 4, as we will explicitly demonstrate later. The difference between the trajectories in Fig. 3(b) and Fig. 4(b) arises from the interplay of dissipation and synthetic flux. To further illustrate this point, we define the total population loss as \(\mathcal{P}_{l}=1-\sum_{i=1}^{4}\rho_{i}\), as well as the loss rate \(\partial\mathcal{P}_{l}/\partial\tau\). In Figs. 4(a)(c), peaks of the loss rate \(\partial\mathcal{P}_{l}/\partial\tau\) almost overlap with the occupation of site 1, suggesting a rapid loss from site 1.
Figure 4: The transport dynamics of a single excitation in the presence of dissipation. (a) Time evolution of occupation probabilities, for \(\gamma/t_{-}=1\) and \(s_{\varphi}=0.3\). Solid lines represent simulation results under the effective Hamiltonian, while the short-dashed lines correspond to results under the full Hamiltonian. (b) The corresponding normalized trajectory of the excitation for the evolution in (a). (c) Time evolution of occupation probabilities, for \(\gamma/t_{-}=1\) and \(s_{\varphi}=-0.3\). (d) The corresponding normalized trajectory of the excitation for the evolution in (c). (e)(g) Time evolution of occupation probabilities, for \(\gamma/t_{-}=10\) and \(s_{\varphi}=\pm 0.3\). (f)(h) The corresponding normalized trajectories of the excitation for the evolution in (e) and (g), respectively. In (a)(c)(e)(g), the gray long-dashed lines represent the population loss rate (see main text for definition), with values referenced to the right \(y\) axis.
Figure 5: (a) Time-averaged population of the output site as a function of the control parameter \(s_{\varphi}\), with a total evolution time \(10\hbar/t_{-}\). (b) The total population loss during an evolution period of \(10\hbar/t_{-}\). In both panels, the colored lines represent different decay rate, the solid lines are results calculated from the effective Hamiltonian \(H_{\text{eff}}\), and the dashed lines correspond to those from the full Hamiltonian.
Figure 6: Regime of tunable transport. Calculated \(\bar{\rho}_{4}\) and \(\mathcal{P}_{l}\) (see main text for definition) for different evolution time \(\tau\) and decay rate \(\gamma\). The black solid (dashed) line is the calculated \(\bar{\rho}_{4}\) for evolution time \(\tau=10\hbar/t_{-}\) (\(\tau=20\hbar/t_{-}\)). The gray solid (dashed) line is \(\mathcal{P}_{l}\) for evolution time \(\tau=10\hbar/t_{-}\) (\(\tau=20\hbar/t_{-}\)).
In the case of Figs. 4(a)(b), the dissipation depletes the occupation in site 1, which limits the trajectory to the triangle \(\Delta_{324}\), giving rise to a large time-averaged population of site 4. By contrast, in the case of Figs. 4(c)(d), the appreciable population of site 1 at short times under the CCW chiral motion leads to significant loss. The time-averaged transfer to site 4 becomes suppressed.
When the decay rate is further increased, the loss is so significant that the occupation in site 1 is completely suppressed. Notice that this already occurs when the system is still far from the quantum Zeno regime (see discussions in the next section). This is the case for \(\gamma/t_{-}=10\), as shown in Figs. 4(e)(g), where the vanishing population of site 1 is accompanied by large loss (long-dashed lines). The trajectories, on the other hand, become limited to the triangle \(\Delta_{324}\). Importantly, throughout this intermediate-\(\gamma\) regime, the time-averaged transport to site 4 can be switched on and off by tuning the flux parameter \(s_{\varphi}\).
## V Further analysis of loss
To provide further evidence for the analysis above, in Fig. 5, we show the time-averaged population of the output site under different decay rates, for which we define
\[\bar{\rho}_{4}=\frac{1}{T}\int_{0}^{T}d\tau\rho_{4}(\tau), \tag{5}\]
which is a function of the control parameter \(s_{\varphi}\). As shown in Fig. 5(a), in the presence of a finite \(\gamma\) (but not large enough to reach the quantum Zeno regime), the transport to site 4 remains small but for an intermediate regime in \(s_{\varphi}\). Such a regime also features suppressed loss, as illustrated in Fig. 5(b). By contrast, in the regime where the transport to site 4 is suppressed, the loss becomes significant. Hence, combining dissipation and synthetic flux, we achieve an efficient control over the transport of injected excitation from site 2 to the output site 4.
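Numerically, Eq. (5) (and the corresponding loss statistics) can be evaluated directly from the simulated occupations; a brief sketch in Python, continuing the arrays defined above:

```python
import numpy as np

def time_averaged_population(taus, rho_site):
    """Eq. (5): (1/T) * integral of rho(tau) over [0, T], via the trapezoidal rule."""
    return np.trapz(rho_site, taus) / (taus[-1] - taus[0])

# Example: rho4_bar = time_averaged_population(taus, rho[:, 3])   # output site 4
#          total_loss = 1.0 - rho[-1].sum()                       # P_l at the final time
```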
Finally, in Fig. 6, we show the range of variation of \(\bar{\rho}_{4}\) and \(\mathcal{P}_{l}\) for \(s_{\varphi}\in[-1,1]\). For this purpose, we define \(\Delta\bar{\rho}_{4}=\max\left(\bar{\rho}_{4}\right)-\min\left(\bar{\rho}_{4}\right)\) and \(\Delta\mathcal{P}_{l}=\max\left(\mathcal{P}_{l}\right)-\min\left(\mathcal{P}_ {l}\right)\), where the function max (min) gives the maximum (minimum) value of the corresponding variable when \(s_{\varphi}\) varies in the range of \([-1,1]\).
A regime of tunable transport to site 4 is indicated by a large \(\Delta\bar{\rho}_{4}\). The overlap of the peaks of \(\Delta\bar{\rho}_{4}\) and \(\Delta\mathcal{P}_{l}\) suggests that the tunable transport is inseparable from large variations in the dissipation. In the large-\(\gamma\) limit, the transport dynamics become restricted to the triangle \(\Delta_{324}\), as both the occupation of site 1 and the total loss are significantly suppressed due to the quantum Zeno effect [40]. The tunable transport is no longer achievable in this regime.
## VI Conclusion
We show that the interplay of interaction-induced synthetic flux and dissipation gives rise to tunable excitation dynamics in Rydberg atoms. Considering a four-site configuration as a concrete example, we demonstrate how the injected single excitation can be transported to an output site, by tuning the flux and dissipation. This is further confirmed by characterizing the total loss rate and probability evolution throughout the transport dynamics. While the light-induced dissipation we consider here gives rise to non-Hermiticity through post-selection, it is also interesting to study the transport dynamics in the context of quantum open systems, where the density-matrix dynamics of the system is driven by Liouvillian superoperators [41; 42]. Another interesting scenario is the dynamics of several excitations, where the interaction-induced gauge field becomes density-dependent and acquires its own dynamics in the context of quantum many-body open systems.
###### Acknowledgements.
This research is supported by the Natural Science Foundation of China (Grant No. 11974331).
## Appendix: Derivation of the effective non-Hermitian Hamiltonian
In this Appendix, we derive the effective Hamiltonian from the full Hamiltonian where both \(\ket{\pm}\) excitations are considered. According to Fig. 1, in the subspace spanned by the states \(\{\ket{\pm 000},\ket{0\pm 00},\ket{00\pm 0},\ket{000\pm}\}\), the system
is described by the following full Hamiltonian
\[H_{\text{full}}=\left(\begin{array}{cccccccc}\frac{\Delta}{2}-i\gamma&t_{-}&t_{-}&\frac{t_{-}}{8}&0&we^{2i\varphi_{12}}&we^{2i\varphi_{13}}&\frac{we^{2i\varphi_{14}}}{8}\\ t_{-}&\frac{\Delta}{2}&t_{-}&\frac{t_{-}}{3\sqrt{3}}&we^{2i\varphi_{21}}&0&we^{2i\varphi_{23}}&\frac{we^{2i\varphi_{24}}}{3\sqrt{3}}\\ t_{-}&t_{-}&\frac{\Delta}{2}&t_{-}&we^{2i\varphi_{31}}&we^{2i\varphi_{32}}&0&we^{2i\varphi_{34}}\\ \frac{t_{-}}{8}&\frac{t_{-}}{3\sqrt{3}}&t_{-}&\frac{\Delta}{2}&\frac{we^{2i\varphi_{41}}}{8}&\frac{we^{2i\varphi_{42}}}{3\sqrt{3}}&we^{2i\varphi_{43}}&0\\ 0&we^{-2i\varphi_{12}}&we^{-2i\varphi_{13}}&\frac{we^{-2i\varphi_{14}}}{8}&-\frac{\Delta}{2}-i\gamma&t_{+}&t_{+}&\frac{t_{+}}{8}\\ we^{-2i\varphi_{21}}&0&we^{-2i\varphi_{23}}&\frac{we^{-2i\varphi_{24}}}{3\sqrt{3}}&t_{+}&-\frac{\Delta}{2}&t_{+}&\frac{t_{+}}{3\sqrt{3}}\\ we^{-2i\varphi_{31}}&we^{-2i\varphi_{32}}&0&we^{-2i\varphi_{34}}&t_{+}&t_{+}&-\frac{\Delta}{2}&t_{+}\\ \frac{we^{-2i\varphi_{41}}}{8}&\frac{we^{-2i\varphi_{42}}}{3\sqrt{3}}&we^{-2i\varphi_{43}}&0&\frac{t_{+}}{8}&\frac{t_{+}}{3\sqrt{3}}&t_{+}&-\frac{\Delta}{2}\end{array}\right). \tag{10}\]
The parameter \(\Delta\) is the energy difference between the state \(\left|+\right\rangle\) and the state \(\left|-\right\rangle\). The hopping amplitudes are given by
\[t_{\pm} = \frac{\left|\left\langle\pm\right|\hat{d}^{+}\left|0\right\rangle \right|^{2}}{8\pi\epsilon_{0}r_{0}^{3}} \tag{11}\] \[w = -\frac{3\left\langle+\right|\hat{d}^{+}\left|0\right\rangle \left\langle 0\right|\hat{d}^{+}\left|-\right\rangle}{8\pi\epsilon_{0}r_{0}^{3}}. \tag{12}\]
Additionally, the van der Waals interaction is neglected in the parameter regime we consider, since it is much weaker compared to the resonant dipole-dipole interaction.
By defining the creation and annihilation operators on site \(i\), \(a_{i}^{\dagger}\left|0\right\rangle=\left|-\right\rangle_{i}\) and \(b_{i}^{\dagger}\left|0\right\rangle=\left|+\right\rangle_{i}\), the Hamiltonian is expressed as
\[H_{\text{full}}=-i\gamma\left(a_{1}^{\dagger}a_{1}+b_{1}^{\dagger}b_{1}\right) +\frac{\Delta}{2}\sum_{i}\left(a_{i}^{\dagger}a_{i}-b_{i}^{\dagger}b_{i}\right) +\sum_{i\neq j}\left(\frac{r_{0}}{r_{ij}}\right)^{3}\left(t_{-}a_{i}^{\dagger} a_{j}+t_{+}b_{i}^{\dagger}b_{j}+we^{2i\varphi_{ij}}a_{i}^{\dagger}b_{j}+we^{-2i \varphi_{ij}}b_{i}^{\dagger}a_{j}\right). \tag{13}\]
We work in the regime with \(\Delta\gg t_{\pm},w\), so that when we take \(\left|0-00\right\rangle\) as the initial state, the system primarily remains in the subspace spanned by the states \(\left\{\left|-000\right\rangle,\left|0-00\right\rangle,\left|00-0\right\rangle,\left|000-\right\rangle\right\}\). Adiabatically eliminating the \(\left|+\right\rangle\) excitations thus leads to the effective Hamiltonian
\[H_{\text{eff}}=\sum_{i}\varepsilon_{i}a_{i}^{\dagger}a_{i}+\sum_{ij}t_{ij}e^{ i\phi_{ij}}a_{i}^{\dagger}a_{j}, \tag{14}\]
where \(\varepsilon_{i}\) are the self-energy terms
\[\varepsilon_{1} = -i\gamma+\frac{\Delta}{2}+\frac{w^{2}}{\Delta}\left[e^{2i(\varphi _{12}-\varphi_{21})}+e^{2i(\varphi_{13}-\varphi_{31})}+\frac{1}{64}e^{2i( \varphi_{14}-\varphi_{41})}\right] \tag{15}\] \[= -i\gamma+\frac{\Delta}{2}+\frac{w^{2}}{\Delta}\frac{129}{64},\] \[\varepsilon_{2} = \frac{\Delta}{2}+\frac{w^{2}}{\Delta}\left[\frac{\Delta}{\Delta+ i\gamma}e^{2i(\varphi_{21}-\varphi_{12})}+e^{2i(\varphi_{23}-\varphi_{32})}+ \frac{1}{27}e^{2i(\varphi_{24}-\varphi_{42})}\right]\] (16) \[= \frac{\Delta}{2}+\frac{w^{2}}{\Delta}\left(\frac{\Delta^{2}-i \Delta\gamma}{\Delta^{2}+\gamma^{2}}+\frac{28}{27}\right),\] \[\varepsilon_{3} = \frac{\Delta}{2}+\frac{w^{2}}{\Delta}\left[\frac{\Delta}{\Delta+ i\gamma}e^{2i(\varphi_{31}-\varphi_{12})}+e^{2i(\varphi_{32}-\varphi_{23})}+e^{2i( \varphi_{34}-\varphi_{43})}\right]\] (17) \[= \frac{\Delta}{2}+\frac{w^{2}}{\Delta}\left(\frac{\Delta^{2}-i \Delta\gamma}{\Delta^{2}+\gamma^{2}}+2\right),\] \[\varepsilon_{4} = \frac{\Delta}{2}+\frac{w^{2}}{\Delta}\left[\frac{\Delta}{\Delta+ i\gamma}\frac{1}{64}e^{2i(\varphi_{41}-\varphi_{14})}+\frac{1}{27}e^{2i(\varphi_{42}- \varphi_{24})}+e^{2i(\varphi_{43}-\varphi_{34})}\right]\] (18) \[= \frac{\Delta}{2}+\frac{w^{2}}{\Delta}\left(\frac{\Delta^{2}-i \Delta\gamma}{\Delta^{2}+\gamma^{2}}\frac{1}{64}+\frac{28}{27}\right), \tag{19}\]
and \(t_{ij}e^{i\phi_{ij}}\) are the hopping terms, which satisfy \(t_{ji}=t_{ij}\) and \(\phi_{ji}=-\phi_{ij}\), with
\[t_{12}e^{i\phi_{12}} =t_{-}+\frac{w^{2}}{\varDelta}\left(e^{2i(\varphi_{23}-\varphi_{31 })}+\frac{1}{24\sqrt{3}}e^{2i(\varphi_{24}-\varphi_{41})}\right)\] \[=t_{-}+\frac{w^{2}}{\varDelta}\left[\left(-\frac{1}{2}-i\frac{ \sqrt{3}}{2}\right)+\frac{1}{24\sqrt{3}}\left(\frac{1}{2}-i\frac{\sqrt{3}}{2} \right)\right], \tag{10}\] \[t_{13}e^{i\phi_{13}} =t_{-}+\frac{w^{2}}{\varDelta}\left(e^{2i(\varphi_{32}-\varphi_{ 21})}+\frac{1}{8}e^{2i(\varphi_{34}-\varphi_{41})}\right)\] \[=t_{-}+\frac{w^{2}}{\varDelta}\left[\left(-\frac{1}{2}+i\frac{ \sqrt{3}}{2}\right)+\frac{1}{8}\right],\] (11) \[t_{14}e^{i\phi_{14}} =\frac{t_{-}}{8}+\frac{w^{2}}{\varDelta}\left(\frac{1}{3\sqrt{3 }}e^{2i(\varphi_{42}-\varphi_{21})}+e^{2i(\varphi_{43}-\varphi_{31})}\right)\] \[=\frac{t_{-}}{8}+\frac{w^{2}}{\varDelta}\left(-\frac{1}{3\sqrt{3 }}+1\right),\] (12) \[t_{23}e^{i\phi_{23}} =t_{-}+\frac{w^{2}}{\varDelta}\left(\frac{\varDelta}{\varDelta+ i\gamma}e^{2i(\varphi_{31}-\varphi_{12})}+\frac{1}{3\sqrt{3}}e^{2i(\varphi_{34}- \varphi_{42})}\right)\] \[=t_{-}+\frac{w^{2}}{\varDelta}\left[\frac{\varDelta^{2}-i \varDelta\gamma}{\varDelta^{2}+\gamma^{2}}\left(-\frac{1}{2}-i\frac{\sqrt{3}}{ 2}\right)+\frac{1}{3\sqrt{3}}\left(\frac{1}{2}+i\frac{\sqrt{3}}{2}\right) \right],\] (13) \[t_{24}e^{i\phi_{24}} =\frac{t_{-}}{3\sqrt{3}}+\frac{w^{2}}{\varDelta}\left(\frac{1}{8 }\frac{\varDelta}{\varDelta+i\gamma}e^{2i(\varphi_{41}-\varphi_{12})}+e^{2i( \varphi_{43}-\varphi_{32})}\right)\] \[=\frac{t_{-}}{3\sqrt{3}}+\frac{w^{2}}{\varDelta}\left[\frac{1}{8 }\frac{\varDelta^{2}-i\varDelta\gamma}{\varDelta^{2}+\gamma^{2}}\left(-\frac {1}{2}-i\frac{\sqrt{3}}{2}\right)+\left(-\frac{1}{2}+i\frac{\sqrt{3}}{2} \right)\right],\] (14) \[t_{34}e^{i\phi_{34}} =t_{-}+\frac{w^{2}}{\varDelta}\left(\frac{1}{8}\frac{\varDelta}{ \varDelta+i\gamma}e^{2i(\varphi_{41}-\varphi_{13})}+\frac{1}{3\sqrt{3}}e^{2i( \varphi_{42}-\varphi_{23})}\right)\] \[=t_{-}+\frac{w^{2}}{\varDelta}\left[\frac{1}{8}\frac{\varDelta^{2 }-i\varDelta\gamma}{\varDelta^{2}+\gamma^{2}}+\frac{1}{3\sqrt{3}}\left(\frac{ 1}{2}+i\frac{\sqrt{3}}{2}\right)\right]. \tag{15}\]
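These second-order expressions can also be cross-checked numerically; a small sketch follows (Python/NumPy, assuming the \(8\times 8\) matrix \(H_{\text{full}}\) has been assembled elsewhere with the four \(\ket{-}\) basis states listed first, as in the ordering above):

```python
import numpy as np

def eliminate_plus_sector(H_full, E_minus):
    """Second-order (adiabatic) elimination of the |+> excitations.

    H_full : (8, 8) complex array, |-> basis states first, |+> states last.
    E_minus: reference energy of the |-> sector (here Delta / 2).
    Returns the 4 x 4 effective Hamiltonian
        H_eff = H_mm + H_mp (E_minus * I - H_pp)^(-1) H_pm,
    whose entries should reproduce the self-energies and hoppings listed above.
    """
    H_mm, H_mp = H_full[:4, :4], H_full[:4, 4:]
    H_pm, H_pp = H_full[4:, :4], H_full[4:, 4:]
    return H_mm + H_mp @ np.linalg.solve(E_minus * np.eye(4) - H_pp, H_pm)
```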
|
2309.13623 | Control Performance Analysis of Power Steering System Electromechanical
Dynamics | Modern power steering systems employ an electric motor drive system to
provide torque assistance to the driver. The closed-loop mechanical system
dynamics that impact stability, performance and steering feel are significantly
impacted by the electrical dynamics of the actuator depending on the structure
and tuning of the motor torque controller. This paper presents an integrated
approach to the analysis of this electromechanical dynamic control interaction
through mathematical modeling which is confirmed with simulations. | Prerit Pramod | 2023-09-24T12:49:28Z | http://arxiv.org/abs/2309.13623v1 | # Control Performance Analysis of Power Steering System Electromechanical Dynamics
###### Abstract
Modern power steering systems employ an electric motor drive system to provide torque assistance to the driver. The closed-loop mechanical system dynamics that impact stability, performance and steering feel are significantly impacted by the electrical dynamics of the actuator depending on the structure and tuning of the motor torque controller. This paper presents an integrated approach to the analysis of this electromechanical dynamic control interaction through mathematical modeling which is confirmed with simulations.
## Introduction
Electric power steering (EPS) systems are virtually ubiquitous in the automotive industry, particularly the passenger vehicle segment, owing to the myriad of advantages they offer in terms of fuel savings, tunability and performance flexibility and inherent intelligence in the form of fault detection and correction [1, 2, 3, 4, 5].
Rapidly changing vehicle safety and performance requirements have mandated the need for electric motor drives that provide rapid dynamics, variable bandwidth and fault tolerance [6, 7, 8, 9]. Permanent magnet synchronous machines (PMSMs) are most widely employed in EPS systems owing to the multifarious advantages they offer including, but not limited to, high power density, low inertia and relatively minimal torque ripple [10, 11, 12]. The torque control of PMSMs used in EPS systems is performed indirectly via current control, which may be implemented in several forms. Feedforward or open-loop control architectures employ an inverse mathematical model of the machine to determine the voltage commands to be applied for given current commands [13, 14, 15, 16, 17, 18, 19, 20]. Alternatively, feedback control architectures involve the closed-loop regulation of measured currents [21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. Depending on the selected control architecture and tuning of control parameters, the motor control system may significantly impact the overall stability and performance of EPS systems, an effect that is typically ignored and results in sub-optimal control design and tuning of the overall system.
This paper presents a detailed mathematical treatment of the overall EPS electromechanical dynamics by modeling the interaction of different electric motor drive (EMD) torque control architectures with the mechanical system, which has not been presented previously in the literature.
## Electric Power Steering System Models
A typical EPS control architecture is shown in Fig. 1. The driver torque \(T_{d}\) is sensed via a handwheel torque sensor signal \(\hat{T}_{h}\) and is used to compute the motor (assistance) torque command \(T_{m}^{*}\) along with other dynamic compensation. The EMD, consisting of a PMSM, then converts the torque command into current commands \(I_{dq}^{*}\) in the so-called synchronous frame and generates voltage commands \(V_{dq}^{*}\), which are applied to the motor via the gate driver and inverter using either a feedforward or feedback current control structure.
The mechanical system may be modeled as a 2-mass model (2MM) wherein the handwheel and assist mechanism (plus the motor) are separated by the torque bar used as the handwheel torque sensor. The transfer functions relating the sensed handwheel torque to the external inputs of driver torque \(T_{d}\) and rack force \(T_{r}\) (translated to rotating coordinates) and the control signal motor torque \(T_{m}\) are given in (1).
\[\begin{split} T_{h}&=(N.M_{t})T_{m}+(M_{r})T_{r}+(M _{d})T_{d}\\ &=\Big{(}K_{h}\frac{K_{h}-M_{h}}{D}\Big{)}T_{m}+\Big{(}K_{h}\frac {K_{h}-M_{h}}{D}\Big{)}T_{r}+\Big{(}K_{h}\frac{M_{m}-K_{h}}{D}\Big{)}T_{d}\\ M_{h}=J_{h}s^{2}+b_{h}s\qquad M_{m}&=J_{m}s^{2}+b_ {m}s+K_{l}\qquad D=M_{h}M_{m}-K_{h}^{2}\end{split} \tag{1}\]
Figure 1: Electric power steering control system.
where \(N\) is the gear ratio, \(J\), \(b\) and \(K\) denote inertia, damping and compliance respectively, and subscripts \(h\) and \(m\) represents the handwheel and motor (including assist mechanism) respectively, while \(K_{l}\) is the linear tire model spring constant. The conventional synchronous frame motor model is non-linear since it contains voltage terms involving a product of velocity and currents and is linearized as perturbations represented by \(\Delta\) about a set-point (\(V_{dq}^{0}\), \(I_{dq}^{0}\), \(\omega_{m}^{0}\)) for the present analysis as given in (2).
\[\begin{array}{c}T_{m}=q.p\lambda_{m}\big{(}I_{q}^{0}+\Delta I_{q}\big{)}\\ V_{dq}^{0}+\Delta V_{dq}=P^{-1}\big{(}I_{dq}^{0}+\Delta I_{dq}\big{)}+E( \omega_{m}^{0}+\Delta\omega_{m})\\ \Big{[}\begin{matrix}V_{d}^{0}+\Delta V_{d}\\ V_{q}^{0}+\Delta V_{q}\end{matrix}\Big{]}=\begin{bmatrix}L_{d}s+R&p\omega_{m} ^{0}L_{q}\\ -p\omega_{m}^{0}L_{d}&L_{q}s+R\end{bmatrix}\Big{[}\begin{matrix}I_{d}^{0}+ \Delta I_{d}\\ I_{q}^{0}+\Delta I_{q}\end{matrix}\Big{]}+\begin{bmatrix}pL_{d}I_{q}^{0}\\ -pL_{d}I_{d}^{0}+p\lambda_{m}\end{bmatrix}(\omega_{m}^{0}+\Delta\omega_{m}) \end{array} \tag{2}\]
where \(V\) and \(I\) are voltage and current respectively, \(\omega_{m}\) is motor velocity, \(p\) is the number of magnetic pole pairs, \(\lambda_{m}\) is the permanent magnet flux linkage, \(L\) represents inductance, \(R\) is the combined motor and inverter resistance, \(q\) is a constant equal to 1.5, and the subscripts \(d\) and \(q\) represent the direct and quadrature axes in the synchronous frame. Current estimation and inverter dynamics are modeled as transport lags with delays \(\tau_{c}\) and \(\tau_{p}\), respectively, as given in (3).
\[\begin{array}{c}\hat{I}_{dq}=BI_{dq}\qquad\qquad V_{dq}=XV_{dq}^{*}\\ \Big{[}\begin{matrix}\hat{I}_{d}\\ \hat{I}_{q}\end{matrix}\Big{]}=\Big{[}\begin{matrix}e^{-\tau_{c}s}&0\\ 0&e^{-\tau_{c}s}\end{matrix}\Big{]}\Big{[}\begin{matrix}I_{d}\\ I_{q}\end{matrix}\Big{]}\qquad\Big{[}\begin{matrix}V_{d}\\ V_{q}\end{matrix}\Big{]}=\Big{[}\begin{matrix}e^{-\tau_{p}s}&0\\ 0&e^{-\tau_{p}s}\end{matrix}\Big{]}\Big{[}\begin{matrix}V_{d}^{*}\\ V_{q}^{*}\end{matrix}\Big{]}\end{array} \tag{3}\]
## Electric Motor Drive Control Dynamics
Motor torque control may be implemented as either a feedforward, i.e., open loop (model based) or a feedback current control (utilizing measured current feedback) system. A general expression for overall electrical dynamics of the EMD may be written as a torque tracking transfer function \(A_{t}\) and disturbance rejection transfer function \(A_{d}\) as given in (4).
\[T_{m}=A_{t}T_{m}^{*}+A_{\omega}\omega_{m} \tag{4}\]
Under normal operating conditions, i.e., at low velocities, the motor current commands are computed from the torque command using the maximum torque per ampere (MTPA) technique as given in (5).
\[\begin{array}{c}I_{q}^{*}=\frac{1}{qp\hat{\lambda}_{m}}T_{m}^{*}\\ I_{d}^{*}=0\end{array} \tag{5}\]
**Feedforward Current Control**
This open-loop technique involves calculation of the voltage commands using an inverse motor model involving estimated parameters as shown in (6).
\[\begin{array}{l}V_{dq}^{*}=C_{t}^{f}I_{dq}^{*}+C_{\omega}^{f}\widehat{\omega}_{m }\\ \begin{bmatrix}V_{d}^{*}\\ V_{q}^{*}\end{bmatrix}=\begin{bmatrix}\hat{L}_{d}\hat{s}+\hat{R}&p\hat{\omega}_{ m}\hat{L}_{q}\\ -p\hat{\omega}_{m}\hat{L}_{d}&\hat{L}_{q}\hat{s}+\hat{R}\end{bmatrix}\begin{bmatrix} I_{d}^{*}\\ I_{q}^{*}\end{bmatrix}+\begin{bmatrix}0\\ p\hat{\lambda}_{m}\end{bmatrix}\widehat{\omega}_{m}\end{array} \tag{6}\]
The velocity estimate is modeled as a transfer function \(H_{\omega}\) incorporating position sensing and velocity estimation dynamics as a derivative with low pass filtering as given in (7).
\[\widehat{\omega}_{m}=H_{\omega}\omega_{m}=\left(\frac{s}{\tau_{\omega}s+1} \right)\omega_{m} \tag{7}\]
Combining (2) through (7) at zero nominal velocity (\(\omega_{m}^{0}=0\)) results in the overall feedforward motor torque control transfer function shown in (8).
\[\begin{array}{c}T_{m}=A_{t}^{f}T_{m}^{*}+A_{\omega}^{f}\omega_{m}\\ =X\frac{\lambda_{m}}{\hat{\lambda}_{m}}\left(\frac{\hat{L}_{q}s+\hat{R}}{L_{q}s+R}\right)T_{m}^{*}+qp^{2}\lambda_{m}\left(\frac{XH_{\omega}\hat{\lambda}_{m}-\lambda_{m}}{L_{q}s+R}\right)\omega_{m}\end{array} \tag{8}\]
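A minimal implementation sketch of the feedforward law (6), together with the MTPA mapping (5), is shown below (Python; a discrete-time rendering with a simple backward difference standing in for the derivative term, and class/variable names of our own choosing for illustration only, not production EPS code):

```python
class FeedforwardCurrentControl:
    """Open-loop (inverse-model) voltage command computation, per Eqs. (5)-(6)."""

    def __init__(self, Ld_hat, Lq_hat, R_hat, lam_hat, pole_pairs, dt):
        self.Ld, self.Lq, self.R = Ld_hat, Lq_hat, R_hat    # estimated parameters
        self.lam, self.p, self.dt = lam_hat, pole_pairs, dt
        self._iq_prev = 0.0                                  # state for d(iq)/dt

    def step(self, torque_cmd, omega_hat):
        # MTPA at low speed (Eq. (5)): all torque-producing current on the q axis.
        iq_ref = torque_cmd / (1.5 * self.p * self.lam)
        id_ref = 0.0
        diq = (iq_ref - self._iq_prev) / self.dt             # backward difference
        self._iq_prev = iq_ref
        # Inverse motor model (Eq. (6)); the L_d dI_d/dt term vanishes since I_d* = 0.
        vd_ref = self.R * id_ref + self.p * omega_hat * self.Lq * iq_ref
        vq_ref = (self.Lq * diq + self.R * iq_ref
                  - self.p * omega_hat * self.Ld * id_ref
                  + self.p * self.lam * omega_hat)
        return vd_ref, vq_ref
```

In use, one instance is created with the estimated motor parameters and the control period, and `step(torque_cmd, omega_hat)` is called every control cycle to obtain the commanded \(dq\) voltages.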
**Feedback Current Control**
A current regulator such as a PI controller is used to minimize the error between the commanded currents and the measured current feedback as given in (9).
\[\begin{array}{l}V_{dq}^{*}=C_{t}^{b}\left(I_{dq}^{*}-\hat{I}_{dq}\right)+C_{\omega}^{b}\widehat{\omega}_{m}\\ \begin{bmatrix}V_{d}^{*}\\ V_{q}^{*}\end{bmatrix}=\begin{bmatrix}K_{pd}+\frac{K_{id}}{\hat{s}}&0\\ 0&K_{pq}+\frac{K_{iq}}{\hat{s}}\end{bmatrix}\begin{bmatrix}I_{d}^{*}-\hat{I}_{d}\\ I_{q}^{*}-\hat{I}_{q}\end{bmatrix}+\begin{bmatrix}0\\ p\hat{\lambda}_{m}\end{bmatrix}\widehat{\omega}_{m}\end{array} \tag{9}\]
Combining (2) through (5) with (7) and (9) at zero nominal velocity (\(\omega_{m}^{0}=0\)) results in the overall feedback motor torque control transfer function shown in (10).
\[\begin{array}{c}T_{m}=A_{t}^{b}T_{m}^{*}+A_{\omega}^{b}\omega_{m}\\ =X\frac{\lambda_{m}}{\hat{\lambda}_{m}}\left(\frac{K_{pq}\hat{s}+K_{iq}}{L_{q}s\hat{s}+R\hat{s}+XBK_{pq}\hat{s}+XBK_{iq}}\right)T_{m}^{*}+qp^{2}\lambda_{m}\left(\frac{XH_{\omega}\hat{\lambda}_{m}-\lambda_{m}}{L_{q}s\hat{s}+R\hat{s}+XBK_{pq}\hat{s}+XBK_{iq}}\right)\omega_{m}\end{array} \tag{10}\]
### Steering Control Closed-loop Dynamical Analysis
The stability as well as performance of the overall EPS control system may be studied by assessing the effective open-loop transfer function (E-OLTF) relating \(T_{m}^{*}\) to \(\hat{T}_{h}\approx T_{h}\) obtained by combining (1) and (4) as given in (11).
\[\begin{split} T_{h}&=Z_{t}T_{m}^{*}+Z_{r}T_{r}+Z_{d}T_{d} \\ &=\Big{(}K_{h}\frac{K_{h}-M_{h}}{D-sA_{\omega}M_{h}}A_{t}\Big{)}T_{m }^{*}+\Big{(}K_{h}\frac{K_{h}-M_{h}}{D-sA_{\omega}M_{h}}\Big{)}T_{r}+\Big{(}K_ {h}\frac{M_{m}-K_{h}-sA_{\omega}}{D-sA_{\omega}M_{h}}\Big{)}T_{d}\end{split} \tag{11}\]
where \(A_{t}\) and \(A_{\omega}\) may be replaced by expressions from (8) and (10) to obtain the specific E-OLTFs for feedforward and feedback control respectively. The motor torque control transfer function in (12) including mechanical dynamics is derived by combining (1) and (11). The expansion and simplification of (11) and (12) for various cases is left for the full paper.
\[\begin{split} T_{m}&=W_{t}T_{m}^{*}+W_{r}T_{r}+W_{d} T_{d}\\ &=\Big{(}\frac{D}{D-sA_{\omega}M_{h}}A_{t}\Big{)}T_{m}^{*}+\Big{(} \frac{sA_{\omega}M_{h}}{D-sA_{\omega}M_{h}}\Big{)}T_{r}+\Big{(}K_{h}\frac{sA_{ \omega}}{D-sA_{\omega}M_{h}}\Big{)}T_{d}\end{split} \tag{12}\]
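The frequency-domain comparisons of Fig. 2 can be reproduced in outline by evaluating these transfer functions on a grid of \(j\omega\); a brief sketch follows (Python/NumPy, with purely illustrative parameter values, the feedback expressions of (10)-(11) transcribed directly, and \(\hat{s}\) read as the Laplace variable, so it is a starting point rather than a validated EPS model):

```python
import numpy as np

# Illustrative parameter values only; representative EPS values would replace these.
Jh, bh, Kh = 0.04, 0.25, 150.0      # handwheel inertia, damping, torque-bar stiffness
Jm, bm, Kl = 0.10, 1.50, 400.0      # motor-side inertia, damping, tire spring constant
Lq, R, lam, p, q = 2.0e-4, 0.02, 0.05, 4, 1.5
Kpq, Kiq, B, X = 0.5, 200.0, 1.0, 1.0
tau_w = 1.0e-3                      # velocity-estimate filter time constant

w = 2 * np.pi * np.logspace(-1, 3, 500)
s = 1j * w

Mh = Jh * s**2 + bh * s
Mm = Jm * s**2 + bm * s + Kl
D = Mh * Mm - Kh**2
H_w = s / (tau_w * s + 1)           # velocity estimation dynamics, Eq. (7)

den = Lq * s**2 + R * s + X * B * Kpq * s + X * B * Kiq   # feedback current loop
A_t = X * (Kpq * s + Kiq) / den                            # torque tracking, Eq. (10)
A_w = q * p**2 * lam * (X * H_w * lam - lam) / den         # disturbance path, Eq. (10)

Z_t = Kh * (Kh - Mh) / (D - s * A_w * Mh) * A_t            # E-OLTF, Eq. (11)
gain_db = 20 * np.log10(np.abs(Z_t))
phase_deg = np.degrees(np.unwrap(np.angle(Z_t)))
```

The feedforward case of (8) can be compared by substituting the corresponding \(A_{t}^{f}\) and \(A_{\omega}^{f}\) expressions into the same \(Z_{t}\) formula.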
Simulation results depicting motor torque control frequency response modification due to EPS mechanical dynamics and the corresponding E-OLTFs are shown in Fig. 2.
The motor control scaling responses shown in Fig. 2 illustrate the comparative behavior of the EMD with mechanical dynamics relative to electrical-only (ideal) control dynamics. The impact on EPS dynamics is clear from Fig. 2 which depicts a reduction in both the gain and phase (stability) margins when feedforward torque control rather than feedback control is employed, due to its inferior disturbance rejection characteristics.
## Conclusions
A comprehensive mathematical model capturing the overall closed-loop electromechanical dynamics of EPS systems, considering the influence of EMD dynamics, is presented. The novel model is crucial for properly characterizing, stabilizing and tuning EPS systems.
Figure 2: Simulation results illustrating frequency response plots of (a) motor torque control scaling due to mechanical dynamics (b) steering actual and effective open-loop system. |
2302.14216 | Decoding the Divide: Analyzing Disparities in Broadband Plans Offered by
Major US ISPs | Digital equity in Internet access is often measured along three axes:
availability, affordability, and adoption. Most prior work focuses on
availability; the other two aspects have received little attention. In this
paper, we study broadband affordability in the US. Specifically, we focus on
the nature of broadband plans offered by major ISPs across the US. To this end,
we develop a broadband plan querying tool (BQT) that obtains broadband plans
(upload/download speed and price) offered by seven major ISPs for any street
address in the US. We then use this tool to curate a dataset, querying
broadband plans for over 837k street addresses in thirty cities for seven major
wireline broadband ISPs. We use a plan's carriage value, the Mbps of a user's
traffic that an ISP carries for one dollar, to compare plans. Our analysis
provides us with the following new insights: (1) ISP plans vary inter-city.
Specifically, the fraction of census block groups that receive high and low
carriage value plans varies widely by city; (2) ISP plans intra-city are
spatially clustered, and the carriage value can vary as much as 600% within a
city; (3) Cable-based ISPs offer up to 30% more carriage value to users when
competing with fiber-based ISPs in a block group; and (4) Average income in a
block group plays a critical role in dictating who gets a fiber deployment
(i.e., a better carriage value) in the US. While we hope our tool, dataset, and
analysis in their current form are helpful for policymakers at different levels
(city, county, state), they are only a small step toward understanding digital
equity. Based on our learnings, we conclude with recommendations to continue to
advance our understanding of broadband affordability. | Udit Paul, Vinothini Gunasekaran, Jiamo Liu, Tejas N. Narechania, Arpit Gupta, Elizabeth Belding | 2023-02-28T00:33:56Z | http://arxiv.org/abs/2302.14216v1 | # Decoding the Divide: Analyzing Disparities in Broadband Plans Offered by Major US ISPs
###### Abstract.
Digital equity in Internet access is often measured along three axes: availability, affordability, and adoption. Most prior work focuses on availability; the other two aspects have received little attention. In this paper, we study broadband affordability in the US. Specifically, we focus on the nature of broadband plans offered by major ISPs across the US. To this end, we develop a broadband plan querying tool (BQT) that obtains broadband plans (upload/download speed and price) offered by seven major ISPs for any street address in the US. We then use this tool to curate a dataset, querying broadband plans for over 837 k street addresses in thirty cities for seven major wireline broadband ISPs. We use a plan's carriage value, the Mbps of a user's traffic that an ISP carries for one dollar, to compare plans. Our analysis provides us with the following new insights: (1) ISP plans vary inter-city. Specifically, the fraction of census block groups that receive high and low carriage value plans varies widely by city; (2) ISP plans intra-city are spatially clustered, and the carriage value can vary as much as 600% within a city; (3) Cable-based ISPs offer up to 30% more carriage value to users when competing with fiber-based ISPs in a block group; and (4) Average income in a block group plays a critical role in dictating who gets a fiber deployment (i.e., a better carriage value) in the US. While we hope our tool, dataset, and analysis in their current form are helpful for policymakers at different levels (city, county, state), they are only a small step toward understanding digital equity. Based on our learnings, we conclude with recommendations to continue to advance our understanding of broadband affordability.
## 1. Introduction
The National Digital Inclusion Alliance (NDIA) defines digital equity as "a condition in which all individuals and communities have the information technology capacity needed for full participation in our society, democracy, and economy" [(40)]. As modern life has moved increasingly online, high-quality Internet access has become a key component of digital equity. The Covid-19 pandemic, and the post-pandemic "new normal" of remote interaction, have drastically changed the need for home Internet access; work-from-home, online/remote schooling, telemedicine, and other networked applications have become commonplace, to the point of nearly indispensable. As a result, individuals without home access to highly reliable, high-speed broadband are severely disadvantaged [(34)].
Having an in-depth understanding of digital equity is critical for policymakers at different levels (e.g., city, county, state, federal) to take corrective actions, such as offering subsidies [(31)], regulating rates [(6)] and funding access infrastructure [(41)]. Digital equity, especially in the context of Internet access, is often measured along three axes: availability, affordability, and adoption [(46)]. Many past efforts [(28; 43; 44)], including ones in our research community, have focused on measuring availability. Researchers have disaggregated availability into coverage and quality. Here, coverage answers whether broadband access is available in a geographical region, while quality answers questions related to access type (e.g., cable, fiber, DSL), and upload/download speed. Researchers and policymakers use publicly-available datasets, such as the FCC's Form 477 [(30)], Measuring Broadband America (MBA) [(32)], and Measurement Lab (M-Lab) speed test [(38)], as well as proprietary ones, such as Ookla's speed test [(42)], to characterize Internet connectivity. More recently, as part of the Broadband Equity, Access, and Deployment (BEAD) program, Congress directed the FCC to develop an accurate map of fixed broadband availability across the US. Though it is still a work in progress, when completed, the FCC National Broadband map will provide information regarding broadband availability (i.e., provider, access type, maximum upload/download speed) at the granularity of street addresses.
Though the existing datasets, including the most recent FCC National Broadband map, enable measuring availability, affordability has received less attention. To answer any question related to broadband affordability, extracting the "cost of broadband connectivity", i.e., the nature of the "deal" a user is getting, at fine-grained geographical granularity, is critical. Using the cost data, one can answer critical policy |
2309.16773 | Neural scaling laws for phenotypic drug discovery | Recent breakthroughs by deep neural networks (DNNs) in natural language
processing (NLP) and computer vision have been driven by a scale-up of models
and data rather than the discovery of novel computing paradigms. Here, we
investigate if scale can have a similar impact for models designed to aid small
molecule drug discovery. We address this question through a large-scale and
systematic analysis of how DNN size, data diet, and learning routines interact
to impact accuracy on our Phenotypic Chemistry Arena (Pheno-CA) benchmark: a
diverse set of drug development tasks posed on image-based high content
screening data. Surprisingly, we find that DNNs explicitly supervised to solve
tasks in the Pheno-CA do not continuously improve as their data and model size
is scaled-up. To address this issue, we introduce a novel precursor task, the
Inverse Biological Process (IBP), which is designed to resemble the causal
objective functions that have proven successful for NLP. We indeed find that
DNNs first trained with IBP then probed for performance on the Pheno-CA
significantly outperform task-supervised DNNs. More importantly, the
performance of these IBP-trained DNNs monotonically improves with data and
model scale. Our findings reveal that the DNN ingredients needed to accurately
solve small molecule drug development tasks are already in our hands, and
project how much more experimental data is needed to achieve any desired level
of improvement. We release our Pheno-CA benchmark and code to encourage further
study of neural scaling laws for small molecule drug discovery. | Drew Linsley, John Griffin, Jason Parker Brown, Adam N Roose, Michael Frank, Peter Linsley, Steven Finkbeiner, Jeremy Linsley | 2023-09-28T18:10:43Z | http://arxiv.org/abs/2309.16773v1 | # Neural scaling laws for phenotypic drug discovery
###### Abstract
Recent breakthroughs by deep neural networks (DNNs) in natural language processing (NLP) and computer vision have been driven by a scale-up of models and data rather than the discovery of novel computing paradigms. Here, we investigate if scale can have a similar impact for models designed to aid small molecule drug discovery. We address this question through a large-scale and systematic analysis of how DNN size, data diet, and learning routines interact to impact accuracy on our Phenotypic Chemistry Arena (_Pheno-CA_) benchmark -- a diverse set of drug discovery tasks posed on image-based high content screening data. Surprisingly, we find that DNNs explicitly supervised to solve tasks in the _Pheno-CA_ do not continuously improve as their data and model size is scaled-up. To address this issue, we introduce a novel precursor task, the _Inverse Biological Process_ (IBP), which is designed to resemble the causal objective functions that have proven successful for NLP. We indeed find that DNNs first trained with IBP then probed for performance on the _Pheno-CA_ significantly outperform task-supervised DNNs. More importantly, the performance of these IBP-trained DNNs monotonically improves with data and model scale. Our findings reveal that the DNN ingredients needed to accurately solve small molecule drug discovery tasks are already in our hands, and project how much more experimental data is needed to achieve any desired level of improvement.
## 1 Introduction
Rich Sutton (Sutton, 2019) famously wrote, "the biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin." The scale of compute, model, and data has proven over recent years to be the most important factor for developing high-performing systems in nearly every domain of AI including computer vision (Dehghani et al., 2023), natural language processing (Kaplan et al., 2020), reinforcement learning (Hilton et al., 2023), protein folding (Lin et al., 2023), and design (Hesslow et al., 2022). The foundation of each of these revolutions-of-scale rests on empirically derived "neural scaling laws," which indicate that continued improvement on a given domain's tasks is constrained by compute, model, and data scale rather than novel algorithmic solutions or additional domain knowledge. Thus,
one of the extraordinary opportunities for AI is finding and exploiting similar scaling laws in domains that have not benefited from them yet.
Small molecule drug discovery is one of the domains where scaling laws could have an outsized impact. Biological experiments are costly and time intensive, while the space of molecules has been estimated to contain as many as \(10^{60}\) compounds with drug-like properties (Lipinski et al., 2012). The current standard approach for identifying interactions between small molecules and biological targets involves high throughput screening (HTS), in which libraries of hundreds of thousands of molecules are tested empirically in parallel for specific biological readouts at great cost. The ability to accurately predict _in silico_ whether a small molecule engages a biological target would at the very least reduce the size of the chemical libraries needed to find bioactive molecules and support significantly faster and cheaper discovery. Moreover, if models for small molecule drug discovery follow similar scaling laws as those discovered for natural language processing (Kaplan et al., 2020), then it would mean that even the loftiest goals may be within reach, such as screening larger parts of the \(10^{60}\) space of molecules to find treatments for today's intractable diseases. Can DNNs speed up drug discovery, and if so, do their abilities follow neural scaling laws?
One of the most promising avenues for generating data that could be used to train DNNs on drug discovery tasks is image-based high-content screening (iHCS). This type of screen is widely used to measure the effects of drugs and find targets for treating disease because it can capture a large variety of biological signatures through different stains or biosensors, and has been helpful in drug discovery applications including hit identification (Simm et al., 2018; Bray et al., 2017) and expansion (Hughes et al., 2020), lead optimization (Caie et al., 2010), generating hypotheses on a drug's mechanism-of-action (Young et al., 2008; Boyd et al., 2020; Sundaramurthy et al., 2014) and target (Schenone et al., 2013), and also identifying and validating disease targets (see Chandrasekaran et al., 2021 for a review). While iHCS is potentially more flexible than standard biochemical assays used in drug discovery, it still requires significant time, money, and effort to set up and run. The recently released JUMP dataset (Chandrasekaran et al., 2023a, b) contains nearly two orders of magnitude more iHCS data than was previously available to the public (Bray et al., 2017), and therefore represents a significant opportunity for deep learning. However, it is still unclear if DNNs can leverage the data in JUMP for drug discovery.
Here, we use the JUMP dataset to investigate if DNNs trained on it for small molecule drug discovery tasks follow neural scaling laws. A positive answer to this question could bring about a revolution in biomedicine that mimics the ones in natural language processing and computer vision over recent years, making it faster, easier, and cheaper than ever to discover drugs.
**Contributions.** We began by augmenting the JUMP dataset with our Phenotypic Chemistry Arena (_Pheno-CA_): a diverse set of drug discovery tasks posed on a subset of images in JUMP. We then tested if the performance of DNNs trained to solve each task could be predicted by the size of their models or the amount of data they were trained with. Surprisingly, it could not: the performance of these "task-supervised" DNNs was either unaffected or hurt by an increase in data and model sizes (Fig. A1a). However, DNNs in domains like natural language processing and vision rely on specific objective functions to achieve efficient scaling -- for instance, GPT models use the causal language modeling objective (Kaplan et al., 2020). We reasoned that a similar precursor task, especially one that could force DNNs to learn a causal model of biology, could have a large impact on scaling. We therefore developed a novel precursor task, the _inverse biological process_ (IBP), and performed large-scale and systematic experiments on the _Pheno-CA_ to understand how this task interacted with the size of DNN architectures and the amount of data used in training them. Through this large-scale survey, we found the following:
* DNNs pretrained with IBP significantly outperform task-supervised DNNs on the _Pheno-CA_.
* DNNs pretrained with IBP also follow linear scaling laws on the _Pheno-CA_ that accurately predict how many novel samples and replicates are needed to achieve arbitrary levels of accuracy.
* IBP-trained DNNs improved in predictable ways as the total number of model parameters was increased. The effect of model depth, on the other hand, was less clear, and impacted only a subset of tasks.
* Scaling laws on IBP-trained DNNs indicate that to achieve 100% accuracy on a task like predicting a compound's mechanism-of-action, JUMP would need to be expanded by approximately 3.25M compounds. Achieving this scale of data would take an impossible amount of time and money, meaning that additional experimental advances are needed to improve neural scaling laws and move significantly beyond the performances of our IBP-trained models.
* We will release our _Pheno-CA_ challenge and code to encourage the field to continue investigating scaling laws in iHCS drug discovery.
## 2 Methods
**JUMP data.** The Joint Undertaking for Morphological Profiling (JUMP) project has produced the largest publicly available dataset for iHCS. The dataset consists of images of Human U2OS osteosarcoma cells from 12 different data-generating centers. Each image depicts a well of cells in a plate that have been perturbed then stained with the "Cell Painting" toolkit (Bray et al., 2016). Cell Painting involves fixing and staining cells with six dyes that mark eight different cell organelles or compartments: DNA, nucleoli, actin, Golgi apparatus, plasma membrane, endoplasmic reticulum (ER), cytoplasmic RNA, and mitochondria. Together, these stains provide an unbiased read-out on the effects of different perturbations on cell biology (Fig. 1a). JUMP perturbations include the addition of 116,750 different compounds and the knockout of 7,975 genes by Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) 1. There are a total of 711,974 compound perturbation images and 51,185 CRISPR perturbation images, which amounts to an average of five replicates of each perturbation type.
Footnote 1: JUMP also contains gene overexpression manipulations, but we do not include those images in this study.
**Phenotypic Chemistry Arena.** We introduced the Phenotypic Chemistry Arena (_Pheno-CA_, Fig. 1b) to evaluate DNNs trained on JUMP data for drug discovery tasks. The _Pheno-CA_ consists of annotations on 6,738 well images from JUMP for four different discovery tasks. These tasks are (_i_) predicting a drug's mechanism-of-action ("MoA deconvolution," 1,282 categories), (_ii_) predicting a drug's target ("target deconvolution," 942 categories), (_iii_) predicting a molecule's identity ("molecule deconvolution," 2,919 categories), and (_iv_) finding compounds with the same target as a CRISPR perturbation ("compound discovery"; Fig. 1b). The remaining images from JUMP are used for training, and depict perturbations from all 116,750 represented in the JUMP dataset (including the 2,919 compounds in the _Pheno-CA_). All well images used in the _Pheno-CA_ were held out from model training.
DNN performance on the _Pheno-CA_ tasks was measured in different ways. Performance on MoA and target deconvolution was recorded as top-10 classification accuracy, _i.e._, a model was considered accurate if the model's top-10 predictions included the molecule's true (labeled) MoA or target. Molecule deconvolution performance was recorded using categorical cross entropy loss, which measured how closely the distribution of predicted molecule identities matched the true identity. Finally, to measure how accurately models could find the compounds that match a CRISPR perturbation, we constructed curves that indicated how many guesses it took a model to find the appropriate molecules, then computed area-under-the-curve (AUC).
**Preprocessing.** We preprocessed CP data in two ways. First, we aggregated all cells from a well into a single representation, which captured the effect of its particular experimental perturbation. Second, we normalized these representations to control for experimental nuisances such as well, plate, and batch effects. To aggregate cells into a single well-based representation, we took the median CP-vector per well, then normalized these representations by subtracting off the per-plate median representation and dividing by the per-plate inter-quartile range (Wong et al., 2023). Lastly, before training, we whitened the representations with principal components analysis (PCA; Bell & Sejnowski, 1996), which yielded \(878\)-dimensional vectors for each well. As we describe in Appendix C, some of the models were also explicitly trained to ignore experimental nuisances.
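The aggregation and normalization pipeline above is simple enough to sketch directly. The snippet below is a minimal illustration of the per-well median aggregation, per-plate robust normalization, and PCA whitening described in this paragraph; the column names, DataFrame layout, and use of scikit-learn are assumptions made for the example rather than the authors' actual code.

```python
# Minimal sketch of the preprocessing described above (assumed column names).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def preprocess_wells(cells: pd.DataFrame, feature_cols, n_components=878):
    """cells: one row per cell, with 'plate' and 'well' identifiers plus CP features."""
    # 1) Aggregate cells into a single per-well representation (median CP-vector).
    wells = cells.groupby(["plate", "well"])[feature_cols].median().reset_index()

    # 2) Normalize each well by its plate: subtract the per-plate median and
    #    divide by the per-plate inter-quartile range.
    def robust_norm(plate_df):
        feats = plate_df[feature_cols]
        med = feats.median()
        iqr = (feats.quantile(0.75) - feats.quantile(0.25)).replace(0, 1.0)
        plate_df[feature_cols] = (feats - med) / iqr
        return plate_df

    wells = wells.groupby("plate", group_keys=False).apply(robust_norm)

    # 3) PCA-whiten the normalized well profiles before training
    #    (878 components per the text; requires enough wells and features).
    pca = PCA(n_components=n_components, whiten=True)
    whitened = pca.fit_transform(wells[feature_cols].to_numpy())
    return wells[["plate", "well"]], whitened
```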
**Inverse biological process learning as a generalist precursor task.** DNN and data scale paired with the appropriate precursor training task -- so-called causal language modeling -- have led to breakthroughs in natural language processing. Here, we devised a learning procedure that we hypothesized would similarly help DNNs learn biological causality from JUMP data. Our basic insight is that each cell in the JUMP dataset undergoes a "forward biological process", in which the addition of a small molecule transforms its phenotype from a control to a perturbed one (Fig 1c). We reasoned that training a model to invert this process
Figure 1: **Designing a scalable precursor task for phenotypic drug discovery.****(a)** We investigate the ability of DNNs to learn drug discovery tasks from the large-scale JUMP dataset. **(b)** Our Phenotypic Chemistry Arena (_Pheno-CA_) measures the ability of DNNs trained on JUMP data to solve diverse drug discovery tasks. Task performance is measured through either “learned probes,” small neural network readouts that map a DNN’s learned representations to the labels for a task, or through “0-shot” evaluations of performance (no task-specific training). **(c)** We find that only those DNNs pretrained on a specialized precursor task — the inverse biological process — follow scaling laws on the _Pheno-CA_.
would force it to learn the host of underlying biophysical processes that cause a cell to change phenotypes, and that the resulting model representations would prove useful for downstream discovery tasks including those in the _Pheno-CA_ (Ardizzone et al., 2018). We refer to this precursor task as the _inverse biological process_ (IBP). If a model improves on tasks in the _Pheno-CA_ after IBP-training, it means that the motivating hypothesis is at least partially correct. In practice, IBP involves learning to predict a molecule from the phenotype it causes. We investigated the efficacy of IBP on the _Pheno-CA_ by first pretraining a DNN for this task before freezing its weights and training task-specific readouts on its representations as detailed below.
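To make the IBP objective concrete -- predict the perturbing molecule from the well-level phenotype it produces -- a minimal PyTorch sketch is shown below. The encoder width, depth, and the exact class count are placeholders; this is an illustration of the idea, not the released training code.

```python
import torch
import torch.nn as nn

class IBPModel(nn.Module):
    """Encoder plus a molecule-identity head for the inverse biological process task."""
    def __init__(self, in_dim=878, hidden=512, n_molecules=116_750):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.GELU(),
        )
        self.head = nn.Linear(hidden, n_molecules)

    def forward(self, x):
        return self.head(self.encoder(x))

def ibp_step(model, optimizer, profiles, molecule_ids):
    """One IBP training step: invert the 'forward biological process' by predicting
    which molecule produced each well-level phenotype."""
    logits = model(profiles)
    loss = nn.functional.cross_entropy(logits, molecule_ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```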
**Model zoo.** We built a large "zoo" of DNNs to understand how changing model architecture, supervision methods, and the amount of data seen during training affects performance on the _Pheno-CA_. Each DNN ended with a task-specific 3-layer multilayer perceptron (MLP), which mapped its representations of image content to a _Pheno-CA_ task.
All DNNs consisted of a basic MLP block with a residual connection. The MLP consisted of a linear layer, followed by a 1-D BatchNorm, and finally a Gaussian error linear unit (GELU; Hendrycks and Gimpel, 2016). DNNs consisted of 1, 3, 6, 9, or 12 layers of these blocks, each with 128, 256, 512, or 1512 features.
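A minimal PyTorch rendering of this block (linear layer, 1-D BatchNorm, GELU, wrapped in a residual connection) could look as follows; where exactly the residual sum is applied is our assumption.

```python
import torch.nn as nn

class ResidualMLPBlock(nn.Module):
    """Linear -> BatchNorm1d -> GELU, added back to the block input."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.GELU())

    def forward(self, x):
        return x + self.net(x)

def build_backbone(in_dim=878, width=512, depth=9):
    """Stack `depth` residual blocks after projecting the input to `width` features."""
    layers = [nn.Linear(in_dim, width)]
    layers += [ResidualMLPBlock(width) for _ in range(depth)]
    return nn.Sequential(*layers)
```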
We tested two types of DNN supervision. (_i_) DNNs were directly trained to solve each _Pheno-CA_ task. (_ii_) DNNs pretrained with IBP were frozen and their representations were mapped to a _Pheno-CA_ task with the 3-layer MLP readout. In other words, we compared DNNs that learned task-specific representations to DNNs that learned IBP representations. Each of these DNNs was also given images from 1e3, 2e4, 5e4, 8e4 or 1e5 molecules that were not included in the _Pheno-CA_, and hence were out-of-distribution (OOD) of that challenge. Our hypothesis was that OOD compound information would help IBP-trained DNNs more accurately model the biophysical effects of compounds on U2OS cells, and ultimately outperform task-supervised DNNs on _Pheno-CA_ tasks. Finally, each DNN was trained on 1%, 25%, 50%, 75%, or 100% of replicates of each of the compounds included in their training set.
All combinations of data, model, and supervision parameters yielded 1,876 unique DNNs. Each DNN was implemented in PyTorch and trained using one NVIDIA TITAN X GPU with 24GB of VRAM. DNNs were trained with AdamW (Loshchilov and Hutter, 2017), a learning rate of 1e-4, a batch size of 6000, and mixed-precision weights using the Huggingface Accelerate library ([https://github.com/huggingface/accelerate](https://github.com/huggingface/accelerate)). Training was ended early if test performance stopped improving for 15 epochs. Training took at most 16 hours per DNN.
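The optimization recipe translates to a short training loop like the sketch below, which uses AdamW at 1e-4 and stops once the test metric has not improved for 15 epochs. The evaluation callback and epoch cap are placeholders, and the mixed-precision/Accelerate machinery is omitted for brevity.

```python
import torch

def train(model, train_loader, test_loader, evaluate, max_epochs=200, patience=15):
    """AdamW at lr 1e-4; stop when test performance has not improved for `patience` epochs."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    best, stale = float("-inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for profiles, targets in train_loader:
            loss = torch.nn.functional.cross_entropy(model(profiles), targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        score = evaluate(model, test_loader)  # higher is better
        if score > best:
            best, stale = score, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best
```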
## 3 Results
The Phenotypic Chemistry Arena (_Pheno-CA_) is a large-scale evaluation benchmark we created to measure the performance of DNNs trained on iHCS for diverse phenotypic drug discovery tasks: (_i_) predicting a drug's mechanism-of-action ("MoA deconvolution"; Chandrasekaran et al., 2021), (_ii_) predicting a drug's target ("target deconvolution"; Schenone et al., 2013), (_iii_) predicting a molecule's identity ("molecule deconvolution"; Chandrasekaran et al., 2021), and (_iv_) finding compounds that have the same target as a CRISPR perturbation (Fig. 1b; Zhang and Gant, 2008; Mendez-Lucio et al., 2020). By surveying 1,876 different DNNs on the _Pheno-CA_, we identified the training routines and DNN architectures that yielded the highest performance on these tasks, and discovered scaling laws that predict performance for certain model classes with respect to the amount and types of data used to train them.
**Challenge 1: Mechanism-of-action deconvolution.** Phenotypic screening is a powerful way to find active compounds in a biologically relevant setting. Phenotypic screens have inspired many important drug programs, including the discoveries of FKBP12 (Harding et al., 1989), calcineurin (Liu et al., 1992), and mTOR (Brown et al., 1994). However, it often requires substantial effort to understand the mechanism-of-action and targets of small molecules that are bioactive in a phenotypic screen. By MoA, we mean the effect the
compound has on a cellular pathway or class of molecules, for instance 'inhibitor of bacterial cell wall synthesis', or 'glucocorticoid receptor agonist'. In contrast, in the target challenge below we refer to the actual cellular component (for instance a specific enzyme) that the compound alters (usually by binding to it). iHCS data has been used in the past to help solve MoA deconvolution through a "guilt-by-association" approach, in which compounds that have known MoAs and targets are added into an experiment and used to deduce those properties in other compounds (Chandrasekaran et al., 2021).
Here, we pose a version of "guilt-by-association" MoA-discovery on JUMP data. Each DNN in our zoo was given images of cells perturbed by different compounds, and trained to predict the MoA of a given compound out of 1,282 possibilities (Fig. 1a). DNNs were either supervised directly for MoA deconvolution or pretrained with IBP (Fig. 2a). Next, DNN weights were frozen and three-layer MLP probes were used to transform image representations from both models into MoA predictions (_i.e._ there was no direct task supervision for IBP models).
Our DNN zoo yielded a wide range of performances on this task. At the low end was a 12.09% accurate 12-layer and 128-feature DNN trained with IBP on 100% of out-of-distribution molecules but only 0.01% of the replicates of each compound. At the high-end was a 52.62% accurate 9-layer and 1512-feature DNN trained with IBP on 100% of out-of-distribution
Figure 2: _Pheno-CA challenge 1:_ **Mechanism-of-action (MoA) deconvolution.** (**a**) DNNs were either trained directly for MoA deconvolution from phenotypes or first pretrained on the IBP task before their weights were "frozen" for testing. Testing in each case involved fitting a 3-layer probe to generate MoA predictions for a molecule's imaged phenotype. (**b**) The highest-performing DNN was an IBP-pretrained model. A UMAP decomposition of its image representations qualitatively reveals clustering for the most commonly appearing MoAs. (**c**) IBP-trained DNN performance is a linear function of the amount of data each model is trained on. Each individual colored curve depicts the performance of DNNs trained on a fixed number of molecules that fall "out-of-distribution" of the molecules in the _Pheno-CA_. Decreases on the right end of each curve indicate overfitting. The scaling law depicted here is a linear fit of the max-performing models in each curve. Chance is \(7e-2\). (**d**) While DNN performance generally improved as models grew in parameters, 9-layer DNNs were more accurate than 12-layer DNNs.
molecules and 75% of the replicates of each compound. The representations of this performant IBP-trained DNN clearly separated the phenotypes of different MoAs (Fig. 2b), and it was 46% more accurate than the highest performing task-supervised DNN (36.08%; 6-layer and 256-feature DNN). Overall, IBP-trained DNNs were significantly more accurate at MoA deconvolution than task-trained DNNs (\(T(624)=7.97\), \(p<0.001\)). These results indicate that the IBP precursor task promotes generalist representations that outperform task-specific training and are already well suited for MoA deconvolution.
Another key difference we found between task-supervised and IBP-trained DNNs is that the performance of the latter followed a scaling law. The MoA deconvolution accuracy of IBP-trained DNNs linearly increased as they were trained on additional molecules that were _out-of-distribution_ (OOD) of the _Pheno-CA_ (Fig 2c). The discovered law indicated that IBP-DNN performance increases by 1% with the addition of approximately 56K (non-unique) OOD molecules for training. While DNN performance generally improved with the total number of model parameters, the rate of improvement was higher for 9-layer DNNs than 12-layer DNNs (Fig 2c; analyzed further in Appendix ADD).
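Fitting the scaling law itself reduces to a linear regression of the best accuracy at each data scale against the number of OOD molecules used for pretraining. The helper below shows one way to estimate the slope and extrapolate data requirements; it is a sketch of the analysis described here, not the authors' code.

```python
import numpy as np

def fit_scaling_law(n_ood_molecules, best_accuracy):
    """Linear fit of max accuracy (%) vs. number of (non-unique) OOD molecules used
    in IBP pretraining. Returns the slope, the intercept, and a helper that
    extrapolates how many molecules a target accuracy would require."""
    slope, intercept = np.polyfit(np.asarray(n_ood_molecules, dtype=float),
                                  np.asarray(best_accuracy, dtype=float), deg=1)

    def molecules_needed(target_accuracy):
        return (target_accuracy - intercept) / slope

    return slope, intercept, molecules_needed

# Usage: pass the measured max-performing accuracy at each OOD-molecule count, e.g.
# slope, _, needed = fit_scaling_law(counts, accuracies); needed(100.0)
```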
We further analyzed scaling laws for MoA prediction by recomputing them for different amounts of experimental replicates. That is, we expected that DNNs which were able to observe more experimental variability would scale better than those that observed less variability. Indeed, we found that more replicates led to better models on average, and that more data also generally improved the scaling law slope (Fig. 3).
**Challenge 2: Target deconvolution.** Identifying a bioactive molecule's target from its phenotypes is another essential challenge for phenotypic screens. We evaluated the ability of DNNs to automate this task in the _Pheno-CA_, and measured how accurately models can deconvolve a molecule's target from its phenotype. This task followed the same general approach as the MoA deconvolution task described above, and tested models for 942-way target deconvolution (Fig. 4a).
As with MoA deconvolution, IBP-trained DNNs were on average significantly better at target deconvolution than task-supervised DNNs (\(T(624)=15.07\), \(p<0.001\)). The lowest performing DNN (35.00%) was an IBP-trained 12-layer and 256-feature model trained with 0.01% of out-of-distribution molecules and 25% of the replicates of each compound. The highest performing DNN (67.95%) was an IBP-trained 9-layer and 1512-feature model trained on 100% of out-of-distribution molecules and 75% of the replicates of each compound. The representations of this IBP-trained DNN separated the phenotypes of different targets (Fig. 2b), and it was 33% more accurate than the highest performing task-supervised DNN (51.03%), which was a 6-layer and 512-feature DNN.
IBP-trained DNNs also followed a scaling law on target deconvolution (Fig. 2c). Model performance was linearly predicted by the number of out-of-distribution molecules included in training. The discovered law indicated that IBP-trained DNN performance increases 1% per 79K (non-unique) OOD molecules added to training. As with MoA deconvolution, DNNs improved as they grew in size, but the deepest 12-layer models were again less effective than 9-layer models (Fig. 4d).
**Challenge 3: Molecule deconvolution.** We next probed how well the trained DNNs could discriminate between individual molecules by their phenotypes, a task which we call
Figure 3: **Experimental replicates improve IBP-trained DNN scaling laws. DNN performance and scaling laws increased as they were trained with larger amounts of replicates of each experimental perturbation.**
molecule deconvolution. This task involved 2,919-way classification of molecule identities on the _Pheno-CA_ (Fig. A2a). In contrast to MoA and target deconvolution, molecular deconvolution represents a distinct challenge since many of the 2,919 compounds may yield quite similar phenotypes. As such, this task measures the ceiling discriminability of phenotypic representations in the models.
The best-performing DNN (4.02 CCE) was an IBP-trained 9-layer and 1512-feature model trained with 100% of out-of-distribution molecules and 75% of the replicates of each compound. The representations of this IBP-trained DNN separated the phenotypes of different targets (Fig. A2b). IBP-trained DNNs followed another scaling law on this task; their performance was predicted by the number of OOD molecules included in training. The discovered law indicated that IBP-trained DNN performance improved by 1 point of cross-entropy loss for every additional 606,061 (non-unique) OOD molecules added into training. DNNs on this task improved as they grew in size, and deeper models performed better (Fig. A2d).
**Challenge 4: Compound discovery.** Another important use of phenotypic screening is to find compounds that affect biology in ways that resemble a biological perturbation such as a mutation in a specific gene or protein. If we could predict such compounds accurately,
Figure 4: _Pheno-CA challenge 2:_ **Target deconvolution.** (**a**) DNNs were either trained directly for target deconvolution from phenotypes or first pretrained on the IBP task then "frozen" for testing. Testing in each case involved fitting a 3-layer probe to generate target predictions for a molecule's imaged phenotype. (**b**) The highest-performing DNN was an IBP-pretrained model, and its representations discriminate between the most commonly appearing targets. (**c**) IBP-trained DNN performance is a linear function of the amount of data each model is trained on. Each individual colored curve depicts the performance of DNNs trained on a fixed number of molecules that fall "out-of-distribution" of the molecules in the _Pheno-CA_. Decreases on the right end of each curve indicate overfitting. The scaling law depicted here is a linear fit of the max-performing models in each curve. Chance is \(1.1e-1\). (**d**) While DNN performance generally improved as models grew in parameters, 9-layer DNNs were more accurate than 12-layer DNNs.
we could rapidly assemble compound libraries that we could screen for an illness associated with that mutation.
We investigated the ability of our DNNs to perform this task "zero shot" (_i.e._, without a task-specific probe, unlike the other challenges) by comparing model representations of molecule phenotypes and of CRISPR manipulations of specific targets. We measured the representational distance between the phenotype of a CRISPR perturbation of one gene and the phenotype of every molecule in the _Pheno-CA_. We then computed the rank-order of the distance between molecules and the target manipulation, and recorded how many guesses it took to find the molecules that match the manipulation (Fig. 5). We repeated this analysis for all compounds with at least 5 replicates in the _Pheno-CA_ (12 total) and found that the IBP-trained model with the lowest loss recorded during IBP-training produced representations that were significantly better at this task than standard Cell Profiler (CP) representations (\(T(11)=2.91\), \(p=0.007\)).
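The zero-shot matching procedure can be sketched as below: molecules are ranked by the distance between their phenotype representation and that of the CRISPR perturbation, and a cumulative-discovery curve is accumulated over guesses. Euclidean distance and the AUC normalization are our assumptions.

```python
import numpy as np

def discovery_curve(crispr_repr, molecule_reprs, is_match):
    """Rank molecules by distance to the CRISPR phenotype and count how many
    matching compounds have been found after the first k guesses, for every k."""
    dists = np.linalg.norm(molecule_reprs - crispr_repr[None, :], axis=1)
    order = np.argsort(dists)                       # closest molecules first
    found = np.cumsum(np.asarray(is_match)[order])  # matches discovered after k guesses
    return found

def discovery_auc(found):
    """Area under the normalized cumulative-discovery curve (1.0 = all matches first)."""
    n_guesses, n_matches = len(found), found[-1]
    if n_matches == 0:
        return 0.0
    return float(np.trapz(found / n_matches, dx=1.0 / n_guesses))
```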
## 4 Related work
**Deep learning-based chemistry.** While computational methods have played a large role in drug discovery since at least the early 1980s (Van Drie, 2007), big data and large-scale DNN architectures have recently supported significant advances for a variety of discovery tasks, such as predicting protein folding (Lin et al., 2023) and designing proteins with specific functions (Hesslow et al., 2022). Far less progress has been made in leveraging iHCS data for computational drug discovery tasks, with several notable exceptions. For instance, one study (Wong et al., 2023) found that iHCS data from an earlier and smaller iteration of the JUMP dataset can be used to train models to deconvolve MoAs significantly better than chance. Others have shown that iHCS carries signal for decoding the types of chemical or genetic perturbations that have been applied to cells (Moshkov et al., 2023). Our study takes significant steps beyond these prior ones and aligns iHCS with the goals of large-scale and data-driven AI in three ways: (_i_) we introduce the _Pheno-CA_ as a standardized benchmark for model evaluation, (_ii_) we identify model parameterizations that perform well on this benchmark and are immediately useful for drug discovery, and (_iii_) we discover scaling laws that describe how datasets like JUMP need to be expanded for continued improvements.
**Small molecule drug discovery benchmarks.** While JUMP and the _Pheno-CA_ offer a unique opportunity to train and evaluate DNNs on iHCS data for small molecule drug discovery, there are multiple other benchmarks that have focused on structure-based approaches to small molecule design. Frechet ChemNet Distance (FCD; Preuer et al., 2018) measures the distance between a model-generated small molecule and the distribution of molecules modeled by a DNN trained to predict the bioactivity of 6,000 molecules. Scoring high on this benchmark means that a model generates compounds that are within distribution of the FCD-model's representations. Guacamol (Brown et al., 2019) and molecular sets (MOSES; Polykovskiy et al., 2020) are benchmarks that evaluate a model's generated molecules according to their FCD, validity, uniqueness, and novelty. Finally, MoleculeNet (Wu et al., 2017) consists of multiple types of benchmarks for models spanning quantum mechanics, physical chemistry, biophysics, and predictions of physiological properties like toxicity and blood-brain barrier penetration. These benchmarks can synergize with the _Pheno-CA_, for instance, to
Figure 5: _Pheno-CA challenge 4: Compound discovery._ We measured the "zero-shot" efficacy of features from an IBP-trained DNN _vs._ Cell Profiler (CP) for finding compounds that share the target of a CRISPR perturbation. Lines depict the cumulative number of discovered molecules that match a target. The IBP-trained DNN is significantly better than CP (\(p=0.007\)).
tune models for filtering molecules predicted by our _Pheno-CA_-adjudicated DNNs to have desirable phenotypic qualities.
## 5 Discussion
The many great breakthroughs of AI over recent years have been guided by the discovery of neural scaling laws (Kaplan et al., 2020; Dehghani et al., 2023). Prior to these scaling laws, it was unclear if achieving human- or superhuman-level performance on challenging tasks in natural language processing and computer vision would require computing breakthroughs that shifted the paradigm beyond deep learning. But scaling laws indicate that sufficiently good algorithms are already in our hands, and that we need more data and compute to unlock their full potential. Here, we provide -- to the best of our knowledge -- the first evidence that DNNs trained with our IBP precursor task follow similar scaling laws for small molecule discovery.
We find that IBP-trained DNNs are very useful for drug discovery, and significantly better than task-supervised DNNs of any tested size at solving the tasks in our _Pheno-CA_. While the number of experimental replicates included in training affected the overall accuracy of IBP-trained DNNs (Fig. 3), the introduction of additional molecules that fell "out-of-distribution" of the _Pheno-CA_ was what actually enabled the accuracy of these models to scale up. This finding implies that the manifold relationship of small molecules and their phenotypes is highly nonlinear and filled with local-minima that make it easy for models to overfit -- as if task-supervised DNNs are "looking for their keys under the streetlamp." While it may not be feasible to generate the 14M additional experimental images (3.25M more novel molecules, with about five experimental replicates each) needed to achieve 100% accuracy on a task like MoA deconvolution, continuing to scale experimental data and DNNs towards this goal will unlock extraordinary opportunities to expedite drug discovery for the most challenging diseases we face today. We will release our code to support these efforts to revolutionize medicine.
**Limitations.** JUMP and other iHCS datasets present significant opportunities for advancing DNNs in small molecule discovery. However, the scaling laws that we discover demonstrate some key limitations. Purchasing just the 3.25M molecules needed to reach 100% accuracy in MoA deconvolution would cost around $325M (assuming $100 per compound). Generating replicate experiments for each could multiply that cost by orders of magnitude. Thus, there is an essential need to identify new imaging and experimental methods that can generate cheaper and better data for training DNNs with IBP. A partial solution to this problem is time-lapse imaging of single cells (Arrasate et al., 2004; Finkbeiner et al., 2015), which enables analysis of single cells over time. Such time-course data has already been successfully used in multiple deep learning applications (Linsley* et al., 2021; Wang et al., 2022; Christiansen et al., 2018) and could approximate and supercharge the beneficial effects of replicates on scaling laws that we observed for MoA deconvolution (Fig. 3).
## Acknowledgments
This work was supported by the BRAINSTORM program at Brown University.
|
2309.10302 | Decoupled Training: Return of Frustratingly Easy Multi-Domain Learning | Multi-domain learning (MDL) aims to train a model with minimal average risk
across multiple overlapping but non-identical domains. To tackle the challenges
of dataset bias and domain domination, numerous MDL approaches have been
proposed from the perspectives of seeking commonalities by aligning
distributions to reduce domain gap or reserving differences by implementing
domain-specific towers, gates, and even experts. MDL models are becoming more
and more complex with sophisticated network architectures or loss functions,
introducing extra parameters and enlarging computation costs. In this paper, we
propose a frustratingly easy and hyperparameter-free multi-domain learning
method named Decoupled Training (D-Train). D-Train is a tri-phase
general-to-specific training strategy that first pre-trains on all domains to
warm up a root model, then post-trains on each domain by splitting into
multi-heads, and finally fine-tunes the heads by fixing the backbone, enabling
decouple training to achieve domain independence. Despite its extraordinary
simplicity and efficiency, D-Train performs remarkably well in extensive
evaluations of various datasets from standard benchmarks to applications of
satellite imagery and recommender systems. | Ximei Wang, Junwei Pan, Xingzhuo Guo, Dapeng Liu, Jie Jiang | 2023-09-19T04:06:41Z | http://arxiv.org/abs/2309.10302v2 | # Decoupled Training: Return of Frustratingly Easy Multi-Domain Learning
###### Abstract
Multi-domain learning (MDL) aims to train a model with minimal average risk across multiple overlapping but non-identical domains. To tackle the challenges of dataset bias and domain domination, numerous MDL approaches have been proposed from the perspectives of _seeking commonalities_ by aligning distributions to reduce domain gap or _reserving differences_ by implementing domain-specific towers, gates, and even experts. MDL models are becoming more and more complex with _sophisticated_ network architectures or loss functions, introducing extra parameters and enlarging computation costs. In this paper, we propose a _frustratingly easy_ and hyperparameter-free multi-domain learning method named Decoupled Training (D-Train). D-Train is a tri-phase _general-to-specific_ training strategy that first pre-trains on all domains to warm up a root model, then post-trains on each domain by splitting into multi-heads, and finally fine-tunes the heads by fixing the backbone, enabling decouple training to achieve domain independence. Despite its extraordinary simplicity and efficiency, D-Train performs remarkably well in extensive evaluations of various datasets from standard benchmarks to applications of satellite imagery and recommender system.
## Introduction
The success of deep learning models across a wide range of fields often relies on the fundamental assumption that the data points are _independent and identically distributed_ (i.i.d.). However, in real-world scenarios, training and test data are usually collected from different regions, devices, or platforms, consisting of multiple overlapping but non-identical domains. For example, a popular satellite dataset named Functional Map of the World (FMoW) [16], which aims to predict the functional purpose of buildings and land use on this planet, contains large-scale satellite images from various regions with different appearances and styles. In this case, jointly training a single model obscures domain distinctions, while separately training multiple models by domains reduces training data in each model [11]. This dilemma motivated the research on _multi-domain learning_[11, 12, 13, 14, 15].
To figure out the challenges of multi-domain learning, we first delved into these two standard benchmark datasets. By analyzing the examples of these datasets, it is obvious that _dataset bias_ across domains is one of the biggest obstacles to multi-domain learning. As shown in Figure 1(a), reviews from different domains have different keywords and styles. Further, Figure 1(b) reveals that images of Office-Home are from four significantly different domains: _Art_, _Clipart_, _Product_ and _Real-World_, with various appearances and backgrounds. To tackle the _dataset bias_ problem, numerous approaches have been proposed and they can be briefly grouped into two categories: 1) **Seeking Commonalities**. Some classical solutions [13, 14] adopting the insightful idea of domain adversarial training have been proposed to extract domain-invariant representations across multiple domains. 2) **Reserving Differences**. These approaches [13, 14] adopt multi-branch network architectures with domain-specific towers, gates, and even experts, implementing domain-specific parameters to avoid domain conflict caused by dataset bias across domains. As shown in Figure 2, these methods are becoming more and more complex with _sophisticated_ network architectures or loss functions.
Further, different from multi-task learning which focuses on tackling different tasks within a single domain, multi-domain learning shares the same label space across multiple domains with different marginal distributions. We thus calculated the distribution of sample number across domains on several benchmark datasets and the results are shown in Figure 1. In practice, the distributions of sample number across domains are usually twisted, imbalanced, or even long-tailed, causing another main challenge of multi-domain learning: _domain domination_. In this case, some tailed domains may have much fewer examples and it would be difficult to train satisfied models on them. Without proper network designs or loss functions, the model will be easily dominated by the head domains with much more examples and shift away from the tailed ones with few examples.
Realizing the main challenges of _dataset bias_ and _domain domination_ in multi-domain learning, we aim at proposing a general, effective, and efficient method to tackle these obstacles all at once. We first rethought the development of multi-domain learning and found that the approaches in this field are becoming more and more sophisticated, consisting of multifarious network architectures or complex loss functions with many trade-off hyperparameters. By assigning
different parameters across domains, these designs may be beneficial in some cases but they will absolutely include more parameters and enlarge computation cost, as well as introduce much more hyperparameters. For example, the latest state-of-the-art method named PLE has to tune the numbers of domain-specific experts and domain-agnostic experts in each layer and design the network structures of each expert, each tower, and each gate network.
Motivated by the famous quote of Albert Einstein, "_everything should be made as simple as possible, but no simpler_", we proposed a frustratingly easy multi-domain learning method named Decoupled Training (D-Train). D-Train is a training strategy based on the original frustratingly easy but most general _shared-bottom_ architecture. D-Train is a tri-phase _general-to-specific_ training strategy that first pre-trains on all domains to warm up a root model, then post-trains on each domain by splitting into multi-heads, and finally fine-tunes the heads by fixing the backbone, enabling decouple training to achieve domain independence. Despite its extraordinary simplicity and efficiency, D-Train performs remarkably well in extensive evaluations of various datasets from standard benchmarks to applications of satellite imagery and recommender system. In summary, this paper has the following contributions:
* We explicitly uncover the main challenges of dataset bias and domain domination in MDL, especially the latter since it is usually ignored in most existing works.
* We propose a frustratingly easy and hyperparameter-free MDL method named Decoupled Training by applying a tri-phase _general-to-specific_ training strategy.
* We conduct extensive experiments from standard benchmarks to real-world applications and verify that D-Train performs remarkably well.
## Related Work
### Seeking Commonalities
To tackle the _dataset bias_ problem in multi-domain learning, various approaches Liu et al. (2017); Alice et al. (2019) have been proposed from the perspective of domain alignment, by adopting the insightful idea of domain adversarial training to extract domain-invariant representations across domains. To smooth the presentation of domain alignment in MDL, we will first give a brief review of domain adaptation. There are mainly two categories of domain adaptation formulas: _covariate shift_Quionero-Candela et al. (2009); Pan and Yang (2010); Long et al. (2015); Ganin and Lempitsky (2015) and _label shift_Lipton et al. (2018); Azizzadenesheli et al. (2019); Alexandari et al. (2020), while we focus on the former in this paper since it is more relevant with the topic of MDL. Recent deep domain adaptation methods tackle domain shifts from the perspectives of either moment matching or adversarial training, in which the former aligns feature distributions by minimizing the distribution discrepancy across domains Long et al. (2015); Tzeng et al. (2014); Long et al. (2017). Further, domain adversarial neural network (DANN) Ganin et al. (2016) becomes the mainstream method in domain adaptation. It introduces a domain discriminator to distinguish the source features from the target ones, while the feature extractor is designed to confuse the domain discriminator. In this way, the domain discriminator and feature extractor are competing in a two-player minimax game. Its natural extension to MDL is DANN-MDL. Later, CDAN Long et al. (2018) further tailors the discriminative information conveyed in the classifier predictions into the input of the domain discriminator, whose natural extension to MDL is CDAN-MDL. Following the main idea of the minimax game, several variants of adversarial training methods Pei et al. (2018); Tzeng et al. (2017); Saito et al. (2018); Zhang et al. (2019) were proposed. MulANN Alice et al. (2019) tailors the insight of domain adversarial training into the MDL problem by introducing a domain discriminator into the shared model with a single head. In contrast, ASP-MTL Liu et al. (2017) includes a shared-private model and a domain discriminator.
### Reserving Differences
Another series of methods Ma et al. (2018); Tang et al. (2020); Sheng et al. (2021) for multi-domain learning adopt multi-branch network architectures and develop domain-specific parameters Dredze et al. (2010) to avoid do
Figure 1: **(a)-(b)**: Review examples from a recommender system benchmark named Amazon with various styles and keywords, as well as visual examples from a computer vision dataset named Office-Home with various appearances and backgrounds, reveal the main challenge of _dataset bias_. **(c)-(d)**: The distribution of sample number across domains is naturally imbalanced or even long-tailed, indicating another major challenge of _domain domination_.
main conflict caused by dataset bias across domains. Among them, Shared Bottom (SB) [14] is the frustratingly easy but effective one. Further, MoE [10] and its extension of MMoE [11] adopt the insightful idea of the mixture of experts to learn different mixture patterns of experts assembling, respectively. PLE [13] explicitly separates domain-shared and domain-specific experts to alleviate harmful parameter interference across domains. Note that, PLE further applies progressive separation routing with several deeper layers but we only adopt one layer for a fair comparison with other baselines. Other MDL methods focus on maintaining shared and domain-specific parameters by confidence-weighted combination [14], domain-guided dropout [21], or prior knowledge about domain semantic relationships [15]. Meanwhile, various task-specific MDL approaches have been proposed for computer vision [1, 16, 17, 18, 19, 20, 21], natural language processing [22, 23, 24] and recommender system [11, 12, 24, 25, 26, 27, 28, 29, 30, 31, 32]. The comparison between these MDL methods are summarized in Figure 2.
## Approach
This paper aims at proposing a simple, effective, and frustratingly easy method for multi-domain learning (MDL). Given data points from multiple domains \(\{\mathcal{D}_{1},\mathcal{D}_{2},...,\mathcal{D}_{T}\}\), \(T\) is the domain number. Denote \(\mathcal{D}_{t}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{n_{t}}\) the data from domain \(t\), where \(\mathbf{x}_{i}\) is an example, \(\mathbf{y}_{i}\) is the associated label and \(n_{t}\) is the sample number of domain \(t\). Denote the shared backbone \(\psi\) and the domain-specific heads \(\{h_{1},h_{2},...,h_{T}\}\) respectively. The goal of MDL is to improve the performance of each domain \(\mathcal{D}_{t}\).
As mentioned above, the proposed Decoupled Training (D-Train) is a frustratingly easy multi-domain learning method based on the original _shared-bottom_ architecture. D-Train takes a _general-to-specific_ training strategy. It first pre-trains on all domains to warm up a root model. Then, it post-trains on each domain by splitting into multi-heads. Finally, it fine-tunes the heads by fixing the backbone, enabling decouple training to achieve domain independence.
### Pre-train: warm up a root model
The power of deep learning models is unleashed by large-scale datasets. However, as mentioned above, the distributions of sample numbers across domains are usually twisted, imbalanced, or even long-tailed in real-world applications. In this case, some tailed domains may only have limited samples and it would be difficult to train satisfactory models on them. To alleviate this problem, D-Train first pre-trains a single model on samples from all domains to warm up a root model for all domains, especially for the tailed domains with limited samples. The optimization function of the pre-train phase in multiple domains can be formalized as
\[\psi_{0},h_{0}=\operatorname*{arg\,min}_{\psi,h}\frac{1}{T}\sum_{t=1}^{T}\frac {1}{n_{t}}\sum_{i=1}^{n_{t}}\mathcal{L}\left[(h\circ\psi)(\mathbf{x}_{i}^{t}), \mathbf{y}_{i}^{t}\right], \tag{1}\]
where \(h\) and \(\psi\) are the _shared_ head and backbone at the pre-training phase. After the training process converges, \(\psi_{0}\) and \(h_{0}\) will be good initializations for the next phase, as shown in Figure 3(a). To verify it, we take DomainNet as an example and show the training curves of the proposed method in Figure 3(b). Note that, the experiments are repeated \(5\) times to show both mean and standard deviation.
### Post-train: split into multi-heads
As mentioned before, jointly training a single model obscures domain distinctions, leading to domain conflict caused by the specificity of different domains. To reflect the domain specificity and achieve satisfactory performance for each domain, we adopt the _shared-bottom_ architecture that has a shared feature extractor \(\psi\) and various domain-specific heads \(\{h_{1},h_{2},...,h_{T}\}\) to tackle the challenge of _dataset bias_ across
Figure 2: **(a)**: Seeking Commonalities by aligning distributions across domains to reduce domain gap. **(b)**: Reserving Differences by implementing domain-specific towers, gates, and even experts.
domains. With this design, the parameters of the feature extractor will be updated simultaneously by gradients of samples from all domains, but the parameters of domain-specific heads are trained on each domain. The optimization function of the post-training phase over all domains can be formalized as
\[\widetilde{\psi},\{\widetilde{h}_{1},\widetilde{h}_{2},...,\widetilde{h}_{T}\}= \operatorname*{arg\,min}_{\psi,\{h_{1},h_{2},...,h_{T}\}}\frac{1}{T}\sum_{t=1}^ {T}\!L_{t} \tag{2}\]
\[L_{t}=\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}\mathcal{L}\left[(h_{t}\circ\psi)( \mathbf{x}_{i}^{t}),\mathbf{y}_{i}^{t}\right] \tag{3}\]
where the domain-agnostic feature extractor \(\psi\) is initialized as \(\psi_{0}\) while each domain-specific head \(h_{t}\) of domain \(t\) is initialized as \(h_{0}\). After the training process converges, \(\psi\) and \(h\) will reach strong points of \(\widetilde{\psi}\) and \(\widetilde{h}\) as shown in Figure 3(b). Since the shared-bottom architecture contains both domain-agnostic and domain-specific parameters, the training across domains maintains a lukewarm relationship. In this way, the challenge of _dataset bias_ across domains and _domain domination_ can be somewhat alleviated. The training curves as shown in Figure 3(b) also witness a sharp improvement via splitting into multi-heads across domains. Note that, at the beginning of the post-train phase, the test accuracy of each domain drops first owing to the training mode switches from fitting all domains to each specific domain.
### Fine-tune: decouple-train for domain independence
Regarding the benefits of domain-specific parameters across domains, a natural question arises: Can the shared-bottom architecture fully solve the problem of domain domination? To answer this question, we adopt the Euclidean norm to calculate the parameter update between the domain-specific heads \(\{h_{1},h_{2},...,h_{T}\}\) and the domain-agnostic head \(h_{0}\) at the pre-training phase on DomainNet. As shown in Figure 4(a), the domain of Real has much more examples than other domains, and is believed to dominate the training process. However, as shown in Figure 4(b), the parameter update after the post-train phase reveal that the training process is still dominated by the head domain which still has the largest value of the parameter update. Hence, though the challenge of dataset bias across domains can be alleviated by introducing domain-specific heads in the shared-bottom architecture, the domain domination problem still exists after the post-train phase. This is reasonable since the domain-shared parameters of the backbone will be dominated by the head domains. To this end, we propose a decoupling training strategy by fully fixing the parameters of the feature extractor to achieve domain independence. Formally,
\[\widehat{h}_{t}=\operatorname*{arg\,min}_{h_{t}}\frac{1}{n_{t}}\sum_{i=1}^{n_{ t}}\mathcal{L}\big{[}(h_{t}\circ\widetilde{\psi})(\mathbf{x}_{i}^{t}), \mathbf{y}_{i}^{t}\big{]},\quad t=1,2,...,T. \tag{4}\]
In this way, the parameters of the domain-specific heads will be learned by samples from each domain as shown in Figure 3(a). With this kind of _domain-independent_ training, the head domains will no longer dominate the training of the tailed domains at this phase. Further, the parameter update between phases becomes more balanced across domains as shown in Figure 4(c). Meanwhile, the training curves as shown in Figure 3(b) reveal that fine-tuning across domains can further improve the performance.
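A minimal PyTorch sketch of the shared-bottom model and the three phases formalized in Eqs. (1)-(4) is given below. The head and backbone shapes, optimizer, learning rate, and epoch counts are placeholders rather than the settings used in the experiments; the sketch only illustrates the general-to-specific schedule: shared pre-training, multi-head post-training with each head initialized from the warmed-up shared head, and head-only fine-tuning with a frozen backbone.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedBottom(nn.Module):
    """Shared backbone (psi) with one head per domain plus a shared warm-up head (h)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, n_classes: int, n_domains: int):
        super().__init__()
        self.backbone = backbone
        self.shared_head = nn.Linear(feat_dim, n_classes)                 # h in Eq. (1)
        self.heads = nn.ModuleList(                                       # h_1, ..., h_T
            nn.Linear(feat_dim, n_classes) for _ in range(n_domains))

    def forward(self, x, domain=None):
        feats = self.backbone(x)
        head = self.shared_head if domain is None else self.heads[domain]
        return head(feats)

def run_phase(model, loaders, params, epochs, lr, use_domain_heads):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for t, loader in enumerate(loaders):
            for x, y in loader:
                logits = model(x, domain=t if use_domain_heads else None)
                loss = F.cross_entropy(logits, y)
                opt.zero_grad()
                loss.backward()
                opt.step()

def d_train(model, domain_loaders, epochs=(10, 10, 10), lr=1e-3):
    # Phase 1 (Eq. 1): pre-train backbone + shared head on samples from all domains.
    run_phase(model, domain_loaders,
              list(model.backbone.parameters()) + list(model.shared_head.parameters()),
              epochs[0], lr, use_domain_heads=False)

    # Phase 2 (Eqs. 2-3): split into multi-heads, each initialized from h_0.
    for head in model.heads:
        head.load_state_dict(copy.deepcopy(model.shared_head.state_dict()))
    run_phase(model, domain_loaders, model.parameters(), epochs[1], lr,
              use_domain_heads=True)

    # Phase 3 (Eq. 4): freeze the backbone so each head trains only on its own domain.
    for p in model.backbone.parameters():
        p.requires_grad_(False)
    head_params = [p for h in model.heads for p in h.parameters()]
    run_phase(model, domain_loaders, head_params, epochs[2], lr,
              use_domain_heads=True)
    return model
```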
### Why does D-Train work?
As mentioned before, dataset bias and domain domination are the main challenges of multi-domain learning. All of the existing works, including the methods from the perspectives of both seeking commonalities or reserving differences, have to face the challenge of _caring for this and losing that_. With the shared parameters across domains, these methods will influence each other. When the problem of domain domination or
Figure 3: **(a)**: Explanations of different phases of D-Train. \(\psi\) denotes the feature extractor; \(h\) denotes the shared head in the pre-training phase; \(\{h_{1},h_{2},...,h_{T}\}\) denote the domain-specific heads at the next two phases. During the fine-tuning phase, the parameters of the feature extractor are fixed. **(b)**: The training curves of each phase of D-Train on various domains: _Clipart_, _Painting_, _Real_ and _Sketch_ as shown in dotted lines, while the average accuracy over domains is shown in solid line.
the domain conflict caused by dataset bias cannot be ignored, the MDL model will struggle in finding an optimal solution for all domains. Actually, as shown in the training process of DomainNet in Figure 3(b), _different domains achieve the optimal performance at different time stamps_ after the post-training phase. However, the goal of multi-domain learning is to use one model to serve all domains, and thus we cannot use these models simultaneously. On the contrary, with the proposed decoupling training strategy, different domains in D-Train will train independently at the fine-tuning phase and reach an optimal solution for each domain.
## Experiments
In this section, we compared the proposed D-Train method with three categories of baselines: Single Branch, Domain Alignment, and Multiple Branch.
### Standard Benchmarks
In this section, we adopt two standard benchmarks in the field of domain adaptation with various dataset scales, in which Office-Home is in a low-data regime and DomainNet is a large-scale one. D-Train and all baselines in this section are implemented in a popular open-sourced library 1 which has implemented a high-quality code base for many domain adaptation baselines.
Footnote 1: [https://github.com/thuml/Transfer-Learning-Library](https://github.com/thuml/Transfer-Learning-Library)
Low-data Regime: Office-HomeOffice-Home is a standard multi-domain learning dataset [21] with \(65\) classes and \(15,500\) images from four significantly different domains: Art, Clipart, Product, and Real-World. As shown in Figure 1(b), there exist challenges of dataset bias and domain domination in this dataset. Following existing works on this dataset, we adopt ResNet-50 as the backbone and randomly initialize fully connected layers as heads. We set the learning rate as \(0.0003\) and batch size as \(24\) in each domain for D-Train and all baselines.
As shown in Table 1, Separately Training is a strong baseline and even outperforms Jointly Training, since the latter obscures domain distinctions and cannot tackle the dataset bias across domains. Multi-Domain Learning methods from the perspective of domain alignment work much better by introducing a domain discriminator and exploiting the domain information. However, applying domain alignment is not an optimal solution in MDL since the domain gap can only be reduced but not removed. Finally, the proposed D-Train consistently improves on all domains, even the tailed domain of "Art". D-Train achieves a new state-of-the-art result with an average accuracy of \(86.0\).
\begin{table}
\begin{tabular}{l|c c c c|c c} \hline \hline Method & Art & Clipart & Product & Real-World & Avg. Acc. & Worst Acc. \\ \hline \#Samples & 2427 & 4365 & 4439 & 4357 & - & - \\ \hline Separately Train & 76.8 & 77.2 & 92.9 & 87.1 & 83.5 & 76.8 \\ Jointly Train & 73.5 & 73.3 & 91.4 & 86.7 & 81.2 & 73.3 \\ MulANN [1] & 77.8 & 80.0 & 92.5 & 87.3 & 84.4 & 77.8 \\ DANN-MDL [17] & 75.9 & 77.2 & 92.2 & 87.2 & 83.1 & 75.9 \\ ASP-MDL [16] & 78.8 & 79.3 & 93.2 & 87.5 & 84.7 & 78.8 \\ CDAN-MDL [16] & 78.6 & 79.0 & 93.6 & 89.1 & 85.1 & 78.6 \\ Shared Bottom [15] & 76.5 & 80.2 & 93.8 & 88.8 & 84.8 & 76.5 \\ MoE [1] & 76.8 & 77.1 & 92.3 & 87.4 & 83.4 & 76.8 \\ MMoE [16] & 78.8 & 79.6 & 93.4 & 88.9 & 85.2 & 78.8 \\ PLE [15] & 78.0 & 79.8 & 93.6 & 88.5 & 85.0 & 78.0 \\
**D-Train (ours)** & **80.0** & **80.3** & **94.1** & **89.5** & **86.0** & **80.0** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy (%) on _Office-Home_ for multi-domain learning (ResNet-50).
Figure 4: Analysis on the domain domination challenge where \(h_{0}\) is the domain-agnostic head at the pre-training phase. \(\|\cdot\|\) denotes the Euclidean norm of the parameter update between different phases.
**Large-Scale Dataset: DomainNet.** DomainNet Peng et al. (2019) is a large-scale multi-domain learning and domain adaptation dataset with \(345\) categories. We utilize 4 domains with different appearances, including _Clipart_, _Painting_, _Real_, and _Sketch_, where each domain has about \(40,000\) to \(200,000\) images. As shown in Figure 4(a), the domain of "_Real_" has many more examples than the other domains, and is believed to dominate the training process. Following the code base in the Transfer Learning Library, we adopt mini-batch SGD with a momentum of \(0.9\) as the optimizer, and the initial learning rate is set as \(0.01\) with an annealing strategy. We adopt ResNet-101 as the backbone since DomainNet is much larger and more difficult than the previous Office-Home dataset. Meanwhile, the batch size is set as \(20\) in each domain for D-Train and all baselines.
As shown in Table 2, the proposed D-Train outperforms all baselines, whether measured by the average accuracy or the worst accuracy over all domains. Since the domain of "Real" has many more samples than the other domains, it achieves competitive performance even when trained separately. However, the other domains benefit from training via D-Train.
### Applications of Satellite Imagery
In this section, we adopt a popular satellite dataset named Functional Map of the World (FMoW) Christie et al. (2018), whose task is to predict the functional purpose of buildings and land use; it contains large-scale satellite images with different appearances and styles from various regions: _Africa_, _Americas_, _Asia_, _Europe_, and _Oceania_. FMoW is therefore a natural dataset for multi-domain learning. D-Train and all baselines in this section are implemented in a popular open-source library named WILDS 2, since it enables easy manipulation of this dataset. Each input \(\mathbf{x}\) in FMoW is an RGB satellite image resized to \(224\times 224\) pixels, and the label \(y\) is one of \(62\) building or land use categories. For all experiments, we follow Christie et al. (2018) and use a DenseNet-121 model Huang et al. (2017) pretrained on ImageNet. We set the batch size to \(64\) for all domains. Following WILDS, we report the average accuracy and the worst-region accuracy for all multi-domain learning methods.
Footnote 2: [https://github.com/p-lambda/wilds](https://github.com/p-lambda/wilds)
As shown in Table 3, it is unwise to train a separate model for each domain on FMoW, since the data in some domains are extremely scarce. Jointly Training or Shared Bottom is also not optimal, because conflicts exist between some domains, such as the Europe and Africa domains, which have very different appearances. Note that D-Train outperforms all baselines on this difficult dataset as well.
### Applications of Recommender System
In this section, we adopt a popular dataset named Amazon Product Review (_Amazon_) 3. We select \(7\) typical subsets with various scales: Books (_Books_), Electronics (_Elec._), Movies_and_TV (_TV_), CDs_and_Vinyl (_CD_), Kindle_Store (_Kindle_), Cell_Phones_and_Accessories (_Phone_), and Digital_Music (_Music_). As shown in Table 4, the domains vary in size from \(0.38M\) to \(19.2M\) samples. For each domain, diverse user behaviors are available, with more than \(5\) reviews for each user-goods pair. The features used for the experiments consist of goods_id and user_id. Users in these domains clearly have different preferences for various goods.
Footnote 3: [https://jmcualey.ucsd.edu/data/amazon/](https://jmcualey.ucsd.edu/data/amazon/)
D-Train and all baselines in this section are implemented based on a popular open-source library named pytorch-fm 4. We use a DNN as the CTR method, and the embed_dim and mlp_dim are both set as \(16\). The numbers of layers of the expert and the tower are set as \(2\) and \(3\), respectively. We report AUC (Area Under the Curve) for each domain. Further, AUC_d and AUC_s are averaged over all domains and all samples, respectively, to intuitively compare D-Train with other baselines.
\begin{table}
\begin{tabular}{l|c c c c|c c} \hline \hline Method & Clipart & Painting & Real & Sketch & Avg. Acc. & Worst Acc. \\ \hline \#Samples & 49k & 76k & 175k & 70k & - & - \\ \hline Separately Train & 78.2 & 71.6 & **83.8** & 70.6 & 76.1 & 70.6 \\ Jointly Train & 77.4 & 68.0 & 77.9 & 68.5 & 73.0 & 68.0 \\ MulANN Alice et al. (2019) & 79.5 & 71.7 & 81.7 & 69.9 & 75.7 & 69.9 \\ DANN-MDL Ganin et al. (2016) & 79.8 & 71.4 & 81.4 & 70.3 & 75.7 & 70.3 \\ ASP-MDL Liu et al. (2017) & 80.1 & 72.1 & 81.2 & 70.9 & 76.1 & 70.9 \\ CDAN-MDL Long et al. (2018) & 80.2 & 72.2 & 81.3 & 71.0 & 76.2 & 71.0 \\ Shared Bottom Ruder (2017) & 79.9 & 72.1 & 81.9 & 69.7 & 75.9 & 69.7 \\ MoE Jacobs et al. (1991) & 79.1 & 70.2 & 79.8 & 69.4 & 74.6 & 69.4 \\ MMoE Ma et al. (2018) & 79.6 & 72.2 & 82.0 & 69.8 & 75.9 & 69.8 \\ PLE Tang et al. (2020) & 80.0 & 72.2 & 82.1 & 70.0 & 76.1 & 70.0 \\ \hline D-Train (w/o Fine-tune) & 79.9 & 71.3 & 81.3 & 70.9 & 75.9 & 70.9 \\ D-Train (w/o Post-train) & 79.8 & 72.9 & 81.7 & 72.1 & 76.6 & 72.1 \\ D-Train (w/o Pre-train) & 80.9 & 72.9 & 82.7 & 71.5 & 77.0 & 71.5 \\
**D-Train (ours)** & **81.5** & **72.8** & 82.7 & **72.2** & **77.3** & **72.2** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy (%) on _DomainNet_ for multi-domain learning (ResNet-101).
For all models, we use Adam as the optimizer with exponential decay, in which the learning rate starts at \(1e^{-3}\) with a decay rate of \(1e^{-6}\). During training, the mini-batch size is set to \(2048\). As shown in Table 4, D-Train yields larger improvements than a variety of MDL baselines on the Amazon Product Review dataset.
### Plug-in Unit
D-Train can be used as a general plug-in unit for existing MDL methods. We compare the performance of Shared Bottom, MMoE, and PLE with and without it, by only training the parameters of the domain-specific heads while fixing the other parameters at the fine-tuning phase. As shown in Table 4, incorporating D-Train in this way can further improve these competitive MDL methods on the Amazon Product Review dataset.
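A minimal sketch of this plug-in step is given below, assuming a shared trunk (for example, the experts/bottom of MMoE or PLE) and a dictionary of per-domain heads; the function and variable names, as well as the optimizer choice, are illustrative rather than the exact implementation.

```python
import torch

def fine_tune_heads_only(trunk, domain_heads, domain_loaders, loss_fn, lr=1e-3, epochs=1):
    """Freeze everything except the domain-specific heads, then fine-tune
    each head independently on its own domain (the D-Train plug-in step)."""
    for p in trunk.parameters():
        p.requires_grad_(False)          # freeze the shared experts/bottom
    for domain, head in domain_heads.items():
        for p in head.parameters():
            p.requires_grad_(True)       # train only this domain's head
        opt = torch.optim.Adam(head.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in domain_loaders[domain]:
                opt.zero_grad()
                loss_fn(head(trunk(x)), y).backward()
                opt.step()
```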
### Ablation Study
As shown in Table 2, we conduct an ablation study on DomainNet by removing each phase in turn. The reported results on the four domains reveal that utilizing all three phases works best. In particular, D-Train (w/o Fine-tune) performs much worse than the other ablation variants, which underlines the importance of decoupled training for domain independence.
## Conclusion
Multi-domain learning (MDL) is of great importance in both academia and industry. Numerous efforts have been devoted to it, but most works focus on a specific application and a general approach is missing. In this paper, we explicitly uncover the main challenges of dataset bias and domain domination in multi-domain learning, especially the latter since it is usually ignored in existing works. We further propose a frustratingly easy and hyperparameter-free multi-domain learning method named Decoupled Train (D-Train) with a tri-phase _general-to-specific_ training strategy. Despite its extraordinary simplicity and efficiency, D-Train performs remarkably well in extensive evaluations on various datasets, from standard benchmarks to applications in satellite imagery and recommender systems.
\begin{table}
\begin{tabular}{l|c c c c c c|c c|c} \hline \hline Method & Books & Elec. & TV & CD & Kindle & Phone & Music & AUC\({}_{d}\) & AUC\({}_{s}\) \\ \hline \#Samples & 19.2M & 3.70M & 3.28M & 2.96M & 2.25M & 0.81M & 0.38M & – & – \\ \hline Separately Training & 66.09 & 77.50 & 79.43 & 59.69 & 52.79 & 70.06 & 52.95 & 65.50 & 67.17 \\ Jointly Training & 69.01 & 78.87 & 85.06 & 64.24 & 59.15 & 69.89 & 49.71 & 67.99 & 70.43 \\ MulANN [1] & 68.95 & 78.90 & 84.56 & 64.79 & 58.64 & 70.43 & 52.13 & 68.34 & 70.40 \\ DANN-MDL [1] & 68.64 & 80.33 & 86.08 & 66.32 & 58.59 & 72.47 & 54.21 & 69.52 & 70.74 \\ CDAN-MDL [1] & 69.74 & 80.63 & 85.88 & 67.24 & 60.61 & 73.34 & 57.39 & 70.69 & 71.69 \\ MoE [1] & 73.51 & 85.88 & 89.66 & 74.94 & 63.45 & 79.63 & 66.08 & 76.16 & 76.04 \\ \hline Shared Bottom [10] & 70.91 & 74.87 & 85.51 & 67.18 & 60.56 & 74.59 & 59.14 & 70.39 & 71.73 \\ Shared Bottom + D-Train & 71.35 & 74.76 & 85.52 & 67.61 & 60.20 & 73.53 & 61.61 & 70.65 & 71.99 \\ \hline MMoE [1] & 73.67 & 86.15 & 89.23 & 75.50 & 62.43 & 81.91 & 63.69 & 76.01 & 76.13 \\ MMoE + D-Train & 74.50 & 86.09 & 88.60 & 77.13 & 66.46 & 82.13 & 69.01 & 77.70 & 77.05 \\ \hline PLE [13] & 75.25 & 85.36 & 88.54 & 76.09 & 69.35 & 81.02 & 67.75 & 77.62 & 77.46 \\ PLE + D-Train & 74.70 & 86.70 & 89.53 & 77.40 & 69.26 & 82.47 & 69.91 & 78.57 & 77.56 \\ \hline \hline \end{tabular}
\end{table}
Table 4: AUC (%) on Amazon Product Review for multi-domain learning.
\begin{table}
\begin{tabular}{l|c c c c c|c c} \hline \hline Method & Asia & Europe & Africa & America & Oceania & Avg. Acc. & Worst Acc. \\ \hline \#Samples & 115k & 182k & 37k & 176k & 15k & - & - \\ \hline Separately Train & 59.4 & 56.9 & 72.9 & 59.4 & 65.1 & 58.7 & 56.9 \\ Jointly Train & 60.8 & 57.2 & **77.0** & **63.5** & 71.6 & 60.4 & 57.2 \\ MulANN [1] & 61.2 & 57.5 & 74.6 & 62.3 & 67.9 & 60.2 & 57.5 \\ DANN-MDL [12] & 55.9 & 55.5 & 61.9 & 58.2 & **74.2** & 56.7 & 55.5 \\ ASP-MDL [12] & 54.5 & 53.9 & 73.4 & 57.4 & 70.3 & 55.3 & 53.9 \\ CDAN-MDL [12] & 57.0 & 56.8 & 68.0 & 59.7 & 70.3 & 57.7 & 56.8 \\ Shared Bottom [1] & 58.1 & 57.1 & 75.4 & 61.3 & 71.9 & 59.8 & 57.1 \\ MoE [1] & 55.9 & 54.0 & 63.2 & 59.0 & 70.6 & 56.3 & 54.0 \\ MMoE [1] & 60.7 & 55.7 & 65.6 & 62.4 & 64.8 & 58.7 & 55.7 \\ PLE [13] & 58.2 & 56.5 & 74.6 & 61.7 & 72.5 & 59.0 & 56.5 \\
**D-Train (ours)** & **62.3** & **58.3** & **77.0** & 62.7 & 68.3 & **61.0** & **58.3** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy (%) on _FMoW_ for multi-domain learning (DenseNet-121). |
2309.04377 | EPOCHS VI: The Size and Shape Evolution of Galaxies since z ~ 8 with
JWST Observations | We present the results of a size and structural analysis of 1395 galaxies at
$0.5 \leq z \lesssim 8$ with stellar masses $\log \left(M_* / M_{\odot}\right)$
$>$ 9.5 within the JWST Public CEERS field that overlaps with the HST CANDELS
EGS observations. We use GALFIT to fit single S\'ersic models to the rest-frame
optical profile of our galaxies, which is a mass-selected sample complete to
our redshift and mass limit. Our primary result is that at fixed rest-frame
wavelength and stellar mass, galaxies get progressively smaller, evolving as
$\sim (1+z)^{-0.71\pm0.19}$ up to $z \sim 8$. We discover that the vast
majority of massive galaxies at high redshifts have low S\'ersic indices, thus
do not contain steep, concentrated light profiles. Additionally, we explore the
evolution of the size-stellar mass relationship, finding a correlation such
that more massive systems are larger up to $z \sim 3$. This relationship breaks
down at $z > 3$, where we find that galaxies are of similar sizes, regardless
of their star formation rates and S\'ersic index, varying little with mass. We
show that galaxies are more compact at redder wavelengths, independent of sSFR
or stellar mass up to $z \sim 3$. We demonstrate the size evolution of galaxies
continues up to $z \sim 8$, showing that the process or causes for this
evolution is active at early times. We discuss these results in terms of ideas
behind galaxy formation and evolution at early epochs, such as their importance
in tracing processes driving size evolution, including minor mergers and AGN
activity. | K. Ormerod, C. J. Conselice, N. J. Adams, T. Harvey, D. Austin, J. Trussler, L. Ferreira, J. Caruana, G. Lucatelli, Q. Li, W. J. Roper | 2023-09-08T15:19:58Z | http://arxiv.org/abs/2309.04377v2 | # EPOCHS VI: The Size and Shape Evolution of Galaxies since \(z\sim 8\) with JWST Observations
###### Abstract
We present the results of a size and structural analysis of 1395 galaxies at \(0.5\leq z\lesssim 8\) with stellar masses log (\(M_{*}/M_{\odot}\))\(>9.5\) within the _JWST_ Public CEERS field that overlaps with the HST CANDELS EGS observations. We use GALFIT to fit single Sersic models to the rest-frame optical profile of our galaxies, which is a mass-selected sample complete to our redshift and mass limit. Our primary result is that at fixed rest-frame wavelength and stellar mass, galaxies get progressively smaller, evolving as \(\sim(1+z)^{-0.71\pm 0.19}\) up to \(z\sim 8\). We discover that the vast majority of massive galaxies at high redshifts have low Sersic indices, thus do not contain steep, concentrated light profiles. Additionally, we explore the evolution of the size-stellar mass relationship, finding a correlation such that more massive systems are larger up to \(z\sim 3\). This relationship breaks down at \(z>3\), where we find that galaxies are of similar sizes, regardless of their star formation rates and Sersic index, varying little with mass. We show that galaxies are more compact at redder wavelengths, independent of sSFR or stellar mass up to \(z\sim 3\). We demonstrate the size evolution of galaxies continues up to \(z\sim 8\), showing that the process or causes for this evolution is active at early times. We discuss these results in terms of ideas behind galaxy formation and evolution at early epochs, such as their importance in tracing processes driving size evolution, including minor mergers and AGN activity.
keywords: galaxies: structure - galaxies: evolution - galaxies: high-redshift
## 1 Introduction
Ever since the discovery of galaxies, their extended nature has been a clear indicator of their differing properties from unresolved sources such as stars. These resolved properties have been of interest for many years, and in many ways are the oldest studied properties of galaxies (e.g., Hubble, 1926; Buitrago et al., 2008; Conselice, 2014; Ferreira et al., 2022; van der Wel et al., 2014; Suess et al., 2022; Ferreira et al., 2022; Kartaltepe et al., 2022). As far back as the 18th century, William Herschel, with his sister Caroline and, later, his son John, catalogued what we now know to be extragalactic objects, commenting on their appearance (Herschel, 1786). Following up on this, Lord Rosse at his castle in Ireland discovered spiral structure in galaxies through his observations of M51 and other nearby galaxies (Rosse, 1850). Once distances to these objects had been inferred, photography allowed Hubble and his immediate successors to develop the dominant morphological classification scheme we use today - the Hubble Sequence - which classifies extragalactic objects as spiral, elliptical, or irregular (Hubble, 1926). This has continued to this day, with James Webb Space Telescope (_JWST_) observations setting us on a new pathway towards understanding the structures and morphologies of the very first galaxies (e.g., Whitney et al., 2021; Ferreira et al., 2022a,b; Suess et al., 2022; Kartaltepe et al., 2022; Huertas-Company et al., 2023; Ono et al., 2023; Jacobs et al., 2023; Tacchella et al., 2023; Morishita et al., 2023).
Galaxy structure and morphology are among the oldest studied properties of galaxies and remain crucial for understanding the evolution of galaxies through cosmic time, although we know very little about these features at \(z>3\). Tracking the changes in the structural properties of galaxies from the era of early galaxy formation until the present day provides valuable insights into the processes of galaxy evolution. There have been major efforts over the past 30 years to study the morphology and structure of distant galaxies with the Hubble Space Telescope (_HST_; Buitrago et al., 2008; Conselice, 2014; Delgado-Serrano et al., 2010), where the rest-frame optical properties of galaxies up to \(z\sim 3\) can be studied and examined. These HST observations have shown us that galaxies appear to become progressively more irregular and peculiar at higher redshifts (e.g., Conselice, 2003; Lotz et al., 2004; Mortlock et al., 2013; Delgado-Serrano et al., 2010; Schawinski et al., 2014; Conselice, 2014; Whitney et al., 2021).
The red Hubble filters are limited, in that they do not allow us to measure or observe the rest-frame optical light of galaxies back to within the first few Gyr of the universe; in fact the reddest HST filter, F160W on Hubble's Wide Field Camera 3 (WFC3), only probes the rest-frame visible light of galaxies up to \(z\sim 2.8\), but we know that galaxies exist at much higher redshifts (e.g., Adams et al., 2023; Austin et al., 2023; Curtis-Lake et al., 2023; Castellano et al., 2022; Naidu et al., 2022; Finkelstein et al., 2023; Atek et al., 2022; Yan et al., 2022; Donnan et al., 2022; Harikane et al., 2023). There has also been a large effort to study the structural and size evolution of galaxies theoretically, through the use of cosmological simulations, at a range of redshifts. These simulations predict that galaxies become more compact at higher redshift, although the dust distributions in galaxies attenuate bright cores and increase the observed half-light radius. Simulations, in agreement with observations, have also shown that galaxy sizes typically increase with increasing stellar mass, and that compact galaxies may grow in size due to mergers or renewed star formation, with high-redshift stars moving outwards (e.g., Furlong et al., 2017; Marshall et al., 2022; Roper et al., 2022).
The relatively recently launched James Webb Space Telescope allows us to obtain the same type of data with the Near Infrared Camera (NIRCam) probing rest frame optical light as far out as \(z\sim 9\), with filters as red as \(\sim 4.4\mu\)m. The superior resolution of JWST and the long wavelengths of its filters allow us to examine galaxy structure with much better fidelity than with _HST_(Ferreira et al., 2022). Early _JWST_ observations reveal morphological and structural features of galaxies at \(1.5<z<8\) that were not possible to discern fully with _HST_, thus resulting in the re-classification of many galaxies previously believed to have peculiar morphologies (e.g., Ferreira et al., 2020). A major discovery has been that galaxies appear morphologically much more disc-like than previously thought (e.g., Ferreira et al., 2022). While these early _JWST_ papers show that galaxies are indeed different at \(z>2\) than we thought on the basis of _HST_ imaging, a significant amount of quantitative analysis is still needed.
As such, in this paper we present an analysis of the sizes and Sersic indices of massive galaxies with stellar masses log \((M_{*}/M_{\odot})\)\(>9.5\), for which we can now quantitatively measure rest-frame structure up to \(z\sim 8\) with _JWST_. Whilst previously we could also measure galaxy sizes and morphologies with _HST_, these are often unreliable due to image fidelity and the range of wavelengths we were able to probe (Ferreira et al., 2022). Our results at \(0.5<z<8\) allow us to probe deeper and at higher redshifts than previous studies.
We present a quantitative analysis of measured galaxy shapes, based on the Sersic index, \(n\), obtained from Sersic profile fitting, and size measurements, based on half-light radii measurements, to determine the evolution of galaxy structure over most of cosmic time. In this paper we answer how galaxy sizes and their overall shapes change for a mass-complete sample which we divide into samples based upon two properties: specific star formation rate, and Sersic indices. This differs from previous work which has focused on either just the highest redshift galaxies (e.g., Ono et al., 2023) or those which are passive (e.g., Ito et al., 2023). The evolution of these properties within a stellar mass selection is a key observable for galaxy evolution as well as an important way to trace processes that drive galaxy size evolution in massive galaxies, including galaxy minor mergers and AGN activity (e.g., Bluck et al., 2012).
The structure of this paper is as follows: Section 2 discusses the data used and the data reduction process, and Section 3 discusses the properties of the galaxies within our sample. The fitting process is explained in Section 4, and we discuss how we select a sample of robust morphological fits in Section 5. We present our results in Section 6, along with a comparison to simulations to confirm that our findings are not the result of redshift effects. We assume a standard \(\Lambda\)CDM cosmology throughout, with \(\Omega_{m}=0.3\), \(\Omega_{\Lambda}=0.7\), and \(H_{0}=70\) km s\({}^{-1}\)Mpc\({}^{-1}\). Where we reference galaxy 'sizes', we are referring to the half-light radii of our objects. All magnitudes are given in the AB system (Oke, 1974; Oke & Gunn, 1983).
## 2 Data
We use _JWST_ NIRCam imaging (Rieke et al., 2022) to analyse the light profiles of a large sample of high-redshift galaxies in the F115W, F150W, and F200W short-wavelength (SW) bands, and F277W, F356W, F410M, and F444W long-wavelength (LW) bands. The Cosmic Evolution Early Release Science (CEERS, PID: 1345, PI: S. Finkelstein) Survey (Finkelstein et al., 2017, 2022; Bagley et al., 2023) is one of 13 _JWST_ ERS programmes, with the goal of examining galaxy formation at \(0.5<z<10\) and perhaps beyond. The galaxies analysed in this work are within the CEERS NIRCam footprint, and within the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) observations (Koekemoer et al., 2011). This is important as it allows us to use both the _JWST_ data as well as the deep data from _HST's_ WFC3 and Advanced Camera for Surveys (ACS), which also aids the determination of photometric redshifts. As such, the photometric redshifts, stellar masses, and star formation rates used in this paper are based on the original CANDELS+GOODS WFC3/ACS imaging and data, Spitzer/IRAC S-CANDELS observations (Ashby et al., 2015), and Canada-France-Hawaii Telescope (CFHT) ground-based observations (Stefanon et al., 2017). A summary of _HST_ filters used and their \(5\sigma\) depths is shown in Table 1.
The CANDELS survey was designed to investigate galaxy evolution and the birth of black holes at \(1.5<z<8\), and consists of multi-wavelength observations in five fields. The CANDELS/DEEP survey covers 125 arcmin\({}^{2}\) within GOODS-N and GOODS-S, and the remainder consists of the CANDELS/Wide survey, covering three additional fields (Extended Groth Strip, COSMOS, Ultra-Deep Survey), covering a total of 800 arcmin\({}^{2}\) across all fields (Grogin et al., 2011). The primary sample we select from originates from these deep observations within the EGS, where there is overlap with the CEERS _JWST_ NIRCam data, and whose analysis is described in detail in Duncan et al. (2019).
### CEERS Data Reduction
We process the _JWST_ data products on this field using a modified version of the official _JWST_ pipeline, explained in depth in Adams et al. (2023a,b). We use the standard _JWST_ pipeline (pipeline version 1.8.2 and Calibration Reference Data System v095), with some minor modifications. Between Stage 1 and Stage 2, we subtract templates of 'wisp' artefacts from the F150W
\begin{table}
\begin{tabular}{c c} Filter Name & Depth \\ \hline _HST_/ACS F606W & 28.8 \\ _HST_/ACS F814W & 28.2 \\ _HST_/WFC3 F125W & 27.6 \\ _HST_/WFC3 F140W & 26.8 \\ _HST_/WFC3 F160W & 27.6 \\ \end{tabular}
\end{table}
Table 1: The \(5\sigma\) depths of the _HST_ photometric data covering the EGS, for full details see Stefanon et al. (2017).
and F200W data 1. After Stage 2 of the pipeline we apply a correction for 1/f noise, derived by Chris Willott.2 We extract the sky subtraction step from Stage 3 of the pipeline and run this on each NIRCam frame independently. We then align the calibrated imaging for each exposure to GAIA Data Release 3 (Gaia Collaboration et al., 2021), using tweakreg from the drizzlepac python3 package. We finally pixel-match the final mosaics using astropy reproject.4 The final resolution of our drizzled images is 0.03 arcsec per pixel. The total unmasked area of _JWST_ images used in this paper is 64.15 square arcminutes, and the average depths of each filter are listed in Table 2.
Footnote 1: [https://jwst-docs.stsci.edu/jwst-near-infrared-camera/nircam-instrument-features-and-caveats/nircam-claws-and-wisps](https://jwst-docs.stsci.edu/jwst-near-infrared-camera/nircam-instrument-features-and-caveats/nircam-claws-and-wisps)
Footnote 2: [https://github.com/chriswillott/jwst](https://github.com/chriswillott/jwst)
Footnote 3: [https://github.com/spacetlescope/drizzlepac](https://github.com/spacetlescope/drizzlepac)
Footnote 4: [https://reproject.readthedocs.io/en/stable/](https://reproject.readthedocs.io/en/stable/)
## 3 Galaxy Properties
### Photometric Redshifts, Stellar Masses and Star Formation Rates
We begin our analysis with a catalogue of 1649 massive objects with log \((M_{*}/M_{\odot})\)\(>9.5\), which have photometric redshifts and physical properties calculated in previous works. The photometric redshifts used in this paper are calculated in Duncan et al. (2019), using the EAZY (Brammer et al., 2008) photometric redshift code, with three separate template sets fitted to the observed photometry. These templates include zero-point offsets, which alter the input fluxes, and fix additional wavelength-dependent errors (see Duncan et al. (2018, 2018) for full details). A Gaussian process code (GPZ; Almosallam et al., 2016) is used to measure further empirical estimates, using a subset of the photometric bands. Individual redshift posteriors are calculated, and all measurements are combined in a statistical framework via a Bayesian combination to give a final estimate of redshift. From a comparison with spectroscopic redshifts these photometric redshifts are seen to have a high degree of accuracy; for full details, see Section 2.4 of Duncan et al. (2019).
The stellar masses we use are those measured in Duncan et al. (2014); Duncan et al. (2019), using a custom template fitting code (see Section 4 of Duncan et al. (2014)). With this custom spectral energy distribution (SED) fitting code, the stellar mass is measured at all redshifts within the photometric redshift fitting range. The masses also have a 'template error function', described in Brammer et al. (2008), accounting for uncertainties driven by the template set and wavelength effects. These stellar mass measurements assume a BC03 stellar population synthesis (SPS) model (Bruzual & Charlot, 2003), with a wide range of stellar population parameters, and a Chabrier (2003) initial mass function (IMF). The star formation histories used within these fits follow the form \(\mathrm{SFR}\propto\mathrm{e}^{-t/\tau}\), with timescales of \(|\tau|\)=\(0.25,0.5,1,2.5,5,10\), where negative values of \(\tau\) represent exponentially increasing histories. A short burst model is also used (\(\tau=0.05\)), as well as continuous star formation models (\(\tau=1/H_{0}\)). In order to ensure that our stellar masses do not suffer from systematic biases, they are compared to stellar masses calculated independently within the CANDELS collaboration (Santini et al., 2015). While there is some scatter between the two mass estimates, there is no significant bias (see Section 2.5 of Duncan et al. (2019) for a detailed discussion). We aim to calculate masses using _JWST_ data for galaxies at \(z>4.5\) in Harvey et al., in prep.
Star formation rates for our sample of galaxies are calculated using the UV slope (\(\beta\)) of the spectral energy distribution, which gives a measure of the dust attenuation within the galaxy. We aim to measure this with _JWST_ within this EPOCHS paper series (Austin et al., in prep). From this, we correct for dust and obtain the total star formation rates for our galaxies, which agree well with star formation rates derived directly from SED fitting (Duncan et al., 2014; Duncan et al., 2019).
### Visual Classifications
For 470 objects within our sample, we use visual classifications from Ferreira et al. (2022) as part of our analysis. The categories we use are as follows:
1. _Discs:_ Sources with a resolved disc with an outer area of lower surface brightness, that regularly increases towards the centre of the galaxy.
2. _Spheroids:_ Resolved sources that are symmetrical, with a centrally concentrated, smooth light profile, that are round or elliptical.
3. _Peculiar:_ Resolved sources with a disturbed morphology, which dominates any smooth components.
4. _Other:_ Mainly made up of sources classified as 'ambiguous', due to the classifiers not reaching a majority agreement on the classification of a source. This category also contains sources that are classified as point sources due to an angular size smaller than the full width half maximum (FWHM) of the point spread function (PSF), or clear spikes that are consistent with point sources, and any sources that were unable to be classified due to faintness or image issues.
## 4 Morphological Fitting
We use GALFIT version 3.0.5 (Peng et al., 2002, 2010) to fit a single Sersic light profile to each galaxy. Ultimately, the overall goal with _JWST_ is to measure the light profiles in more detail, such as obtaining bulge-to-disc ratios and other features (Margalef-Bentabol et al., 2016). However, it is important to first determine structural properties using single profile fitting, and assess how they characterise the data (e.g., van der Wel et al., 2012, 2014; Suess et al., 2022).
GALFIT is a least-squares fitting code which finds the optimum solution to the surface brightness profiles of galaxies using a Levenberg-Marquardt algorithm. GALFIT uses the reduced chi-squared, \(\chi^{2}_{\nu}\), to determine the goodness-of-fit and finds the best-fit model through \(\chi^{2}_{\nu}\) minimisation. The \(\chi^{2}_{\nu}\) is given by:
\[\chi^{2}_{\nu}=\frac{1}{N_{\mathrm{DOF}}}\sum_{x=1}^{nx}\sum_{y=1}^{ny}\frac{( f_{\mathrm{data}}(x,y)-f_{\mathrm{model}}(x,y))^{2}}{\sigma(x,y)^{2}} \tag{1}\]
\begin{table}
\begin{tabular}{l l} Filter & Depth \\ \hline F115W & 28.75 \\ F150W & 28.60 \\ F200W & 28.80 \\ F277W & 28.95 \\ F356W & 29.05 \\ F410M & 28.35 \\ F44W & 28.60 \\ \end{tabular}
\end{table}
Table 2: Average \(5\sigma\) depths in our reduced CEERS images, for point sources in 0.32” diameter apertures. Depths are calculated by placing random apertures in regions of the image that are empty based on the final segmentation maps.
summed over \(nx\) and \(ny\) pixels, and where \(N_{DOF}\) is the number of degrees of freedom. As seen in Equation 1, GALFIT requires a data image from which the galaxy surface brightness is measured, \(f_{\text{data}}(x,y)\) and a sigma image, \(\sigma(x,y)\), giving the relative error at each position within the image, which are then used to calculate the model image, \(f_{\text{model}}(x,y)\).
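For reference, the \(\chi^{2}_{\nu}\) statistic of Equation 1 can be computed directly from the data, model, and sigma cutouts; the following is a minimal numpy sketch, assuming the three inputs are arrays of the same shape.

```python
import numpy as np

def reduced_chi_squared(data, model, sigma, n_free_params):
    """Reduced chi-squared of Equation 1: sum of sigma-weighted squared
    residuals over all pixels, divided by the degrees of freedom."""
    n_dof = data.size - n_free_params
    return np.sum(((data - model) / sigma) ** 2) / n_dof
```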
We run GALFIT for all available filters, but only report results here for the filters that best match the rest-frame optical wavelength of the source. This minimises, or even eliminates, the effect of morphological \(k\)-correction, as the qualitative and quantitative structure of galaxies changes as a function of wavelength (Taylor-Mager et al., 2007), which can result in significant structural changes between rest-frame UV and rest-frame optical images. The band selected at a given redshift is shown in Figure 1. The figure shows which filter we use within different redshift ranges, and what rest-frame wavelength we probe within that filter at that redshift. As can be seen, we are always probing the rest-frame optical at wavelengths redder than the Balmer break at all epochs in which we view our galaxy sample.
The Sersic profile we use has the form
\[I(R)=I_{e}\,\exp\left\{-b_{n}\left[\left(\frac{R}{R_{e}}\right)^{1/n}-1 \right]\right\}\,, \tag{2}\]
where \(I(R)\) is the intensity at a distance \(R\) from the centre of the galaxy, \(R_{e}\) is the half-light radius of the galaxy (the radius where 50% of the total luminosity is enclosed), \(I_{e}\) is the intensity at the half-light radius, \(n\) is the Sersic index, which controls the shape of the light profile of the galaxy (Sersic, 1963; Ciotti, 1991; Caon et al., 1993), and \(b_{n}\) can be approximated as \(b(n)\approx 2n-\frac{1}{3}+\frac{4}{405n}+\frac{46}{25515n^{2}}\) (Ciotti and Bertin, 1999). GALFIT gives us a best-fitting value for each of these terms. The errors on these values are also calculated through this method, and a full description of the GALFIT error calculation can be found in Peng et al. (2002).
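For reference, Equation 2 and the Ciotti & Bertin (1999) approximation for \(b_{n}\) can be evaluated with a few lines of numpy; a minimal sketch follows, with the numerical values in the example chosen purely for illustration.

```python
import numpy as np

def b_n(n):
    """Ciotti & Bertin (1999) asymptotic approximation for b(n)."""
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n) + 46.0 / (25515.0 * n ** 2)

def sersic_profile(r, i_e, r_e, n):
    """Sersic intensity profile I(R) of Equation 2."""
    return i_e * np.exp(-b_n(n) * ((r / r_e) ** (1.0 / n) - 1.0))

# Example: intensity of an n = 1 (exponential) profile at R = 2 kpc,
# for an illustrative galaxy with R_e = 1.5 kpc and I_e = 10 (arbitrary units).
print(sersic_profile(2.0, 10.0, 1.5, 1.0))
```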
### GALFIT Pipeline
We use a custom pipeline for single component Sersic fits with GALFIT. The process is as follows:
1. _Source Detection_: We use SExtractor (Bertin and Arnouts, 1996) to detect sources within the F444W images for each CEERS pointing, following the parameters and method in Adams et al. (2023b). These catalogues are then cross-matched within 1 arcsecond to the catalogues created from the analysis completed in Duncan et al. (2019) to create our final catalogues for each pointing. The average separation between both catalogues is \(\sim 0.15^{\prime\prime}\).
2. _Cutout and Mask Creation_: We create 200 x 200 pixel (\(6^{\prime\prime}\times 6^{\prime\prime}\)) cutouts of each source, in order to ensure the entire surface brightness profile of the galaxy is enclosed within the cutout, along with that of any neighbours that may need to be modelled simultaneously. We use the SExtractor segmentation maps to make masks, creating the same 200 x 200 pixel cutout of the segmentation map, and then masking the necessary objects.
In order to create masks for each object, and to select which neighbouring objects must be masked, we use the Kron ellipses (Kron, 1980) as defined by SExtractor, and plot circular apertures with a radius equal to the semi-major axis of the Kron ellipse. We do this to select galaxies that are sufficiently close for their surface brightness profile to interfere with that of the target, 'primary', object, and which must therefore be fit simultaneously. If any neighbouring galaxy has a Kron aperture overlapping that of the primary object, the neighbouring galaxy is deemed to be sufficiently close that it must be modelled alongside the primary galaxy, to account for both light profiles. We do this for as many neighbouring objects as necessary. Objects that do not have an overlapping Kron aperture are far enough away that they do not need to be fit, and are therefore masked instead, primarily to save computational time. Through visual inspection of the fits and their residuals, we conclude that this criterion works well for determining when to fit neighbouring objects. The pixels that are masked are those that the SExtractor segmentation maps assign to each source. An example of these selection criteria is shown in Figure 2, and a simple sketch of the overlap test is given after this list.
GALFIT also requires a sigma image to give relative weight to the pixels in the image during the fit. As the input sigma image, we use the 'ERR' extension of the images, which is a measure of the noise of the image, and this is created using the same method as the object cutout images.
3. _Input Parameters_: In order to fit a single Sersic profile, initial estimates of the parameters must be provided to GALFIT. The input parameters that GALFIT requires estimates for are the \(x\) and \(y\) image coordinates, total magnitude, half-light radius, axis ratio, position angle, and Sersic index. Similarly to Kartaltepe et al. (2022), we use the SExtractor catalogue for our initial parameter estimates, except for the initial Sersic index, which we set to \(n=1\), although other starting values of \(n\) have virtually no effect on the output parameters. We only apply constraints to the image position of the sources, within \(\pm\) 2 pixels, to ensure the correct source is being fit.
4. _Point Spread Function_: GALFIT requires the appropriate PSF for each filter, which is obtained using WebbPSF (Perrin et al., 2014), and resampled to our pixel scale. We experimented with different PSFs created through this method and find that the results do not significantly change.
Although the central position of the source is constrained, all other parameters are allowed to vary freely, and a selection process is used to select good fits with physical parameters after fitting is complete, as the overuse of constraints can lead to GALFIT converging on unphysical results. Example fits can be seen in Figure 3.
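As referenced in step (ii) above, the following is a simplified sketch of the Kron-aperture overlap test used to decide whether a neighbouring source is fit simultaneously or masked; each Kron ellipse is treated as a circle of radius equal to its semi-major axis, and the field names are illustrative rather than taken from our actual catalogues.

```python
import numpy as np

def neighbours_to_fit(primary, sources):
    """Return indices of sources whose circularised Kron apertures overlap
    that of the primary object (fit simultaneously); the rest are masked.

    `primary` and each entry of `sources` are dicts with pixel positions
    ('x', 'y') and a Kron radius in pixels ('r_kron'); names are illustrative."""
    fit, mask = [], []
    for i, src in enumerate(sources):
        sep = np.hypot(src["x"] - primary["x"], src["y"] - primary["y"])
        if sep < src["r_kron"] + primary["r_kron"]:
            fit.append(i)    # overlapping aperture: model alongside the primary
        else:
            mask.append(i)   # no overlap: mask its segmentation pixels instead
    return fit, mask
```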
### Comparison with IMFIT
IMFIT is an alternative light profile-fitting program which uses
Figure 1: Plot showing the rest-frame wavelengths at given redshifts, for all filters used within the JWST CEERS NIRCam observations, and used within this paper. The shaded regions show the selected filter we use to observe sources in the rest-frame optical at the given redshift. The grey dashed lines show where the filter we use to provide a rest-frame optical view of galaxies at different redshift changes.
Levenberg-Marquardt, Nelder-Mead, and Differential Evolution algorithms to find the best fit parameters (Erwin, 2015). In order to test the robustness of our method, we present a comparison of best fit half-light radii and Sersic indices for a representative sample of 146 objects in CEERS Pointing 1, which have GALFIT fits that meet the selection criteria explained in Section 5 and have a stellar mass of \(\log\left(M_{*}/M_{\odot}\right)\)\(>9.5\). We run IMFIT using the same input parameters and initial guesses as those used for GALFIT and again use the Levenberg-Marquardt algorithm for consistency. The results of this comparison are shown in Figure 4a and Figure 4b which plot the half-light radii and Sersic indices measured for these 146 objects using GALFIT and IMFIT, showing a good agreement between the measured values.
We use two numerical values to provide a further indicator of reliability, and measure these for the sample of 146 objects used across the comparison. Firstly, we use the outlier rate, defined as the fraction of radii/Sersic indices obtained by IMFIT that disagree with the radii/Sersic indices obtained by GALFIT by more than 15% in \((1+x)\), where \(x\) is the measured quantity. Secondly, we use the Normalised Median Absolute Deviation (NMAD) (Hoaglin et al., 1983), which is defined as \(1.48\times\mathrm{median}[|\Delta x|/(1+x)]\), where \(\Delta x=x_{galfit}-x_{imfit}\). The NMAD is a measure of the spread of the IMFIT measurements around the GALFIT measurements, maintaining its reliability when outliers are present.
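These two statistics can be computed directly from the matched measurements; a minimal numpy sketch is given below, where we take \(x\) in the \((1+x)\) denominator to be the GALFIT value (an assumption made here purely for illustration).

```python
import numpy as np

def outlier_rate_and_nmad(x_galfit, x_imfit):
    """Outlier fraction and NMAD between GALFIT and IMFIT measurements
    (x can be half-light radii or Sersic indices)."""
    x_galfit, x_imfit = np.asarray(x_galfit), np.asarray(x_imfit)
    dx = x_galfit - x_imfit
    frac = np.abs(dx) / (1.0 + x_galfit)      # assume x = GALFIT value here
    eta = np.mean(frac > 0.15)                # outlier rate: >15% disagreement
    nmad = 1.48 * np.median(frac)             # normalised median absolute deviation
    return eta, nmad
```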
For the half-light radius we find an outlier rate of 6.2% and NMAD of 0.012, and for Sersic index we find an outlier rate of 9.7% and NMAD of 0.021, showing a slightly better agreement between codes for half-light radius than Sersic index. We also find a better agreement with Sersic index at lower values of Sersic index. We see greater disagreements for a few objects at higher Sersic index \(n\) due to larger contrast which exists at higher values of \(n\). A slight change in the fitting will provide a larger change in \(n\) when \(n\) is larger. Overall, however, we find a good agreement between these different codes and use the GALFIT results throughout the rest of this paper.
## 5 Sample Selection
We select massive galaxies with stellar masses \(\log\left(M_{*}/M_{\odot}\right)\)\(>9.5\), and then make further selections based on the goodness of fit achieved by GALFIT for each object. We do not make selections based on the \(\chi^{2}_{\nu}\) obtained, which is only used by GALFIT to determine when it has reached the best fit. Instead, we make our own selections based on the output parameters and the residual flux fraction (RFF).
### GALFIT Parameters
In order to remove extreme cases, a fit must meet all of the following criteria:
1. The half-light radius must be within \(0.01<R_{e}(\mathrm{pixels})<100\). This ensures that fits where GALFIT has reached the minimum size possible or where the model would be larger than the cutout size are excluded.
2. The fit Sersic index lies within the range \(0.05<n<10\).
3. The fit axis ratio must be \((b/a)>0.01\), removing unphysical models. This is particularly prevalent in faint sources, where GALFIT sometimes converges upon a 'bad' fit with a small axis ratio (van der Wel et al., 2012). We do not make selections based upon GALFIT magnitude, and all models are fit by GALFIT with a rest-frame optical magnitude \(<28\).
Where neighbouring objects are being simultaneously modelled, these criteria are only applied to the best fit parameters of the central object. Where GALFIT does not converge, and gives no best fit parameters, or the best fit parameters do not meet the above criteria, we reject the fit. We start the fitting process with a sample of 1649 galaxies with \(\log\left(M_{*}/M_{\odot}\right)\)\(>9.5\), and through our selection criteria, we reject 192 galaxies, for which we repeat the fitting process with Sersic index held at a value of \(n=1\). The 'Fixed Sersic' fits must then
Figure 3: Example fits showing the data image, model image, and the residual image (data - model). Each cutout is \(6^{\circ}\times 6^{\circ}\). The redshift of each galaxy is shown to the left of the images.
Figure 2: Plots showing the Kron radii (semi-major axis of the Kron ellipse), where the red radius in each image is that of the primary source, and the blue radii are those of the neighbouring sources. We give several scenarios for how these systems would be found. Left: No other sources would be fit simultaneously to the primary, and the pixels belonging to all other objects according to the SExtractor segmentation maps are masked. Right: The source where the Kron radius overlaps that of the primary source are simultaneously fit in this instance, and all other sources would be masked. Cutout sizes shown are \(6^{\circ}\times 6^{\circ}\).
meet the above criteria for all other parameters, and out of these 192 galaxies, we still reject 93 galaxy fits, thus our sample contains 99 galaxies with a Sersic index fixed at \(n=1\), and 1457 objects with a free value of \(n\) at this stage, with a total sample size of 1556 galaxies.
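As an illustration, the parameter cuts above can be applied as a simple catalogue filter; the sketch below assumes a pandas table with illustrative column names for the GALFIT outputs.

```python
import pandas as pd

def select_good_fits(cat: pd.DataFrame) -> pd.DataFrame:
    """Keep fits satisfying the half-light radius, Sersic index, and
    axis-ratio criteria listed above; column names are illustrative."""
    good = (
        (cat["re_pix"] > 0.01) & (cat["re_pix"] < 100.0)   # 0.01 < R_e (pixels) < 100
        & (cat["n"] > 0.05) & (cat["n"] < 10.0)            # 0.05 < n < 10
        & (cat["axis_ratio"] > 0.01)                       # b/a > 0.01
    )
    return cat[good]
```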
### Residual Flux Fraction
We calculate the residual flux fraction for our fits that met the previous criteria in subsection 5.1. The residual flux fraction (RFF) is a measure of the signal in the residual image that cannot be explained by background fluctuations (Hoyos et al., 2012). As in Margalef-Bentabol et al. (2016), we define this as
\[\mathrm{RFF}=\frac{\sum_{(j,k)\in A}\left|I_{j,k}-I_{j,k}^{GALFIT}\right|-0.8\sum_{(j,k)\in A}\sigma_{\mathrm{B}\,j,k}}{\mathrm{FLUX\_AUTO}} \tag{3}\]
where I is the NIRCam image of the galaxy, \(I^{GALFIT}\) is the model image created by GALFIT, \(\sigma_{B}\) is the background RMS image, and FLUX_AUTO is the flux of the galaxy calculated by SExtractor, all of which are in the rest-frame optical filter of the object. The factor of 0.8 in the numerator ensures that the expected value of the RFF is 0 for a Gaussian noise error image (Hoyos et al., 2011). We calculate the RFF within the Kron radius of the galaxy, where we define the Kron radius as the semi-major axis of the Kron ellipse. Calculating the RFF over a large radius leads to the RFF decaying to zero, where the outer areas can dominate the calculation, even if there is a complex residual at the centre of the image.
We calculate the background term following the method used in Margalef-Bentabol et al. (2016), where we assume that:
\[\sum_{(j,k)\in A}\sigma_{\mathrm{B}\,j,k}=N\left\langle\sigma_{\mathrm{B}}\right\rangle, \tag{4}\]
where \(\left\langle\sigma_{B}\right\rangle\) is the mean value of the background sigma for the whole image. We calculate this by placing apertures on blank areas of sky in the image 'ERR' extension that we use for creating the GALFIT sigma images (see subsection 4.1), and calculating the mean value of these regions. The value of \(N\) is the number of pixels within the radius that we are using for the RFF calculation.
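A schematic numpy version of Equations 3 and 4 is given below, assuming matched image and model cutouts and a boolean mask selecting the pixels within the Kron radius; the variable names are illustrative.

```python
import numpy as np

def residual_flux_fraction(image, model, mean_sigma_bkg, flux_auto, aperture_mask):
    """Residual flux fraction (Equations 3 and 4) inside the Kron aperture.

    `aperture_mask` is a boolean array selecting pixels within the Kron radius,
    `mean_sigma_bkg` is the mean background sigma measured from blank-sky
    apertures, and `flux_auto` is the SExtractor FLUX_AUTO of the source."""
    resid = np.sum(np.abs(image[aperture_mask] - model[aperture_mask]))
    n_pix = np.count_nonzero(aperture_mask)
    background_term = 0.8 * n_pix * mean_sigma_bkg   # 0.8 * N * <sigma_B>
    return (resid - background_term) / flux_auto
```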
In order to remove any remaining objects that are poorly fit with large residuals, we complete visual checks to select an appropriate RFF cutoff value. As a result of this, we select objects with an RFF value below 0.5. This enables us to remove objects where the light is either very over- or under-accounted for, yet allows for features that are not modelled precisely (due to features such as spiral arms and bars not being accounted for in single Sersic fits) but where the measured properties are otherwise reasonable. Through RFF measurements, we reject a further 161 fits due to large residuals. This results in a final sample of 1395 robust galaxy fits, of which 1313 (94.1%) were fit with a free value of \(n\), and 82 (5.9%) were fit with a fixed value of \(n=1\), which we further analyse in section 6. The rejected fits are mostly comprised of lower mass galaxies, although there are fits rejected at all masses within our sample.
### Final Sample
We begin the fitting process with a sample of 1649 galaxies, and after our quality cuts we recover 1395 galaxies for our final sample. This recovery rate of 84.6% is higher than comparable analyses, such as Suess et al. (2022) (\(\sim\) 60%), although we repeat the fitting process with a fixed Sersic index where the first fitting procedure has failed. However, fits with a fixed Sersic index only account for 5.9% of our sample. We note that we have not made any selections based upon the magnitude of the GALFIT model, as the input and output magnitudes are in good agreement. In the rest-frame optical, 3.66% of the GALFIT best-fitting models have a half-light radius in pixels that is smaller than the FWHM of the PSF, thus almost all of our sources are resolved. This sample is used throughout this paper to carry out our analyses of the evolution of galaxy size and structure.
## 6 Results
In the following sections we describe the results of our analysis. This includes examining the sizes of our systems, as well as the overall shapes of these galaxies based on their Sersic indices, and how these evolve with time. Where we state results for passive and star-forming galaxies, these are defined as having a specific star formation rate (sSFR) below or above, respectively, the midpoint of the distribution of values within the redshift bin. We calculate the specific star formation rate using the star formation rates and stellar masses from Duncan et al. (2014); Duncan et al. (2019) (see subsection 3.1 for full details). We then define a galaxy with a specific star formation
Figure 4: A comparison of two measures - size (top) and Sérsic index (bottom) - as obtained with GALFIT and IMFIT. The black line shows the one-to-one relation. Size and Sérsic index are generally in good agreement, with more variation at larger values, in particular with the Sérsic index. We show the outlier rate (\(\eta\)) and NMAD in the bottom right corner of each figure.
rate greater (lower) than the median sSFR within the redshift bin, to be a star-forming (quiescent) galaxy.
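As an illustration, the median-sSFR split described above amounts to a small per-bin helper; a minimal sketch, with the input assumed to be the sSFR values of the galaxies in one redshift bin, follows.

```python
import numpy as np

def split_by_ssfr(ssfr):
    """Label galaxies in a single redshift bin as star-forming (True) if their
    sSFR lies above the bin's median, and passive (False) otherwise."""
    ssfr = np.asarray(ssfr)
    return ssfr > np.median(ssfr)
```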
### Half-light radii and Sersic indices
We measure the half-light radii of our objects using GALFIT, and convert the values from pixels to their physical half-light radii in kpc. Figure 5 shows the size evolution with redshift for our sample of 1395 galaxies. We use the radius from the filter nearest to the rest-frame optical for each object, and find that the sizes are well fit by the power-law relation:
\[\langle{\rm R_{e}}\rangle=4.50\pm 1.32(1+z)^{-0.71\pm 0.19}. \tag{5}\]
This is such that the average sizes of our sample, in terms of effective radii (\(\langle{\rm R_{e}}\rangle\)), become progressively smaller at increasing redshifts. This trend for galaxies at a given mass selection to become smaller at higher redshifts had been known to exist at \(z<3\) for many years (e.g., Trujillo et al., 2007; Buitrago et al., 2008; van der Wel et al., 2012), yet this is the first time this has been shown using JWST observations for similar types of studies. We also compare our power law function to those derived in comparable studies. We note that the curves presented in Buitrago et al. (2008) are normalised with respect to SDSS data, thus we perform an arbitrary normalisation of these curves to align them with the scale employed. We also extrapolate all curves to cover our entire redshift range. We compare to a range of individual points and power-law curves.
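For illustration, a power law of the form of Equation 5 can be fit to the binned median sizes with a standard least-squares routine; the sketch below uses scipy's curve_fit, with placeholder arrays that merely stand in for the measured medians and are not our actual values.

```python
import numpy as np
from scipy.optimize import curve_fit

def size_evolution(z, a, beta):
    """Median effective radius as a power law in (1 + z), as in Equation 5."""
    return a * (1.0 + z) ** beta

# Placeholder bin centres and median sizes, for illustration only.
z = np.array([0.75, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5])
r_e_kpc = np.array([3.1, 2.4, 1.9, 1.6, 1.3, 1.2, 1.0, 0.9])

popt, pcov = curve_fit(size_evolution, z, r_e_kpc, p0=[4.0, -1.0])
perr = np.sqrt(np.diag(pcov))
print(f"R_e = {popt[0]:.2f} (1+z)^{popt[1]:.2f}, 1-sigma errors {perr}")
```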
It is important to note that the evolution of galaxy size and Sersic index are highly dependent on the redshift ranges studied and the stellar mass and/or magnitude ranges which are included in the analysis. For example, if we study just the super massive galaxies at \(\log\,(M_{*}/M_{\odot})\)\(>11\), we would find that systems have a stronger evolution than at lower masses (e.g., Buitrago et al., 2008; Buitrago et al., 2013). As such it is important to be clear with what we are comparing with in this figure, as no previous study has measured galaxies in exactly the same way that we have here. The points from van der Wel et al. (2014) are for galaxies with \(\log\,(M_{*}/M_{\odot})\)\(\sim 10.75\), Bridge et al. (2019) and Kubo et al. (2017) are at \(\log\,(M_{*}/M_{\odot})\)\(\sim 10\), and Yang et al. (2022) select bright objects based on magnitudes. The curves given are for a mass selection of \(\log\,(M_{*}/M_{\odot})\)\(>9\)(Costantin et al., 2023), \(\log\,(M_{*}/M_{\odot})\)\(>11\)(Buitrago et al., 2008), and for a number density based selection (van Dokkum et al., 2010).
What we can see from Figure 5 is that there is a steep evolution for galaxies selected within the mass range that we use in this paper. We find, as quoted before, a power-law decrease in size that scales as \(\sim(1+z)^{-0.71}\), which is less steep than previous results when comparing the evolution up to \(z\sim 3\). This is partially due to the fact that we are observing galaxies at higher redshifts where the size evolution tapers off, and does not continue as steeply at higher-\(z\), although it is important to keep in mind that redshift (\(z\)) values do not scale linearly with time, and there is much more time within a given \(\delta z\) at low redshift than at the higher redshifts. Another reason for the difference can be seen at the lowest redshifts, where our galaxies are on average smaller than in previous work. This is likely due to us using a lower mass cut to define our sample of galaxies, resulting in on average smaller systems. This is consistent with findings from simulations, where it has been shown that galaxy size correlates with stellar mass, thus resulting in lower mass samples having smaller sizes on average (Furlong et al., 2017; Ma et al., 2018).
We also investigate the difference in the size-redshift relation for populations of galaxies with high and low Sersic indices, defined as \(n>2\) and \(n<2\) respectively, although using \(n=2.5\) as the limit produces effectively the same results. We do this to determine how the size evolution depends on the shape of the profile, with those at \(n>2\) possibly more like the massive galaxies we see in the local universe and those with \(n<2\) possibly progenitors of disc galaxies or those undergoing mergers.
As can be seen in Figure 6, at low redshift, the galaxy populations have a clearly different size-redshift relation, but at redshifts higher than \(z\sim 3\), the relations show a greater similarity, suggesting that effective radius is less dependent upon the Sersic index at high redshift compared to lower redshifts. What this means is that galaxies do not differentiate between overall morphology, as measured by the Sersic index, until around \(z\sim 3\), consistent with findings at \(z<5\), where star forming galaxies exhibit inside out growth (Roper et al., 2023). This suggests that this aspect of the 'Hubble Sequence' was in place by at least \(z\sim 3\), with a disc-like (\(n<2\)) and elliptical-like (\(n>2\)) population clearly defined.
We also find that galaxies at higher redshift are almost all compact objects, regardless of their Sersic index, and are smaller than their low-redshift counterparts of similar mass, in agreement with previous studies and simulations (e.g, Trujillo et al., 2007; Buitrago et al., 2008; van der Wel et al., 2014; Costantin et al., 2023). Furthermore, we find that the sizes of the high and low Sersic index populations evolve differently at \(z<3\), with the objects with lower Sersic indices following a steeper size - redshift relation, suggesting a difference in structural growth mechanisms in each population, likely due to the onset of inside out growth in these systems with lower Sersic indices. We also show that the compact systems of the early universe are not representative of the galaxy population today, suggesting a strong size evolution must occur, continuing until the present day. This growth is such that we find an increase of a factor of three from \(z\sim 7\) to \(z\sim 1\), with roughly a doubling of size from \(z\sim 7\) to \(z\sim 3\), a time period of \(\sim 1.4\) Gyr. The relation is also still evolving at the highest redshifts, confirming that galaxy evolution as well as evolution in structure were already taking place in the first Gyr since the Big Bang.
For those objects fit with a free Sersic index, we also investigate the evolution of Sersic index with redshift, as shown in Figure 7. Figure 7a shows that for all free-fit \(n\) galaxies within our sample, the Sersic index decreases with increasing redshift, suggesting a higher proportion of disc-type galaxies in the early universe. Whilst these objects have Sersic indices similar to modern pure disc galaxies, this does not necessarily imply that these are rotating discs (e.g., Buitrago et al., 2014). This is in contrast to previous findings using HST that there were fewer disc galaxies at \(z>1.5\), although the lower resolution of HST led to the misclassification of galaxies (Ferreira et al., 2022). Exploring this complicated subject is beyond the scope of this paper. The slight increase in Sersic index at \(z\sim 7.5\) could be due to the increase in spheroid galaxies reported in Ferreira et al. (2022), particularly in the star forming population (see Figure 7b), as spheroid galaxies have been found to account for a higher proportion of the sSFR budget (Ferreira et al., 2022) at high redshifts, but could also simply be due to random chance and increased errors at higher redshifts. We further discuss size and Sersic index changes as a function of morphology in subsection 6.4.
In Figure 7b, we investigate Sersic indices for passive and star forming populations, and find that the star forming galaxies have Sersic indices around \(n\sim 1\), suggesting that most star formation takes place within disc galaxies, which is also the case when identifying these systems through visual means (Ferreira et al., 2022). The slight deviation from this trend at the highest redshifts could be due to stars forming in young, compact sources, before evolving into disc-like galaxies. The populations start to show differing trends at
\(z\sim 3\), again hinting at the establishment of a bifurcation of the galaxy population in structure around this time. This shows that the star forming and passive galaxy populations possibly evolve through different mechanisms to create differing physical properties at later times, such as inside out star formation in star forming galaxies, and stellar migration in passive systems.
### Galaxy Size-Mass Distribution
We plot the galaxy size-mass relation in different redshift bins, as shown in Figure 8. For each redshift bin, we separate galaxies into either a star-forming or passive population, using the median specific star formation rate (sSFR) of each bin as the separating value. This midpoint is determined by plotting a histogram of all the sSFR values for galaxies within a given redshift bin. The median value is then measured, and galaxies below this value we call 'passive', and those above it 'star-forming'. This is the same method we used previously for separating star-forming from non-star-forming galaxies when investigating trends with size and Sersic index.
In general, we find that galaxy size increases with stellar mass, in both the quiescent and star forming populations up to \(z\sim 6\), although this tends to be more obvious for the star-forming galaxies. We also see that the star forming galaxies at the lower redshifts are nearly always larger at a given mass than the passive galaxies, although this tends to break down at the lower masses.
We also find that at redshifts higher than \(z>3\), the sizes of the quiescent and star forming populations are statistically the same, suggesting that at a fixed mass at high redshift, quiescent galaxies may not be smaller than star-forming galaxies, as predicted in Suess et al. (2022), likely due to transient quiescence driven by high redshift active galactic nuclei (AGN) activity (Lovell et al., 2023). This means that whatever is differentiating the sizes of galaxies as a function of
Figure 5: Power law fit showing the size evolution of all galaxies within our sample, compared to results from previous work. The black dashed line is of the form \(R_{e}(kpc)=4.50\pm 1.32(1+z)^{-0.71\pm 0.19}\), with the grey, shaded area showing the error on the power-law fit. The grey diamond points are the median galaxy sizes in each redshift bin, and error bars are \(1\sigma\) in length. The previous work shown in the Figure uses HST data, except for Yang et al. (2022), which uses JWST data, van Dokkum et al. (2010) which uses NOAO/Yale NEWFIRM Medium Band Survey Data, and Costantin et al. (2023) which uses the TNG50 simulation to produce mock CEERS observations. We note that we only plot redshift errors for Bridge et al. (2019), as radius errors are not provided.
shape or star formation rate does not come into play until past redshift \(z\sim 3\). This implies that the physical processes that grow galaxies are truncated for the passive galaxies, even if these systems still acquire stellar mass throughout their history. We discuss possible reasons for this later in the discussion section.
### Changes in Effective Radii as a Function of Wavelength
The changes in a galaxy's appearance and size as a function of wavelength can provide many clues to the formation history and stellar populations of modern galaxies (e.g., Windhorst et al., 2002; Taylor-Mager et al., 2007; Mager et al., 2018; Suess et al., 2022). An example of this is that if a galaxy forms inside-out, with the older stellar populations in the centre of the galaxy, then most likely these systems would appear larger at shorter wavelengths, and vice-versa if formation occurred via outside-in.
As such, we measure half-light radii in all available wavelengths for our sources, and use these sizes to probe the size evolution of our galaxy sample at differing observed wavelengths. As seen in Figure 9, average galaxy sizes become increasingly more compact and smaller when observed at longer wavelengths, for objects with both high and low Sersic indices at \(z<3\). We also see again that those galaxies with higher Sersic indices are smaller on average, at all wavelengths, than those systems with lower indices. This is an indication that galaxy sizes are larger at shorter wavelengths where bluer and younger light is probed, due to the formation mechanisms for these galaxies. This would be such that the outer parts of these systems consist of younger stars, compared to their inner portions made of older stars. More detailed analysis of the colour gradients and star formation gradients of these galaxies would answer this question. We also note that dust attenuation can increase the observed half-light radius (e.g., Marshall et al., 2022; Roper et al., 2022; Popping et al., 2022), although this would require a more in depth observational analysis.
At \(z<3\), we also find that galaxies with higher Sersic indices have smaller radii in both mass bins, with this effect being more noticeable in the highest mass bin. However, we do not see the same effect past \(z\sim 3\), where we see a much flatter relation, suggesting that galaxies at high redshift are forming stars throughout the entire galaxy, with blue and red light emitted throughout. However, when we divide the higher redshift galaxies into different bins, we obtain a noisier trend, and thus we hesitate to draw any further conclusions from this observed trend. For objects with Sersic indices of \(n>2\), there is much more scatter within the relation, although there are larger errors, due to the smaller number of galaxies with high Sersic index at \(z>3\).
To compare with previous JWST work on similar questions, we also compare the sizes of our galaxies measured in the F444W (\(4.4\mu m\)) band to the F150W (\(1.5\mu m\)) band sizes, as shown in Figure 10. This comparison is quoted in arcseconds, as a comparison of the on-sky sizes. We find that for galaxies at cosmic noon (\(1\leq z\leq 2.5\)), the \(4.4\mu m\) sizes are \(11.4\pm 1.28\%\) smaller than the \(1.5\mu m\) sizes on average, in agreement with the \(\sim 9\%\) difference found in Suess et al. (2022). Taking sizes measured in the near-infrared (observed with \(4.4\mu m\) at this redshift range) to be a reasonable proxy for stellar mass distributions, this shows that the stellar mass profiles of galaxies are smaller and more compact than their star forming 'light' profiles. We do not find that the galaxies outside of cosmic noon show the same effect, as the F150W and F444W filters no longer correspond to the rest-frame optical and rest-frame infrared of these galaxies. As we have shown that the smaller appearance in F444W is indeed due to the rest-frame infrared profile being smaller, thus
Figure 6: Power law fit for galaxies separated into two groups by Sérsic index as defined at \(n=2\). The sizes of these objects mostly diverge at the lowest redshifts. The diamond points are the median galaxy sizes in each redshift bin, and the error bars are \(1\sigma\) in length. The shaded region around each line represents the error on the power-law fit.
Figure 7: The Sérsic index - redshift relations for all galaxies (top), and passive and star forming populations (bottom). Large diamond and circle points are median values of each redshift bin, with error bars \(1\sigma\) in length. The grey dashed line at \(n=1\) represents the special case of the exponential disc profile.
showing the mass profile of the galaxy is smaller than the light profile, we investigate how this varies with the stellar masses of galaxies. This effect is dependent on the mass of the galaxy, as shown in Figure 11, where we examine the size difference at cosmic noon in two mass bins. We find that for galaxies with \(9.5\leq\)log \((M_{*}/M_{\odot})<10\), the F444W sizes are \(5.82\pm 1.76\%\) smaller than their F150W sizes. This increases with increasing mass, with sizes being \(17.9\pm 1.80\%\) smaller for objects with \(10\leq\)log \((M_{*}/M_{\odot})\). This implies that on average we see a greater difference in size between different wavelengths for the highest mass galaxies.
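As an illustration of how such a comparison can be made, the following is a minimal sketch of the per-mass-bin comparison of sizes in the two filters; the array names (`re_f150w`, `re_f444w`, `logmass`) are illustrative placeholders rather than variables from our actual pipeline.

```python
import numpy as np

def mean_size_difference(re_f150w, re_f444w, logmass, mass_low, mass_high):
    """Mean percentage difference between F444W and F150W half-light radii
    (in arcsec) for galaxies in a given stellar-mass bin."""
    sel = (logmass >= mass_low) & (logmass < mass_high)
    # Positive values mean the F444W radius is smaller than the F150W radius
    frac = 1.0 - re_f444w[sel] / re_f150w[sel]
    mean = 100.0 * np.mean(frac)
    err = 100.0 * np.std(frac, ddof=1) / np.sqrt(sel.sum())  # standard error
    return mean, err

# Illustrative use: cosmic-noon galaxies split at log(M*/Msun) = 10
# low_mass = mean_size_difference(re_f150w, re_f444w, logmass, 9.5, 10.0)
# high_mass = mean_size_difference(re_f150w, re_f444w, logmass, 10.0, np.inf)
```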
### Correlation with Visual Morphology
For 470 galaxies within our sample, we use visual morphological classifications from Ferreira et al. (2022b), as defined in subsection 3.2,
Figure 8: The size-stellar mass distribution of ‘passive’ and ‘star forming’ galaxies, separated by the midpoint of the specific star formation rate in each bin, as explained in subsection 6.2. The larger points represent the median values in mass bins. We use a 50% error floor, with error bars one standard deviation in length, or representing a 50% error on the median, in cases where the error floor is larger.
to determine how the properties and relations we have found are determined by overall morphology. This is a small sample of galaxies, but represents one of the first times that we can examine these measured properties with visual morphologies. Using these classifications, we present an analysis of size and Sersic index as a function of morphology, as shown in Figure 12.
Figure 12a shows that for all galaxy types, radius decreases with increasing redshift, but at all redshifts, spheroid-type galaxies are the smallest. Figure 12b shows how Sersic index varies with redshift for all galaxy types. Disc-type galaxies have \(n\sim 1\) as expected, with 'peculiar' and 'other' galaxies showing a slight decrease with redshift. Spheroid galaxies have the highest Sersic index at all redshifts, significantly above other galaxy types. The small sizes of these galaxies
Figure 9: The half-light radius as a function of wavelength. These plots show the radius evolution for \(z<3\) (left panels), and \(z>3\) (right panels) galaxies. In each redshift bin, we show the evolution for low-mass and high-mass galaxies, with the galaxies separated into high-Sérsic (red) and low-Sérsic (blue) index populations. These figures show that up to \(z\sim 3\) (left panels), galaxies are more compact when measured in redder filters, regardless of selection method. Past \(z=3\) (right panels), the relation is flatter, with the galaxies having \(n<2\) being smaller at this stage.
combined with their high Sersic indices are a clear indicator of their compact, concentrated nature.
### Comparison with Image Simulations
One major issue with a study such as this, which deals with imaging and the analysis of the structure of galaxies at vastly different redshifts, is the fact that the surface brightness of galaxies declines by a factor of \((1+z)^{4}\), and this can produce significant changes in the way that the structure of distant galaxies would be imaged by a telescope. In fact, it is clear that galaxy structure can in principle change substantially and that many galaxies are potentially being missed at the highest redshifts (Conselice, 2003; Whitney et al., 2020; Whitney et al., 2021). Therefore it is very important that we carry out simulations to determine if the trend we see in this paper, namely that galaxies get progressively smaller up to \(z\sim 8\), is due to a real evolution or simply due to the fact that surface brightness dimming makes our galaxies appear smaller. Previous work using HST shows that whilst we are likely missing galaxies at the highest redshifts, we can still accurately measure their structural parameters (Whitney et al., 2020; Whitney et al., 2021).
To understand this issue in depth for JWST data, we take a sample of 186 low-redshift galaxies at redshifts \(0.5<z<1\), and create simulated images of these galaxies at higher redshifts, in intervals of \(\Delta z=0.5\), up to \(z=7.5\), including all known cosmological effects. This is done in order to separate real evolution effects from redshift effects. The galaxies selected are low-redshift galaxies within our final sample of galaxies where a good fit was obtained, ensuring the initial low-\(z\) measurements are reliable. To do this simulation we use the redshifting code AREIA5. A brief overview of the steps taken is given below; for a more detailed discussion we refer the reader to Tohill et al. (2021) and Whitney et al. (2021).
Footnote 5: [https://github.com/astroferreira/areia](https://github.com/astroferreira/areia)
First, the source is extracted from the original stamp by measuring a segmentation map with GalClean6. Then, the image is geometrically re-binned from the source redshift to the target redshift based on the standard cosmology, preserving its flux. This is done to ensure that higher redshift sources have the appropriate geometric scaling due to the adopted cosmology. The resulting image from the rebinning has its flux scaled due to the cosmological dimming effect. Furthermore, shot noise is sampled from the source's new light distribution, which is then convolved with the target JWST PSF of the rest-frame filter at the target redshift. Then, the final redshifted source is placed on a random real CEERS background to mimic a real observation. This results in a sample of 2418 simulated images.
Footnote 6: [https://github.com/astroferreira/galclean](https://github.com/astroferreira/galclean)
Although AREIA allows the user to include size corrections and brightness corrections to mimic redshift evolution, we keep all intrinsic properties of the galaxies constant at each redshift, simulating only observational effects.
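The two central observational scalings of this procedure, geometric rebinning and cosmological dimming, can be sketched as follows. This is a simplified illustration only, assuming a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\) and \(\Omega_{m}=0.3\); the function name is illustrative, and the segmentation, shot noise, PSF convolution, and background injection steps performed by AREIA are omitted.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from scipy.ndimage import zoom

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # assumed cosmology, for illustration only

def redshift_stamp(image, z_source, z_target):
    """Geometrically rebin a source-subtracted stamp from z_source to z_target,
    preserving total flux, then apply cosmological dimming of the total flux.
    The remaining AREIA steps (segmentation, shot noise, PSF convolution,
    background injection) are not included in this sketch."""
    # Rescale so that a fixed physical size subtends the correct angle
    # at the target redshift (ratio of angular diameter distances)
    scale = (cosmo.kpc_proper_per_arcmin(z_source) /
             cosmo.kpc_proper_per_arcmin(z_target)).value
    rebinned = zoom(image, scale, order=1)
    rebinned *= image.sum() / rebinned.sum()          # preserve total flux
    # Cosmological dimming of the total flux (bandpass effects ignored)
    dimming = (cosmo.luminosity_distance(z_source) /
               cosmo.luminosity_distance(z_target)).value ** 2
    return rebinned * dimming
```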
Figure 11: Size comparison for different mass bins at cosmic noon. We find an increasing size difference with increased stellar mass. The black dashed line is the one-to-one relation. The percentages in the upper left corner are the average size difference for each redshift bin. The error bar represents a typical error of 0.2 dex.
Figure 10: A comparison of sizes measured in the F444W and F150W filters, showing that F444W sizes are smaller than those in F150W. This implies that galaxies measured in the near-infrared with _JWST_ are more compact than rest-frame optical sizes previously measured with _HST_. Galaxies at cosmic noon (\(1\leq z\leq 2.5\)) are shown as red diamonds whilst the grey hexagons represent all galaxies within our sample that do not fall within cosmic noon. These latter galaxies do not show the same effect. The black, dashed line is the one-to-one relation. The error bar represents a typical error of 0.2 dex on these measurements.
Following the same method described in subsection 4.1, we measure the sizes and Sersic indices of our new simulated galaxies. We follow the same selection method, described in section 5 to ensure robust results. For our Sersic index analysis, we further select objects with a GALFIT measured magnitude \(<30\).
We also analyse the trends for objects with values above and below the overall median, indicated by the 'high' and 'low' radius and Sersic groups.
We use the emcee package to obtain the lines of best fit (Foreman-Mackey et al., 2013). As shown in Figure 13, we find that our sizes and Sersic indices are best fit by linear fits, with gradients and errors stated in Table 3. We find our results are consistent with "flat slopes" - that is we find no change with redshift for the measured sizes and Sersic indices within these simulations. This shows that the simulated galaxies continue to have very similar size measurements at different redshifts, meaning that in the absence of evolution we would expect the same galaxies to have the same effective radii and Sersic indices measured at all redshifts. This confirms that our method recovers the same result regardless of the redshift in which the galaxy is observed.
This is vastly important for this work, and shows that the trends obtained in section 6, are due to a change in galaxy properties and populations with increasing redshift, not because the galaxies appear smaller at higher redshifts due to cosmological or redshift effects.
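A minimal sketch of the type of linear fit used to derive these slopes is shown below; the prior ranges, walker settings, and function names are illustrative assumptions rather than the exact configuration used for Table 3.

```python
import numpy as np
import emcee

def log_likelihood(theta, z, y, yerr):
    """Gaussian likelihood for a straight line y = m*z + b."""
    m, b = theta
    model = m * z + b
    return -0.5 * np.sum(((y - model) / yerr) ** 2)

def log_posterior(theta, z, y, yerr):
    m, b = theta
    if not (-10 < m < 10 and -50 < b < 50):   # flat priors (assumed ranges)
        return -np.inf
    return log_likelihood(theta, z, y, yerr)

def fit_line(z, y, yerr, nwalkers=32, nsteps=5000):
    """Return the posterior median and 1-sigma uncertainty of the slope."""
    start = np.array([0.0, np.median(y)]) + 1e-3 * np.random.randn(nwalkers, 2)
    sampler = emcee.EnsembleSampler(nwalkers, 2, log_posterior, args=(z, y, yerr))
    sampler.run_mcmc(start, nsteps, progress=False)
    samples = sampler.get_chain(discard=1000, flat=True)
    return np.median(samples[:, 0]), np.std(samples[:, 0])
```

A slope consistent with zero, within its uncertainty, corresponds to the "flat" relations described above.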
## 7 Discussion
The results in this paper reveal a myriad of new observational facts about the sizes and shape evolution of galaxies up to \(z\sim 8\). Our core result is that galaxies become progressively smaller at fixed stellar mass at higher redshifts. Whilst this was known for some time up to \(z\sim 3\)(Trujillo et al., 2007; Buitrago et al., 2008; van der Wel et al., 2012; van Dokkum et al., 2010), _JWST_ is now allowing an analysis of this in the rest-frame optical light at higher redshifts than previously possible. In fact, we find that from \(z\sim 7\) to \(z\sim 3\), galaxies with larger masses roughly double in size. This evolution is not as dramatic as is seen for the highest mass galaxies at \(z<3\)(e.g., Buitrago et al.
Figure 13: Sizes and Sérsic indices for our simulated galaxies in redshift bins. The error bars represent the standard error of the median for each bin. We divide these samples into different sub-types to determine how different selections would evolve differently within these simulations.
Figure 12: Size and Sérsic index evolution as a function of visually determined morphology from Ferreira et al. (2022). At all redshifts, we find that spheroid galaxies have the smallest radius, and the highest Sérsic index, displaying the compact, concentrated nature of these objects.
2008) and we will require that more area be covered before we have statistics to probe these very high mass galaxies at such high redshift, as very few are imaged due to the limited numbers and sky coverage with existing and reliable JWST data.
Another major result is that we find very little variation in the size distribution and size evolution for our sample at \(z>3\) irrespective of how we divide the sample. This is true for different Sersic cuts, as well as for different cuts in the sSFR for these galaxies. One of the main signatures of the formation of the Hubble sequence is a bifurcation in galaxies into morphologically distinct populations of star-forming and relatively passive systems (often simply divided into discs and ellipticals). Whilst we are not seeing the entire formation of this Hubble sequence at \(z\sim 3\), it is clear that the major bifurcation begins at this epoch and increasingly differentiates itself. This is likely due to different formation mechanisms coming into play at \(z<3\) that were not present at the higher redshifts. This is likely something to do with mergers and feedback from either AGN or star formation (e.g., Bluck et al., 2012), particularly inside out star formation, where star forming galaxies begin to grow more rapidly than their passive counterparts.
While we do not see much difference between galaxies at \(z>3\), we do find that galaxies have a well established difference in size and Sersic index as a function of the wavelength of observation. This is such that galaxies are more compact in redder wavelengths, showing that the outer parts are made up of more recent stars and star formation events. This could be a sign that galaxies are forming inside-out and that we are witnessing the formation of bulges and the cores of giant galaxies at these early times that are growing outward from minor mergers and/or accreted gas in star formation (e.g., Tacchella et al., 2015, 2018; Nelson et al., 2016; Nelson et al., 2019; Wilman et al., 2020; Matharu et al., 2022; Roper et al., 2023). This is consistent with the formation of bulges and disks occurring gradually at about \(z\sim 3\) and at lower redshifts (e.g., Margalef-Bentabol et al., 2016).
We also find that there is a correlation between stellar mass and size for galaxies - those that are star forming and those that are more passive - up to \(z\sim 3\). Our JWST results allow us to accurately probe these mass ranges, even down to the lower redshifts where HST has had a difficult time resolving these systems. This is another indication that something is regulating the sizes and masses of galaxies, and that this appears to be more present at \(z<3\) than at earlier times. When we compare with simulations, such as the TNG50 simulation (see Figure 5) we find that there is a good agreement. What remains to be determined is the causes of this change and the physics behind the mergers that produce these size increases within these simulations.
## 8 Summary and Conclusions
We present an analysis of 1395 carefully selected massive galaxies with stellar masses log \((M_{*}/M_{\odot})>9.5\), within the CEERS and CANDELS fields between z = 0.5 and z = 8, using light profile fitting. Our galaxy sample is taken from the CANDELS field to enable the use of robust masses, redshifts, and star formation rates from optical to NIR data, which is necessary to obtain accurate measurements for galaxies over our large redshift range. In this paper we fit single Sersic profiles to our galaxies and analyse their effective radius and Sersic index to probe the evolution of these properties through cosmic time. We also probe the variation in size and shape as a dependence on stellar mass and wavelength. The trends we find are robust to redshift effects, as we show through simulations that place low-redshift galaxies at high redshift: the measured parameters would not change simply due to being at higher redshifts as imaged with JWST.
To carry out our analysis we use a custom built GALFIT pipeline, fitting a single Sersic fit, and fitting neighbouring galaxies where appropriate. We verify this method via a comparison with another galaxy profile fitting code, IMFIT and find that our results are generally in good agreement between these two codes, and thus reliably measured. We then analyse the evolution of effective radius and Sersic index, for our sample as a whole, and for quiescent/star-forming population, and high/low Sersic index populations. We verify that our results are due to evolutionary effects rather than redshifting effects, by fitting the same galaxies redshifted to higher redshifts using the exact same process we apply on the real galaxies. We find "flat" relations between measured parameters and redshift, thus confirming the robustness of our method and its ability to recover the correct measurements regardless of the distance to the objects.
Our main findings are as follows:
* Galaxies at the mass ranges we probe become smaller with increasing redshift as \(\mathrm{R}_{e}\sim(1+z)^{-0.71\pm 0.19}\), with high and low Sersic index populations showing differing evolution only at \(z<3\), indicating that this aspect of the 'Hubble Sequence' and galaxy bifurcation starts to be established at \(z\sim 3\). At higher redshifts we find no significant differences in the pattern of sizes with various properties.
* Sersic indices decrease on average with increasing redshift, suggesting a higher proportion of "disc-like" galaxies in the early universe. We find that passive and star-forming populations of galaxies show a different evolution of Sersic index with redshift, with star-forming galaxies having values of around \(n=1\), suggesting that most star formation occurs within disc-like galaxies, at least in terms of structure. We cannot, however, rule out that some of these galaxies are involved in mergers. In principle this confirms what has been found when classifying galaxies visually (Ferreira et al., 2022).
* In general, more massive galaxies have a larger effective radius up to at least \(z\sim 3\) compared to lower mass galaxies. At redshifts higher than \(z\sim 3\), at a fixed stellar mass, star-forming and quiescent galaxies have very similar sizes, suggesting that quiescent galaxies may not be smaller than star-forming galaxies at fixed stellar mass at high redshift, which has been a finding for almost 20 years at \(z<3\) (e.g., Buitrago et al., 2008).
* We find that galaxies appear more compact and smaller when observed in redder filters, and demonstrate the mass dependence of this effect, where more massive galaxies are more compact in redder filters than their lower mass counterparts.
* We find that visually classified spheroid galaxies are smaller than other galaxy types at all redshifts, and have a higher Sersic index at all redshifts than other galaxy types, showing their small, compact nature.
Table 3: Best fit results for linear fits to all galaxies, and high and low subsets for both radius and Sérsic index. The slopes are consistent with being flat - with little change with redshift, showing that our main findings are due to evolutionary effects, not redshift effects.
* We verify that our results are due to real evolutionary effects only, shown by fitting simulated high redshift galaxies with the intrinsic properties preserved, and we recover results that do not vary based on redshift effects.
Overall, in this paper we show that the evolution of galaxy size and structure continues to the highest redshifts, with disc-like galaxies forming most stars within the universe at all epochs. High redshift morphology studies are revealing a new picture of the structural evolution of galaxies, which will continue further with increasing numbers of high-redshift galaxies being discovered with JWST.
This study is just the start of this type of analysis. The benefit of the CEERS fields is that we have very accurate photometric redshifts at lower redshifts, due to overlap with existing HST observations. In the near future this will be available for many other and larger fields where more subtle changes in the size and structural features of galaxies will be studied, and this will lead to a more complete understanding of galaxy evolution over nearly the universe's entire history.
## Acknowledgements
QL, CC, JT, and NA acknowledge support from the ERC Advanced Investigator Grant EPOCHS (788113). DA and TH acknowledge support from STFC in the form of PhD studentships. LF acknowledges financial support from Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) in the form of a PhD studentship.
This work is based on observations made with the NASA/ESA _Hubble Space Telescope_ (HST) and NASA/ESA/CSA _James Webb Space Telescope_ (JWST) obtained from the Mikulski Archive for Space Telescopes (MAST) at the _Space Telescope Science Institute_ (STScI), which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST, and NAS 5-26555 for HST. The authors thank all involved in the construction and operations of the telescope as well as those who designed and executed these observations.
The authors thank Anthony Holloway and Sotirios Sanidas for providing their expertise in high performance computing and other IT support throughout this work. This research made use of the following Python libraries: Astropy (Astropy Collaboration et al., 2022); Morfometryka (Ferrari et al., 2015); Pandas (Reback et al., 2022); Matplotlib (Hunter, 2007); photutils (Bradley et al., 2020).
## Data Availability
The JWST/NIRCam images used in this work are available through the Mikulski Archive for Space Telescopes ([https://mast.stsci.edu/](https://mast.stsci.edu/)) under Proposal ID 1345. Additional data products will be made available upon reasonable request to the corresponding author.
|
2309.03418 | Large-scale asymmetry in the distribution of galaxy spin directions --
analysis and reproduction | Recent independent observations using several different telescope systems an
analysis methods have provided evidence of parity violation between the number
of galaxies that spin in opposite directions. On the other hand, other studies
argued that no parity violation can be identified. This paper provides detailed
analysis, statistical inference, and reproduction of previous reports that show
no preferred spin direction. Code and data used for the reproduction are
publicly available. The results show that the data used in all of these studies
agrees with the observation of a preferred direction as observed from Earth. In
some of these studies the datasets were too small, or the statistical analysis
was incomplete. In other papers the results were impacted by experimental
design decisions that lead directly to show non-preferred direction. In some of
these cases these decisions are not stated in the papers, but were revealed
after further investigation in cases where the reproduction of the work did not
match the results reported in the papers. These results show that the data used
in all of these previous studies in fact agree with the contention that
galaxies as observed from Earth have a preferred spin direction, and the
distribution of galaxy spin directions as observed from Earth form a
cosmological-scale dipole axis. This study also shows that the reason for the
observations is not necessarily an anomaly in the large-scale structure, and
can also be related to internal structure of galaxies. | Lior Shamir | 2023-09-07T00:43:58Z | http://arxiv.org/abs/2309.03418v1 | # Large-scale asymmetry in the distribution of galaxy spin directions - analysis and reproduction
###### Abstract
Recent independent observations using several different telescope systems and analysis methods have provided evidence of parity violation between the number of galaxies that spin in opposite directions. On the other hand, other studies argued that no parity violation can be identified. This paper provides detailed analysis, statistical inference, and reproduction of previous reports that show no preferred spin direction. Code and data used for the reproduction are publicly available. The results show that the data used in all of these studies agree with the observation of a preferred direction as observed from Earth. In some of these studies the datasets were too small, or the statistical analysis was incomplete. In other papers the results were impacted by experimental design decisions that led directly to showing no preferred direction. In some of these cases these decisions are not stated in the papers, but were revealed after further investigation in cases where the reproduction of the work did not match the results reported in the papers. These results show that the data used in all of these previous studies in fact agree with the contention that galaxies as observed from Earth have a preferred spin direction, and the distribution of galaxy spin directions as observed from Earth forms a cosmological-scale dipole axis. This study also shows that the reason for the observations is not necessarily an anomaly in the large-scale structure, and can also be related to the internal structure of galaxies.
## 1 Introduction
According to the Cosmological Principle, the Universe is expected to be homogeneous and isotropic. On the other hand, a large number of studies using numerous different probes show cosmological-scale isotropy and parity violations, suggesting that the Cosmological Principle might not be aligned with observations (Aluri et al., 2023). In addition to the cosmic microwave background radiation (Lue et al., 1999; Eriksen et al., 2004; Abramo et al., 2006; Cline et al., 2003; Gordon and Hu, 2004; Land and Magueijo, 2005; Campanelli et al., 2007; Mariano and Perivolaropoulos, 2013; Ade et al., 2014; Zhe et al., 2015; Santos et al., 2015; Dong et al., 2015; Gruppuso et al., 2018; Ashtekar et al., 2021; Luongo et al., 2022; Yeung and Chu, 2022), such probes also include dark energy (Adhav et al., 2011; Adhav, 2011; Perivolaropoulos, 2014; Colin et al., 2019), \(H_{o}\) anisotropy (Luongo et al., 2021; Dainotti et al., 2022), distribution of galaxy morphology types (Javanmardi and Kroupa, 2017), cosmic rays (Aab et al., 2017), quasars (Hutsemekers et al., 2014; Slagter, 2022; Secrest et al., 2020; Zhao and Xia, 2021; Secrest et al., 2021), LX-T scaling (Migkas et al., 2020), Ia supernova (Javanmardi et al., 2015; Lin et al., 2016), short gamma ray bursts (Meszaros, 2019), large-scale clustering of _Fermi_ blazars (Marcha and Browne, 2021), radio galaxies (Taylor and Jagannathan, 2016; Contigiani et al., 2017; Panwar et al., 2020), and the tetrahedral arrangements of sets of four galaxies (Philcox, 2022; Hou et al., 2022).
Large-scale cosmological structures that shift from the Cosmological Principle and the Friedmann-Lemaitre-Robertson-Walker (FLRW) Universe (Aluri et al., 2023) can be related to "alternative" theories that were proposed in the past few decades (Aluri et al., 2023). These theories include contraction prior to inflation (Piao et al., 2004), spinor-driven inflation (Bohmer and Mota, 2008), moving dark energy (Jimenez and Maroto, 2007), primordial anisotropic vacuum pressure (Rodrigues, 2008), spin foam cosmology (Rovelli and Vidotto, 2010; Kisielowski et al., 2012), multiple vacua (Piao, 2005), inhomogeneous Big Bang singularity (Schneider and Celerier, 1999), cosmological big bounce (Battisti and Marciano, 2010), double inflation (Feng and Zhang, 2003), ellipsoidal universe (Campanelli et al., 2006, 2007, 2011; Gruppuso, 2007; Cea, 2014), longitudinal gravitational wave cosmology (Mol, 2011), anisotropic dark energy (Adhav et al., 2011; Adhav, 2011), or rotating universe (Godel, 1949; Ozsvath and Schucking, 1962; Ozsvath and Schucking, 2001; Sivaram and Arun, 2012; Chechin, 2016; Seshavatharam and Lakshminarayana, 2020; Campanelli, 2021). Dipole cosmological models that are based on the \(\Lambda CDM\) cosmology expanded to support a dipole universe (Krishnan et al., 2023) and a dipole big bang (Allahyari et al., 2023) were also proposed.
The contention of charge conjugation parity (CP) symmetry violation was proposed in particle physics as early as the 1950s (Lee and Yang, 1956), and clear evidence of CP violation was provided in the 1960s (Cronin et al., 1964). These surprising observations can have implications for parity violation and dipoles in the Universe (Hou, 2011; Bian et al., 2018; Gava and Volpe, 2010; Flanz et al., 1995). Possible charge conjugation, parity transformation, and time reversal (CPT) symmetry violation (Lehnert, 2016) has also been proposed as a factor linked to an anisotropic or polarized Universe (Crowder et al., 2013; Mavromatos, 2017; Mavromatos and Sarkar, 2018; Poplawski, 2011).
One of the cosmological models aligned with the contention that the Universe is oriented around a large-scale axis is black hole cosmology (Pathria, 1972), according to which the Universe is the interior of a black hole in another universe (Pathria, 1972; Stuckey, 1994; Easson and Brandenberger, 2001; Seshavatharam, 2010; Poplawski, 2010; Tatum et al., 2018; Christillin, 2014; Chakrabarty et al., 2020). Since black holes spin (Gammie et al., 2004; Takahashi, 2004; Volonteri et al., 2005; McClintock et al., 2006; Mudambi et al., 2020; Reynolds, 2021), a universe hosted inside a black hole should inherit its spin from its host black hole (Poplawski, 2010a; Seshavatharam, 2010; Seshavatharam and Lakshminarayana, 2014; Christillin, 2014; Seshavatharam and Lakshminarayana, 2020b, a).
If our Universe is the interior of a black hole in another universe, it agrees with the multiverse theory (Carr and Ellis, 2008; Hall and Nomura, 2008; Antonov, 2015; Garriga et al., 2016; Debnath and Nag, 2022; Trimble, 2009; Kragh, 2009). Black hole cosmology is also supported by the agreement between the Schwarzschild radius of the Universe and the Hubble radius (Christillin, 2014), as well as by the accelerated expansion of the Universe without the need to assume the existence of dark energy. A black hole universe is also expected to violate the CPT symmetry (Lehnert, 2016). Black hole cosmology is directly related to the ability to view space as a projection, which is the theory of the holographic universe (Susskind, 1995; Bak and Rey, 2000; Bousso, 2002; Myung, 2005; Hu and Ling, 2006; Rinaldi et al., 2022; Sivaram and Arun, 2013; Shor et al., 2021).
Quantum Field Theory (QFT) has shown promising success in explaining microscopic phenomena such as subatomic structures (Davies, 1976). Early analysis predicted an anisotropic universe, noting that "It is a curious unexplained feature that the present condition of the Universe is one of high isotropy" (Davies, 1976). Given QFT, cosmological-scale gravitational dipoles can be expected (Faulkner et al., 2022). Such gravitational dipoles are driven by matter and antimatter that have gravitational charges, can explain phenomena normally attributed to dark energy and dark matter, and do not assume a primordial singularity or cosmic inflation (Hajdukovic, 2013, 2014). The quantum vacuum can then be explained by polarized gravity, and the gravitational dipoles can also explain the disagreement between the observed and theoretically predicted dark energy density (Hajdukovic, 2013, 2014). The poles as gravitational sources in the Universe can change the large-scale symmetry, provoking fluctuations that contribute to the evolution of the Universe and its geometrical aspect, making its asymmetry possible.
As _tidal torque theory_ can explain the initial origin of galaxy angular momentum (Hoyle, 1949; Peebles, 1969; Doroshkevich, 1970; Porciani et al., 2002; Schafer, 2009), it has also been associated with the large-scale structure, and a link between the alignment of galaxy rotation and the large-scale structure has been proposed (Schafer, 2009). Empirical evidence of a link between the large-scale structure and the alignment of galaxy spin directions was reported using the Sloan Digital Sky Survey (SDSS) and the Two-Micron All-Sky Survey (Jones et al., 2010), and such link was reported by other consequent studies (Jones et al., 2010; Tempel et al., 2013; Tempel and Libeskind, 2013; Tempel et al., 2014; Pahwa et al., 2016; Ganeshaiah Veena et al., 2018; Lee et al., 2018; Ganeshaiah Veena et al., 2019; Lee et al., 2019, 2019; Blue Bird et al., 2020; Welker et al., 2020; Kraljic et al., 2021; Lopez et al., 2021; Motloch et al., 2021).
In addition to the probes mentioned above, another probe that supports the contention of anisotropic Universe and a preferred direction is the asymmetric distribution of galaxies with opposite spin directions. Early reports of the asymmetry used a small number of manually annotated galaxies in the local supercluster, suggesting that the number of galaxies spinning clockwise is different from the number of galaxies spinning counterclockwise with certainty of 92% (MacGillivray and Dodd, 1985).
The deployment of robotic telescopes allowed the analysis of a larger number of galaxies. Perhaps the first major modern high-throughput autonomous digital sky survey was SDSS (York et al., 2000), with data acquisition power that was unprecedented at the time. Analysis of the distribution of the spin directions of spiral galaxies in SDSS images showed parity violation and a dipole axis, observed in several different experiments using manual and automatic annotation (Longo, 2011; Shamir, 2012, 2020c, b, 2021a, 2022c, a). In addition to SDSS, experiments with other sky surveys also showed similar large-scale parity violation in galaxy spin directions as observed from Earth. These experiments include data from Hubble Space Telescope (Shamir, 2020b), Pan-STARRS (Shamir, 2020c), DECam (Shamir, 2021b), DES (Shamir, 2022b), and DESI Legacy Survey (Shamir, 2022a). Some of the experiments included over \(10^{6}\) galaxies (Shamir, 2022a), providing strong statistical significance. The experiments showed clear separation between hemispheres, such that one hemisphere has an excessive number of galaxies spinning clockwise, while the opposite hemisphere contained more galaxies that seem to spin counterclockwise to an Earth-based observer. An analysis with \(\sim 1.3\cdot 10^{6}\) galaxies allowed to show a dipole axis alignment without the need to fit it in a certain statistical model (Shamir, 2022a). The parity violation seems to become stronger as the redshift increases (Shamir, 2020c, 2022c). Other studies include a smaller number of galaxies to show links between the spin directions of neighboring galaxies (Mai et al., 2022), including galaxies that are too far **from each other** to have gravitational interactions (Lee et al., 2019b). These links were defined "mysterious", suggesting that galaxies in the Universe are connected through their spin directions (Lee et al., 2019b). A correlation was also identified between the cosmic initial conditions and spin directions of galaxies (Motloch et al., 2021). Experiments showed that the strength of the asymmetry is not necessarily affected when limiting the size of the galaxies (Shamir, 2020c), but showed inconclusive evidence of \(1.1\sigma\) that the asymmetry increases as the density of the galaxies gets larger (Shamir, 2022c).
While these studies showed patterns of alignment in galaxies spin directions at scales far larger than any known astrophysical structure, numerous other studies showed alignment in the spin directions of galaxies at smaller scales. For instance, a set of 1,418 galaxies from the Sydney-Australian-Astronomical-Observatory Multi-object Integral-Field Spec
trograph (SAMI) survey (Bryant et al., 2015; Croom et al., 2021) showed spin alignment of galaxies within filaments (Welker et al., 2020), supported by a consequent study that also showed a link between spin alignment and the morphology of the galaxies (Barsanti et al., 2022). Spin direction alignment in cosmic web filaments has also been shown by multiple other studies (Tempel et al., 2013; Tempel and Libeskind, 2013; Kraljic et al., 2021), as well as in numerical simulations (Zhang et al., 2009; Davis and Natarajan, 2009; Libeskind et al., 2013, 2014; Forero-Romero et al., 2014; Wang and Kang, 2018; Lopez et al., 2019).
But while several studies showed non-random distribution of galaxy orientation, some experiments argued that the distribution is random (Iye and Sugai, 1991; Land et al., 2008; Hayes et al., 2017; Iye et al., 2021). One of the first documented experiments to claim a random distribution and no preferred handedness in the distribution of galaxy spin directions was an experiment based on manually collected galaxies that took place in the pre-information era in astronomy (Iye and Sugai, 1991). During that time, large-scale autonomous digital sky surveys did not yet exist, which limited the ability to collect and analyze large databases within reasonable efforts. The dataset included 3,257 galaxies annotated as spinning clockwise, and 3,268 galaxies annotated as spinning counterclockwise. The downside of the study was that due to the limited ability to collect data at the time, the dataset was relatively small. The small size of the dataset does not allow statistical significance to be established given the magnitude of the asymmetry reported here and in previous reports.
For instance, the asymmetry between the number of clockwise and counterclockwise galaxies as described in (Shamir, 2020c) is \(\sim\)1.4%. Given that asymmetry, showing a one-tailed P value of 0.05 requires 55,818 galaxies. Therefore, the dataset of a few thousand galaxies is not large enough to show a statistically significant asymmetry. While these experiments used a small number of galaxies, other experiments used larger datasets and still showed a random distribution of the galaxy spin directions.
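The required sample size can be approximated with the normal approximation to the binomial distribution, as in the short sketch below; the exact figure of 55,818 quoted above may follow a slightly different convention, but the order of magnitude is reproduced.

```python
from math import ceil
from scipy.stats import norm

def galaxies_needed(asymmetry, alpha=0.05):
    """Approximate number of galaxies needed for a one-tailed binomial test
    at significance alpha, given a fractional excess (e.g. 0.014 for ~1.4%)
    of one spin direction over the other, using the normal approximation."""
    p = (1 + asymmetry) / (2 + asymmetry)   # expected fraction of the excess direction
    z = norm.ppf(1 - alpha)                 # one-tailed critical value
    return ceil((z * 0.5 / (p - 0.5)) ** 2)

print(galaxies_needed(0.014))   # ~5.6e4, of the order quoted in the text
```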
The purpose of this paper is to carefully examine the claims and experiments, and understand the conflicting results between different experiments that in some cases use the same data but reach different conclusions. Section 2 discusses the results provided by the _Galaxy Zoo_ citizen science initiative to annotate galaxies by their spin directions (Land et al., 2008), Section 3 reproduces the analysis of SDSS galaxies with spectra annotated by the _SpArcFiRe_ method (Hayes et al., 2017), Section 4 discusses an experiment in which the galaxies were annotated by deep neural networks (Tadaki et al., 2020), Section 5 reproduces an experiment of SDSS galaxies annotated using the _Ganalyzer_ method (Iye et al., 2021), Section 6 discusses possible reasons for the observations, and Section 7 provides a possible explanation of the observation that does not necessarily require a shift from the standard model.
## 2 Early analyses with manual annotation and Galaxy Zoo crowdsourcing
Another notable experiment was based on galaxies annotated manually by a large number of volunteers through the _Galaxy Zoo_ platform (Land et al., 2008). The analysis using manual annotation was limited by a very high error rate, but after selecting just the galaxies on which the rate of agreement was high, the error could be reduced. But more importantly, the annotations were systematically biased by the human perception or the user interface (Land et al., 2008). Even when using the "superclean" criterion, according to which only galaxies on which 95% or more of annotations agreed are used, the asymmetry was \(\sim\)15%. That asymmetry was determined to be driven by the bias rather than a reflection of the real distribution of galaxies in the sky (Land et al., 2008).
When the bias was noticed, a new experiment was done by annotating a small subset of 91,303 galaxies using the same platform. In addition to annotating the original images, the second experiment also included the annotation of the mirrored image of each galaxy. The annotation of the mirrored images is expected to offset the bias.
After selecting the "superclean" annotations, the results showed 5.525% clockwise galaxies compared to 5.646% mirrored counterclockwise galaxies that were annotated as clockwise. The information is specified in Table 2 in (Land et al., 2008). That showed a 2.1% higher number of counterclockwise galaxies. Similar results were also shown with galaxies annotated as spinning counterclockwise, where 6.032% of the non-mirrored images were annotated as spinning counterclockwise, while 5.942% of the mirrored galaxies were annotated as clockwise. The magnitude and direction of the 1%-2% asymmetry are aligned with the asymmetry reported in (Shamir, 2020c), which is also based on SDSS galaxies with spectra, and therefore the footprint and limiting magnitude of the galaxies used in (Land et al., 2008) and in (Shamir, 2020c) are expected to be similar.
The primary difference between the experiment of (Land et al., 2008) and the experiment of (Shamir, 2020c) is the size of the datasets. While in (Shamir, 2020c) more than \(6\cdot 10^{4}\) galaxies were used, the set of "superclean" Galaxy Zoo galaxies that were annotated as mirrored and non-mirrored to offset for the human bias was just \(\sim 10^{4}\). Table 1 shows the number of galaxies that spin clockwise and counterclockwise as annotated by Galaxy Zoo when the original images were annotated, and when the mirrored images were annotated.
While the crowdsourcing analysis provided more than \(10^{5}\) annotated galaxies, it is still not of sufficient size to show statistically significant asymmetry. On the other hand, it does show a consistent higher number of counterclockwise galaxies compared to clockwise galaxies. For instance, the number of galaxies annotated as clockwise is lower than the number of mirrored galaxies annotated as clockwise, which is 5,044 and 5,155, respectively. That provides a P\(\simeq\)0.13 one-tailed statistical significance of the binomial distribution assuming the probability of each spin direction is 0.5.
The numbers of galaxies annotated as counterclockwise and
| | Original | Mirrored | Ratio | P |
| --- | --- | --- | --- | --- |
| Clockwise | 5,044 | 5,155 | 1.022 | 0.13 |
| Counterclockwise | 5,507 | 5,425 | 1.015 | 0.21 |

Table 1: The number of galaxies in the original images and mirrored images annotated as spinning clockwise or counterclockwise through Galaxy Zoo (Land et al., 2008).
the number of mirrored galaxies annotated as counterclockwise were 5,507 and 5,425, respectively, providing a binomial distribution statistical signal of \(P\simeq 0.21\). That asymmetry cannot be considered statistically significant, but it also agrees in direction and magnitude with the asymmetry shown in (Shamir, 2020c) for the same footprint and distribution in the sky. It is possible that if the dataset used in (Land et al., 2008) was larger the results would have been statistically significant, but since the dataset is of a limited size that assumption cannot be proven or disproven.
While neither of the two experiments shows statistical significance, they both show a higher number of galaxies that spin counterclockwise in the SDSS footprint. The aggregated P-value of the two experiments is 0.0273. Although the annotations of each galaxy are different, some of the galaxies might exist in both experiments, where in one experiment the galaxy was annotated using its original image, and in the other experiment it was annotated using its mirrored image. The presence of galaxies that were used in both experiments therefore does not allow the P values to be soundly aggregated. But in any case, even if the P value does not show statistical significance, the results observed with Galaxy Zoo data certainly do not conflict with the results shown in (Shamir, 2020c) for SDSS galaxies with spectra.
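The one-tailed P values quoted in Table 1 can be verified with an exact binomial test, as in the sketch below; small differences from the quoted 0.13 and 0.21 can arise from using an exact rather than an approximate test.

```python
from scipy.stats import binomtest

# Counts from Table 1 (Land et al., 2008), "superclean" annotations
pairs = {
    "clockwise (original vs mirrored)": (5044, 5155),
    "counterclockwise (original vs mirrored)": (5507, 5425),
}

for label, (a, b) in pairs.items():
    n = a + b
    # One-tailed probability of observing at most min(a, b) annotations of one
    # kind, under the null hypothesis of equal probability for both directions
    p = binomtest(min(a, b), n, p=0.5, alternative="less").pvalue
    print(f"{label}: n={n}, one-tailed P={p:.3f}")
```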
## 3 A previous experiment using Galaxy Zoo galaxies annotated by _SpArcFiRe_
Another analysis (Hayes et al., 2017) of SDSS galaxies with spectra used an automatic annotation to provide an analysis with a far larger number of galaxies compared to (Land et al., 2008). The SDSS galaxies were the galaxies also used by _Galaxy Zoo 1_. To separate the galaxies by their spin direction, the _SpArcFiRe_ method (Davis and Hayes, 2014) was used. _SpArcFiRe_ (SPiral ARC FInder and REporter) receives a galaxy image as an input, normally in the PNG image file format, and extracts several descriptors of the galaxy arms. The algorithm works by first identifying arm segments in the galaxy image, and then grouping the pixels that belong in each segment. Once the pixels of the different arm segments are identified, the pixels of each arm segment are fitted to a logarithmic spiral arc. The fitness of the pixels in the arm segment provides information about the arm, which can also be used to identify its curve direction, and consequently the spin direction of the galaxy. Source code of the implementation of the _SpArcFiRe_ algorithm is available at [https://github.com/waynehbayes/SpArcFiRe](https://github.com/waynehbayes/SpArcFiRe), and a full detailed description of the method is available at (Davis and Hayes, 2014). Processing of a 128\(\times\)128 galaxy image takes \(\sim\)30 seconds using an Intel Core-i7 processor, and therefore a set of 100 cores was used to annotate the \(\sim 6.7\cdot 10^{5}\) galaxies in the _Galaxy Zoo 1_ dataset.
After applying the _SpArcFiRe_ method to separate the galaxies by their spin direction, the results of the study showed that when separating the spiral galaxies from the elliptical galaxies through Galaxy Zoo 1 annotations, the asymmetry can be considered statistically significant. As shown in Table 2 in (Hayes et al., 2017), the statistical significance ranged between \(2\sigma\) and \(3\sigma\), depending on the agreement threshold of the _Galaxy Zoo_ separation between the elliptical and spiral galaxies. These results agree with the contention of a higher number of galaxies spinning counterclockwise in that footprint, but might be biased due to a possible human bias in the selection of spiral galaxies. For instance, if the human annotators tend to select more counterclockwise galaxies as spiral, that can lead to an observed asymmetry when annotating the spin directions of these galaxies (Hayes et al., 2017).
To avoid the bias in the human selection of spiral galaxies, another experiment was made by selecting the spiral galaxies by using a machine learning algorithm. Naturally, the machine learning algorithm was trained by spiral and elliptical galaxies. To avoid bias in the classification, the class of spiral galaxies included the same number of clockwise and counterclockwise galaxies (Hayes et al., 2017). However, in addition to having a balanced number of galaxies in each class, the machine learning algorithm was limited to features that cannot identify the spin direction of the galaxy. That is, all features that showed a certain correlation with the spin direction of the galaxy were removed manually and were not used in the analysis. As stated in (Hayes et al., 2017), "We choose our attributes to include some photometric attributes that were disjoint with those that Shamir (2016) found to be correlated with chirality, in addition to several SpArcFiRe outputs with all chirality information removed".
When manually removing the features that correlate with the asymmetry, it can be expected that the step of separation of the galaxies by their spin directions would provide a lower asymmetry signal. To test that empirically, the same experiment was reproduced with the same code, but without using machine learning to identify spiral galaxies, and therefore also without manually removing specific features. Two ways of selecting the galaxies were used. The first was by not applying any selection of spiral galaxies. That is, the SpArcFiRe algorithm was applied to all Galaxy Zoo 1 galaxies, and all galaxies for which SpArcFiRe was able to determine the spin direction were used in the analysis (McAdam et al., 2023). That led to a dataset of 271,063 galaxies. The other way was to apply the Ganalyzer algorithm (Shamir, 2011) to select spiral galaxies, but without separating the galaxies by their spin direction.
Ganalyzer works by first converting the galaxy image into its radial intensity plot, which is a 360\(\times\)35 image such that the value of the pixel at Cartesian coordinates \((x,y)\) is the value of the pixel at the polar coordinate \((\theta,r)\) in the original galaxy image, where the origin of the polar coordinates is the center of the galaxy. The value of \(\theta\) is within \((0,360)\), and the radius \(r\) is in percentage of the galaxy radius. Then, a peak detection algorithm is applied to identify the maximum brightness in each line of the radial intensity plot. Since galaxy arms are brighter than the background pixels at the same radial distance from the galaxy center, the peaks are the galaxy arms. If the peaks are aligned in straight vertical lines, it means that the angle of the arm does not change with the radial distance, and therefore the galaxy is not spiral. But if the peaks form a line that is not vertical, it shows that the arms are spiral. The algorithm is explained in full detail, with experimental results, in (Shamir, 2011).
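A simplified sketch of the radial intensity plot transformation and the per-row peak detection described above is shown below; the real Ganalyzer implementation includes additional processing, such as grouping the peaks and measuring the slope of the peak line, that is not reproduced here.

```python
import numpy as np

def radial_intensity_plot(image, cx, cy, radius, n_theta=360, n_r=35):
    """Transform a galaxy image into its radial intensity plot: the pixel at
    (theta, r) holds the original image value at polar coordinates
    (theta degrees, r percent of the galaxy radius) around the galaxy centre."""
    plot = np.zeros((n_r, n_theta))
    for r in range(n_r):
        rad = radius * (r + 1) / n_r
        for t in range(n_theta):
            x = int(round(cx + rad * np.cos(np.radians(t))))
            y = int(round(cy + rad * np.sin(np.radians(t))))
            if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                plot[r, t] = image[y, x]
    return plot

def arm_peaks(plot):
    """Column of the brightest pixel in each row; galaxy arms appear as peaks.
    Vertically aligned peaks indicate a non-spiral galaxy, while a slanted
    peak line indicates spiral arms, and its direction the spin direction."""
    return np.argmax(plot, axis=1)
```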
Ganalyzer is not based on machine learning or pattern recognition, and therefore does not have a step of training or feature selection. The dataset of annotated galaxies after selecting spiral galaxies contained 138,940 galaxies. Both datasets can be accessed at [https://people.cs.ksu.edu/~lshamir/data/sparcfire](https://people.cs.ksu.edu/~lshamir/data/sparcfire). Figure 1 shows the results of the analysis when using the original galaxy images and the mirrored galaxy images.
As the figure shows, the results are different when using the original images and the mirrored images. That is expected due to the asymmetric nature of the SpArcFiRe software, which is discussed in the appendix of (Hayes et al., 2017). The statistical significance of the datasets ranges from 2.05\(\sigma\) when using the non-mirrored images without a first step of spiral selection, to 3.6\(\sigma\) when using the mirrored galaxy images annotated after a first step of selecting spiral galaxies.
## 4 Previous experiment by using deep neural networks for annotating the galaxies
In the past decade, deep neural networks became a very common tool for annotating galaxies by their morphology (Dielman et al., 2015; Graham, 2019; Mittal et al., 2019; Hosny et al., 2020; Cecotti, 2020; Cheng et al., 2020). Neural networks have been shown to provide superior performance in their ability to annotate images automatically, and due to the availability of libraries such as TensorFlow or PyTorch their implementation is normally more accessible compared to model-driven approaches. The downside of deep neural networks is that they work by complex data-driven rules, which makes them non-trivial to understand. That can add many unexpected biases that make neural networks imperfect for detecting subtle asymmetries reflected in the annotation of image data (Dhar and Shamir, 2022).
An application of a deep neural network to annotate galaxies by their spin direction was performed in Tadaki et al. (2020), annotating HSC galaxies by using deep convolutional neural networks. The annotation provided 38,718 galaxies spinning clockwise, and 37,917 spinning counterclockwise in the HSC footprint. The higher number of clockwise galaxies in the HSC footprint is in agreement with analysis using the DESI Legacy Survey, also showing a higher number of galaxies spinning clockwise around that part of the sky (Shamir, 2022). The one-tailed probability of such a distribution occurring by chance is P=0.0019. Because the biases of deep neural networks are very difficult to control (Rudin, 2019; Dhar and Shamir, 2022), such analysis cannot provide a clear proof of non-random distribution of galaxy spin directions, as also stated in the paper (Tadaki et al., 2020). But the differences as reported in (Tadaki et al., 2020) also definitely do not conflict with the contention that the distribution of galaxy spin directions is not necessarily symmetric, and in fact agree with that contention rather than disagree with it.
## 5 Reproduction of analysis of 72,888 SDSS galaxies
Another experiment that argued for no preferred handedness in the distribution of galaxy spin directions using a relatively large number of galaxies is (Iye et al., 2021). The analysis of Iye et al. (2021) is based on a dataset of 162,514 photometric objects used in (Shamir, 2017). The main argument of the work suggests that the dataset contains a large number of "duplicate objects", and once these objects are removed the dataset does not show non-random distribution (Iye et al., 2021).
The dataset used in (Shamir, 2017) was used for photometric analysis of objects that spin in opposite directions. The study (Shamir, 2017) does not make any claim for the presence or absence of any kind of axis in that data, and no such claim about that dataset was made in any other paper. While several previous experiments used SDSS galaxies to show a dipole axis formed by the distribution of galaxy spin directions (Shamir, 2012, 2020, 2021), none of these experiments were based on the dataset used in (Shamir, 2017). When the photometric objects of (Shamir, 2017) are used to study the distribution of galaxy spin directions, objects that are part of the same galaxies indeed become "duplicate objects". But, as mentioned above, no claim for the presence of any kind of axis was made with that dataset, in (Shamir, 2017) or in any other paper.
Although the dataset used in (Shamir, 2017) was not used in previous papers to analyze a dipole axis, it is still expected that the dataset used in (Shamir, 2017) would be consistent with the results shown by previous datasets. The "clean" dataset used by Iye et al. (2021) was compiled by removing duplicate objects from the dataset used in (Shamir, 2017). That provided a dataset of 72,888 galaxies (Iye et al., 2021). That dataset is available for download at [https://people.cs.ksu.edu/~lshamir/data/iye_et_al/galaxies.csv](https://people.cs.ksu.edu/~lshamir/data/iye_et_al/galaxies.csv).
As explained in Equation 2 in Section 2.1 of (Iye et al., 2021), the strength of the dipole D from a certain point in the sky is determined by \(D=\sum_{i=1}^{N}h^{i}\,\Omega^{i}\cdot P/N=\sum_{i=1}^{N}h^{i}\cos\theta^{i}/N\), where \(P\) is the fiduciary pole vector, \(\Omega^{i}\) is the spin vector of galaxy \(i\), \(h^{i}\) is the spin direction of galaxy \(i\), \(\theta^{i}\) is the angle between the direction of the galaxy \(i\) and the direction of the pole vector P, and \(N\) is the total number of galaxies in the dataset.
The spin direction \(h^{i}\) of the galaxy \(i\) is within the set \(\{1,-1\}\). The statistical strength of a dipole at a certain point in the sky was determined by comparing the D computed with the spin directions taken from the dataset to the mean \(\overline{D}\) and standard deviation \(D_{\sigma}\) obtained by running the same analysis with randomly assigned spin directions. The full implementation of the method is available at [https://people.cs.ksu.edu/~lshamir/data/iye_et_al](https://people.cs.ksu.edu/~lshamir/data/iye_et_al).
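For clarity, the following is a minimal numpy sketch of the statistic and of its Monte Carlo significance, not the published implementation; the array names and the spherical-separation formula are assumptions made for illustration only.

```python
import numpy as np

def dipole_sigma(ra, dec, h, pole_ra, pole_dec, n_random=2000, seed=0):
    """Significance of a dipole at (pole_ra, pole_dec); all angles in degrees.

    h holds the spin directions (+1 or -1). D = mean(h * cos(theta)) is
    compared to the distribution of D obtained with randomly assigned spins.
    """
    rng = np.random.default_rng(seed)
    ra, dec = np.radians(ra), np.radians(dec)
    p_ra, p_dec = np.radians(pole_ra), np.radians(pole_dec)
    # angular separation between each galaxy and the fiduciary pole vector
    cos_theta = (np.sin(dec) * np.sin(p_dec)
                 + np.cos(dec) * np.cos(p_dec) * np.cos(ra - p_ra))
    d_obs = np.mean(h * cos_theta)
    # same statistic with random spins, to estimate the mean and the scatter
    d_rand = np.array([np.mean(rng.choice([1, -1], size=len(h)) * cos_theta)
                       for _ in range(n_random)])
    return (d_obs - d_rand.mean()) / d_rand.std()

# Scanning all integer (RA, Dec) combinations produces a significance map
# like Figure 2; the peak of the map marks the most likely dipole axis.
```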
To follow the experiment of Iye et al. (2021), the mean and standard deviation of D with random spin directions were computed from 50,000 runs, although the change in the results becomes minimal after 2,000 runs. Following the description of (Iye et al., 2021), the spin directions of the galaxies are taken from the same dataset of 72,888 galaxies used in (Iye et al., 2021), which is also available publicly at [https://people.cs.ksu.edu/~lshamir/data/iye_et_al/galaxies.csv](https://people.cs.ksu.edu/~lshamir/data/iye_et_al/galaxies.csv).
As also done in (Iye et al., 2021), the location of the most likely dipole axis was determined by the statistical \(\sigma\) difference between the D computed with the spin directions of the galaxies and the D computed when the galaxies are assigned with random spin directions. The code that implements the method and step-by-step instructions to reproduce the results are available at [https://people.cs.ksu.edu/~lshamir/data/iye_et_al](https://people.cs.ksu.edu/~lshamir/data/iye_et_al).
### 5.1 Results
Iye et al. (2021) performed experiments with two datasets. The first was the entire dataset used in (Shamir, 2017). The second dataset was a subset of the dataset used in (Shamir, 2017), such that the redshift of the galaxies was limited to less than 0.1. For instance, the low statistical significance of \(0.29\sigma\) reported in the abstract of (Iye et al., 2021) as the statistical significance of the sample was in fact observed in the sub-sample of galaxies with redshift lower than 0.1, as shown in Table 1 in (Iye et al., 2021).
The low statistical significance when limiting the redshift was reported previously in (Shamir, 2020c). For instance, Tables 3, 5, 6 and 7 in (Shamir, 2020c) show random distribution when the redshift is limited to 0.1. Also, an experiment reported in (Shamir, 2020c) used galaxies limited to \(z<0.15\), and showed that the statistical significance of the dipole axis was below \(2\sigma\). A similar experiment (Shamir, 2022c) also showed that the dipole axis is not statistically significant in the lower redshift ranges, including \(0<z<0.1\). Therefore, limiting the redshift to 0.1 is expected to show low statistical significance. The low statistical significance shown in (Iye et al., 2021) agrees with the random distribution in that redshift range as shown in (Shamir, 2020c, 2022c).
When not limiting the redshift, Iye et al. (2021) argue that the "clean" dataset of 72,888 galaxies exhibits a dipole axis in the galaxy spin directions with a statistical significance of \(1.29\sigma\). That is specified in Table 1 in (Iye et al., 2021). However, the reproduction of the experiment using the exact same 72,888 galaxies and the code described in Section 5 shows that the statistical significance of the dipole axis is \(2.15\sigma\). The code, data, step-by-step instructions, and the output of the analysis are provided at [https://people.cs.ksu.edu/~lshamir/data/iye_et_al](https://people.cs.ksu.edu/~lshamir/data/iye_et_al).
Figure 2 provides a Mollweide projection that visualizes the statistical significance of a dipole axis from every possible integer combination of \((\alpha,\delta)\). The most likely dipole axis with statistical significance of \(2.15\sigma\) is observed at \((\alpha=170^{o},\delta=35^{o})\). That statistical significance is far higher than the \(1.29\sigma\) reported in Table 1 of (Iye et al., 2021). Reasons for the differences are discussed in Section 5.2.
A dipole axis formed by the distribution of galaxy spin directions means that one hemisphere has a higher number of galaxies that spin in one direction, while that asymmetry is inverse in the opposite hemisphere. Table 2 shows the distribution of galaxy spin directions when separating the sky into two hemispheres.
Statistically significant non-random distribution is observed in one of the hemispheres. In the opposite hemisphere the asymmetry is not statistically significant, but the asymmetry is inverse to the asymmetry in the hemisphere centred
Figure 1: Results of analysis of Galaxy Zoo 1 galaxies annotated by SpArcFiRe. Panel (a) is the result of the analysis with spiral galaxies selected by Ganalyzer when the images are mirrored, Panel (b) shows the analysis when the galaxies are not mirrored. Panels (c) and (d) show the results of the analysis without a first step of selection of spiral galaxies when the galaxy images are mirrored or non-mirrored, respectively.
at (RA=160\({}^{o}\)). When applying a Bonferroni correction for the two hemispheres, the probability is still \(\sim 0.0104\). A Monte Carlo simulation showed that the probability of having such a distribution by chance is \(P\simeq 0.007\). Code and instructions to reproduce the analysis are provided at [https://people.cs.ksu.edu/~lshamir/data/iye_et_al](https://people.cs.ksu.edu/~lshamir/data/iye_et_al). The fact that a simple separation into two hemispheres is statistically significant shows that a method that reports random distribution for this specific dataset is incomplete.
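The hemisphere comparison in Table 2 can be checked with a simple binomial test; the short sketch below assumes the counts from the table and uses scipy, and is an illustration rather than the published code.

```python
from scipy.stats import binomtest

# counts from Table 2 for the hemisphere centred at RA = 160 degrees
n_z, n_s = 23037, 22442

res = binomtest(n_z, n_z + n_s, p=0.5, alternative='greater')
p_one_tailed = res.pvalue        # ~0.0026, as listed in Table 2
p_two_tailed = 2 * p_one_tailed  # ~0.0052
p_corrected = 2 * p_two_tailed   # Bonferroni correction for two hemispheres, ~0.0104
```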
The algorithm used to annotate the galaxies works by first converting the galaxy image into its radial intensity plot transformation, and then detecting peaks in the transformation to identify the shift in the peaks, which shows the shift of the arms, and therefore the direction of their curve (Shamir, 2011). To perform a correct identification of the spin direction of the galaxy, a sufficient number of peaks detected in the radial intensity plot is required. Therefore, if a galaxy does not have a certain minimum number of peaks detected in its radial intensity plot, that galaxy is rejected from the analysis.
The analysis shown in Figure 2 is the result of using the galaxies used in (Shamir, 2017). That experiment aimed at performing a photometric analysis rather than an attempt to identify a dipole axis in the spin directions of the galaxies as was done in (Iye et al., 2021). The minimum number of peaks detected in the radial intensity plot to determine the spin direction of a galaxy in (Shamir, 2017) was 10 peaks. That is, if after converting the galaxy image to its radial intensity plot 10 peaks or more were detected, the spin direction of the galaxy could be determined based on these peaks. The low number of peaks increased the number of annotated galaxies, but also led to a certain inaccuracy of the galaxy annotation. The inaccuracy of the annotations is discussed in (Shamir, 2017). In the previous experiments of identifying a dipole axis in the spin directions of the galaxies, the minimum number of peaks required to make an annotation was 30 (Shamir, 2020c). That inaccuracy of the galaxy annotations can lead to a weaker signal compared to previous studies with a similar number of galaxies such as (Shamir, 2020c). Another reason for the weaker signal is that the objects used in (Shamir, 2017) were relatively bright objects (\(i<18\)), and therefore objects with lower redshift. But despite these selection criteria, the signal observed in the experiment is stronger than \(2\sigma\).
### 5.2 Reasons for the differences in the results
Iye et al. (2021) state that the "clean" dataset of 72,888 galaxies exhibits a random distribution of the spin directions of the spiral galaxies, with a signal of \(1.29\sigma\). These results are in conflict with the results shown in Section 5.1, where the reproduction of the analysis with the exact same data shows a much stronger statistical significance, stronger than \(2\sigma\). One reason that can lead to lower statistical significance is that Iye et al. (2021) used the photometric redshift to limit the volume of the galaxies. For instance, the \(0.29\sigma\) highlighted in the abstract was determined after using the photometric redshift. The (Iye et al., 2021) paper uses the term "measured redshift", but as explained in (Shamir, 2017) the vast majority of the galaxies in that dataset do not have spectra, and therefore the galaxies could not have spectroscopic redshift. As stated in the journal version of (Iye et al., 2021), the source of the redshifts is the catalogue of (Paul et al., 2018), which is a catalogue of photometric redshifts. The photometric redshift is highly
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline Hemisphere & \# Z-wise & \# S-wise & \(\frac{\#Z}{\#S}\) & P & P \\ (RA) & & & & (one-tailed) & (two-tailed) \\ \hline \(70^{\circ}-250^{\circ}\) & 23,037 & 22,442 & 1.0265 & 0.0026 & 0.0052 \\ \(>250^{\circ}\cup<70^{\circ}\) & 13,660 & 13,749 & 0.9935 & 0.29 & 0.58 \\ \hline \end{tabular}
\end{table}
Table 2: The number of galaxies in the (Iye et al., 2021) catalogue that spin in opposite directions when separating the sky into two hemispheres.
Figure 2: The statistical significance for a dipole axis to exist by chance from all \((\alpha,\delta)\) combinations. Reproduction of the analysis is available at [https://people.cs.ksu.edu/~lshamir/data/iye_et_al](https://people.cs.ksu.edu/~lshamir/data/iye_et_al).
inaccurate and can be systematically biased. The inaccuracy of the photometric redshift can add substantial bias to the results, and can lead to lower statistical signal due to the unexpected inaccuracies added to the data.
However, the analysis when using the entire "clean" dataset was done without limiting the volume, and without using the photometric redshift. Therefore, the photometric redshift cannot be the reason for the discrepancy between the reported and observed results. An inquiry to the National Astronomical Observatory of Japan (NAOJ), where the research was conducted, provided a reason for the differences. The analysis is available at (Watanabe, 2022). The full analysis of the NAOJ can be accessed at [https://people.cs.ksu.edu/~lshamir/data/iye_et_al/watanabe_NAOJ_reply.pdf](https://people.cs.ksu.edu/~lshamir/data/iye_et_al/watanabe_NAOJ_reply.pdf).
According to the analysis of the NAOJ (Watanabe, 2022), the analysis was done by assuming that the galaxies are distributed uniformly in the hemisphere. As the analysis summary states: "Because it is hard to verify the detail of simulations, we here calculate the analytic solution by Chandrasekhar (1943) which assumes uniform samples in the hemisphere." That is, the statistical significance can be reproduced to a certain extent if assuming that the galaxies used in the analysis are distributed uniformly in the hemisphere. When making that assumption, the statistical signal of the dipole axis is \(1.35\sigma\), which is close to the \(1.29\sigma\) reported in (Iye et al., 2021).
Iye et al. (2021) do not mention in the paper the analytic solution of Chandrasekhar (1943), or the assumption of a uniform sample in the hemisphere. More importantly, the assumption of uniform distribution in the hemisphere is not true for SDSS in general, and specifically not true for the dataset used here. For instance, Figure 3 shows the distribution of the RA of the galaxies. As the figure shows, the distribution is neither uniform nor close to uniform, and some RA ranges are far more populated than other ranges.
To transform into a uniform sample, the locations of the galaxies need to change so that they are spread uniformly. These changes in the locations can lead to changes in the results of the analysis of the dipole axis. For instance, the file [https://people.cs.ksu.edu/~lshamir/data/iye_et_al/galaxies_uniform.csv](https://people.cs.ksu.edu/~lshamir/data/iye_et_al/galaxies_uniform.csv) is the same 72,888 galaxies such that their RA values are distributed uniformly in the range of \((0,360)\), and the declination values are distributed uniformly in the declination range of the galaxies in the original dataset, which is \((-24.5^{o},79^{o})\).
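A sketch of how such a uniformly re-distributed catalogue can be generated is shown below; it is an illustration only, the linked file remains the authoritative version, and the exact declination bounds are an assumption based on the range quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_galaxies = 72888                                   # size of the "clean" dataset
ra_uniform = rng.uniform(0.0, 360.0, n_galaxies)     # uniform in RA
dec_uniform = rng.uniform(-24.5, 79.0, n_galaxies)   # uniform over the Dec range of the data
# The spin directions are kept unchanged; only the positions are replaced
# before repeating the dipole-axis analysis shown in Figure 2.
```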
Figure 4 shows the same analysis as Figure 2, but with the uniformly distributed galaxies. The most likely dipole axis is identified at \((\alpha=172^{o},\delta=51^{o})\), with statistical significance of \(1.68\sigma\). That statistical signal is still higher than the \(1.35\sigma\) reported in (Watanabe, 2022), but it is lower than the signal observed when using the real locations of the galaxies, without making any assumption regarding the nature of their distribution in the hemisphere. The weaker signal shows that the assumption that the galaxies are distributed uniformly dilutes the signal, and could be the reason for the weaker signal observed by Iye et al. (2021).
## 6 The possibility of error in the galaxy annotation
The _Ganalyzer_ algorithm used to annotate the galaxies in Section 5 is a simple model-driven algorithm that follows defined rules. It does not use machine learning or pattern recognition paradigms, whose high complexity makes them a "black box" that is very difficult to analyze and validate (Rudin, 2019; Dhar and Shamir, 2022). The simple "mechanical" nature of _Ganalyzer_ allows it to ensure that the analysis is symmetric, as was shown experimentally in (Shamir, 2021, 2022, 2022, 2022). The same analysis shown in Section 5.1 was repeated after mirroring all galaxies by using the _ImageMagick_ "flop" command. Figure 5 shows the results, which are the same as Figure 2, with a very similar statistical signal of \(2.17\sigma\). The slight difference in the statistical signal is expected due to the random assignment of spin directions.
Assuming that the annotation of the galaxies has a certain error, Equation 1 defines the asymmetry \(A\) between galaxies spinning in opposite directions in a certain part of the sky as observed from Earth
\[A=\frac{(N_{cw}+E_{cw})-(N_{ccw}+E_{ccw})}{N_{cw}+E_{cw}+N_{ccw}+E_{ccw}}, \tag{1}\]
where \(N_{cw}\) and \(N_{ccw}\) are the numbers of galaxies spinning clockwise and counterclockwise, respectively, and \(E_{cw}\) and \(E_{ccw}\) are the numbers of galaxies incorrectly annotated as spinning clockwise and counterclockwise, respectively.
The number of galaxies incorrectly annotated as spinning clockwise is expected to be about the same as the
Figure 4: The statistical significance of a dipole axis from all \((\alpha,\delta)\) combinations when the distribution of the RA and declination is uniform.
Figure 3: RA distribution of the galaxies.
number of galaxies incorrectly annotated as spinning counterclockwise. When \(E_{cw}\simeq E_{ccw}\), \(A\) can be defined by Equation 2.
\[A=\frac{N_{cw}-N_{ccw}}{N_{cw}+E_{cw}+N_{ccw}+E_{ccw}} \tag{2}\]
Since \(E_{cw}\) and \(E_{ccw}\) must be positive integers, \(A\) gets lower as \(E_{cw}\) and \(E_{ccw}\) get higher. Therefore, an error in the annotation algorithm can only make the asymmetry \(A\) smaller. The effect of incorrectly annotated galaxies was studied in (Shamir, 2021). That analysis showed that when adding artificial error to the annotation in a symmetric manner the results do not change substantially, and the signal does not increase when the error is added. But when adding the error in a non-symmetric manner, even a small error of merely 2% leads to a dipole that peaks at the celestial pole, with very high statistical significance (Shamir, 2021).
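A small numeric illustration of Equations 1 and 2, using the hemisphere counts from Table 2 as example inputs, shows how a symmetric annotation error only dilutes the measured asymmetry; the error counts below are arbitrary illustrative values.

```python
def asymmetry(n_cw, n_ccw, e_cw, e_ccw):
    # Equation 1; with e_cw == e_ccw the numerator reduces to Equation 2
    return ((n_cw + e_cw) - (n_ccw + e_ccw)) / (n_cw + e_cw + n_ccw + e_ccw)

print(asymmetry(23037, 22442, 0, 0))      # no annotation error
print(asymmetry(23037, 22442, 500, 500))  # symmetric error: same sign, smaller |A|
```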
Another aspect is the completeness of the analysis. While some of the galaxies were annotated by their spin directions, most galaxies in the initial dataset were not assigned a spin direction, and were therefore excluded from the analysis. These galaxies did not have an identifiable spin direction. Clearly, many of these galaxies do spin in a certain direction, but the direction cannot be determined from the image. For instance, Figure 6 shows galaxies imaged by Pan-STARRS, SDSS, and HST.
As the figure shows, galaxies imaged by SDSS and Pan-STARRS do not have an identifiable spin direction, while the HST images of the same galaxies show that these galaxies have clear spin patterns. Obviously, galaxies that are imaged by HST and do not have identifiable spin patterns also cannot be assumed to have no spin direction, as HST also has a limiting magnitude. The symmetric nature of the algorithm is therefore critical to ensure the absence of a small bias that can lead to biased results. Other reasons that can affect the analysis are galaxies with leading arms, cosmic variance, hardware flaws, and atmospheric effects, as discussed in Section 4 in (Shamir, 2022) or in (Shamir, 2022).
In all of these previously collected datasets, the direction of rotation of the galaxies was determined by the shape of the arms. Therefore, the galaxies are face-on galaxies, allowing an Earth-based observer to identify the arms and their curves. Obviously, the datasets used here are all datasets used in previous experiments, and were used here in the same manner. These experiments did not include the inclination
Figure 5: The statistical significance of a dipole axis to exist by chance from different \((\alpha,\delta)\) combinations after mirroring the images.
Figure 6: Galaxies imaged by HST (left), Pan-STARRS (middle), and SDSS (right). The \((\alpha,\delta)\) coordinates of each galaxy are also specified. The images show that galaxies that have a clear spin direction in HST cannot be annotated when using the Earth-based SDSS or Pan-STARRS.
of the galaxies in the analysis, as these galaxies are mostly face-on galaxies, but it can be assumed that the inclination of the face-on galaxies is not exactly 90 degrees for all of them. But it is also expected that these inclination variations will be distributed equally between galaxies that spin clockwise and galaxies that spin counterclockwise. Therefore, if the expected slight inclination variations affect the analysis, they are expected to affect clockwise and counterclockwise galaxies in a similar manner, and therefore cannot lead to a preferred spin direction.
## 7 An explanation of the observation that is not related to the large-scale structure
One of the explanations for the observation described in this paper is that the observation reflects the real Universe. The contention that the Universe is oriented around a major axis departs from the standard cosmological models. It might be in agreement with other theories such as the ellipsoidal universe (Campanelli et al., 2006; Gruppuso, 2007), rotating Universe (Godel, 1949; Ozsvath and Schucking, 1962; Ozsvath and Schucking, 2001), and black hole cosmology (Pathria, 1972; Stuckey, 1994; Easson and Brandenberger, 2001; Seshavatharam, 2010; Poplawski, 2010; Christillin, 2014; Dymnikova, 2019; Chakrabarty et al., 2020; Poplawski, 2021; Seshavatharam and Lakshminarayana, 2022; Gaztanaga, 2022a,b), which assume the existence of a cosmological-scale axis, but it is not aligned with the standard model. On the other hand, it is also possible that the Universe does not have a cosmological-scale axis, and the observed asymmetry is driven by the internal structure of galaxies (Shamir, 2020a; McAdam and Shamir, 2023). In that case, the observation can be explained without the need to modify the standard cosmological model.
One of the indications that the observed asymmetry could be driven by internal structure of galaxies is that the peak of the dipole axes as determined by the different experiments and different telescopes tend to be close to the Galactic pole. Figure 7 displays the peaks of the axes observed in 11 previous experiments using SDSS (Land et al., 2008; Longo, 2011; Shamir, 2012, 2016, 2020c, 2021a), Pan-STARRS (Shamir, 2020c), DESI Legacy Survey (Shamir, 2022a), DES (Shamir, 2022b), and DECam (Shamir, 2021b). As the figure shows, the axes peak with proximity to the Galactic pole, leading to the possibility that the observation is driven by internal structure of galaxies and the rotation of the observed galaxies relative to the Milky Way. The analysis described in this paper also shows a dipole axis that peaks with close proximity to the galactic pole. That was observed with galaxies annotated by Ganalyzer, as well as the experiments with galaxies annotated by SpArcFiRe, as shown by Figure 1.
Comparison of the brightness of these galaxies shows a statistically significant difference in the brightness of clockwise and counterclockwise galaxies. That was also shown with galaxies from SDSS, Pan-STARRS, HST, and DESI Legacy Survey (Shamir, 2020a; McAdam and Shamir, 2023). The difference in galaxy brightness can be linked directly to the asymmetry in the number of galaxies that spin in opposite directions. Naturally, if galaxies around the galactic pole that rotate counterclockwise are slightly brighter than galaxies that rotate clockwise, more counterclockwise galaxies will be detected in that part of the sky. That will lead to asymmetry between the number of galaxies that spin in opposite directions, and a dipole axis that peaks around the galactic pole. That axis is not a feature of the large-scale structure, but driven by internal structure of galaxies.
To test that, it is possible to compare the brightness of galaxies that spin in opposite directions, and located around the Galactic pole. For instance, analyzing the exponential magnitudes of SDSS galaxies annotated automatically by Ganalyzer used in (Shamir, 2020c) provides 4,087 galaxies in the \(50^{\circ}\times 50^{\circ}\) part of the sky around the Northern galactic pole. The dataset is available at [https://people.cs.ksu.edu/~lshamir/data/sdss_phot](https://people.cs.ksu.edu/~lshamir/data/sdss_phot). Table 3 shows the average exponential magnitude of galaxies spinning clockwise and galaxies spinning counterclockwise.
As the table shows, galaxies spinning counterclockwise in the part of the sky around the Northern galactic pole are brighter than galaxies spinning clockwise in the same part of the sky. These results are aligned with the results of similar experiments with other telescopes (McAdam and Shamir, 2023), and explain the higher number of counterclockwise galaxies observed in that part of the sky. In this case, the asymmetry in the number of galaxies can be attributed to galaxy rotation and the internal structure of galaxies, rather than to the large-scale structure of the Universe.
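The comparisons in Tables 3 through 5 amount to two-sample t-tests on the magnitudes of the two populations; a minimal sketch, assuming two numpy arrays of exponential magnitudes for one photometric band, is shown below (the returned P value is two-tailed, and the one-tailed value listed in Table 4 is half of it).

```python
from scipy.stats import ttest_ind

def magnitude_difference(mag_cw, mag_ccw):
    """mag_cw, mag_ccw: exponential magnitudes of the clockwise and
    counterclockwise galaxies in the field (e.g. the g band)."""
    delta = mag_cw.mean() - mag_ccw.mean()
    t_stat, p_two_tailed = ttest_ind(mag_cw, mag_ccw)
    return delta, p_two_tailed
```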
A similar analysis with the galaxies annotated by _Galaxy Zoo_ also shows similar magnitude difference. Table 4 shows a similar analysis using all Galaxy Zoo galaxies in the \(50^{\circ}\times 50^{\circ}\) window centred at the Northern galactic pole, that also met the "superclean" criterion of Galaxy Zoo. A galaxy annotation is considered "superclean" if 95% or more of the votes agree on the annotation (Land et al., 2008). That provided a dataset of 2,841 galaxies. The results show similar magnitude difference compared to the galaxies annotated by _Ganalyzer_ as shown in Table 3.
The "superclean" criterion of Galaxy Zoo ensures a high accuracy of the annotated
\begin{table}
\begin{tabular}{l c c c c} \hline Band & Mag & Mag & \(\Delta Mag\) & t-test P \\ & cw & ccw & & (one-tailed) \\ \hline G & 16.3852\(\pm\) 0.03 & 16.2964\(\pm\) 0.03 & 0.0618 & 0.07 \\ R & 15.7982\(\pm\) 0.03 & 15.7310\(\pm\) 0.03 & 0.0671 & 0.058 \\ Z & 15.3972\(\pm\) 0.03 & 15.3184\(\pm\) 0.03 & 0.0788 & 0.032 \\ \hline \end{tabular}
\end{table}
Table 4: The magnitude of SDSS galaxies that spin in opposite directions as annotated by Galaxy Zoo in the \(50^{\circ}\times 50^{\circ}\) window centred at the Northern galactic pole. Only galaxies whose annotation met the Galaxy Zoo "superclean" criterion are included.
\begin{table}
\begin{tabular}{l c c c c} \hline Band & Mag & Mag & \(\Delta Mag\) & t-test P \\ & cw & ccw & & (two-tailed) \\ \hline G & 17.3933\(\pm\)0.02 & 17.3217\(\pm\)0.02 & 0.0716 & 0.0112 \\ R & 16.8502\(\pm\) 0.02 & 16.7788\(\pm\)0.02 & 0.0714 & 0.0116 \\ Z & 16.4213\(\pm\)0.02 & 16.34909\(\pm\)0.02 & 0.0722 & 0.01 \\ \hline \end{tabular}
\end{table}
Table 3: The magnitude of SDSS galaxies with different spin directions. All galaxies are in the \(50^{\circ}\times 50^{\circ}\) window centred at the Northern galactic pole. The galaxies are annotated by _Ganalyzer_.
data, but also leads to the sacrifice of a substantial number of galaxies that do not meet the criterion. Galaxy Zoo therefore also has the "clean" criterion, according to which 80% or more of the annotations need to agree. Table 5 shows the results with Galaxy Zoo "clean" galaxies, which provided a larger set of 9,512 galaxies. The results show that the absolute difference is smaller compared to using the "superclean" annotations, which can be explained by the higher number of incorrectly annotated galaxies in the "clean" annotations compared to the "superclean" annotations. The difference, however, is still statistically significant, also due to the higher number of galaxies that meet the "clean" criterion.
The brightness differences at the field around the Galactic pole shown in Tables 3 through 5 can be compared to a control field perpendicular to the Galactic pole. Tables 6 and 7 show the magnitude difference in the field centred at \((\alpha=102^{o},\delta=0^{o})\) with galaxies annotated by Ganalyzer and by Galaxy Zoo, respectively. The numbers of galaxies in these experiments are 3,781 and 5,094, respectively.
As both tables show, in the field perpendicular to the Galactic pole the brightness differences are far smaller, and statistically insignificant. Although the reason for the difference observed around the Galactic pole is still unclear, it is possible that the motion of the observed galaxies relative to the Milky Way might be related to the observation. More work will be required to fully understand the reason for the observation.
"wet" laboratory experiments that may require highly trained researchers just to follow the protocols and reproduce the results. Yet, comprehensive analysis have shown that even in the case of papers published in the most regarded outlets with strict reproducibility policies, 76% of the published papers could not be reproduced (Stodden et al., 2018).
The attempt to reproduce the results and analyze the research aims at identifying the reasons that explain why different studies show conflicting conclusions. The results join a large number of other observations of cosmological-scale anisotropy reflected through multiple probes (Aluri et al., 2023), although another explanation to the observation that does not require violation of the cosmological principle is also proposed. Other explanations are also possible.
Future studies with more data will be required to profile the nature of the observation. The Vera C. Rubin Observatory will provide the world's largest astronomical database, and will make it possible to perform high-resolution analyses that will help to fully profile the observation and understand its nature. It is expected that such analysis will identify the exact location of the peak of the axis, which might make it possible to associate it with other observations that can include the Galactic pole, the CMB Cold Spot, the CMB dipole, etc. Instruments such as the Dark Energy Spectroscopic Instrument (Martini et al., 2018) will provide spectra of a large number of galaxies. That will make it possible to study whether the asymmetry changes with the redshift. Early observations using \(\sim 6.4\cdot 10^{4}\) galaxies with spectra showed evidence that the asymmetry increases as the redshift gets higher (Shamir, 2020c), suggesting that the asymmetry is of primordial origin. The Dark Energy Spectroscopic Instrument will make it possible to profile that change with an unprecedented number of galaxies with spectra.
## Acknowledgments
I would like to thank the three knowledgeable anonymous reviewers for the comments that helped to improve the paper. This study was supported in part by NSF grants AST-1903823 and IIS-1546079.
## Data Availability
The data used in this paper are publicly available. URLs where the data can be accessed are provided inside the manuscript. SDSS galaxies analyzed by _SpArcFiRe_ are available at [https://people.cs.ksu.edu/~lshamir/data/sparcfire/](https://people.cs.ksu.edu/~lshamir/data/sparcfire/). The data used in (Shamir, 2022) are available at [https://people.cs.ksu.edu/~lshamir/data/iye_et_al](https://people.cs.ksu.edu/~lshamir/data/iye_et_al). Data for comparing the brightness of SDSS galaxies around the galactic pole are available at [https://people.cs.ksu.edu/~lshamir/data/sdss_phot](https://people.cs.ksu.edu/~lshamir/data/sdss_phot).
|
2309.05069 | Exploiting CLIP for Zero-shot HOI Detection Requires Knowledge
Distillation at Multiple Levels | In this paper, we investigate the task of zero-shot human-object interaction
(HOI) detection, a novel paradigm for identifying HOIs without the need for
task-specific annotations. To address this challenging task, we employ CLIP, a
large-scale pre-trained vision-language model (VLM), for knowledge distillation
on multiple levels. Specifically, we design a multi-branch neural network that
leverages CLIP for learning HOI representations at various levels, including
global images, local union regions encompassing human-object pairs, and
individual instances of humans or objects. To train our model, CLIP is utilized
to generate HOI scores for both global images and local union regions that
serve as supervision signals. The extensive experiments demonstrate the
effectiveness of our novel multi-level CLIP knowledge integration strategy.
Notably, the model achieves strong performance, which is even comparable with
some fully-supervised and weakly-supervised methods on the public HICO-DET
benchmark. | Bo Wan, Tinne Tuytelaars | 2023-09-10T16:27:54Z | http://arxiv.org/abs/2309.05069v1 | # Exploiting CLIP for Zero-shot HOI Detection Requires
###### Abstract
In this paper, we investigate the task of zero-shot human-object interaction (HOI) detection, a novel paradigm for identifying HOIs without the need for task-specific annotations. To address this challenging task, we employ CLIP, a large-scale pre-trained vision-language model (VLM), for knowledge distillation on multiple levels. Specifically, we design a multi-branch neural network that leverages CLIP for learning HOI representations at various levels, including global images, local union regions encompassing human-object pairs, and individual instances of humans or objects. To train our model, CLIP is utilized to generate HOI scores for both global images and local union regions that serve as supervision signals. The extensive experiments demonstrate the effectiveness of our novel multi-level CLIP knowledge integration strategy. Notably, the model achieves strong performance, which is even comparable with some fully-supervised and weakly-supervised methods on the public HICO-DET benchmark. Code is available at [https://github.com/bobwan1995/Zeroshot-HOI-with-CLIP](https://github.com/bobwan1995/Zeroshot-HOI-with-CLIP).
## 1 Introduction
HOI detection aims to identify triplets of \(\langle\)_human_, _object_, _interaction\(\rangle\)_ within the context of a given image, which requires localization of human and object regions and recognition of their interactive behavior, e.g., play-basketball. It enables the intelligent system to understand and interpret human behavior in real-world scenarios, thus playing an instrumental role in anomalous behavior detection [34, 40], motion tracking [39, 52] and visual scene understanding [27, 41].
HOI detection has typically been investigated in a fully-supervised learning paradigm [5, 10, 13, 54, 63], where aligned HOI annotations (i.e. human-object locations and interaction types) are provided during the training stage, as illustrated in Fig. 1(a). Despite the high performance due to such comprehensive annotations, it suffers from the labor-intensive process of labeling HOI instances. This limitation has led recent studies to transition towards a weakly-supervised setup [1, 22, 53, 64, 25], where only image-level HOI categories (without corresponding bounding boxes) are provided for model learning, as demonstrated in Fig. 1(b). While this setup significantly reduces the labeling costs and enhances robustness to label errors, it nonetheless necessitates image-level annotations on HOI datasets. These are still costly to obtain, often noisy and incomplete. Moreover, annotators may inadvertently introduce bias. Drawing inspiration from the great success of zero-shot learning [65, 66, 21, 66, 43, 26], we introduce the new challenging problem of zero-shot HOI detection, where _NO_ HOI annotations are required for model learning, as shown in Fig.1(c). Importantly, note how our setup diverges from previous zero-shot HOI detection frameworks [2, 14, 36, 38, 45], which predominantly focus on knowledge transfer from observed HOI concepts to unseen categories (c.f. Sec. 2 for more details). In contrast, this work takes it a step further by tackling the extreme scenario where none of the HOI categories have been annotated during training, although we assume the category space is known.
Large-scale pre-trained VLM, such as CLIP [43], have shown substantial promise in various domains of zero-shot learning, including image classification [7, 21, 28, 37, 46, 57, 60], object detection [66, 26], and instance segmentation [65]. However, extending these models to zero-shot HOI detection poses a unique challenge, primarily due to the high-level relational understanding required in this context. To the best of our knowledge, this paper makes the first effort to propose
Figure 1: **Comparison on training annotations for different setups: (a) Fully-supervised; (b) Weakly-supervised; (c) Zero-shot.**
and tackle this challenging task. A naive solution might be directly employing CLIP on union regions (the joint areas of human-object proposals detected by an external object detector) to produce corresponding HOI scores. However, this approach proves to be suboptimal in terms of both inference speed and performance, as illustrated in Sec. 4.4. Alternatively, we leverage the power of CLIP in two ways: i) In terms of model design, we build a multi-branch network that extracts CLIP-oriented features on multiple levels, thus incorporating its robust generalization capabilities into HOI representations. ii) For model learning, we use the CLIP scores generated from global images and union regions as supervisory signals to train our multi-branch network.
Prior works [36, 54, 56] have revealed that HOI relations manifest across various levels, including global images, relational union regions, and individual human-object instances. Inspired by this insight, our work employs a multi-branch neural network to leverage the capabilities of CLIP for multi-level HOI representation learning. As sketched in Fig. 2, the network architecture comprises a global branch, a union branch, and a human-object branch. The process begins with computing an HOI embedding by using the CLIP text encoder to encode HOI label prompts. This embedding is shared across all branches. Concurrently, we detect human-object proposals with an off-the-shelf object detector and generate an image feature map with the CLIP visual encoder. Each branch within our network adopts a similar design, where the HOI features of global image, union regions, and human-object pairs are extracted from the image feature map, and then serve to compute scores corresponding to the HOI embedding. Finally, we bring these branches together using a late fusion strategy: rather than focusing on merging semantic features, our approach concentrates on fusing the multi-level HOI scores, offering a comprehensive view of HOIs on different scales.
To generate supervision for model training, we leverage CLIP to produce meaningful HOI scores for both global images and local union regions. For global images, the entire image is re-scaled and fed into the CLIP model to derive a global HOI score that captures the entire context of the image. Given that CLIP is pre-trained to understand a wide range of image-text pairs, it can provide a holistic understanding of visual relationships. Similarly, for union regions, we extract regions of interest that include both the human and the object. These regions, capturing more focused and localized interactions, are separately fed into CLIP to generate local and region-specific HOI scores. The local HOI scores can be noisy due to the coexistence of multiple HOIs in the same region or the presence of HOI irrelevant distractions, but they complement the global supervision signal. We conduct ablative studies on the supervision strategy in Tab. 4, which demonstrate the application of global supervision on the global & human-object branch, and local supervision on the union branch, yields the most effective results.
By incorporating CLIP knowledge on multiple levels for both model design and learning, our approach substantially enhances the zero-shot detection capability in a variety of HOI scenarios. In summary, our main contributions are three-fold:
* We pioneer the challenging task of zero-shot HOI detection, a new learning setup where no HOI annotations are used during training. This is a significant leap forward in the field of HOI detection.
* We propose a multi-level knowledge distillation strategy from CLIP for this task, where we seamlessly incorporate CLIP into the model design for detecting HOIs on different scales, and capture both global and local contexts to provide rich supervision for model training.
* Extensive experiments are conducted to verify the effectiveness of our CLIP integration strategies. Impressively, our method achieves a strong performance even on par with some fully-supervised and weakly-supervised methods on HICO-DET benchmarks.
## 2 Related Works
**HOI detection.** _Fully-supervised HOI detection_ has been the most common setup due to its superior performance. Research in this area generally falls into two categories: two-stage and one-stage frameworks. Two-stage methods [10, 11, 15, 29, 49, 54, 63, 67, 68] adopt a _hypothesize-and-classify_ strategy, which first generates a set of human-object proposals with the off-the-shelf object detector, and then enumerates all possible HOI pairs to classify their interactions. One-stage methods predict human & object locations and their interaction types simultaneously in an end-to-end manner, which are currently dominated by transformer-based architectures [4, 61, 24, 8, 62].
To decrease the reliance on HOI annotations, _weakly-supervised HOI detection_ is proposed to learn HOIs with only image-level annotations. Due to the lack of location annotations, current works in this domain adopts the two-stage framework, and they focus on recognizing HOIs by developing advanced network structures to encode context [1,
Figure 2: **Overview of our multi-level knowledge distillation.**
25] and integrating external knowledge for representation learning [50, 53]. In this work, we propose a novel zero-shot setup without the need for any manual annotations in HOI detection.
**Zero-shot HOI detection.** As most HOI classes are distributed in a long-tail manner [14, 45, 53] due to the inherent compositionality of HOIs [38], previous works on zero-shot HOI detection [3, 14, 18, 19, 20, 32, 36, 42, 45, 56] aim to distill knowledge from observed HOI concepts to unseen classes. They can be categorized into three scenarios: _unseen object_, _unseen action_, and _unseen combination_. There are mainly two streams of research for solving this problem. One stream [3, 14, 18, 20, 45] focuses on factorizing the human and object features by performing disentangled reasoning on verbs and objects, which allows the composition of novel HOI triplets for training and inference. Another stream [32, 36, 56] transfers knowledge from knowledge graphs or pre-trained VLMs to recognize unseen HOI concepts. Despite the substantial success of these approaches in knowledge transfer, they still rely heavily on the base knowledge provided by the seen HOI categories. In contrast, our setup does not require any HOI annotations for learning.
## 3 Method
### 3.1 Problem Setup
Formally, zero-shot HOI detection aims to learn an HOI detector that takes an image \(I\) as input and generates a collection of tuples \(\mathcal{O}=\{(\mathbf{b}_{h},\mathbf{b}_{o},r_{h,o},s_{h,o}^{r})\}\). Each tuple corresponds to a HOI instance, where \(\mathbf{b}_{h},\mathbf{b}_{o}\in\mathbb{R}^{4}\) indicate human and object bounding boxes, \(r_{h,o}\in\{1,...,N\}\) represents the interaction type between \(\mathbf{b}_{h}\) and \(\mathbf{b}_{o}\), and \(s_{h,o}^{r}\in\mathbb{R}\) is the confidence score of the detected interaction.
### 3.2 Method Overview
To address the challenging zero-shot HOI detection task, we leverage CLIP for multi-level knowledge integration. To this end, we exploit the visual and textual encoders of CLIP to construct a multi-branch network for HOI representation learning, and use CLIP to generate global and local supervision for model training.
**Model Design.** Due to the lack of HOI location annotations, we adopt a typical _two-stage_ formulation [1, 25, 64] for HOI detection: in the first stage, we generate a group of human proposals \(\{(\mathbf{b}_{h},s_{h})\}\) and object proposals \(\{(\mathbf{b}_{o},c_{o},s_{o})\}\) with an off-the-shelf object detector [44], where \(s_{h},s_{o}\in\mathbb{R}\) are detection scores and \(c_{o}\in\{1,...,C\}\) is the object class. In the second stage, we pair up all human and object proposals and predict the interaction class for each combination.
In order to infuse the generalization capability of CLIP into the HOI representation, we design a multi-branch deep network by incorporating CLIP's visual and textual encoders, as sketched in Fig. 3. Specifically, the global branch performs image-level HOI recognition, utilizing the HOI embedding produced by CLIP textual encoder as a classifier. In parallel, for each detected human-object pair \((\mathbf{b}_{h},\mathbf{b}_{o})\), a union branch extracts the contextual cues in their shared region of interest, providing a comprehensive view of the surrounding environment and potential interactions. On top of that, a human-object branch focuses on fine-grained HOI features and encodes the specific relational attributes of the interactive pairs, which are used to predict their interaction types. All the branches are integrated with a late fusion strategy, where the HOI scores from different levels are combined to obtain the final predictions.
**Model Learning.** To train our model, we first employ CLIP on the global image and local union regions to compute the corresponding HOI scores as supervision. Then we apply the global image supervision on the global branch and human-object branch, and the local union supervision on the union branch. Our training procedure can be viewed as a multi-level knowledge distillation approach from the pre-trained CLIP model. The primary objective of this strategy is to ensure that the HOI scores derived from distinct branches align with the CLIP scores.
### 3.3 Model Design
#### 3.3.1 CLIP Backbone
CLIP builds a powerful vision-language model by pretraining on large-scale image-text pairs. It consists of a visual encoder \(\mathcal{F}_{V}\) (e.g., a ResNet [17] or Vision Transformer [9]), a self-attention module \(\mathcal{F}_{ATT}\) and a textual encoder \(\mathcal{F}_{T}\) (e.g., a Transformer [51]), to map the visual and textual inputs to a shared latent space.
Specifically, for an input image \(I\), the visual encoder \(\mathcal{F}_{V}\) produces a feature map \(\mathbf{\Gamma}\in\mathbb{R}^{H\cdot W\cdot D}\), where \(H,W,D\) denote the height, width, and depth of \(\mathbf{\Gamma}\), respectively. Then a self-attention module \(\mathcal{F}_{ATT}\) is adopted to encode \(\mathbf{\Gamma}\) to a feature vector \(v\in\mathbb{R}^{D}\): it takes a linear projection of the average pooling on the spatial dimensions of \(\mathbf{\Gamma}\) as query \(Q\in\mathbb{R}^{D}\), and a linear projection of the reshaped feature columns as key and value \(K,V\in\mathbb{R}^{(HW)\cdot D}\) :
\[Q=\mathcal{F}_{Q}(AvgPool(\mathbf{\Gamma}));\quad K,V=\mathcal{F}_{K}(\mathbf{\Gamma}),\mathcal{F}_{V}(\mathbf{\Gamma})\] \[v=\mathcal{F}_{MHA}(Q,K,V) \tag{1}\]
where \(\mathcal{F}_{Q,K,V}\) are linear projection layers and \(\mathcal{F}_{MHA}\) is a standard multi-head attention module [51], all incorporated in \(\mathcal{F}_{ATT}\).
To leverage CLIP for HOI detection, we utilize CLIP textual encoder \(\mathcal{F}_{T}\) to generate HOI embedding \(\mathcal{W}_{T}\in\mathbb{R}^{N\cdot D}\). In a prompting strategy akin to CLIP, we adopt a common template 'a person is \(\{verb\}\)-ing \(\{object\}\)' to convert HOI labels into text prompts. For instance, 'play basketball' would be converted to 'a person is playing basketball'. These sentences are then processed by \(\mathcal{F}_{T}\) to create the HOI embedding \(\mathcal{W}_{T}\), which is used to classify different levels of visual features into corresponding HOI scores.
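As an illustration, a minimal sketch of building the HOI embedding from label prompts is given below; it assumes the open_clip package with an RN50 backbone pre-trained on YFCC-15M (matching Sec. 4.2), and the two prompt strings are illustrative placeholders for the full set of \(N=600\) classes.

```python
import torch
import open_clip

# pre-trained CLIP with a ResNet-50 visual encoder (an assumption on the exact tag)
model, _, preprocess = open_clip.create_model_and_transforms('RN50', pretrained='yfcc15m')
tokenizer = open_clip.get_tokenizer('RN50')
model.eval()

# HOI labels converted to prompts with the template described above
hoi_prompts = ['a person is playing basketball',
               'a person is riding a bicycle']

with torch.no_grad():
    tokens = tokenizer(hoi_prompts)
    w_t = model.encode_text(tokens)              # (N, D) HOI embedding W_T
    w_t = w_t / w_t.norm(dim=-1, keepdim=True)   # L2-normalised rows
```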
**Modified Visual Encoder.** By default, a CLIP model with a ResNet visual encoder downscales the feature map by a factor of 32. Consequently, the spatial dimension of the feature map is insufficient for detailed region-specific feature extraction. To adapt this visual encoder for HOI detection with a larger feature map, we tweak the original ResNet structure. This is done by discarding the last average pooling module and adding an upsampling layer, thereby reducing the feature map size to only 8 times smaller than the image resolution. To maintain compatibility with the CLIP's self-attention module for feature aggregation, we implement an ROI-Align module [16] to resize the cropped feature map to a \(7\times 7\) grid for global images, union regions, and human-object proposals.
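A minimal sketch of this region feature extraction is given below, assuming torchvision's roi_align and a stride-8 feature map; the exact upsampling operator and the function names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision.ops import roi_align

def upsample_to_stride8(feature_map_stride32):
    # The CLIP ResNet downscales by 32; bilinear upsampling by 4 brings the map
    # to 1/8 of the input resolution (the choice of operator is an assumption).
    return F.interpolate(feature_map_stride32, scale_factor=4.0,
                         mode='bilinear', align_corners=False)

def extract_region_grids(feature_map, boxes, out_size=7, stride=8):
    """feature_map: (1, D, H/8, W/8); boxes: (M, 4) as (x1, y1, x2, y2) in pixels."""
    batch_idx = torch.zeros(boxes.size(0), 1, device=boxes.device)
    rois = torch.cat([batch_idx, boxes], dim=1)      # (M, 5): (batch, x1, y1, x2, y2)
    # 7x7 grids, compatible with CLIP's self-attention pooling module
    return roi_align(feature_map, rois, output_size=out_size,
                     spatial_scale=1.0 / stride, aligned=True)
```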
#### 3.3.2 Global Branch
The global branch adapts the visual encoder and HOI embedding to perform an image-wise HOI recognition task, thereby capitalizing on the knowledge embedded within the CLIP model. Firstly, we adopt an ROI-Align on the image feature map followed by the self-attention module to compute a global feature vector \(v_{g}\in\mathbb{R}^{D}\). Then the global HOI scores \(s_{g}\in\mathbb{R}^{N}\) are predicted by conducting the inner product between the global vector \(v_{g}\) and the HOI embedding \(\mathcal{W}_{T}\): \(s_{g}=\text{Softmax}(\mathcal{W}_{T}\times v_{g})\), where \(\times\) is matrix multiplication.
#### 3.3.3 Union Branch
The union region encapsulates the contextual relationship between humans and objects, which is crucial in comprehending their interaction. To exploit these context cues, we compute the union region \(\mathbf{b}_{u}\in\mathbb{R}^{4}\) for each human proposal \(\mathbf{b}_{h}\) and object proposal \(\mathbf{b}_{o}\), and extract the corresponding appearance feature \(v_{u}\in\mathbb{R}^{D}\) via RoI-align over the feature map \(\mathbf{\Gamma}\) and self-attention aggregation. Similarly, the union scores \(s_{u}\in\mathbb{R}^{N}\) are computed with the HOI embedding: \(s_{u}=\text{Softmax}(\mathcal{W}_{T}\times v_{u})\).
#### 3.3.4 Human-object Branch
The human-object branch performs a fine-level classification for interaction pairs. For each human proposal \(\mathbf{b}_{h}\) and object proposal \(\mathbf{b}_{o}\), we crop the feature maps from \(\mathbf{\Gamma}\) using RoI-Align, followed by a self-attention operation to generate their appearance features \(v_{h},v_{o}\in\mathbb{R}^{D}\). We also compute a spatial feature \(v_{sp}\) by encoding the relative positions of their bounding boxes \((\mathbf{b}_{h},\mathbf{b}_{o})\)1. The holistic HOI representation \(v_{ho}\in\mathbb{R}^{D}\) is an embedding of the human and object appearance features and their spatial feature: \(v_{ho}=\mathcal{F}_{ho}([v_{h};v_{o};v_{sp}])\), where \([;]\) is the concatenation operation and \(\mathcal{F}_{ho}\) is a multi-layer perceptron (MLP). Finally, we use the shared HOI embedding to predict the pairwise interaction scores \(s_{ho}\in\mathbb{R}^{N}\) for each human-object combination: \(s_{ho}=\mathcal{W}_{T}\times v_{ho}\)
Footnote 1: For details c.f. Appendix
### 3.4 Model Learning with CLIP Supervision
In this section, we first utilize CLIP to generate two types of HOI supervision that are based on global images and local union regions, and then design a multi-task loss on various levels to train our multi-branch network. The overall
Figure 3: **A detailed introduction of our method:** We design a multi-branch neural network that incorporates CLIP components to extract multi-level information for HOI detection, which is supervised by the CLIP scores derived from global images and local union regions.
loss function \(\mathcal{L}\) consists of three terms: i) an image-wise recognition loss \(\mathcal{L}_{g}\) to detect global HOIs; ii) a union loss \(\mathcal{L}_{u}\) for identifying contextual regional HOIs; and iii) a pairwise interaction classification loss \(\mathcal{L}_{ho}\) to guide the learning of instance-specific HOIs. Formally, the overall loss is written as: \(\mathcal{L}=\mathcal{L}_{g}+\mathcal{L}_{u}+\mathcal{L}_{ho}\).
**CLIP Supervision Generation.** To obtain supervision for model learning, we directly employ CLIP on the image and union regions to generate the corresponding scores on the training set. As shown in Fig. 3(b), we crop the image to a square region at its center, with the side length equal to its shortest edge. This region is subsequently resized to \(224\times 224\) pixels and fed into the pre-trained CLIP model to generate the global supervision \(d_{g}\in\mathbb{R}^{N}\). Similarly, for each union box \(\mathbf{b}_{u}\), we crop the corresponding region in the raw image and resize it to \(224\times 224\) pixels, and then apply CLIP to generate the local union supervision \(d_{u}\in\mathbb{R}^{N}\).
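A minimal sketch of this supervision generation is given below; `model`, `preprocess`, and the HOI embedding `w_t` are assumed to come from a pre-trained CLIP as in the earlier sketch, and the helper name is hypothetical.

```python
import torch

def clip_hoi_scores(image, box, model, preprocess, w_t):
    """CLIP HOI scores for one region of a PIL image.

    box=None gives the global supervision d_g (central square crop);
    a union box (x1, y1, x2, y2) gives the local supervision d_u.
    """
    if box is None:
        w, h = image.size
        side = min(w, h)
        left, top = (w - side) // 2, (h - side) // 2
        region = image.crop((left, top, left + side, top + side))
    else:
        region = image.crop(tuple(box))
    pixels = preprocess(region.resize((224, 224))).unsqueeze(0)
    with torch.no_grad():
        v = model.encode_image(pixels)
        v = v / v.norm(dim=-1, keepdim=True)
        return (v @ w_t.T).softmax(dim=-1).squeeze(0)   # scores over the N HOI classes
```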
**Image-wise loss \(\mathcal{L}_{g}\):** Given the global image HOI scores \(s_{g}\) and CLIP supervision \(d_{g}\), \(\mathcal{L}_{g}\) is a standard Kullback-Leibler (KL) divergence defined as: \(\mathcal{L}_{g}=\mathcal{D}_{KL}(s_{g}||d_{g})\). Notably, \(d_{g}\) and \(s_{g}\) are independent predictions from the original CLIP and the global branch, respectively. The aim of \(\mathcal{L}_{g}\) is to align the up-scaled image feature map with the original CLIP representation, which plays a crucial role in extracting regional features for the union and human-object branches.
**Union loss \(\mathcal{L}_{u}\):** Similarly, we take a KL divergence on the union HOI scores \(s_{u}\) and the local union supervision \(d_{u}\) to formulate the union loss: \(\mathcal{L}_{u}=\frac{1}{M}\sum_{m=1}^{M}\mathcal{D}_{KL}(s_{u}^{m}||d_{u}^{m})\). Here \(M\) is the total number of human-object combinations for a given image.
**Human-object pairwise loss \(\mathcal{L}_{ho}\):** Inspired by [53, 55], we adopt a Multiple Instance Learning (MIL) strategy to train the human-object branch. In detail, we first string together all the interaction scores to form a bag, denoted as \(S_{ho}=[s_{ho}^{1};...;s_{ho}^{M}]\in\mathbb{R}^{M\cdot N}\), where \(s_{ho}^{m}\) stands for the score of the \(m\)-th pair. Then we take the maximum over all pairs to obtain the image-wise interaction scores: \(\hat{s}_{ho}=\underset{m}{max}\,S_{ho}\). This step essentially distills the most representative HOIs from the pairwise predictions, providing a consolidated representation of image-wise interaction scores that can be guided by the global CLIP supervision \(d_{g}\). Formally, the human-object loss \(\mathcal{L}_{ho}\) is a KL divergence defined on \(\hat{s}_{ho}\) and \(d_{g}\): \(\mathcal{L}_{ho}=\mathcal{D}_{KL}(\hat{s}_{ho},d_{g})\).
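A sketch of the three loss terms is given below; tensor shapes are assumptions for illustration, and the softmax applied after the max-pooling over pairs is an assumption, since the normalisation of \(\hat{s}_{ho}\) is left implicit above.

```python
import torch
import torch.nn.functional as F

def kl_div(pred, target, eps=1e-8):
    # D_KL(pred || target) over the HOI classes; both are probability vectors
    pred, target = pred.clamp_min(eps), target.clamp_min(eps)
    return (pred * (pred.log() - target.log())).sum(dim=-1).mean()

def training_loss(s_g, d_g, s_u, d_u, s_ho):
    """s_g, d_g: (N,); s_u, d_u: (M, N); s_ho: (M, N) unnormalised pairwise scores."""
    loss_g = kl_div(s_g, d_g)                       # global branch vs global CLIP score
    loss_u = kl_div(s_u, d_u)                       # union branch, averaged over M pairs
    s_hat = s_ho.max(dim=0).values                  # MIL: max over the M pairs
    loss_ho = kl_div(F.softmax(s_hat, dim=-1), d_g) # guided by the global supervision
    return loss_g + loss_u + loss_ho
```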
### 3.5 Inference
During the inference stage, we combine multiple scores to obtain the final interaction score \(s_{h,o}^{r}\) for each human-object pair \((\mathbf{b}_{h},\mathbf{b}_{o})\). It includes the global HOI scores \(s_{g}\), the union score \(s_{u}\), the normalized pairwise interaction scores \(p_{ho}\) (rather than \(s_{ho}\)), and the object detection scores \(\langle s_{h},s_{o}\rangle\) as follows:
\[s_{h,o}^{r}=s_{g}\cdot s_{u}\cdot p_{ho}\cdot(s_{h}\cdot s_{o})^{\gamma} \tag{2}\]
where \(\gamma\) is a hyper-parameter to balance the HOI scores and the object detection scores.
It is noteworthy that we avoid using the original pairwise interaction score \(s_{ho}\) as it fails to measure the contribution of each pair when multiple pairs in an image share the same interaction. Instead, we institute a competitive environment among the pairs by applying a Softmax operation on \(S_{ho}\): \(\bar{S}_{ho}=\underset{m}{\text{Softmax}}(S_{ho})\). Following this, we derive the normalized pairwise interaction scores \(p_{ho}=\sigma(\hat{s}_{ho})\cdot\bar{s}_{ho}\), where \(\bar{s}_{ho}\) is a row from \(\bar{S}_{ho}\) and \(\sigma\) is Sigmoid function.
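Putting these steps together, a minimal sketch of the score fusion of Eq. 2 is shown below; tensor shapes and the function name are assumptions for illustration.

```python
import torch

def fuse_scores(s_g, s_u, s_ho, s_h, s_o, gamma=2.8):
    """Final interaction scores of Eq. 2 for M human-object pairs and N HOI classes.

    s_g: (N,) global scores; s_u, s_ho: (M, N); s_h, s_o: (M,) detection scores.
    """
    s_hat = s_ho.max(dim=0).values              # most confident pair per HOI class
    s_bar = torch.softmax(s_ho, dim=0)          # competition among the pairs, per class
    p_ho = torch.sigmoid(s_hat) * s_bar         # normalised pairwise interaction scores
    det = (s_h * s_o).pow(gamma).unsqueeze(1)   # (M, 1) object detection prior
    return s_g.unsqueeze(0) * s_u * p_ho * det  # (M, N)
```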
## 4 Experiments
### 4.1 Datasets and Metrics
We use the public HOI detection dataset HICO-DET to benchmark our model. The dataset contains 37,633 training images and 9,546 test images. It includes \(C=80\) common objects (the same as MSCOCO [33]) and 117 unique action categories, together forming \(N=600\) HOI categories.
We adopt the mean average precision (mAP) metric [6] for evaluating HOI detection results. A human-object pair is deemed positive when the predicted human and object boxes have an IoU of at least 0.5 with their ground truth boxes, and the HOI class is correctly classified.
### 4.2 Implementation Details
We use an off-the-shelf Faster R-CNN [44] pre-trained on MSCOCO to generate up to 100 object candidates for each image. It's crucial to note that we only keep the detection results and do not re-use the feature maps. We rather employ a CLIP with a ResNet-50 visual encoder, which is pre-trained on the YFCC-15M dataset [48], with an image resolution of \(224\times 224\). The CLIP model we used was implemented by OpenCLIP 2, which achieved an mAP of 35.5 on zero-shot image-wise HOI recognition on the test set, suggesting that it is capable of providing a comprehensive holistic understanding of HOIs.
Footnote 2: [https://github.com/mlfoundations/open_clip](https://github.com/mlfoundations/open_clip)
During training, we use a larger image resolution with a minimum edge length of 384, while maintaining the original aspect ratio of the input images in our multi-branch network. We freeze the weights of the pre-trained visual encoder and optimize the remaining modules by AdamW, with a learning rate of 1e-4 and batch size of 16. The model is trained for 30K iterations on two NVIDIA V100 GPUs. After 15K iterations, the learning rate is decayed by a factor of 10. Following previous works [63, 30], we set feature dimension \(D\) as 1024 and the detection score weight \(\gamma\) as 2.8.
### Quantitative Results
As shown in Tab. 1, with our multi-level knowledge integration strategy from CLIP, our approach achieves 17.12 mAP with ResNet-50, which is on par with, or even surpasses, some fully-supervised (FS) [11, 12, 30, 54] and weakly-supervised (WS) [64, 25, 1] methods. For the FS and WS methods, we observe that the mAP on Non-Rare classes (i.e., those with more than 10 HOI instance annotations in the training set) is always higher than on Rare classes. This skew is to be expected given that HICO-DET is inherently an imbalanced dataset [45, 14]. Models tend to learn frequently occurring patterns for which they have training supervision. Despite certain HOIs being simpler to learn, their performance may lag due to the relative lack of supervision, compared to the more challenging yet annotated HOIs.
Remarkably, we obtain a higher mAP on Rare classes compared to Non-Rare classes in our results. This consequence stems from the integration of CLIP for HOI representation learning and model supervision. Firstly, CLIP is pre-trained on large-scale image-text pairs and has potentially encountered every imaginable HOI scenario during its pre-training phase. We build our model on top of CLIP components, which allows us to exploit its strong generalization capability for learning a better HOI representation. Secondly, we compare our results with PGBL [53], which also exploits CLIP for HOI representation learning and achieves the best performance in a weakly-supervised setting (i.e., image-level HOI annotations are available). We experience a small drop on Rare classes (from \(22.41\to 20.26\)), but a significant drop on Non-Rare classes (from \(23.03\to 16.18\)). This disparity suggests that the learning of Non-Rare HOIs is more reliant on strong annotations, whereas Rare HOIs can be effectively learned by distilling the 'dark knowledge' from CLIP scores. Consequently, we sidestep issues associated with the long-tailed distribution.
### Ablation Studies
In this section, we mainly assess the effectiveness of each component with detailed ablation studies on the HICO-DET dataset. We first introduce our baseline, based on which we will answer some interesting questions regarding the model design and learning.
Baseline: Our baseline model is constructed on top of the human-object branch, where the human-object representation \(v_{ho}\) is used to predict the normalized interaction scores \(s_{ho}\), which are supervised by \(\mathcal{L}_{g}\). During the inference process, the final interaction scores of Eq. 2 are recomputed as \(s^{r}_{h,o}=p_{ho}\cdot(s_{h}\cdot s_{o})^{\gamma}\).
Why not a training-free (TF) approach? In a TF approach, we directly apply CLIP scores on union regions \(d_{u}\) along with object detection scores \(\langle s_{h},s_{o}\rangle\) for inference. This reformulates Eq. 2 as: \(s^{r}_{h,o}=d_{u}\cdot(s_{h}\cdot s_{o})^{\gamma}\).
While this straightforward approach only relies on a pre-trained CLIP and avoids the need for model training, it has two notable drawbacks compared to our method: (i) It is
\begin{table}
\begin{tabular}{l|l l l|c c c} \hline \hline S & Methods & Visual Encoder & Detector & \multicolumn{3}{c}{HICO-DET (\%)} \\ & & & & Full & Rare & Non-Rare \\ \hline \multirow{8}{*}{FS} & InteractNet [12] & RN50-FPN (COCO) & FRCNN (COCO) & 9.94 & 7.16 & 10.77 \\ & iCAN [11] & RN50 (COCO) & FRCNN (COCO) & 14.84 & 10.45 & 16.15 \\ & TIN [30] & RN50-FPN (COCO) & FRCNN (COCO) & 17.22 & 13.51 & 18.32 \\ & PMFNet [54] & RN50-FPN (COCO) & FRCNN (COCO) & 17.46 & 15.56 & 18.00 \\ & HOTR [23] & RN50+Transformer (COCO) & DETR (HICO-DET) & 25.10 & 17.34 & 27.42 \\ & QPIC [47] & RN101+Transformer (COCO) & DETR (COCO) & 29.90 & 23.92 & 31.69 \\ & GEN-VLKT [32] & RN50+Transformer (HICO-DET) & DETR (HICO-DET) & 33.75 & 29.25 & 35.10 \\ & HOICLIP [38] & RN50+Transformer (HICO-DET) & DETR (HICO-DET) & 34.69 & 31.12 & 35.74 \\ \hline \multirow{4}{*}{WS} & Explanation-HOI\(\dagger\)[1] & ResNeXt101 (COCO) & FRCNN (COCO) & 10.63 & 8.71 & 11.20 \\ & MX-HOI [25] & RN101 (COCO) & FRCNN (COCO) & 16.14 & 12.06 & 17.50 \\ & PPR-FCN\(\dagger\)[64] & RN50 (YFCC-15M) & FRCNN (COCO) & 17.55 & 15.69 & 18.41 \\ & PGBL [53] & RN50 (YFCC-15M) & FRCNN (COCO) & 22.89 & 22.41 & 23.03 \\ \hline \multirow{2}{*}{ZS} & _baseline_ & RN50 (YFCC-15M) & FRCNN (COCO) & 10.48 & 9.45 & 10.78 \\ & _ours_ & RN50 (YFCC-15M) & FRCNN (COCO) & 17.12 & 20.26 & 16.18 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Results comparison of different methods on HICO-DET test set** (a full table of results comparison c.f. Appendix). \(\dagger\)means re-implementation in [53]. Here FS, WS, and ZS indicate fully-supervised, weakly-supervised, and zero-shot HOI detection methods, respectively. The notation (D) means the visual encoder or the detector is pre-trained on dataset D, D\(\in\) {COCO, HICO-DET,YFCC-15M}.
\begin{table}
\begin{tabular}{l|c|c c c} \hline \hline \multirow{2}{*}{Exp} & \multirow{2}{*}{Speed (fps)} & \multicolumn{3}{c}{mAP (\%)} \\ & & Full & Rare & Non-Rare \\ \hline _base_ & **56.19** & 10.48 & 9.45 & 10.78 \\ _TF_ & 6.52 & 11.19 & 13.98 & 10.37 \\ _TF_* & 6.47 & 12.24 & 15.75 & 11.19 \\ _ours_ & 35.64 & **17.12** & **20.26** & **16.18** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Comparison of inference speed and performance** between baseline, training-free (TF) approach, and our method. TF* means \(d_{g}\) is added to predict \(s^{r}_{ho}\) on top of TF.
time-consuming for inference. For each image, the approach requires forwarding the CLIP model for the image and all union regions, resulting in \(M+1\) forward passes. As indicated in Tab. 2, given the detected bounding boxes, the inference speed for the TF approach is only 6.52 frames per second (fps) on a single Nvidia V100 GPU with a batch size of 1, whereas our method achieves an fps of 35.64. (ii) The performance is low, even enhanced by \(d_{g}\) (i.e., TF*). This issue arises because both \(d_{g}\) and \(d_{u}\) correspond to large regions, and as such, they fail to specify the interacted human and object pair. Consequently, the CLIP scores, when directly used for inference, tend to be noisy. In contrast, our method opts to distill knowledge from these scores at different levels, leading to a significant performance increase, from an mAP of 12.24 to 17.12.
Does the multi-level CLIP knowledge distillation strategy work? As demonstrated in Tab. 3, the answer is definitively yes. We added the union branch and the global branch on top of our baseline model (Exp 0). The results indicate that the union branch enhances the mAP from 10.48 to 14.49 (Exp 0 vs. 1), while the global branch raises the mAP from 10.48 to 15.84 (Exp 0 vs. 2). When we combine both branches, the mAP sees a more significant improvement, from 10.48 to 17.12 (Exp 0 vs. 4). These results underscore the effectiveness of the multi-level CLIP knowledge distillation strategy.
How to incorporate contextual cues from the union region? In Tab. 3, we also explore two different designs for integrating the contextual union branch, including an early fusion strategy and a late fusion strategy. For the early fusion strategy, we concatenate the union feature \(v_{u}\) with human-object appearance features \(\langle v_{h},v_{o}\rangle\) and their spatial encoding \(v_{sp}\) to compose \(v_{ho}\). The experimental results (Exp 3 vs. 4) indicate that the early fusion strategy performs considerably worse than a late fusion strategy (14.64 vs 17.12), and even underperforms Exp 2 where the union branch is not included.
We hypothesize that this is because the union feature, encompassing a large region, may include other HOIs or background distractions, making it noisy. These irrelevant features may not offer valuable contextual information for HOI representation learning, particularly in a zero-shot setup where the training signals are also somewhat noisy. Conversely, with the late fusion strategy, the union branch aims to learn context-aware union HOI scores. Although these union scores cannot precisely describe the specific human-object pair, they provide some contextual HOI cues that are compatible with the HOI predictions from the other branches.
What kind of supervision works best for different branches?In Tab. 4, we compare different supervision strategies for each branch. By default, we supervise the image branch with global supervision \(d_{g}\). Then, we apply \(d_{g}\) on the union branch in the same manner as \(\mathcal{L}_{ho}\) in Exp 0. This results in an mAP drop from 17.12 to 15.05, indicating that the union features, being noisy and non-discriminative, are not suitable for a MIL strategy. Besides, we apply local supervision \(d_{u}\) on the human-object branch in Exp 1 and observe the drop in mAP to 14.22. This is because of the noisiness of \(d_{u}\), as it does not always correspond accurately to a specific human-object pair.
Experimental results reveal that a mismatch of the supervision signals can result in a performance drop, and combining both types of supervision does not necessarily lead to better results (as seen in Exp 3 and 4).
Can our method generalize to unseen categories? To answer this question, we randomly select \(N^{\prime}\) out of \(N=600\) HOI categories on HICO-DET during training. This implies the class dimensions of predicted scores \(\langle s_{g},s_{u},s_{ho}\rangle\) and CLIP supervisions \(d_{g},d_{u}\) are set to \(N^{\prime}\). For inference, we evaluate across all 600 categories. As shown in Tab. 5, even though the training is limited to just 100 categories, we observed only a slight \(1.87\) mAP drop compared to the
\begin{table}
\begin{tabular}{c|c c|c c c} \hline \hline \multirow{2}{*}{Exp} & \multicolumn{2}{c|}{Branch} & \multicolumn{3}{c}{mAP (\%)} \\ & union & h-o & Full & Rare & Non-Rare \\ \hline \(0\) & g & g & 15.05 & 16.76 & 14.55 \\ \(1\) & u & u & 14.22 & 15.00 & 13.99 \\ \(2\) & u & g & **17.12** & **20.26** & **16.18** \\ \(3\) & u & g+u & 15.83 & 18.20 & 15.13 \\ \(4\) & g+u & g & 16.96 & 19.54 & 16.18 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation study of different supervision strategies on HICO-DET dataset. g** means supervision from \(d_{g}\) and **u** means supervision from union region CLIP scores \(d_{u}\). **g+u** indicates using both supervisions. In this table, the global branch is added for all the experiments and supervised with \(d_{g}\).
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{Exp} & \multicolumn{3}{c|}{Branch} & \multicolumn{3}{c}{mAP (\%)} \\ & h-o & union & global & Full & Rare & Non-Rare \\ \hline \(0\) & ✓ & - & - & 10.48 & 9.45 & 10.78 \\ \(1\) & ✓ & ✓ (late) & - & 14.49 & 16.33 & 13.95 \\ \(2\) & ✓ & - & ✓ & 15.84 & 17.91 & 15.21 \\ \(3\) & ✓ & ✓ (early) & ✓ & 14.64 & 16.09 & 14.21 \\ \(4\) & ✓ & ✓ (late) & ✓ & **17.12** & **20.26** & **16.18** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Ablation study of multi-level incorporation on HICO-DET dataset. The baseline is the human-object (h-o) branch, and we add other branches on top of it. We denote ’early’ as the early fusion of union features and ’late’ as the late fusion of union scores.**
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{\(N^{\prime}\)} & \multicolumn{3}{c}{mAP (\%)} \\ & Full & Rare & Non-Rare \\ \hline
100 & 15.25 & 18.97 & 14.15 \\
300 & 16.45 & 19.48 & 15.55 \\
600 & **17.12** & **20.26** & **16.18** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Generalization to Unseen Categories. We randomly select \(N^{\prime}\) HOI categories for model learning.**
final model. This indicates that our approach effectively captures common knowledge that can be shared across all HOI categories.
### Qualitative Results
Figure 4 offers a qualitative evaluation of our method. Due to the way we incorporate multiple scores into our final prediction \(s_{ho}^{r}\) in our methodology, it is not practical to directly compare the HOI scores with the baseline. Instead, our visualization and comparison with the baseline model are based on the relative ranking of the detection scores rather than their absolute values. Concretely, for each HOI prediction, we present its ranking amongst all predictions belonging to the same category across the entire test set. The ranking is exhibited in the form of a percentile (top \(p\%\)), where a lower percentile value (smaller \(p\)) signifies the model's strong confidence in the positive prediction (represented by green colored numbers).
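For instance, the percentile shown for a detection could be computed as in the following small sketch (ours, for illustration only):

```python
import numpy as np

def percentile_rank(score: float, same_category_scores: np.ndarray) -> float:
    """Top-p% rank of `score` among all test-set predictions of the same
    HOI category; a smaller p means the model is more confident."""
    n_better = int(np.sum(same_category_scores > score))
    return 100.0 * n_better / max(len(same_category_scores), 1)
```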
As depicted in Fig.4(a), both our method and the baseline successfully identify some Rare HOI classes. However, when the objects are quite small, as shown in Fig.4(b), our model tends to offer more confident predictions; we attribute this advantage to its ability to factor in contextual cues. For example, in the top image, our model infers that a baby is likely licking a spoon due to the context of sitting in a baby chair with residual sauce visible. In a similar vein, Fig.4(c) showcases situations where objects are heavily occluded. Despite this challenge, our model manages to discern some unapparent relationships by considering the overall environment. As an illustration, in the image at the bottom, a man in an office, engaged in computer work on the table, is more likely identified by our model as 'sitting on a chair', which would be a challenging task for the baseline method.
## 5 Limitations and Future Works
Although our method achieves encouraging results, zero-shot HOI detection is still far from satisfactory. As an example of its limitations, the top image in Fig.4(d) shows a typical failure case where our model incorrectly associates a person with a computer that is considerably distant. This error can be attributed to the absence of adequate supervision for pairwise associations. Furthermore, the bottom image exhibits a scenario where our method struggles to recognize relations when the object is completely obscured.
A potential area for further exploration based on this work involves the detection of ambiguous HOI associations. Previous research has investigated this issue within fully-supervised or weakly-supervised settings [31, 35, 53]. However, transferring these learnings to a zero-shot setup remains a largely unexplored area. Besides, this study employs a classic CLIP structure for zero-shot HOI detection. Nonetheless, it is intriguing to explore various adaptations of CLIP [58, 59, 60] for enhancing performance in this task.
## Acknowledgement
We acknowledge funding from European Research Council under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 101021347) and Flemish Government under the Onderzoeksprogramma Artificiele Intelligentie (AI) Vlaanderen programme.
Figure 4: **Visualization of the HOI detection results.** We compare our method with the baseline model based on the relative ranking of the detection scores, which is presented in percentile format. The percentiles highlighted in green signify the model’s confident HOI predictions, whereas those in red indicate negative HOI predictions that the model treats as background. |
2309.12515 | A Diamond Machine For Strong Evaluation | Abstract machines for strong evaluation of the $\lambda$-calculus enter into
arguments and have a set of transitions for backtracking out of an evaluated
argument. We study a new abstract machine which avoids backtracking by
splitting the run of the machine into smaller jobs, one per argument, and that
jumps directly to the next job once one is finished.
Usually, machines are also deterministic and implement deterministic
strategies. Here we weaken this aspect and consider a light form of
non-determinism, namely the diamond property, for both the machine and the
strategy. For the machine, this introduces a modular management of jobs,
parametric in a scheduling policy. We then show how to obtain various
strategies, among which leftmost-outermost evaluation. | Beniamino Accattoli, Pablo Barenbaum | 2023-09-21T22:43:10Z | http://arxiv.org/abs/2309.12515v2 | # A Diamond Machine for Strong Evaluation
###### Abstract
Abstract machines for strong evaluation of the \(\lambda\)-calculus enter into arguments and have a set of transitions for backtracking out of an evaluated argument. We study a new abstract machine which avoids backtracking by splitting the run of the machine into smaller _jobs_, one per argument, and that jumps directly to the next job once one is finished.
Usually, machines are also deterministic and implement deterministic strategies. Here we weaken this aspect and consider a light form of non-determinism, namely the _diamond property_, for both the machine and the strategy. For the machine, this introduces a modular management of jobs, parametric in a scheduling policy. We then show how to obtain various strategies, among which leftmost-outermost evaluation, by instantiating in different ways the scheduling policy.
Keywords:Lambda calculus, abstract machines, strong evaluation.
## 1 Introduction
An abstract machine for the \(\lambda\)-calculus, or for one of its extensions, is an implementation schema for a fixed evaluation strategy \(\rightarrow_{\mathtt{str}}\) with sufficiently atomic operations (accounting for the _machine_ part) and without too many implementation details (accounting for the _abstract_ part). An abstract machine for \(\rightarrow_{\mathtt{str}}\), ideally, refines the reduction of \(\rightarrow_{\mathtt{str}}\)-redexes realizing the following three tasks:
1. _Search_: searching for \(\rightarrow_{\mathtt{str}}\)-redexes;
2. _Names_: avoiding variable captures through some mechanism implementing \(\alpha\)-conversion, or allowing one to avoid \(\alpha\)-conversion altogether;
3. _Substitution_: refining meta-level substitution with an approximation based on delaying the substitution, which boils down to adopting a form of sharing, and replacing one variable occurrence at a time, in a demand-driven fashion.
These tasks are usually left to _meta-level operations_ in \(\lambda\)-calculi, meaning that they happen outside the syntax of the calculus itself, in a black-box manner. The role of abstract machines is to explicitly take care of these aspects, or at least of some of them, verifying them from the meta-level to the object-level. Concretely, this is obtained by enriching the specification of the operational semantics with dedicated data structures. Additionally, such a specification is usually designed as to be efficient, and usually the evaluation strategy \(\rightarrow_{\mathtt{str}}\) is deterministic.
Search, Backtracking, and Jumping. A first motivation of this paper is obtaining a better understanding of the search mechanism of abstract machines. When pushed at the meta-level, search is usually specified via deduction rules or via a grammar of evaluation contexts, assuming that, at each application of a rewriting rule, the term is correctly split into an evaluation context and a redex. The meta-level aspect is the fact that the process of splitting the term (or applying the deductive rules) is not taken into account as an operation of the calculus.
For simple evaluation strategies such as, for instance, weak call-by-name in the pure \(\lambda\)-calculus (also known as _weak head reduction_), abstract machines (such as the Krivine or the Milner abstract machine) have one search transition for every production of the evaluation contexts for the meta-level definition of search. For less simple strategies, the searching for redexes by the machine often also accounts for the further mechanism of _backtracking search_, that is not visible in the operational semantics, not even at meta-level. Such a form of search--which is completely unrelated to _backtracking-as-in-classical-logic_--happens when the machine has finished evaluating a sub-term and needs to backtrack to retrieve the next sub-term to inspect. Typically, this happens when implementing strategies that evaluate arguments, and in particular for strategies evaluating to _strong_ normal form (that is, also under abstraction), the paradigmatic and simplest example of which is leftmost(-outermost4) evaluation.
Footnote 4: To ease the language, in the paper we shorten _leftmost-outermost_ to _leftmost_.
As an example, let \(f\) be a strong normal form and consider executing a machine for leftmost evaluation on \(\lambda x.xft\). The machine would go under \(\lambda x.\), find the head variable \(x\) and then start searching for a \(\beta\)-redex in \(f\). Since there are none, the machine arrives at the end of \(f\) and then it usually _backtracks_ through the structure of \(f\), as to exit it and then start searching inside \(t\). Backtracking search is natural when one sees the searching process as a _walk_ over the code, moving only between _adjacent_ constructors. This is the dominating approach in the design of abstract machines for \(\lambda\)-calculi, since the entirety of the (small) literature on machines for strong evaluation adopts it [19, 23, 3, 22, 13, 11, 7, 14, 15].
There is, however, a natural alternative approach which is _saving_ the position where one would next backtrack, and then directly _jumping_ to that position, instead of walking back to it. In this paper, we explore how to avoid backtracking search by adopting a form of _jumping search_. The idea is embarrassingly simple: creating a new _job_ when an argument ready to be evaluated is found, adding it to a pool of jobs; then when the active job terminates, jumping to another job in the pool, without backtracking out of the terminated one.
Diamond Non-Determinism. A second motivation of the paper is to understand how to deal with diamond non-determinism at the level of machines. It is often the case that a deterministic strong strategy can be relaxed so as to be non-deterministic. For instance, on the head normal form \(x\,t\,u\,r\) leftmost evaluation would first evaluate \(t\), then \(u\), and then \(r\). But the evaluations of \(t\), \(u\), and \(r\) are in fact independent, so that one could evaluate them in any order. One could even interleave their evaluations, as it is done for instance by the least
level strategy, a non-deterministic strong strategy coming from the linear logic literature, introduced--we believe--by Girard [24] and studied for instance by de Carvalho et al. [18] on proof nets and by Accattoli et al. [8] on the \(\lambda\)-calculus.
Such a form of non-determinism is benign, as it does not affect the result, nor the length of the evaluation. Abstractly, it is captured by the _diamond property_ (here defined following Dal Lago and Martini [20], while Terese [31] defines it more restrictively, without requiring \(u_{1}\neq u_{2}\)), the strongest form of confluence:

\[u_{1}\leftarrow t\rightarrow u_{2}\ \text{ with }\ u_{1}\neq u_{2}\qquad\Longrightarrow\qquad u_{1}\rightarrow r\leftarrow u_{2}\ \text{ for some }r\]
What makes it stronger than confluence is that both the opening span from \(t\) and the closing one to \(r\) are made of _single_ steps, not of sequences of steps.
The diamond property can be seen as a liberal form of determinism, because--when it holds--all reductions to normal form have the same length, and if one such reduction exists then there are no diverging reductions.
External Strategy and Machine. Here we introduce a relaxed, diamond version of the leftmost strategy, which we dub the _external strategy_. We are inspired by Accattoli et al. [7], who study a similar strategy for strong call-by-value.
Diamond strategies can be seen as uniform frameworks capturing different deterministic strategies (for instance the leftmost and the least level strategy). Accordingly, a non-deterministic machine implementing a diamond strategy would factor out the commonalities of different deterministic machines.
We then design a machine for the external strategy, the _EXternal Abstract Machine_ (EXAM) by building over the jumping search explained above. The idea, again, is very simple. It amounts to relaxing the scheduling of jobs from the pool, by allowing the machine to non-deterministically select whatever unfinished job at each step, instead of having to terminate the active job and then having to move to the next one in the pool.
In fact, we go one step further. We define a _pool interface_ and the definition of the EXAM is _abstract_ in that it only refers to the interface. Then one can define different _pool templates_ that implement various scheduling policies for jobs. The external strategy is implemented when adopting the most general, non-deterministic template. By only replacing the template, we show how the same machine can also implement the leftmost strategy. At the end of the paper, we also quickly overview a template for the least level strategy, as well as one for a fair strategy.
Related Work.This work adopts terminologies and techniques from the work on abstract machines by Accattoli and coauthors [2, 3, 1, 4, 6, 9, 7], and in particular refines their machine for leftmost evaluation [3]. They focus on the complexity of machines, while here we focus on _search_ and ignore complexity, since their work shows that search has _linear_ cost (in the time cost model, i.e. the number of \(\beta\)-steps) and thus it does not affect the asymptotic behavior, which is instead
linked to how the machine realizes the orthogonal _substitution_ task. The study of machines for strong evaluation is a blind spot of the field, despite the relevance for the implementation of proof assistants. The few studies in the literature have all been cited above. Search for abstract machines is related to Danvy and Nielsen's (generalized) _refocusing_[21, 16], which however has never dealt with the jumping search introduced here. General non-deterministic machines are studied by Biernacka et al. [12], but the setting is different as they are not diamond.
Proofs. Proofs are in the Appendix.
## 2 Normal Forms and the Importance of Being External
Basics of \(\lambda\). The set \(\mathcal{T}\) of untyped \(\lambda\)-terms is defined by \(t::=x\mid\lambda x.\,t\mid t\,t\). The capture-avoiding substitution of \(x\) by \(u\) in \(t\) is written \(t\{x:=u\}\). The relation of \(\beta\)-reduction at the root \(\mapsto_{\beta}\subseteq\mathcal{T}\times\mathcal{T}\) is defined by \((\lambda x.\,t)\,u\mapsto_{\beta}t\{x:=u\}\).
We shall use various notions of contexts, which are terms with a single occurrence of a free variable \(\langle\cdot\rangle\), called a _hole_. If \(C\) is a context, \(C\langle t\rangle\) denotes the _plugging_ of \(t\) in \(C\) which is the textual substitution of \(\langle\cdot\rangle\) by \(t\) in \(C\). Plugging might capture variables, for instance if \(C=\lambda x.\lambda y.\langle\cdot\rangle\) then \(C\langle xy\rangle=\lambda x.\lambda y.xy\). Note that instead one would have \((\lambda x.\lambda y.z)\{z:=xy\}=\lambda x^{\prime}.\lambda y^{\prime}.xy\).
The relation of \(\beta\)-reduction \(\rightarrow_{\beta}\subseteq\mathcal{T}\times\mathcal{T}\) is the context closure of \(\mapsto_{\beta}\), i.e. \(C\langle t\rangle\rightarrow_{\beta}C\langle u\rangle\) if \(t\mapsto_{\beta}u\), compactly noted also as \(\rightarrow_{\beta}:=C\langle\mapsto_{\beta}\rangle\). An evaluation \(e:t\rightarrow_{\beta}\!^{*}u\) is a possibly empty sequence of \(\beta\)-steps.
Proposition 1 (Normal forms): _\(\beta\)-Normal forms are described by:_

\[\begin{array}{llll}\textsc{Neutral terms}&n::=x\mid n\,f&\qquad\textsc{Normal forms}&f::=n\mid\lambda x.\,f\end{array}\]
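For concreteness, the following small Python sketch (ours, not part of the paper; it requires Python 3.10+ for pattern matching) encodes \(\lambda\)-terms and the two mutually defined predicates of Proposition 1:

```python
from dataclasses import dataclass

# Lambda-terms:  t ::= x | λx.t | t t
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: "Term"

@dataclass(frozen=True)
class App:
    fun: "Term"
    arg: "Term"

Term = Var | Lam | App

def is_neutral(t: Term) -> bool:
    """n ::= x | n f   (a variable applied to normal arguments)."""
    match t:
        case Var(_):        return True
        case App(fun, arg): return is_neutral(fun) and is_normal(arg)
        case _:             return False

def is_normal(t: Term) -> bool:
    """f ::= n | λx.f   (beta-normal forms)."""
    match t:
        case Lam(_, body): return is_normal(body)
        case _:            return is_neutral(t)
```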
Weak Head Reduction and External Redexes. The simplest evaluation strategy is weak head reduction \(\rightarrow_{wh}\), which is obtained as the closure \(A\langle\mapsto_{\beta}\rangle\) of the root \(\beta\)-rule \(\mapsto_{\beta}\) by the following notion of applicative contexts:
\[\begin{array}{ll}\mbox{Applicative contexts}&A::=\langle\cdot\rangle\mid A\,t.\\ \end{array}\]
Example: \((\lambda x.t)ur\rightarrow_{wh}t\{x{\leftarrow}u\}r\). Weak head reduction is deterministic. It fails to compute \(\beta\)-normal forms because it does not reduce arguments nor abstraction bodies, indeed \(r((\lambda x.t)u)\not\rightarrow_{wh}r(t\{x{\leftarrow}u\})\) and \(\lambda y.((\lambda x.t)u)\not\rightarrow_{wh}\lambda y.t\{x{\leftarrow}u\}\).
The key property of the weak head redex is that it is _external_, a key concept from the advanced rewriting theory of the \(\lambda\)-calculus studied by many authors [26, 27, 28, 25, 29, 10, 17, 30, 31, 5]. Following Accattoli et al. [5], the intuition is that a redex \(R\) of a term \(t\) is _external_ if:
1. _Action constraint_: no other redex in \(t\) can act on (that is, can erase or duplicate) \(R\), and
2. _Hereditary clause_: the same it is true, hereditarily, for all the residuals of \(R\) after any other redex.
In \(\delta(\mathtt{I}x)\), where \(\delta:=\lambda y.yy\) is the duplicator combinator and \(\mathtt{I}:=\lambda z.z\) is the identity combinator, the redex \(\mathtt{I}x\) can be duplicated by \(\delta\), so it is not external because the action constraint is not respected. In \(\mathtt{I}\delta(\mathtt{I}x)\), instead, the redex \(\mathtt{I}x\) respects the action constraint, because \(\mathtt{I}x\) is an outermost redex, and yet \(\mathtt{I}x\) is not external because it does not validate the hereditary clause: its only residual after the step \(\mathtt{I}\delta(\mathtt{I}x)\rightarrow_{\beta}\delta(\mathtt{I}x)\) can be duplicated by \(\delta\).
Defining external redexes requires the introduction of the theory of residuals, which is heavy and beyond the scope of this paper. The intuition behind it however guides the study in this paper, and we consider it a plus--rather than a weakness--that this can be done circumventing the theory of residuals.
Leftmost Reduction. One way to extend weak head reduction to compute \(\beta\)-normal forms so as to always reduce external redexes is provided by _leftmost-outermost reduction_ \(\rightarrow_{lo}\) (shortened to _leftmost_). The definition relies on the notion of neutral term \(n\) used to describe normal forms, and it is given by the closure \(L\langle\mapsto_{\beta}\rangle\) of root \(\beta\) by the following notion of leftmost contexts, defined by mutual induction with neutral contexts:
\[\textsc{Neutral ctxs}\quad N::=\langle\cdot\rangle\mid nL\mid Nt\qquad \textsc{Leftmost ctxs}\;L::=N\mid\lambda x.L\]
Some examples: \(y((\lambda x.t)u)\rightarrow_{lo}y(t\{x{\leftarrow}u\})\) and \(\lambda y.((\lambda x.t)u)\rightarrow_{lo}\lambda y.t\{x{\leftarrow}u\}\) but \(\mathtt{I}y((\lambda x.t)u)\not\rightarrow_{lo}\mathtt{I}y(t\{x{\leftarrow}u\})\). Leftmost reduction is deterministic and _normalizing_, that is, if \(t\) has a reduction to normal form \(f\) then leftmost reduction reaches \(f\). Formally, if \(t\rightarrow_{\beta}{}^{*}f\) with \(f\) normal then \(t\rightarrow^{*}_{lo}f\)--for a recent simple proof of this classic result see Accattoli et al. [8]. The normalization property can be seen as a consequence of the fact that the strategy reduces only external redexes. Note that the outermost strategy (that reduces redexes not contained in any other redex) is instead not normalizing, as the following \(\Omega\) redex is outermost (but not leftmost), where \(\Omega:=\delta\delta\) is the paradigmatic looping \(\lambda\)-term:
\[(\lambda x.\,\lambda y.\,x)\,z\,\Omega\rightarrow_{\beta}(\lambda x.\, \lambda y.\,x)\,z\,\Omega\rightarrow_{\beta}\dots \tag{1}\]
The key point is that the outermost strategy does reduce redexes that cannot be acted upon, but it does not satisfy the hereditary clause in the intuitive definition of external redex given above, for which one also needs an additional requirement such as selecting the leftmost redex among the outermost ones.
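Building on the term representation sketched above, here is an illustrative single-step leftmost-outermost reducer (ours; it uses naive substitution and assumes the input term is well-named, so that no variable capture can occur):

```python
def subst(t: Term, x: str, u: Term) -> Term:
    """Naive substitution t{x:=u}; safe only on well-named terms."""
    match t:
        case Var(y):       return u if y == x else t
        case Lam(y, body): return t if y == x else Lam(y, subst(body, x, u))
        case App(f, a):    return App(subst(f, x, u), subst(a, x, u))

def leftmost_step(t: Term) -> Term | None:
    """Perform one ->_lo step, or return None if t is beta-normal."""
    match t:
        case App(Lam(x, body), a):        # the head redex is the leftmost one
            return subst(body, x, a)
        case App(f, a):
            if not is_neutral(f):         # the leftmost redex lies inside f
                f2 = leftmost_step(f)
                return App(f2, a) if f2 is not None else None
            a2 = leftmost_step(a)         # f is neutral: search the argument
            return App(f, a2) if a2 is not None else None
        case Lam(x, body):
            b2 = leftmost_step(body)
            return Lam(x, b2) if b2 is not None else None
        case _:
            return None                   # a variable is already normal
```

Iterating `leftmost_step` on \((\lambda x.\,\lambda y.\,x)\,z\,\Omega\) reaches the normal form \(z\) without ever touching \(\Omega\), in line with the discussion of evaluation (1) above.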
External Reduction. It is possible to define a strategy relaxing leftmost reduction, still reducing only external redexes, which we call _external reduction_. The definition uses the auxiliary notions of rigid terms and contexts, plus the applicative contexts \(A\) used for weak head reduction. The terminology is inspired by Accattoli et al.'s similar strategy for strong call-by-value evaluation [7].
Definition 1: The following categories of terms and contexts are defined mutually inductively by the grammar:
\[\textsc{Rigid terms}\quad r ::=\;x\;\mid\;r\,t\] \[\textsc{Rigid ctxs}\quad R ::=\langle\cdot\rangle\mid r\,E\mid R\,t\qquad\textsc{External ctxs}\quad E ::=R\mid\lambda x.\,E\]
External reduction _is the rewriting relation \(\to_{\tt x}\subseteq\mathcal{T}\times\mathcal{T}\) on \(\lambda\)-terms defined as the closure of root \(\beta\)-reduction under external contexts, that is, \(\to_{\tt x}:=E\langle\mapsto_{\beta}\rangle\)._
Alternative streamlined definitions for these notions are:
\[r ::=x\,t_{1}\ldots t_{n}\] \[R ::=\langle\cdot\rangle\,t_{1}\ldots t_{n}\ |\ x\,u_{1}\ldots u_{m}\,E\,t_{1} \ldots t_{n}\qquad E::=\lambda x_{1}\ldots x_{k}.\,R\]
As proved below, the leftmost strategy is a special case of the external one. The converse does _not_ hold: \(t=x(\mathtt{I}y)(\mathtt{I}z)\to_{\tt x}x(\mathtt{I}y)z=u\) but \(t\not\to_{lo}u\). Instead, \(t\to_{lo}xy(\mathtt{I}z)=r\). Note also a case of diamond: \(t\to_{\tt x}r\) and \(r\to_{\tt x}xyz\ _{\tt x}\gets u\).
Proposition 2 (Properties of external reduction):
1. Leftmost is external: _if_ \(t\to_{lo}u\) _then_ \(t\to_{\tt x}u\)_._
2. External diamond_: if_ \(u\,{}_{\tt x}\leftarrow\,\cdot\,\to_{\tt x}r\) _with_ \(u\neq r\) _then_ \(u\to_{\tt x}\,\cdot\,_{\tt x}\leftarrow\,r\)_._
## 3 Preliminaries: Abstract Machines
Abstract Machines Glossary. Abstract machines manipulate _pre-terms_, that is, terms without implicit \(\alpha\)-renaming. In this paper, an _abstract machine_ is a quadruple \(\mathtt{M}=(\mathtt{States},\leadsto,\triangleleft,\underline{\,\cdot\,})\), the components of which are as follows.
* _States_. A state \(s\in\mathtt{States}\) is composed of the _active term_ \(t\) and some data structures. Terms in states are actually pre-terms.
* _Transitions_. The pair \((\mathtt{States},\leadsto)\) is a transition system with transitions \(\leadsto\) partitioned into \(\beta\)_-transitions_\(\leadsto_{\beta}\) (usually just one), that are meant to correspond to \(\beta\)-steps, and _overhead transitions_\(\leadsto_{\tt o}\), that take care of the various tasks of the machine (searching, substituting, and \(\alpha\)-renaming).
* _Initialization_. The component \(\triangleleft\subseteq\Lambda\times\mathtt{States}\) is the _initialization relation_ associating \(\lambda\)-terms to initial states. It is a _relation_ and not a function because \(t\triangleleft s\) maps a \(\lambda\)-term \(t\) (considered modulo \(\alpha\)) to a state \(s\) having a _pre-term representant_ of \(t\) (which is not modulo \(\alpha\)) as active term. Intuitively, any two states \(s\) and \(s^{\prime}\) such that \(t\triangleleft s\) and \(t\triangleleft s^{\prime}\) are \(\alpha\)-equivalent. A state \(s\) is _reachable_ if it can be reached starting from an initial state, that is, if \(s^{\prime}\leadsto^{*}s\) where \(t\triangleleft s^{\prime}\) for some \(t\) and \(s^{\prime}\), which we abbreviate using \(t\triangleleft s^{\prime}\leadsto^{*}s\).
* _Read-back_. The read-back function \(\underline{\,\cdot\,}\colon\mathtt{States}\to\Lambda\) turns reachable states into \(\lambda\)-terms and satisfies the _initialization constraint_: if \(t\triangleleft s\) then \(\underline{s}=_{\alpha}t\).
Further Terminology and Notations.A state is _final_ if no transitions apply. A _run_\(\rho:s\leadsto^{*}s^{\prime}\) is a possibly empty finite sequence of transitions, the length of which is noted \(|\rho|\); note that the first and the last states of a run are not necessarily initial and final. If \(a\) and \(b\) are transitions labels (that is, \(\leadsto_{a}\subseteq\leadsto\) and \(\leadsto_{b}\subseteq\leadsto\)) then \(\leadsto_{a,b}:=\leadsto_{a}\cup\leadsto_{b}\) and \(|\rho|_{a}\) is the number of \(a\) transitions in \(\rho\).
Well-Namedness and Renaming.For the machines at work in this paper, the pre-terms in initial states shall be _well-named_, that is, they have pairwise distinct bound names; for instance \((\lambda x.x)(\lambda y.yy)\) is well-named while \((\lambda x.x)(\lambda x.xx)\) is not. We shall also write \(t^{\mathfrak{R}}\) in a state \(s\) for a _fresh well-named renaming_ of \(t\), _i.e._\(t^{\mathfrak{R}}\) is \(\alpha\)-equivalent to \(t\), well-named, and its bound variables are fresh with respect to those in \(t\) and in the other components of \(s\).
Implementation Theorem, Abstractly. We now formally define the notion of a machine implementing a strategy.
Definition 2 (Machine implementation): A machine \(\mathtt{M}=(\mathtt{States},\leadsto,\triangleleft,\underline{\,\cdot\,})\) _implements_ a strategy \(\rightarrow_{\mathtt{str}}\) when, for every \(\lambda\)-term \(t\): (i) _runs to evaluations_: every \(\mathtt{M}\)-run \(\rho:t\triangleleft s\leadsto^{*}s^{\prime}\) projects, via the read-back, to a \(\rightarrow_{\mathtt{str}}\)-evaluation \(e:t\rightarrow_{\mathtt{str}}^{*}\underline{s^{\prime}}\); (ii) _evaluations to runs_: every \(\rightarrow_{\mathtt{str}}\)-evaluation from \(t\) is the projection of some \(\mathtt{M}\)-run from an initial state for \(t\); (iii) _\(\beta\)-matching_: in both cases the number \(|\rho|_{\beta}\) of \(\beta\)-transitions of the run coincides with the length \(|e|\) of the evaluation.
## 4 Preliminaries: The Milner Abstract Machine
The new machine for the external strategy that we are about to introduce builds on the Milner Abstract Machine (shortened to MAM) for weak head reduction by Accattoli et al. [2], that is probably the simplest abstract machine for the \(\lambda\)-calculus in the literature. In this section, we overview the MAM, the data structures and transitions of which are defined in Fig. 1.
Data Structures.The MAM has two data structures, the stack \(S\) and the environment \(E\), which are lists. We use ':' for consing a single element onto a list, but also for list concatenation, so for instance \(S:S^{\prime}\) stands for the concatenation of stacks. The set of variables bound by an environment \(E=[x_{1}{\leftarrow}t_{1}]\ldots[x_{k}{\leftarrow}t_{k}]\) is \(\{x_{1},\ldots,x_{k}\}\) and it is noted \(\texttt{dom}\,E\).
Transitions of the MAM. A term \(t\) is initialized into an initial state \(t\triangleleft s\) by simply using an arbitrary well-named renaming \(t^{\mathtt{R}}\) as active term together with empty stack and environment. The MAM _searches_ for \(\beta\)-redexes in the active term by descending on the left of applications via transition \(\rightsquigarrow_{\mathtt{sea}_{@}}\), while accumulating arguments on the (applicative) _stack_, which is simply a stack of terms. If it finds an abstraction \(\lambda x.t\) and the stack has \(u\) on top, then it performs the machine analogue of a \(\beta\)-redex, that is a \(\rightsquigarrow_{\beta}\) transition, which adds the entry \([x{\leftarrow}u]\) on top of the _environment_, to be understood as a delayed, explicit substitution. If the MAM finds a variable \(x\), then it looks up in the environment \(E\) if it finds an associated entry \([x{\leftarrow}t]\), and replaces \(x\) with an \(\alpha\)-renamed \(t^{\mathtt{R}}\) copy of \(t\).

Transitions \(\rightsquigarrow_{\mathtt{sea}_{@}}\) and \(\rightsquigarrow_{\mathtt{sub}}\) are the overhead transitions of the MAM, that is, \(\rightsquigarrow_{\mathtt{o}}:=\rightsquigarrow_{\mathtt{sea}_{@},\mathtt{sub}}\), and \(\rightsquigarrow_{\beta}\) is its only \(\beta\)-transition. The MAM is deterministic.
Read-Back. The read-back of MAM states to terms can be defined in at least two ways, by either first reading back the environment or the stack. Here we give an environment-first definition, which shall be used also for the EXAM.
Figure 1: Definition of the Milner Abstract Machine (MAM).

Definition 4 (MAM read-back): The read-back \(\underline{t}_{E}\) and \(\underline{S}_{E}\) of terms and stacks with respect to an environment \(E\) are the terms and stacks given by:

\[\begin{array}{llll}\text{Terms}&\underline{t}_{\epsilon}:=t&\qquad&\underline{t}_{[x\leftarrow u]:E}:=\underline{t\{x:=u\}}_{E}\\ \text{Stacks}&\underline{\epsilon}_{E}:=\epsilon&\qquad&\underline{t:S}_{E}:=\underline{t}_{E}:\underline{S}_{E}\end{array}\]
_The read-back \(\underline{t}_{S}\) of \(t\) with respect to a stack \(S\) is the term given by:_
\[\underline{t}_{\epsilon}:=t\qquad\qquad\underline{t}_{u:S}:=\underline{t\,u}_{S}\]

_Finally, the read-back of a state is defined as \(\underline{(t\mid S\mid E)}:=\underline{\underline{t}_{E}}_{\;\underline{S}_{E}}\)._
Theorem 4.1 ([2]): _The MAM implements weak head reduction \(\rightarrow_{wh}\)._
Environments are defined as _lists_ of entries, but they are meant to be concretely implemented as a store, without a rigid list structure. The idea is that variables are implemented as memory locations, as to obtain constant-time access to the right entry of the environment via the operation \(E(x)\). It is nonetheless standard to define environments as lists, as it helps one stating invariants concerning them. For more implementative details, see Accattoli and Barras [4].
Comparison with the KAM.For the reader acquainted with the famous Krivine Abstract Machine (KAM), the difference is that the stack and the environment of the MAM contain _codes_, not _closures_ as in the KAM, and that there is a single _global_ environment instead of many _local_ environments. A global environment indeed circumvents the complex mutually recursive notions of _local environment_ and _closure_, at the price of the explicit \(\alpha\)-renaming \(t^{\mathbb{R}}\) which is applied _on the fly_ in \(\leadsto_{\mathtt{sub}}\). The price however is negligible, at least theoretically, as the asymptotic complexity of the machine is not affected, see Accattoli and co-authors [2, 4] (the same can be said of variable names vs de Bruijn indexes/levels).
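To fix ideas, here is a small executable sketch of the MAM (ours, and deliberately simplified): it reuses the `Var`/`Lam`/`App` constructors from the earlier sketch, implements the single global environment as a Python dict used as a store, and performs the on-the-fly renaming \(t^{\mathtt{R}}\) by freshening bound names.

```python
import itertools

_fresh = itertools.count()

def rename(t: Term, ren: dict[str, str] | None = None) -> Term:
    """Return an alpha-copy t^R of t with globally fresh bound names."""
    ren = ren or {}
    match t:
        case Var(x):       return Var(ren.get(x, x))
        case App(f, a):    return App(rename(f, ren), rename(a, ren))
        case Lam(x, body):
            x2 = f"{x}_{next(_fresh)}"
            return Lam(x2, rename(body, {**ren, x: x2}))

def mam_run(t: Term):
    """Run the MAM; a state is (active term, argument stack, global environment)."""
    term, stack, env = rename(t), [], {}
    while True:
        match term:
            case App(f, a):                 # ~>sea@ : go left, push the argument
                term, stack = f, [a] + stack
            case Lam(x, body) if stack:     # ~>beta : delayed substitution [x<-u]
                env[x] = stack.pop(0)
                term = body
            case Var(x) if x in env:        # ~>sub  : replace x by a fresh copy
                term = rename(env[x])
            case _:                         # no transition applies: final state
                return term, stack, env
```

Reading back the final state as in Definition 4 then gives the weak head normal form of the input term.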
## 5 From the MAM to Normal Form Using Backtracking
Here, we quickly recall the usual way in which a machine such as the MAM can be extended to implement reduction to normal form--in this case leftmost reduction--by searching for redexes into abstractions bodies and arguments, and by adding backtracking. We follow the pattern used by Accattoli et al. [3], to which we point the interested reader for extended explanations.
Extending the MAM.The extended MAM in Fig. 2 has an _abstraction stack_\(A\) collecting the list of abstractions into which the machine entered, and a _dump_\(D\) to navigate the tree structure of applications. Moreover, it has two phases: \(\blacktriangledown\) for the usual search for redexes and evaluation, and \(\blacktriangle\) for backtracking. Search into arguments is done by first switching to backtracking.
The next section shows how to avoid backtracking, by changing the data structures. The obtained machine shall actually be more general, as the new data structures enable more flexibility with respect to the implemented strategy.

Figure 2: An extended MAM computing strong normal forms and resting on backtracking.
## 6 The External Abstract Machine
In this section, we define the EXternal Abstract Machine (EXAM), an abstract machine for the external strategy \(\rightarrow_{\mathbf{\pi}}\), by using the MAM as a sort of building block. The EXAM is given in Fig. 3 and explained in the following paragraphs. An example of run is given at the end of this section.
Data Structures. The EXAM has three data structures, two of which are new with respect to the MAM:
* The _approximant_ (of the normal form) \(\mathbb{A}\), which collects the parts of the normal form already computed by the run of the EXAM. The approximant is a _named multi-context_, defined below, that is, a context with zero, one, or more named holes \(\langle\cdot\rangle_{\alpha}\), each one identified by a distinct name \(\alpha\), \(\beta\), etc.
* The _pool_\(P\), which is a data structure containing a set of named MAM _jobs_, providing operations for scheduling the execution of these jobs. Each named job \(j_{\alpha}\) has shape \((t,S)_{\alpha}\), that is, it contains a term and a stack. The idea is that the job \(j_{\alpha}=(t,S)_{\alpha}\) of name \(\alpha\) is executing the term corresponding to \((t,S)\) and that the result of such execution shall be plugged in the approximant \(\mathbb{A}\), replacing the hole \(\langle\cdot\rangle_{\alpha}\). Pools are discussed in detail below.
* The _environment_\(E\), which is as for the MAM except that it is shared among all the jobs in the pool.
Transitions and Functioning of the EXAM.A term \(t\) is initialized into an initial state \(t\triangleleft s\) by creating a pool with a single named job \((t^{\mathtt{R}},\epsilon)_{\alpha}\) (having a well-named \(t^{\mathtt{R}}\) version of \(t\) and an empty stack) and pairing it with the approximant \(\mathbb{A}=\langle\cdot\rangle_{\alpha}\) and empty environment. The EXAM proceeds as the MAM until it reaches a MAM final state. Let us consider the normal forms for weak head reduction and the corresponding final states of the MAM, which are of two kinds:
1. _Abstractions (with no arguments)_: the \(\rightarrow_{wh}\) normal form is \(\lambda x.u\) which is the read-back of a final MAM state \((\lambda x.t,\epsilon,E)\) with empty stack (that is, \(u=\underline{t}_{E}\)). In this case, the EXAM performs a \(\leadsto_{\mathtt{sea}_{\lambda}}\) transition, storing \(\lambda x.\langle\cdot\rangle_{\alpha}\) into the approximant \(\mathbb{A}\) at \(\alpha\), and adding a named job \((t,\epsilon)_{\alpha}\) to the pool \(P\).
2. _Possibly applied variables (with no substitution)_: the \(\to_{wh}\) normal form is \(xu_{1}\ldots u_{n}\) with \(n\geq 0\), which is the read-back of a final state \((x,t_{1}:\ldots:t_{n},E)\) with \(x\notin\mathtt{dom}\,E\) (that is, \(u_{i}=\underline{t}_{i_{E}}\)). In this case, the EXAM performs a \(\leadsto_{\mathtt{sea}_{V}}\) transition. If \(n=0\) then \(\leadsto_{\mathtt{sea}_{V}}\) simply adds \(x\) into the approximant \(\mathbb{A}\) at \(\alpha\). If \(n>0\) then \(\leadsto_{\mathtt{sea}_{V}}\) adds \(n\) new named jobs \((t_{1},\epsilon)_{\beta_{1}},..,(t_{n},\epsilon)_{\beta_{n}}\) to the pool \(P\) and adds \(x\langle\cdot\rangle_{\beta_{1}}..\langle\cdot\rangle_{\beta_{n}}\) into the approximant \(\mathbb{A}\) at \(\alpha\).
Transitions \(\leadsto_{\mathtt{sea}_{\lambda}}\), \(\leadsto_{\mathtt{sea}_{\lambda}}\), and \(\leadsto_{\mathtt{sea}_{\lambda}}\) are the search transitions of the EXAM. Together with \(\leadsto_{\mathtt{sub}}\), they are the overhead transitions of the EXAM, that is, \(\leadsto_{\mathtt{o}}:=\leadsto_{\mathtt{sea}_{\lambda},\mathtt{sub},\mathtt{sea }_{\lambda},\mathtt{sea}_{\lambda}}\), and \(\leadsto_{\beta}\) is its only \(\beta\)-transition. The transition relation of the EXAM is the union of all these relations, _i.e._\(\leadsto_{\mathtt{EXAM}}:=\leadsto_{\mathtt{sea}_{\lambda},\beta,\mathtt{sub}, \mathtt{sea}_{\lambda},\mathtt{sea}_{\lambda}}\).
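The two job-spawning transitions can be pictured in code as follows (an illustrative sketch of ours, not the official rules of Fig. 3): the approximant is represented as a term with named `Hole` nodes, `plug` fills a hole, and `finish_job` handles a job whose MAM-style evaluation has reached a final state. `Var`, `Lam`, `App` are the constructors from the earlier sketches, and the pool is a plain dict here.

```python
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class Hole:
    name: str

_names = (f"b{i}" for i in itertools.count())

def plug(approx, name, filler):
    """Replace the named hole <.>_name by `filler` inside the approximant."""
    match approx:
        case Hole(n):      return filler if n == name else approx
        case Var(_):       return approx
        case Lam(x, body): return Lam(x, plug(body, name, filler))
        case App(f, a):    return App(plug(f, name, filler), plug(a, name, filler))

def finish_job(approx, pool, name, term, stack):
    """Job `name` reached a MAM-final state (term, stack): spawn its successors."""
    match term, stack:
        case Lam(x, body), []:                 # ~>sea_lambda
            approx = plug(approx, name, Lam(x, Hole(name)))
            pool[name] = (body, [])            # keep evaluating under the lambda
        case Var(x), args:                     # ~>sea_V (x not bound in E)
            new = [next(_names) for _ in args]
            head = Var(x)
            for b in new:
                head = App(head, Hole(b))
            approx = plug(approx, name, head)
            del pool[name]                     # this job is over
            for b, a in zip(new, args):
                pool[b] = (a, [])              # one new job per argument
    return approx, pool
```

A full EXAM step would, in addition, pick which job to run via the pool interface discussed next.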
Pool Interface and Templates.The EXAM selects at each step a (named) job from the pool--the one performing the step--according to a possibly non-deterministic policy, and drops it back in the pool after the transition, unless the job is over, which happens in transition \(\leadsto_{\mathtt{sea}_{\mathtt{V}}}\). In general, _dropping a job back into a pool_ and _adding a job to a pool_ are not the same operation, since the former applies to jobs that were in the pool before being selected, while addition is reserved to new jobs. We actually abstract away from a job scheduling policy and from the details of the pool data structure: the pool is an abstract _interface_ which can be realized by various concrete data structures called _pool templates_.
Definition 5 (Pool templates): A pool template is a data structure \(P\) coming with the following five operations of the pool interface:
Figure 3: Definition of the EXternal Abstract Machine (EXAM).
* Names, support, and new_: there are a name function_\(\mathtt{names}(P)=\{\alpha_{1},..,\alpha_{n}\}\) _providing the finite and possibly empty set of the names of the jobs in the pool (_\(\mathbb{N}\ni n\geq 0\)_), a support function_\(\mathtt{supp}(P)=\{j_{\alpha_{1}},..,j_{\alpha_{n}}\}\) _providing the set of jobs in the pool (indexed by_\(\mathtt{names}(P)\)_), and a function_\(\mathtt{new}(j_{\alpha})\) _creating a pool containing_ \(j_{\alpha}\)_, that is, such that_\(\mathtt{supp}(\mathtt{new}(j_{\alpha}))=\{j_{\alpha}\}\)_._
* Selection_: there is a selection relation_\(\stackrel{{\mathtt{sel}}}{{\leftarrow}}(P,j_{\alpha},P^{\prime})\) _such that_\(j_{\alpha}\in\mathtt{supp}(P)\) _and_\(\mathtt{supp}(P^{\prime})=\mathtt{supp}(P)\setminus\{j_{\alpha}\}\)_. The intuition is that_ \(P^{\prime}\) _is_ \(P\) _without_ \(j_{\alpha}\)_, which has been selected and removed from_ \(P\)_. There is a_ choice constraint_: if_ \(P\) _is non-empty then_\(\stackrel{{\mathtt{sel}}}{{\leftarrow}}(P,j_{\alpha},P^{\prime})\) _holds for some_ \(j_{\alpha}\) _and_ \(P^{\prime}\)_. We write_\(j_{\alpha}\stackrel{{\mathtt{sel}}}{{\leftarrow}}P^{\prime}\) _for a pool_ \(P\) _such that_\(\stackrel{{\mathtt{sel}}}{{\leftarrow}}(P,j_{\alpha},P^{\prime})\)_._
* Dropping_: there is a dropping function_\(\stackrel{{\mathtt{dr}}}{{\rightarrow}}(j_{\alpha},P)=P^{\prime}\) _defined when_\(\alpha\notin\mathtt{names}(P)\) _and such that_\(\mathtt{supp}(P^{\prime})=\mathtt{supp}(P)\cup\{j_{\alpha}\}\)_. Dropping is meant to add a job_ \(j_{\alpha}\) _back to a pool_ \(P\) _from which_ \(j_{\alpha}\) _was previously selected. It is not necessarily the inverse of selection. We write_\(j_{\alpha}\stackrel{{\mathtt{dr}}}{{\rightarrow}}P\) _for the pool_\(\stackrel{{\mathtt{dr}}}{{\rightarrow}}(j_{\alpha},P)\)_._
* Adding_: similarly, there is an adding function_\(\stackrel{{\mathtt{add}}}{{\rightarrow}}(j_{\alpha},P)=P^{\prime}\) _defined when_\(\alpha\notin\mathtt{names}(P)\) _and such that_\(\mathtt{supp}(P^{\prime})=\mathtt{supp}(P)\cup\{j_{\alpha}\}\)_. Adding is meant to add a new job_ \(j_{\alpha}\) _to a pool_ \(P\)_, that is, a job that has never been in_ \(P\)_. We write_\(j_{\alpha}\stackrel{{\mathtt{add}}}{{\rightarrow}}P\) _for_\(\stackrel{{\mathtt{add}}}{{\rightarrow}}(j_{\alpha},P)=P^{\prime}\)_, and extend it to lists as follows:_\(\epsilon\stackrel{{\mathtt{add}}}{{\rightarrow}}P:=P\)_, and_\(j_{\alpha_{1}}:...:j_{\alpha_{n}}\stackrel{{\mathtt{add}}}{{ \rightarrow}}P:=j_{\alpha_{n}}\stackrel{{\mathtt{add}}}{{ \rightarrow}}\left(j_{\alpha_{1}}:...:j_{\alpha_{n-1}}\stackrel{{ \mathtt{add}}}{{\rightarrow}}P\right)\)_._
Set EXAM.The simplest pool template is the _set template_ where pools \(P\) are sets of named jobs, the support is the pool itself (and the name set is as expected), \(\mathtt{new}(j_{\alpha})\) creates a singleton with \(j_{\alpha}\), selection is the relation \(\{(P,j_{\alpha},P\setminus\{j_{\alpha}\})\,|\,j_{\alpha}\in P\}\), and both dropping and adding are the addition of an element. The set template models the most general behavior, as _any_ job of the pool can then be selected for the next step. The EXAM instantiated on the set template is called _Set EXAM_. Other templates shall be considered at the end of the paper, motivating in particular the distinction between dropping and adding.
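In code, the pool interface and the set template could look as follows (our sketch; the paper specifies them only mathematically, and `random.choice` here merely realizes the non-deterministic selection of the set template):

```python
import random
from abc import ABC, abstractmethod

class Pool(ABC):
    """Pool interface of Definition 5: names/support/new, select, drop, add."""

    @abstractmethod
    def names(self) -> set: ...

    @abstractmethod
    def select(self):
        """Remove and return some named job (possibly non-deterministically)."""

    @abstractmethod
    def drop(self, name, job):
        """Put back a job that was previously selected from this pool."""

    @abstractmethod
    def add(self, name, job):
        """Add a brand new job, never seen before by this pool."""

class SetPool(Pool):
    """Set template: any job may be selected; dropping and adding coincide."""

    def __init__(self, name, job):          # new(j_alpha)
        self.jobs = {name: job}

    def names(self):
        return set(self.jobs)

    def select(self):
        name = random.choice(list(self.jobs))
        return name, self.jobs.pop(name)

    def drop(self, name, job):
        self.jobs[name] = job

    add = drop                               # for the set template the two coincide
```

Only the template needs to change to obtain other schedulers; a template with a deterministic selection policy is what, later in the paper, yields deterministic strategies such as leftmost evaluation.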
Approximants and Named Multi-Contexts.The definition of the EXAM rests on approximants, which are stable prefixes of normal forms, that is, normal forms from which some sub-terms have been removed and replaced with named holes. In fact, we are going to introduce more general _(named) multi-contexts_ to give a status to approximants in which some but not all holes have been replaced by an _arbitrary term_--which shall be needed in proofs (when manipulating the read-back)--thus losing their "normal prefix" property.
Definition 6 (Named multi-contexts): A _(named) multi-context_\(\mathbb{C}\) is a \(\lambda\)-term in which there may appear free occurrences of (named) holes, \(\mathtt{i.e.}\):
\[\textsc{(Named) Multi-contexts}\quad\mathbb{C}::=x\mid\langle\cdot\rangle_{ \alpha}\mid\lambda x.\,\mathbb{C}\mid\mathbb{C}\,\mathbb{C}\]
The plugging \(\mathbb{C}\langle\mathbb{C}^{\prime}\rangle_{\alpha}\) of \(\alpha\) by \(\mathbb{C}^{\prime}\) in \(\mathbb{C}\), is the capture-allowing substitution of \(\langle\cdot\rangle_{\alpha}\) by \(\mathbb{C}^{\prime}\) in \(\mathbb{C}\). We write \(\mathtt{names}(\mathbb{C})\) for the set of names that occur in \(\mathbb{C}\). We shall use only multi-contexts where named holes have pairwise distinct names.
Note that a multi-context \(\mathbb{C}\) without holes is simply a term, thus the defined notion of plugging subsumes the plugging \(\mathbb{C}\langle t\rangle_{\alpha}\) of terms in multi-contexts.
Approximants \(\mathbb{A}\) are defined in Fig. 3 by mutual induction with rigid approximants \(\mathbb{R}\), and are special cases of multi-contexts. Alternative streamlined definitions for (rigid) approximants are (possibly \(y=x_{i}\) for some \(i\in\{1,\ldots,n\}\)):
\[\mathbb{R}::=x\,\mathbb{A}_{1}..\mathbb{A}_{n}\qquad\qquad\mathbb{A}::=\lambda x _{1}..x_{n}.\,\langle\cdot\rangle_{\alpha}\mid\lambda x_{1}..x_{n}.\,y\, \mathbb{A}_{1}..\mathbb{A}_{n}\]
Note that in \(\mathbb{A}\) and \(\mathbb{R}\) holes are never applied, that is, they are _non-applying_ multi-contexts. For the sake of readability, in the paper we only give statements about approximants, which are then reformulated in the Appendix by pairing them with a similar statement about rigid approximants, and proving the two of them simultaneously by mutual induction.
We prove two properties of approximants. Firstly, to justify that transitions \(\leadsto_{\texttt{sea}_{\lambda}}\) and \(\leadsto_{\texttt{sea}_{\mathcal{V}}}\) are well-defined, we show that the first component of the state on their right-hand side is indeed an approximant. Secondly, we relate approximants with normal forms, to justify the terminology.
Lemma 1 (Inner extension of approximants): _If \(\mathbb{A}\) is an approximant and \(\beta_{1},\ldots,\beta_{n}\notin\texttt{names}(\mathbb{A})\) then \(\mathbb{A}\langle\lambda x.\,\langle\cdot\rangle_{\alpha}\rangle_{\alpha}\) and \(\mathbb{A}\langle x\,\langle\cdot\rangle_{\beta_{1}}..\langle\cdot\rangle_{ \beta_{n}}\rangle_{\alpha}\) are approximants._
Lemma 2: _An approximant \(\mathbb{A}\) without named holes is a normal form._
Read-Back.To give a notion of read-back that is independent of the pool template, we define the read-back using a set \(X\) of uniquely named jobs--standing for the support \(\texttt{supp}(P)\) of the pool--rather than the pool \(P\) itself. Moreover, we need a way of applying the substitution induced by an environment to named jobs and sets of named jobs, which is based on the notions \(\underline{t}_{E}\) and \(\underline{S}_{E}\) for terms and stacks given for the MAM, from which we also borrow the definition of \(\underline{t}_{S}\).
Definition 7 (EXAM read-back): Applying an environment \(E\) to jobs and job sets is defined as follows:
\[\begin{array}{cc}\texttt{Jobs/jobs sets}&\underline{(t,S)_{\alpha}}_{E}:=( \underline{t}_{E},\underline{S}_{E})_{\alpha}&\underline{\{j_{\alpha_{1}},.., j_{\alpha_{n}}\}}_{E}:=\{\underline{j_{\alpha_{1}}}_{E},..,\underline{j_{ \alpha_{n}}}_{E}\}\end{array}\]
The read-back of jobs, and of a multi context \(\mathbb{C}\) with respect to a set of uniquely named jobs \(\{j_{\alpha_{1}},..,j_{\alpha_{n}}\}\) are defined as follows:
\[\begin{array}{cc}\underline{(t,S)_{\alpha}}:=\underline{t}_{S}&\underline{ \mathbb{C}}_{\{j_{\alpha_{1}},..,j_{\alpha_{n}}\}}:=\mathbb{C}\langle \underline{j_{\alpha_{1}}}\rangle_{\alpha_{1}}..\langle\underline{j_{\alpha_{ n}}}\rangle_{\alpha_{n}}\end{array}\]
An EXAM state \(s\) is read back as a multi-context by setting \(\underline{(\mathbb{A}\mid P\mid E)}:=\underline{\mathbb{A}}_{\,\underline{\mathtt{supp}(P)}_{E}}\).
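Concretely, and reusing `subst` and `plug` from the previous sketches, the read-back of Definition 7 can be rendered as below (ours; the environment is processed most-recent entry first, matching Definition 4):

```python
def apply_env(t: Term, env: dict) -> Term:
    """Unfold the delayed substitutions of the global environment."""
    for x, u in reversed(list(env.items())):   # most recent entry first
        t = subst(t, x, u)
    return t

def readback_job(term: Term, stack: list, env: dict) -> Term:
    """Read back a job (t, S) under E: substitute, then apply the stack."""
    t = apply_env(term, env)
    for arg in stack:
        t = App(t, apply_env(arg, env))
    return t

def readback_state(approx, pool: dict, env: dict):
    """Plug the read-back of every job into its named hole of the approximant."""
    for name, (term, stack) in pool.items():
        approx = plug(approx, name, readback_job(term, stack, env))
    return approx
```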
Diamond: Since the selection operation is non-deterministic, the EXAM in general is non-deterministic. The most general case is given by the _Set EXAM_, which is the EXAM instantiated with the set template for pools described after Definition 5. As for the external strategy, the Set EXAM has the diamond property up to a slight glitch: swapping the order of two \(\beta\)-transitions on two different jobs adds entries to the environment in different orders.
Let \(\approx\) be the minimal equivalence relation on environments containing the following relation:
\[E:[x{\leftarrow}t]:[y{\leftarrow}u]:E^{\prime} \sim E:[y{\leftarrow}u]:[x{\leftarrow}t]:E^{\prime}\quad\text{ if }x\notin u\text{ and }y\notin t\]
Let \(\equiv\) be the relation over states s. t. \(\llbracket\mathbb{A}\mid P\mid E_{1}\rrbracket\equiv\llbracket\mathbb{A}\mid P \mid E_{2}\rrbracket\) if \(E_{1}\approx E_{2}\).
Proposition 3: _The Set EXAM is diamond up to \(\equiv\), i.e., if \(s\leadsto_{\texttt{EXAM}}s_{1}\) and \(s\leadsto_{\texttt{EXAM}}s_{2}\) then \(\exists s_{1}^{\prime}\) and \(s_{2}^{\prime}\) such that \(s_{1}\leadsto_{\texttt{EXAM}}s_{1}^{\prime}\), \(s_{2}\leadsto_{\texttt{EXAM}}s_{2}^{\prime}\), and \(s_{1}^{\prime}\equiv s_{2}^{\prime}\)._
Example 1: The following is a possible run of the Set EXAM--that is, the EXAM with the set template for pools--on the term \(t:=x(\mathtt{I}_{y}z)(\delta_{w}z)\) where \(\mathtt{I}_{y}=\lambda y.y\) and \(\delta_{w}=\lambda w.ww\), ending in a final state.
\begin{tabular}{|c|c|c||c|c|} \hline Approx. & Pool & Env & Tran. & Selected Job \\ \hline \hline \(\langle\cdot\rangle_{\alpha}\) & \(\{(x(\mathtt{I}_{y}z)(\delta_{w}z),\epsilon)_{\alpha}\}\mid\) & \(\epsilon\) & \(\leadsto_{\texttt{sea}\alpha}\) & \(\alpha\) \\ \(\langle\cdot\rangle_{\alpha}\) & \(\{(x(\mathtt{I}_{y}z),\delta_{w}z)_{\alpha}\}\mid\) & \(\epsilon\) & \(\leadsto_{\texttt{sea}\alpha}\) & \(\alpha\) \\ \(\langle\cdot\rangle_{\alpha}\) & \(\{(x,\mathtt{I}_{y}z:\delta_{w}z)_{\alpha}\}\mid\) & \(\epsilon\) & \(\leadsto_{\texttt{sea}\nu}\) & \(\alpha\) \\ \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \(\{(\mathtt{I}_{y}z,\epsilon)_{\beta},(\delta_{w}z,\epsilon)_{\gamma}\}\mid\) & \(\epsilon\) & \(\leadsto_{\texttt{sea}\alpha}\) & \(\gamma\) \\ \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \(\{(\mathtt{I}_{y}z,\epsilon)_{\beta},(ww,\epsilon)_{\gamma}\}\mid\) & \(\epsilon\) & \(\leadsto_{\texttt{sea}\alpha}\) & \(\beta\) \\ \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \(\{(\mathtt{I}_{y}z,\beta)_{\beta},(ww,\epsilon)_{\gamma}\}\mid\) & \([w{\leftarrow}z]\) & \(\leadsto_{\texttt{sea}\alpha}\) & \(\gamma\) \\ \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \(\{(\mathtt{I}_{y},z)_{\beta},(ww,\gamma)_{\gamma}\}\mid\) & \([w{\leftarrow}z]\) & \(\leadsto_{\texttt{sea}\alpha}\) & \(\gamma\) \\ \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \(\{(\mathtt{I}_{y},z)_{\beta},(w,w)_{\gamma}\}\mid\) & \([w{\leftarrow}z]\) & \(\leadsto_{\texttt{$\beta$}}\) & \(\beta\) \\ \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \(\{(y,\epsilon)_{\beta},(w,w)_{\gamma}\}\mid\) & \([y{\leftarrow}z]\) & \(\leadsto_{\texttt{$\texttt{sub}$}}\) & \(\beta\) \\ \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \(\{(z,\epsilon)_{\beta},(w,w)_{\gamma}\}\mid\) & \([y{\gets}z]\) & \(\leadsto_{\texttt{$\texttt{sub}$}}\) & \(\gamma\) \\ \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \(\{(z,\epsilon)_{\beta},(z,w)_{\gamma}\}\mid\) & \([y{\gets}z]\) & \(\leadsto_{\texttt{$\texttt{sea}\nu$}}\) & \(\gamma^{\prime}\) \\ \(x\langle\cdot\rangle_{\beta}(x\langle\cdot\rangle_{\gamma^{\prime}})\) & \(\{(z,\epsilon)_{\beta},(w,\epsilon)_{\gamma^{\prime}}\}\mid\) & \([y{\gets}z]\) & \(\leadsto_{\texttt{$\texttt{sub}$}}\) & \(\gamma^{\prime}\) \\ \(x\langle\cdot\rangle_{\beta}(x\langle\cdot\rangle_{\gamma^{\prime}})\) & \(\{(z,\epsilon)_{\beta},(z,\epsilon)_{\gamma^{\prime}}\}\mid\) & \([y{\gets}z]\) & \(\leadsto_{\texttt{$\texttt{sea}\nu$}}\) & \(\beta\) \\ \(xz(x\langle\cdot\rangle_{\gamma^{\prime}})\) & \(\{(z,\epsilon)_{\gamma^{\prime}}\}\) & \([y{\gets}z]\) & \(\leadsto_{\texttt{$\texttt{sea}\nu$}}\) & \(\gamma^{\prime}\) \\ \(xz(zz)\) & \(\emptyset\) & \([y{\gets}z]\) & \([w{\leftarrow}z]\) & \\ \hline \end{tabular}
## 7 Runs to Evaluations
In this section, we develop the projection of EXAM runs on external evaluations, and then instantiate it with a deterministic pool template, obtaining runs corresponding to leftmost evaluations.
Overhead Transparency.By the abstract recipe for implementation theorems in Sect. 3, to project runs on evaluations we need to prove _overhead transparency_ and _\(\beta\)-projection_. Overhead transparency is simple: it follows from the definition of read-back plus some of its basic properties (proved in the Appendix).
Proposition 4 (Overhead transparency): _If \(s\leadsto_{\texttt{o}}s^{\prime}\) then \(\underline{s}=\underline{s^{\prime}}\)._
Invariants.To prove the \(\beta\)-projection property, we need some invariants of the EXAM. Firstly, we deal with a set of invariants concerning variable names, hole names, and binders. A notable point is that they are proved by using only properties of the pool interface, and are thus valid for every pool template instantiation of the EXAM.
Terminology: a _binding occurrence_ of a variable \(x\) is an occurrence of \(\lambda x.\,t\) in \(\mathbb{A}\), \(P\) or \(E\), or an occurrence of \([x{\leftarrow}t]\) in \(E\), for some \(t\).
Lemma 3 (EXAM Invariants): _Let \(s=\llbracket\mathbb{A}\mid P\mid E\rrbracket\) be a reachable EXAM state. Then:_
1. _Uniqueness._ _There are no repeated names in_ \(\mathbb{A}\)_._
2. _Freshness._ _Different binding occurrences in_ \(s\) _refer to different variable names._
3. _Bijection._ _The set of names in_ \(\mathbb{A}\) _is in 1-1 correspondence with the set of names in_ \(P\)_, that is,_ \(\mathtt{names}(\mathbb{A})=\mathtt{names}(P)\)_._
4. _Freeness._ _The free variables of_ \(\mathbb{A}\) _are globally free, that is,_ \(\mathtt{fv}(\mathbb{A})\cap\mathtt{dom}\,E=\emptyset\)_._
5. _Local scope._ _For every sub-term of the form_ \(\lambda x.\,t\) _in a job in_ \(\mathtt{supp}(P)\) _or in_ \(E\)_, there are no occurrences of_ \(x\) _outside of_ \(t\)_. Moreover, in the environment_ \([x_{1}{\leftarrow}t_{1}]:..:[x_{n}{\leftarrow}t_{n}]\)_, there are no occurrences of_ \(x_{i}\) _in_ \(t_{j}\) _if_ \(i\leq j\)_._
The read-back of an EXAM state is defined as a multi-context, but for reachable states it is in fact a term, as stated by the second point of the next lemma, which is proved by putting together the bijection invariant for reachable states (Lemma 3.3) and the first point.
Lemma 4 (Reachable states read back to terms):
1. _Let_ \(\mathbb{A}\) _be an approximant and let_ \(X\) _be a set of uniquely named jobs such that_ \(\mathtt{names}(\mathbb{A})\subseteq\mathtt{names}(X)\)_. Then_ \(\underline{\mathbb{A}}_{X}\) _is a term._
2. _Let_ \(s\) _be a reachable state. Then its read-back_ \(\underline{s}\) _is a term._
Contextual Read-Back.The key point of the \(\beta\)-projection property is proving that the read-back of the data structures of a reachable state without the active term/job provides an evaluation context--an external context in our case. This is ensured by the following lemma. It has a simple proof (using Lemma 4) because we can state it about approximants without mentioning reachable state, given that we know that the first component of a reachable state is always an approximant (because of Lemma 1). The lemma is then used in the proof of the \(\beta\)-projection property.
Lemma 5 (External context read-back): _Let \(X\) be a set of uniquely named jobs and \(\mathbb{A}\) be an approximant with no repeated names such that \(\mathtt{names}(\mathbb{A})\setminus\mathtt{names}(X)=\{\alpha\}\). Then \(\underline{\mathbb{A}}_{X}\langle\langle\cdot\rangle\rangle_{\alpha}\) is an external context._
Theorem 5.1 (\(\beta\)-projection): _If \(s\rightsquigarrow_{\beta}s^{\prime}\) then \(\underline{s}\rightarrow_{\mathtt{x}}\underline{s^{\prime}}\)._
Now, we obtain the _runs to evaluations_ part of the implementation theorem, which by the theorem about sufficient conditions for implementations (Theorem 5.1) follows from overhead transparency and \(\beta\)-projection.
Corollary 1 (EXAM runs to external evaluations): _For any EXAM run \(\rho:t\triangleleft s\rightsquigarrow_{\mathtt{EXAM}}^{*}s^{\prime}\) there exists a \(\rightarrow_{\mathtt{x}}\)-evaluation \(e:t\rightarrow^{*}_{\mathtt{x}}\underline{s^{\prime}}\). Moreover, \(|e|=|\rho|_{\beta}\)._
Last, we analyze final states.
Proposition 5 (Characterization of final states): _Let \(s\) be a reachable final state. Then there is a normal form \(f\) such that \(s=\llbracket f\mid\emptyset\mid E\rrbracket\) and \(\underline{s}=f\). Moreover, if \(s\equiv s^{\prime}\) then \(s^{\prime}\) is final and \(\underline{s}^{\prime}=f\)._
### Leftmost Runs to Leftmost Evaluations
Now, we instantiate the EXAM with the stack template for pools, obtaining a machine implementing leftmost evaluation.
Leftmost Exam.Let the _Leftmost EXAM_ be the deterministic variant of the EXAM adopting the stack template for pools (a minimal code sketch of this template is given after the list), that is, such that:
* Pools are lists \(j_{\alpha_{1}}:..:j_{\alpha_{n}}\) of named jobs, \(\mathtt{new}(j_{\alpha})\) creates the list containing only \(j_{\alpha}\), and the support of a pool is the set of jobs in the list;
* Selection pops from the pool, that is, if \(P=j_{\alpha_{1}}:..:j_{\alpha_{n}}\) then \(j_{\alpha}\stackrel{{\mathtt{sel}}}{{\leftarrow}}P\) pops \(j_{\alpha}\) from the list \(j_{\alpha}:j_{\alpha_{1}}:..:j_{\alpha_{n}}\);
* Both dropping and adding push on the list, and are inverses of selection.
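To make the pool interface concrete, here is a minimal Python sketch (our own illustration; the class and method names are not part of the formal development) of the stack template just listed. The set template would differ only in letting select return an arbitrary job of the pool.

```python
from collections import deque

class StackPool:
    """Stack template for pools: selection pops from the front of the list,
    while dropping and adding both push on the front (they coincide here)."""

    def __init__(self, named_job):          # new(j_alpha)
        self.jobs = deque([named_job])

    def support(self):                       # supp(P): the jobs in the list
        return list(self.jobs)

    def select(self):                        # j_alpha <-sel- P
        return self.jobs.popleft()

    def drop(self, named_job):               # inverse of selection
        self.jobs.appendleft(named_job)

    def add(self, named_job):                # same operation as drop here
        self.jobs.appendleft(named_job)
```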
Example 2: The Leftmost EXAM run on the same term \(t:=x(\mathtt{I}_{y}z)(\delta_{w}z)\) used for the Set EXAM in Example 1 follows (excluding the first three transitions, that are the same for both machines, as they are actually steps of the MAM).
\begin{tabular}{|c|c|c||c|c|} \hline Approx. & Pool & Env & Trans. & Selected Job \\ \hline \hline \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \((\mathtt{I}_{y}z,\epsilon)_{\beta}:(\delta_{w}z,\epsilon)_{\gamma}\) & \(\epsilon\) & \(\leadsto_{\mathtt{seaa}}\) & \(\beta\) \\ \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \((\mathtt{I}_{y},z)_{\beta}:(\delta_{w}z,\epsilon)_{\gamma}\) & \(\epsilon\) & \(\leadsto_{\beta}\) & \(\beta\) \\ \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \((y,\epsilon)_{\beta}:(\delta_{w}z,\epsilon)_{\gamma}\) & \([y{\leftarrow}z]\) & \(\leadsto_{\mathtt{sub}}\) & \(\beta\) \\ \(x\langle\cdot\rangle_{\beta}\langle\cdot\rangle_{\gamma}\) & \((z,\epsilon)_{\beta}:(\delta_{w}z,\epsilon)_{\gamma}\) & \([y{\leftarrow}z]\) & \(\leadsto_{\mathtt{seav}}\) & \(\beta\) \\ \(xz\langle\cdot\rangle_{\gamma}\) & \((\delta_{w}z,\epsilon)_{\gamma}\) & \([y{\leftarrow}z]\) & \(\leadsto_{\mathtt{seaa}}\) & \(\gamma\) \\ \(xz\langle\cdot\rangle_{\gamma}\) & \((\delta_{w},z)_{\gamma}\) & \([y{\leftarrow}z]\) & \(\leadsto_{\beta}\) & \(\gamma\) \\ \(xz\langle\cdot\rangle_{\gamma}\) & \((ww,\epsilon)_{\gamma}\) & \([w{\leftarrow}z]:[y{\leftarrow}z]\) & \(\leadsto_{\mathtt{seaa}}\) & \(\gamma\) \\ \(xz\langle\cdot\rangle_{\gamma}\) & \((w,w)_{\gamma}\) & \([w{\leftarrow}z]:[y{\leftarrow}z]\) & \(\leadsto_{\mathtt{sub}}\) & \(\gamma\) \\ \(xz\langle\cdot\rangle_{\gamma}\) & \((z,w)_{\gamma}\) & \([w{\leftarrow}z]:[y{\leftarrow}z]\) & \(\leadsto_{\mathtt{seav}}\) & \(\gamma\) \\ \(xz(z\langle\cdot\rangle_{\gamma^{\prime}})\) & \((w,\epsilon)_{\gamma^{\prime}}\) & \([w{\leftarrow}z]:[y{\leftarrow}z]\) & \(\leadsto_{\mathtt{sub}}\) & \(\gamma^{\prime}\) \\ \(xz(z\langle\cdot\rangle_{\gamma^{\prime}})\) & \((z,\epsilon)_{\gamma^{\prime}}\) & \([w{\leftarrow}z]:[y{\leftarrow}z]\) & \(\leadsto_{\mathtt{seav}}\) & \(\gamma^{\prime}\) \\ \(xz(zz)\) & \(\epsilon\) & \([w{\leftarrow}z]:[y{\leftarrow}z]\) & \(\leadsto_{\mathtt{seav}}\) & \(\gamma^{\prime}\) \\ \hline \end{tabular} Proving that Leftmost EXAM runs read back to leftmost evaluations requires a new \(\beta\)-projection property. Its proof is based on the following quite complex invariant about instantiations of the approximants computed by the Leftmost EXAM. _Terminology_: a context \(C\) is _non-applying_ if \(C\neq D\langle\langle\cdot\rangle t\rangle\) for all \(D\) and \(t\), that is, it does not apply the hole to an argument.
Proposition 6 (Leftmost context invariant): _Let_
* \(s=\llbracket\mathbb{A}\,|\,j_{\alpha_{1}}:..:j_{\alpha_{n}}\,|\,E\rrbracket\) _be a reachable Leftmost EXAM state with_ \(n\geq 1\)_;_
* \(f_{j}\) _be a normal form for all_ \(j\) _such that_ \(1\leq j<n\)_,_
* \(t_{j}\) _be a term for all_ \(j\) _such that_ \(1<j\leq n\)_, and_
* \(L\) _be a non-applying leftmost context._
_Then \(C^{s}_{f_{1},..,f_{i-1}|L|t_{i+1},..,t_{n}}:=\mathbb{A}\langle f_{1}\rangle_{ \alpha_{1}}..\langle f_{i-1}\rangle_{\alpha_{i-1}}\langle L\rangle_{\alpha_{i}} \langle t_{i+1}\rangle_{\alpha_{i+1}}..\langle t_{n}\rangle_{\alpha_{n}}\) is a non-applying leftmost context for every \(i\in\{1,\ldots,n\}\)._
From the invariant, it follows that a reachable state less the first job reads back to a leftmost context, which implies the leftmost variant of \(\beta\)-projection, in turn allowing us to project Leftmost EXAM runs on leftmost evaluations.
Lemma 6 (Leftmost context read-back).: _Let \(s=\llbracket\mathbb{A}\,|\,j_{\alpha_{1}}:..:j_{\alpha_{n}}\,|\,E\rrbracket\) be a reachable Leftmost EXAM state with \(n\geq 1\) and \(s_{\bullet}:=\llbracket\mathbb{A}\,|\,j_{\alpha_{2}}:..:j_{\alpha_{n}}\,|\,E\rrbracket\). Then \(\underline{s_{\bullet}}\langle\langle\cdot\rangle\rangle_{\alpha_{1}}\) is a non-applying leftmost context._
Proposition 7 (Leftmost \(\beta\)-projection).: _Let \(s\) be a reachable Leftmost EXAM state. If \(s\rightsquigarrow_{\beta}s^{\prime}\) then \(\underline{s}\to_{lo}\underline{s^{\prime}}\)._
Corollary 2 (Leftmost EXAM runs to leftmost evaluations).: _For any Leftmost EXAM run \(\rho:t\triangleleft s\rightsquigarrow_{\mathtt{EXAM}}^{*}s^{\prime}\) there exists a \(\to_{lo}\)-evaluation \(e:t\to_{lo}^{*}\underline{s^{\prime}}\). Moreover, \(|e|=|\rho|_{\beta}\)._
## 8 Evaluations to Runs
Here, we develop the reflection of external evaluations to EXAM runs. By the abstract recipe for implementation theorems in Sect. 3, one needs to prove _overhead termination_ and _\(\beta\)-reflection_. At the level of non-determinism, the external strategy is matched by the most permissive scheduling of jobs, that is, the set template. Therefore, we shall prove the result with respect to the Set EXAM.
Overhead Termination.To prove overhead termination, we define a measure. The measure does not depend on job names nor bound variable names, which is why the definition of the measure replaces them with underscores (and it is well defined even if it uses the renaming \(E(x)^{\mathtt{R}}\)).
Definition 8 (Overhead measure).: Let \(j_{\alpha}\) be a job and \(E\) be an environment jointly satisfying the freshness property of Lemma 3, and let \(s\) be a reachable state. The overhead measures \(|j_{\alpha},E|_{\circ}\) and \(|s|_{\circ}\) are defined as follows:
\[\begin{array}{rcll}
|(\lambda\_.\,t,u:S)_{\_},E|_{\circ}&:=&0&\\
|(\lambda\_.\,t,\epsilon)_{\_},E|_{\circ}&:=&1+|(t,\epsilon)_{\_},E|_{\circ}&\\
|(tu,S)_{\_},E|_{\circ}&:=&1+|(t,u:S)_{\_},E|_{\circ}&\\
|(x,\epsilon)_{\_},E|_{\circ}&:=&1+|(E(x)^{\mathtt{R}},\epsilon)_{\_},E|_{\circ}&\text{if }x\in\mathtt{dom}\,E\\
|(x,t_{1}:..:t_{n})_{\_},E|_{\circ}&:=&1+\sum_{i=1}^{n}|(t_{i},\epsilon)_{\_},E|_{\circ}&\text{with }n\geq 0\text{, if }x\notin\mathtt{dom}\,E\\
|\llbracket\mathbb{A}\,|\,P\,|\,E\rrbracket|_{\circ}&:=&\sum_{j_{\alpha}\in\mathtt{supp}(P)}|j_{\alpha},E|_{\circ}&
\end{array}\]
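As an illustration of how the measure is computed, the following Python sketch uses a toy encoding of terms, jobs, and environments of our own choosing; it follows the clauses of Definition 8 literally, and configurations not covered by those clauses are left undefined here.

```python
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class App:
    left: object
    right: object

@dataclass
class Lam:
    var: str
    body: object

def job_measure(term, stack, env):
    """|(term, stack), env|_o with stack a list of terms and env a dict."""
    if isinstance(term, Lam):
        return 0 if stack else 1 + job_measure(term.body, [], env)
    if isinstance(term, App):
        return 1 + job_measure(term.left, [term.right] + stack, env)
    if isinstance(term, Var):
        if not stack and term.name in env:           # a ~>sub step
            return 1 + job_measure(env[term.name], [], env)
        if term.name not in env:                     # ~>sea_v steps on the stack
            return 1 + sum(job_measure(t, [], env) for t in stack)
    raise NotImplementedError("case not covered by Definition 8")

def state_measure(jobs, env):
    """Sum of the measures of all jobs (term, stack) in the pool support."""
    return sum(job_measure(t, s, env) for (t, s) in jobs)
```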
Proposition 8 (Overhead termination).: _Let \(s\) be a Set EXAM reachable state. Then \(s\rightsquigarrow_{\circ}^{|s|_{\circ}}s^{\prime}\) with \(s^{\prime}\rightsquigarrow_{\circ}\)-normal._
Addresses and \(\beta\)-Reflection.For the \(\beta\)-reflection property, we need a way to connect external redexes on terms with \(\beta\)-transitions on states. We use _addresses_.
Definition 9 (Address and sub-term at an address): An _address a_ is a string over the alphabet \(\{l,r,\lambda\}\). The sub-term \(t|_{\mathrm{a}}\) of a term \(t\) at address a is the following partial function (the last case of which means that in any case not covered by the previous ones \(t|_{\mathrm{a}}\) is undefined):
\[t|_{\epsilon}:=t\qquad(tu)|_{l:\mathrm{a}}:=t|_{\mathrm{a}}\qquad(tu)|_{r:\mathrm{a}}:=u|_{\mathrm{a}}\qquad(\lambda x.t)|_{\lambda:\mathrm{a}}:=t|_{\mathrm{a}}\qquad t|_{\mathrm{a}}:=\bot\ \text{ otherwise}\]
The sub-term \(\mathbb{C}|_{\mathrm{a}}\) at a of a multi-context is defined analogously. An address a is defined in \(t\) (resp. \(\mathbb{C}\)) if \(t|_{\mathrm{a}}\neq\bot\) (resp. \(\mathbb{C}|_{\mathrm{a}}\neq\bot\)), and undefined otherwise.
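A direct transcription of Definition 9 into Python may help to fix the intuition; the tuple encoding of terms and the use of the letter L for the \(\lambda\) step in addresses are our own conventions, not part of the paper.

```python
# Terms are nested tuples: ('var', x), ('app', t, u), ('lam', x, t).
# Addresses are strings over 'l', 'r', 'L' ('L' stands for the λ step).
# Returns None when the address is undefined in the term.
def subterm_at(t, addr):
    if addr == "":
        return t
    head, rest = addr[0], addr[1:]
    if t[0] == 'app' and head == 'l':
        return subterm_at(t[1], rest)
    if t[0] == 'app' and head == 'r':
        return subterm_at(t[2], rest)
    if t[0] == 'lam' and head == 'L':
        return subterm_at(t[2], rest)
    return None  # any case not covered above is undefined (⊥)

# Example: in t = (λx.x) y, address "lL" reaches the bound occurrence of x.
t = ('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))
assert subterm_at(t, "lL") == ('var', 'x')
assert subterm_at(t, "r") == ('var', 'y')
assert subterm_at(t, "rl") is None
```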
There is a strong relationship between addresses in the approximant of a state and in the read-back of the state, as expressed by the following lemma. The lemma is then used to prove \(\beta\)-reflection, from which the _evaluation to runs_ part of the implementation theorem follows.
Lemma 7: _Let \(s=\llbracket\mathbb{A}\mid P\mid E\rrbracket\) be a state and a defined address in \(\mathbb{A}\). Then a is a defined address in \(\underline{s}\), and \(\underline{s}|_{\mathrm{a}}\) starts with the same constructor of \(\mathbb{A}|_{\mathrm{a}}\) unless \(\mathbb{A}|_{\mathrm{a}}\) is a named hole._
Proposition 9 (\(\beta\)-reflection): _Let \(s\) be a \(\leadsto\)-normal reachable state. If \(\underline{s}\to_{\mathrm{x}}u\) then there exists \(s^{\prime}\) such that \(s\leadsto_{\beta}s^{\prime}\) and \(\underline{s^{\prime}}=u\)._
Corollary 3 (Evaluations to runs): _For every \(\to_{\mathrm{x}}\)-evaluation \(e:t\to_{\mathrm{x}}^{*}u\) there exists a Set EXAM run \(\rho:t\triangleleft s\leadsto_{\mathtt{EXAM}}^{*}s^{\prime}\) such that \(\underline{s^{\prime}}=u\)._
A similar result for leftmost evaluation and the Leftmost EXAM follows more easily from the characterization of final states (Proposition 5), overhead termination (Proposition 8), and determinism of the Leftmost EXAM--this is the standard pattern for deterministic strategies and machines, used for instance by Accattoli et al. for their machine for leftmost evaluation [3].
Names and Addresses.It is natural to wonder whether one can refine the EXAM by using addresses a as a more precise form of names for jobs. It is possible: it is enough to modify the EXAM so as to extend the name/address at each step. For instance, transition \(\leadsto_{\mathtt{sea}_{\mathtt{a}}}\) would become:
\begin{tabular}{|c|c|c||c|c|c|} \hline Ap. & Pool & Env & & Ap. & Pool & Env \\ \hline \hline \(\mathbb{A}\) & \(\mid(t\,u,S)_{\mathrm{a}}\stackrel{{\mathtt{sel}}}{{\leftarrow}}P \mid\) & \(E\) & \(\leadsto_{\mathtt{sea}_{\mathtt{a}}}\) & \(\mathbb{A}\) & \(\mid(t,u:S)_{l:\mathrm{a}}\stackrel{{\mathtt{dr}}}{{\rightarrow}}P \mid\) & \(E\) \\ \hline \end{tabular}

Then a \(\beta\)-transition of address a in a reachable state \(s\) corresponds exactly to a \(\beta\)-redex of address a in \(\underline{s}\). We refrained from adopting addresses as names, however, because this is only useful for proving the \(\beta\)-reflection property of the EXAM; the machine does not need such an additional structure for its functioning.
## 9 Further Pool Templates
Here we quickly discuss how by changing the pool template of the EXAM we can implement different forms of evaluation, justifying the design choice of presenting pools as an interface.
Least Level.Another sub-strategy of external reduction that computes \(\beta\)-normal forms is provided by _least level reduction_\(\rightarrow_{\ell\ell}\), a notion from the linear logic literature. Picking a redex of minimal level, where the level is the number of arguments in which the redex is contained, is another predicate (similarly to the leftmost one) that ensures externality of an outermost redex. Note that the \(\Omega\) redex in (1) (page 5) is not of minimal level (it has level 1 while the redex involving \(z\) has level 0). Least level reduction is non-deterministic but diamond. For instance the two redexes (of level 1) in \(x(\mathtt{I}y)(\mathtt{I}z)\) are both least level. Note that the leftmost redex might not be least level, as in \(x(x(\mathtt{I}y))(\mathtt{I}z)\), where the leftmost redex is \(\mathtt{I}y\) and has level 2, while \(\mathtt{I}z\) has level 1.
By slightly changing the pool template, we can turn the Leftmost EXAM into a machine for least level evaluation. The key observation is that when new jobs are created, which is done only by transition \(\leadsto_{\mathtt{sea}_{\mathcal{V}}}\), they all have level \(n+1\) where \(n\) is the level of the active job. To obtain a machine that processes job by increasing levels, then, it is enough to add the new jobs _at the end of the pool_, rather than at the beginning. This template is an example where dropping (which pushes an element on top of the list of jobs) and adding (which adds at the end) are _not_ the same operation.
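Under the same illustrative conventions used for the stack template sketch above, the change only affects where add places new jobs; the following minimal sketch (ours, not part of the formal development) makes the asymmetry between drop and add explicit.

```python
from collections import deque

class LeastLevelPool:
    """Sketch: dropping pushes on the front, adding appends at the end,
    so jobs created by ~>sea_v are processed by increasing level."""
    def __init__(self, named_job):
        self.jobs = deque([named_job])
    def select(self):
        return self.jobs.popleft()
    def drop(self, named_job):
        self.jobs.appendleft(named_job)   # the active job goes back on top
    def add(self, named_job):
        self.jobs.append(named_job)       # new jobs go to the end: add != drop
```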
Fair Template.Another interesting template is the one where pools are lists and dropping always adds at the end of the list. In this way the EXAM is _fair_, in the sense that even when it diverges, it keeps evaluating all jobs. Strategies of this kind are of interest for infinitary \(\lambda\)-calculi, where one wants to compute all branches of an infinite normal form, instead of being stuck on one.
## 10 Conclusions
This paper studies two simple ideas and applies them to the paradigmatic case of strong call-by-name evaluation. Firstly, avoiding _backtracking on the search for redexes_ by introducing _jobs_ for each argument and jumping to the next job when one is finished. Secondly, modularizing the scheduling of jobs via a _pool interface_ that can be instantiated by various concrete schedulers, called _pool templates_.
The outcome of the study is a compact, modular, and--we believe--elegant abstract machine for strong evaluation. In particular, we obtain the simplest machine for leftmost evaluation in the literature. Our study also gives a computational interpretation to the diamond non-determinism of strong call-by-name.
For the sake of simplicity, our study extends the MAM, which implements weak head reduction using global environments. Our technique, however, is reasonably modular in the underlying machine/notion of environment. One can, indeed, replace the MAM with Krivine abstract machine (KAM), which instead uses local environments, by changing only the fact that the jobs of the EXAM have to carry their own local environment. Similarly, the technique seems to be adaptable to the CEK or other machines for weak evaluation. It would be interesting to compare the outcome of these adaptations with existing machines for strong call-by-value [13, 7, 14] or strong call-by-need [11, 15]. |
2303.18063 | The Many Qualities of a New Directly Accessible Compression Scheme | We present a new variable-length computation-friendly encoding scheme, named
SFDC (Succinct Format with Direct aCcessibility), that supports direct and fast
accessibility to any element of the compressed sequence and achieves
compression ratios often higher than those offered by other solutions in the
literature. The SFDC scheme provides a flexible and simple representation
geared towards either practical efficiency or compression ratios, as required.
For a text of length $n$ over an alphabet of size $\sigma$ and a fixed
parameter $\lambda$, the access time of the proposed encoding is proportional
to the length of the character's code-word, plus an expected
$\mathcal{O}((F_{\sigma - \lambda + 3} - 3)/F_{\sigma+1})$ overhead, where
$F_j$ is the $j$-th number of the Fibonacci sequence. Overall, it uses
$N+\mathcal{O}\big(n (\lambda - (F_{\sigma+3}-3)/F_{\sigma+1})\big) = N + \mathcal{O}(n)$ bits, where $N$ is the length of the encoded string.
Experimental results show that the performance of our scheme is, in some
respects, comparable with the performance of DACs and Wavelet Trees, which are
among the most efficient schemes. In addition, our scheme qualifies as a
\emph{computation-friendly compression} scheme, as it offers several features
that make it very effective in text processing tasks. In the string matching
problem, that we take as a case study, we experimentally prove that the new
scheme enables results that are up to 29 times faster than standard
string-matching techniques on plain texts. | Domenico Cantone, Simone Faro | 2023-03-31T13:49:18Z | http://arxiv.org/abs/2303.18063v1 | # The Many Qualities of a New Directly Accessible Compression Scheme
###### Abstract
We present a new variable-length computation-friendly encoding scheme, named SFDC (Succinct Format with Direct aCcessibility), that supports direct and fast accessibility to any element of the compressed sequence and achieves compression ratios often higher than those offered by other solutions in the literature. The SFDC scheme provides a flexible and simple representation geared towards either practical efficiency or compression ratios, as required. For a text of length \(n\) over an alphabet of size \(\sigma\) and a fixed parameter \(\lambda\), the access time of the proposed encoding is proportional to the length of the character's code-word, plus an expected \(\mathcal{O}((F_{\sigma-\lambda+3}-3)/F_{\sigma+1})\) overhead, where \(F_{j}\) is the \(j\)-th number of the Fibonacci sequence. Overall, it uses \(N+\mathcal{O}\big{(}n\left(\lambda-(F_{\sigma+3}-3)/F_{\sigma+1}\right)\big{)} =N+\mathcal{O}(n)\) bits, where \(N\) is the length of the encoded string. Experimental results show that the performance of our scheme is, in some respects, comparable with the performance of DACs and Wavelet Trees, which are among the most efficient schemes. In addition, our scheme qualifies as a _computation-friendly compression_ scheme, as it offers several features that make it very effective in text processing tasks. In the string matching problem, which we take as a case study, we experimentally prove that the new scheme enables results that are up to 29 times faster than standard string-matching techniques on plain texts.
## 1 Introduction
The problem of text compression involves modifying the representation of any given text in plain format (referred to as _plain text_) so that the output format requires less space (as little as possible) for its storage and, consequently, less time for its transmission. The compression schemes must guarantee that the plain text can be reconstructed exactly from its compressed representation.
Compressed representations allow one also to speed up algorithmic computations, as they can make better use of the memory hierarchy available in modern PCs, reducing disk access time. In addition, compression finds application in the fields of compressed data structures [20, 21], which allow the manipulation of data directly in their encoded form, finding heavy use in bioinformatics, database systems, and search engines. In these contexts, compression schemes that allow direct and fast access to the elements of the encoded sequence are of fundamental importance.
We tacitly assume that, in an uncompressed text \(y\) of length \(n\) over an alphabet \(\Sigma\) of size \(\sigma\), every symbol is represented by \(\lceil\log\sigma\rceil\) bits, for a total of \(n\lceil\log\sigma\rceil\) bits.1 On the other hand, using a more efficient variable-length encoding, such as Huffman's optimal compression scheme [14], each character \(c\in\Sigma\) can be represented with a code \(\rho(c)\), whose (variable) length depends on the absolute frequency \(f(c)\) of \(c\) in the text \(y\), allowing the text to be represented with
\(\sum_{c\in\Sigma}f(c)|\rho(c)|\leqslant n\lceil\log\sigma\rceil\) bits. The other side of the coin is that variable-length codes suffer from a significant problem, namely the impossibility of directly accessing the \(i\)-th code-word in the encoded string, for any \(0\leqslant i<n\), since its position in the encoded text depends on the sum of the lengths of the encodings of the characters that precede it. This is why, over the years, various encoding schemes have been presented that are able to complement variable-length codes with direct access to the characters of the encoded sequence. Dense Sampling [11], Elias-Fano codes [8], Interpolative coding [19, 24], Wavelet Trees [13], and DACs [2, 3] are just some of the best known and most efficient solutions to the problem. However, the possibility of directly accessing the encoding of any character of the text comes at a cost, in terms of additional space used for the encoding, ranging from \(\mathcal{O}(n\log N)\) for Dense Sampling to \(\mathcal{O}(N)\) for the case of Wavelet Trees.
### Our Results
In this paper, we present a new variable-length encoding format for integer sequences that supports direct and fast access to any element of the compressed sequence. The proposed format, dubbed SFDC (Succinct Format with Direct aCcessibility), is based on variable-length codes obtained from existing compression methods. For presentation purposes, in this paper we show how to construct our SFDC from Huffman codes. We also prove experimentally that the proposed new format is efficient and compares very well against other coding schemes that support direct accessibility.
The proposed encoding is based on a very simple format that can be outlined in brief as follows. For a text \(y\) of length \(n\) and a fixed parameter \(\lambda\), the SFDC maintains \(\lambda\) bitstreams, called _layers_. The last layer is special and is called _dynamic layer_. The encodings of the characters in the text are not concatenated sequentially; rather, the encoding of the character \(y[i]\) (where \(0\leqslant i<n\)) is spread across the \(i\)-th positions of the first \(\lambda-1\) layers. If the encoding of \(y[i]\) has more than \(\lambda-1\) bits, the exceeding bits are considered as pending. All pending bits are arranged in the dynamic layer according to a last-in first-out strategy. That's all.
Despite its apparent simplicity, which can be regarded as a value in itself, many interesting (and surprising) qualities emerge from the proposed encoding scheme. To the extent that we show in this paper, the SFDC encoding is relevant for the following reasons:
* it allows direct access to text characters in (expected) constant time, through a conceptually simple and easy to implement model;
* it achieves compression ratios that, under suitable conditions, are superior to those offered by other solutions in the literature;
* it offers a flexible representation that can be adapted to the type of application at hand, where one can prefer, according to her needs, aspects relating to efficiency versus those relating to space consumption;
* it is designed to naturally allow parallel accessing of multiple data, parallel-computation, and adaptive-access on textual data, allowing its direct application to various text processing tasks with the capability of improving performance up to a factor equal to the word length.
Unlike other direct-access coding schemes, SFDC is based on a model that offers a constant access time in the expected case, when character frequencies have a low variability along the text. Specifically, we prove that our compression scheme uses, overall, a number of bits equal to \(N+\mathcal{O}\big{(}n\left(\lambda-(F_{\sigma+3}-3)/F_{\sigma+1}\right)\big{)} =N+\mathcal{O}(n)\), where \(F_{j}\) is the \(j\)-th number of the Fibonacci sequence and \(N\) is the length of the encoded string. In addition, our encoding allows accessing
each character of the text in time proportional to the length of its encoding, plus an expected \(\mathcal{O}((F_{\sigma-\lambda+3}-3)/F_{\sigma+1})\) time overhead.
We also suggest some modifications to further reduce the space used by giving up some features of the encoding, proving that these reductions are significant.
From an experimental point of view, we show that our scheme is particularly efficient in practical cases and is, in some respects, comparable with the performance of DACs [2, 3] and Wavelet Trees [13], which are among the most efficient schemes in the literature.
Finally, we show experimentally how SFDC can be effectively used in text processing applications. Through the adaptation of a simple string-matching algorithm to the case of matching over encoded strings, we show how the new scheme enables results that are up to 29 times faster than standard string-matching techniques on plain texts.
The paper is organized as follows. In Section 2, we briefly review the terminology and notation used throughout the paper. Then, in Section 3, we describe the Huffman encoding and present some results related to the main direct-access representations based on variable-length encoding. In Section 4, we introduce in detail our SFDC representation, presenting its encoding and decoding procedures and discussing its potential practical applications to text processing problems. Subsequently, in Section 6, we estimate the performance of SFDC in terms of decoding delay. In Section 7, a more succinct variant, named \(\gamma\)-SFDC, is presented and its encoding and decoding procedures are discussed, along with a brief performance analysis. Finally, in Section 8, we present some experimental results on real data in order to compare SFDC with DACs, one of the most recent and effective representations based on variable-length codes that allows for direct access. Conclusions and possible future developments are discussed in Section 9.
## 2 Notation and Terminology
In this section we briefly review the terminology and notation used throughout the paper. A string \(y\) of length \(n\geqslant 0\) over a finite alphabet \(\Sigma\) is represented as an array \(y[0\,..\,n-1]\) of characters from \(\Sigma\). In particular, when \(n=0\), we have the empty string \(\varepsilon\). For \(0\leqslant i\leqslant j<n\), we denote by \(y[i]\) and \(y[i\,..\,j]\) the \(i\)-th character of \(y\) and the substring of \(y\) contained between the \(i\)-th and the \(j\)-th characters of \(y\), respectively. For every character \(c\in\Sigma\), we denote by \(f(c)\) the number of occurrences of \(c\) in \(y\) and by \(f^{r}(c)\) the value \(f(c)/n\), where \(n\) is the length of \(y\), and call the maps \(f\colon\Sigma\to\{0,1,\ldots,n-1\}\) and \(f^{r}\colon\Sigma\to[0,1]\) the _absolute_ and the _relative frequency maps_ of \(y\), respectively.
A string \(y\) over the binary alphabet \(\Sigma=\{0,1\}\), represented as a bit vector \(y[0\,..\,n-1]\), is a _binary string_. Bit vectors are usually structured in sequences of _blocks_ of \(q\) bits, typically bytes (\(q=8\)), half-words (\(q=16\)), or words (\(q=32\)). A block of bits can be processed at the cost of a single operation. If \(p\) is a binary string of length \(n\) we use the symbol \(P[i]\) to indicate the \(i\)-th block of \(p\). For a block \(B\) of \(q\) bits, we denote by \(B[j]\) (or \(B_{j}\)) the \(j\)-th bit of \(B\), where \(0\leqslant j<q\). Thus, for \(i=0,\ldots,n-1\) we have \(p[i]=P[\lfloor i/q\rfloor][i\mod q]\).
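As a small illustration, the following Python sketch (ours) shows the block decomposition of a bit vector; identifying the \(j\)-th bit of a block with its \(j\)-th least significant bit is our own assumption, since the intra-block orientation is not fixed by the notation above.

```python
q = 8  # block size in bits (bytes); could also be 16 or 32

def get_bit(blocks, i):
    """Return p[i] = P[i // q][i mod q], with blocks stored as integers."""
    return (blocks[i // q] >> (i % q)) & 1

def set_bit(blocks, i, b):
    """Set the i-th bit of the bit vector to b (0 or 1)."""
    if b:
        blocks[i // q] |= 1 << (i % q)
    else:
        blocks[i // q] &= ~(1 << (i % q))

# Example: a bit vector of length 20 needs ceil(20 / 8) = 3 blocks.
blocks = [0] * 3
set_bit(blocks, 13, 1)
assert get_bit(blocks, 13) == 1 and get_bit(blocks, 12) == 0
```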
The word size of the main memory in the target machine is denoted by \(w\). We assume that the value \(w\) is fixed, so that all references to a string will only be to entire blocks of \(w\) bits.
Finally, we recall the notation of some bitwise infix operators on computer words, namely the bitwise and "&", the bitwise or "\(|\)", and the left shift "\(\ll\)" and right shift "\(\gg\)" operators, which shift to the left or to the right, respectively, their first argument by a number of bits equal to their second argument.
## 3 Huffman Variable-Length Encoding and Direct Accessibility
The best known method to compress a string \(y\) of length \(n\) is the _statistical encoding_. Assume that \(\Sigma=\{c_{0},c_{1},\ldots,c_{\sigma-1}\}\) is the set of all the distinct characters appearing in \(y\), ordered by their frequencies in \(y\), so that each character \(c_{i}\) occurs in \(y\) at most as frequently as any other character \(c_{j}\) such that \(0\leqslant i<j<\sigma\). Statistical encoding consists in mapping any character \(c\in\Sigma\) to a variable-length binary string \(\rho(c)\), assigning shorter encodings to more frequent characters and, consequently, longer encodings to less frequent characters, in order to minimize the size of the encoded text. Huffman coding [14] is the best (and best known) method in the statistical encoding approach, achieving the minimum encoding length, in number of bits, yet allowing unique decoding.
There are several examples of variable-length coding, and to list them all is beyond the scope of this paper. Instead, we take Huffman's encoding as a reference point for the design of our succinct representation. As with other representations, the idea behind our proposal is flexible enough to be adapted to other variable-length encodings.
The Huffman algorithm computes an optimal _prefix code_, relative to given frequencies of the alphabet characters, where we recall that a prefix code is a set of (binary) words containing no word that is a prefix of any other word in the set. Thanks to such a property, decoding is particularly simple. Indeed, a binary prefix code can be represented by an ordered binary tree, whose leaves are labeled with the alphabet characters and whose edges are labeled by \(0\) (left edges) and \(1\) (right edges) in such a way that the code-word of an alphabet character \(c\) is the word labeling the branch from the root to the leaf labeled by the same character \(c\).
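For concreteness, the following Python sketch (our own toy implementation, with a Node class and a code set that are not taken from the paper) shows the standard tree-walk decoding of a single code-word of a prefix code.

```python
class Node:
    def __init__(self, symbol=None, left=None, right=None):
        self.symbol, self.left, self.right = symbol, left, right
        self.leaf = left is None and right is None

def decode_one(root, bits):
    """Walk the code tree, consuming bits (0 = left, 1 = right) until a leaf."""
    node = root
    for b in bits:
        node = node.right if b else node.left
        if node.leaf:
            return node.symbol
    return None  # ran out of bits before reaching a leaf

# Toy prefix code: a -> 0, b -> 10, c -> 11
root = Node(left=Node('a'), right=Node(left=Node('b'), right=Node('c')))
assert decode_one(root, iter([1, 0])) == 'b'
assert decode_one(root, iter([0])) == 'a'
```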
Prefix code trees, as computed by the Huffman algorithm, are called _Huffman trees_. These are not unique, by any means. The usually preferred tree for a given set of frequencies, out of the various possible Huffman trees, is the one induced by _canonical Huffman codes_[22]. Such a tree has the property that the sequence of leaf depths, scanned from left to right, is non-decreasing.
The main problem with variable-length encodings lies in the impossibility of directly accessing the \(i\)-th element of the encoded string, since its position in the encoded sequence depends on the sum of the lengths of the encodings of the characters that precede it. This is not a problem in applications where data must be decoded from the beginning of the string. For example, in the implementation of the well-known Knuth-Morris-Pratt algorithm [15] for exact string matching, the algorithm scans the entire text, proceeding from left to right. This would allow for an efficient sequential decoding of the characters in the text. However, staying with the same problem, a direct translation of the equally well-known Boyer-Moore algorithm [1] would not be possible, since it would require accessing positions in the text before accessing the ones that precede them.
The typical solution to provide direct access to a variable-length encoded sequence is _sparse sampling_, which consists in maintaining the initial position in the encoded string of every \(h\)-th element, for a fixed integer parameter \(h>1\). This results in a space overhead of \(\lceil n/h\rceil\lceil\log N\rceil\) bits and a time overhead of \(\mathcal{O}(h\rho_{max})\) to directly access any element, where \(\rho_{max}\) is the maximum length of any code-word generated by the Huffman algorithm and \(N\) is the length of the encoded text. An improvement in this direction is represented by _dense sampling_[11], which codes the \(i\)-th character of the text using only \(\log y[i]\) bits and two levels of pointers to characters in the text, allowing one to directly access any character at the cost of \(\mathcal{O}(n(\log\log N+\log\log\sigma))\) extra space.
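A minimal Python sketch of sparse sampling (ours; the parameter decode_symbol stands for any routine that consumes bits from an iterator and returns one decoded symbol, for instance a prefix-tree walk as above) shows where the \(\mathcal{O}(h\rho_{max})\) access cost comes from.

```python
def build_samples(code_lengths, h):
    """code_lengths[i] = |rho(y[i])|; returns bit offsets of y[0], y[h], y[2h], ..."""
    samples, pos = [], 0
    for i, length in enumerate(code_lengths):
        if i % h == 0:
            samples.append(pos)
        pos += length
    return samples

def sampled_access(bits, samples, h, i, decode_symbol):
    """bits: the encoded text as a list of 0/1 values.
    Jump to the sample preceding position i, then decode at most h symbols."""
    it = iter(bits[samples[i // h]:])
    for _ in range(i % h):         # skip the symbols between the sample and i
        decode_symbol(it)
    return decode_symbol(it)
```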
Another solution is _Interpolative Coding_[19, 24], designed to allow fast access both to the symbol having a given position and to the symbol for which the sum of all symbols preceding it exceeds a certain threshold. Both operations are supported in \(\mathcal{O}(\log n)\) time by means of a balanced virtual tree on the sequence of the encoded values, where the encoding of a subtree is preceded by the sum
of the values and the encoding size of the left subtree. The space overhead is only \(\mathcal{O}(n\log N/\log n)\) bits, offering a good combination of space and time efficiency for both operations; in practice, however, the scheme does not perform well when compared to the other solutions.
An additional elegant solution is provided by _Wavelet Trees_[13], by Grossi _et al._, a data structure that can be used for the representation of a sequence of \(n\) symbols encoded with variable-length encoding, allowing for direct access in time proportional to the length of the encoding. The root of the wavelet tree is a bitmap containing the first bits of all code-words \(\rho(y_{i})\), for \(0\leqslant i<n\). If there are some bits equal to \(0\) in this bitmap, then the left child of the node contains the next bit of the code-words that begin with a bit set to \(0\). Similarly, if there are some bits set to \(1\) in the bitmap, then the right child of the node contains the next bit of those code-words that begin with a bit set to \(1\). The operation continues recursively on both children. The total number of bits in the wavelet tree is exactly \(N\), but the decoding operation makes use of a rank function that can be computed in constant time using \(o(N)\) additional bits.
Finally, we mention the _Directly Addressable Codes_ (DACs) [2, 3], a scheme that makes use of the generalized Vbyte coding [26]. Given a fixed integer parameter \(b>1\), the symbols are first encoded into a sequence of \((b+1)\)-bit chunks, which are then arranged in \(\lceil\log(\sigma)/b\rceil\) bitstreams, where the \(i\)-th bitstream contains the \(i\)-th least significant chunk of every code-word.2 Each bitstream is separated into two parts. The lowest \(b\) bits of the chunks are stored contiguously in an array \(A\), whereas the highest bits are concatenated into a bitmap \(B\) that indicates whether there is a chunk of that code-word in the next bitstream. To find the position of the corresponding chunk in the next bitstream, a rank query data structure is needed on the bitmap \(B\). The overall space is essentially that of the Vbyte representation of the sequence, plus (significantly) lower-order terms, and specifically \(\mathcal{O}(\log\sigma)\) space for the pointers and \(\mathcal{O}((N\log\log N)/(\sqrt{N_{0}/n}\log N))\) for the rank data structure, where \(N_{0}\coloneqq\sum_{i=0}^{n-1}(\lfloor\log y[i]\rfloor+1)\) is the total length of the representation if we could assign just the minimum number of bits required to represent each symbol of the text.
Footnote 2: Using Vbyte coding, the best choice for the space upper bound is \(b=\sqrt{N_{0}/n}\), achieving \(N\leqslant N_{0}+2n\sqrt{N_{0}/n}\), which is still worse than other variable-length encoding schemes but allows for very fast decoding, making DACs particularly efficient solutions in practice.
Table 1 summarises the main direct access compression schemes based on variable length codes, indicating the number of bits required for the encoding and the access time to a character of the text.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Method & Reference & Overall Space & Access to \(y[i]\) \\ \hline Sparse Sampling & - & \(N+\lceil n/h\rceil\lceil\log(N)\rceil\) & \(\mathcal{O}(h\rho_{max})\) \\ Dense Sampling & [11] & \(N+n(\log\log N+\log\log\sigma)\) & \(\mathcal{O}(|\rho(y[i])|)\) \\ Interpolative Coding & [19, 24] & \(N+\mathcal{O}(n\log(N)/\log(n))\) & \(\mathcal{O}(\log n)\) \\ Wavelet Tree & [13] & \(N+o(N)\) & \(\mathcal{O}(|\rho(y[i])|)\) \\ DACs & [2, 3] & \(\mathcal{O}((N\log\log N)/(\sqrt{N_{0}/n}\log N)+\log\sigma)\) & \(\mathcal{O}(N/(n(\sqrt{N_{0}/n})))\) \\ SFDC & This paper & \(N+\mathcal{O}(n)\) & \(\mathcal{O}(|\rho(y[i])|)\) Expected \\ \hline \end{tabular}
\end{table}
Table 1: Some of the main compression schemes based on variable-length codes that allow for direct access to characters in the encoded sequence
## 4 Succinct Format with Direct Accessibility
In this section, we present in detail our proposed SFDC representation and its related encoding and decoding procedures.
Again, let \(y\) be a text of length \(n\), over an alphabet \(\Sigma\) of size \(\sigma\), and let \(f\colon\Sigma\to\{0,..,n-1\}\) be its corresponding frequency function.
Let us assume that we run the Huffman algorithm on \(y\), so as to generate a set of optimal prefix binary codes for the characters of \(\Sigma\), represented by the map \(\rho\colon\Sigma\to\{0,1\}^{+}\). For any \(c\in\Sigma\), we denote by \(|\rho(c)|\) the length of the binary code \(\rho(c)\).
Without loss of generality, we assume that the alphabet \(\Sigma=\{c_{0},c_{1},\ldots,c_{\sigma-1}\}\) is arranged as an ordered set in non-decreasing order of frequencies, so that \(f(c_{i})\leqslant f(c_{i+1})\) holds, for \(0\leqslant i<\sigma-1\). Thus, \(c_{0}\) is the least frequent character in \(y\), while \(c_{\sigma-1}\) is the most frequent character in \(y\).
Based on the codes set shown in Fig. 2, the string Compression, of length \(11\), is for instance encoded by the binary sequence
\[\texttt{1100011010}\cdot\texttt{1100111}\cdot\texttt{101}\cdot\texttt{01101 01}\cdot\texttt{11101}\cdot\texttt{01}\cdot\texttt{001}\cdot\texttt{001}\cdot \texttt{11010}\cdot\texttt{1100111}\cdot\texttt{010}\,,\]
where the individual character codes have been separated by dots to enhance readability.
Let \(\max(\rho)\coloneqq\max\{|\rho(c)|:c\in\Sigma\}\) be the length of the longest code of any character in \(\Sigma\), and assume that \(\lambda\) is a fixed constant such that \(1<\lambda\leqslant\max(\rho)\). The SFDC codes any string \(y\) of length \(n\) as an ordered collection of \(\lambda\) binary strings representing \(\lambda-1\)_fixed layers_ and an additional _dynamic layer_. The number \(\lambda\) of layers is particularly relevant for the encoding performance. We will refer to this parameter as the _size_ of the SFDC representation.
The first \(\lambda-1\) binary strings have length \(n\); we denote them by \(\widehat{Y}_{0},\widehat{Y}_{1},\ldots,\widehat{Y}_{\lambda-2}\). Specifically, the \(i\)-th binary string \(\widehat{Y}_{i}\) is the sequence of the \(i\)-th bits (if present, \(0\) otherwise) of the encodings of the characters in \(y\), in the order in which they appear in \(y\). More formally, for \(0\leqslant i\leqslant\lambda-2\), we have
\[\widehat{Y}_{i}\coloneqq\big{\langle}\rho(y[0])[i],\,\rho(y[1])[i],\,\ldots, \,\rho(y[n-1])[i]\big{\rangle},\]
where if \(i\geqslant|\rho(y[j])|\) we put \(\rho(y[j])[i]=0\), for \(0\leqslant j<n\). Thus, each binary string \(\widehat{Y}_{i}\) can be regarded as an array \(Y_{i}\) of \(\lceil n/w\rceil\) binary blocks of size \(w\). We refer to the bit vectors \(\widehat{Y}_{0},\widehat{Y}_{1},\ldots,\widehat{Y}_{\lambda-2}\) as the _fixed layers_ of the encoding, and to the bits stored in them as the _fixed bits_.

Figure 1: Examples of the reorganization of the bits of two characters’ codes: \(\rho(y[i])\) has length \(\lambda-1\) and fits within the \(\lambda\) layers of the representation; \(\rho(y[j])\) has length \(\lambda+3\) and its \(3\) pending bits are arranged along the dynamic layer.
The last layer of the encoding is a binary string \(\widehat{Y}_{D}\coloneqq\widehat{Y}_{\lambda-1}\), of length \(n_{D}\geqslant n\), which suitably gathers all the bits at positions \(\lambda-1,\lambda,\ldots\) of the character encodings whose length exceeds \(\lambda-1\). We refer to such an additional layer as the _dynamic layer_, and to the bits in it as the _pending bits_.
Fig. 2 shows in the center the SFDC representation of the string Compression with 5 fixed layers. The fixed layers are represented with a white background, while the dynamic layer has a grey background. Specifically, on the left, a relaxed representation is shown in which, for each encoding, all the pending bits (i.e., the bits past position \(\lambda-2\)) are linearly arranged in the additional space of the dynamic layer \(\widehat{Y}_{D}\).
Pending bits are arranged within the dynamic layer proceeding from left to right and following a last-in first-out (LIFO) scheme. Specifically, such a scheme obeys the following three rules:
1. If the encoding \(\rho(y[i])\) has more than one pending bit, then each bit \(\rho(y[i])[k+1]\) is stored on the dynamic layer \(\widehat{Y}_{D}\) at some position on the right of that at which the bit \(\rho(y[i])[k]\) has been stored, for \(\lambda-1\leqslant k<|\rho(y[i])|-1\).
2. For \(0\leqslant i<j<n\), each pending bit of \(\rho(y[i])\) (if any) is stored in the dynamic layer either in a position \(p\) such that \(i\leqslant p<j\) or to the right of all positions at which the pending bits (if any) of \(\rho(y[j])\) have been stored.
3. Every pending bit is stored in the leftmost available position in the dynamic layer, complying with Rules 1-2.
Fig. 2 shows, on the right, the layout of the pending bits, arranged according to the LIFO scheme just illustrated. To better understand it, the pending bits of \(\rho(y[0])\), \(\rho(y[1])\), \(\rho(y[3])\), and \(\rho(y[9])\) have been coloured in red, green, orange and blue, respectively. According to Rules 1, 2, and 3, the first pending bit of \(\rho(y[0])\), \(\rho(y[1])\), \(\rho(y[3])\), and \(\rho(y[9])\) is stored at positions 0, 1, 3 and 9 of the dynamic layer \(\widehat{Y}_{D}\), respectively. Next, according to Rules 1, 2, and 3, the second pending bit of \(\rho(y[1])\), \(\rho(y[3])\), and \(\rho(y[9])\) is stored at positions 2, 4, and 10 of \(\widehat{Y}_{D}\), respectively. Finally, according to Rules 1, 2, and 3, the last 4 bits of \(\rho(y[0])\) are stored at positions 5, 6, 7 and 8 of \(\widehat{Y}_{D}\), respectively. This is possible since \(\rho(y[2])\), \(\rho(y[4])\), \(\rho(y[5])\), \(\rho(y[6])\), \(\rho(y[7])\), \(\rho(y[8])\) and \(\rho(y[10])\) have no pending bits.
In the following sections, we will describe in detail the encoding and decoding procedures of the SFDC scheme.
### Encoding
The encoding procedure is shown in Fig. 3 (on the left). The procedure takes as input a text \(y\) of length \(n\), the mapping function \(\rho\) generated by the Huffman compression algorithm, and the number \(\lambda\) of layers adopted for the SFDC representation. We recall that the mapping function \(\rho\) associates each character \(c\in\Sigma\) with a variable-length binary code, and that the set of codes \(\{\rho(c)\ |c\in\Sigma\}\) is prefix-free.
The encoding procedure uses a stack \(S\) to implement the LIFO strategy for the arrangement of pending bits within the dynamic layer \(\widehat{Y}_{D}\). The main for loop of the algorithm (lines 3-12) iterates over all \(n\) characters of the text. At the \(i\)-th iteration the algorithm takes care of inserting the first bits of the encoding \(\rho(y[i])\) (lines 5-7) in the fixed layers, up to a maximum of \(\lambda-1\) bits.
If \(|\rho(y[i])|<\lambda-1\), some bits in the fixed layers, namely \(\lambda-|\rho(y[i])|-1\) bits, will remain idle. Conversely, when the length of the encoding of the character \(y[i]\) exceeds the value \(\lambda-1\), the remaining (pending) bits are pushed in reverse order into the stack \(S\) (lines 8-9).
At each iteration of the main for loop, one bit is popped from the stack (if it is not empty) and stored in the dynamic layer (lines 10-11). Should the stack be empty at the \(i\)-th iteration, the \(i\)-th bit of the dynamic layer will remain idle.
If, at the end of the main while loop, the stack still contains some pending bits, these are placed at the end of the dynamic layer (lines 13-15). This is where the length of the dynamic layer may exceed that of the fixed layers.
Assuming that each layer of length \(n\), used for the representation, can be initialised at \(0^{n}\) in constant time (or at most in \(\lceil n/w\rceil\) steps), the execution time of the encoding algorithm for a text \(y\) of length \(n\) is trivially equal to the sum of the encoding lengths of each individual character. Thus the encoding of a text of length \(n\) is performed in \(\mathcal{O}(N)\)-time, where \(N\) is the length of the encoded text. No additional time is consumed by the algorithm.
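The following Python sketch is our own re-implementation of the strategy just described, not the pseudocode of Fig. 3: the first \(\lambda-1\) bits of each code-word are written to the fixed layers, pending bits go through a stack, and one bit per position is emitted on the dynamic layer. Here rho is assumed to map each character to a list of bits.

```python
def sfdc_encode(y, rho, lam):
    """Build the SFDC layers of text y with lam layers (lam >= 2)."""
    n = len(y)
    fixed = [[0] * n for _ in range(lam - 1)]    # fixed layers Y_0 .. Y_{lam-2}
    dynamic = []                                  # dynamic layer Y_D
    stack = []                                    # pending bits (LIFO)
    for i, c in enumerate(y):
        code = rho[c]
        for k in range(min(len(code), lam - 1)):  # first bits: fixed layers
            fixed[k][i] = code[k]
        for k in range(len(code) - 1, lam - 2, -1):
            stack.append(code[k])                 # pending bits, pushed in reverse
        dynamic.append(stack.pop() if stack else 0)   # idle bit = 0
    while stack:                                  # leftover pending bits
        dynamic.append(stack.pop())
    return fixed, dynamic
```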
### Decoding
Although the decoding of a character from a text with an SFDC representation takes place via a direct access to the position where its encoding begins, it does not necessarily take constant time. In fact, during the decoding of a character \(y[i]\), it may be necessary to address some _additional work_ related to a certain number of additional characters that have to be decoded in order to complete the full decoding of \(y[i]\). In particular, if the last bit of the encoding of \(y[i]\) is placed at position \(j\) in the dynamic layer, where \(j\geqslant i\), then the procedure is forced to decode all the characters from position \(i\) to position \(j\) of the text, \(y[j]\) included. We refer to this additional work as a _decoding delay_, meaning by this term the number \(j-i\) of additional characters that must be decoded in order to complete the decoding of the character \(y[i]\). In the case where no characters other than \(y[i]\) need to be decoded, then \(j=i\) holds and the decoding delay is \(0\).

Figure 2: The SFDC representation of the string Compression with \(\lambda=6\) (5 fixed layers and an additional dynamic layer). The fixed layers have a white background, while the dynamic layer has a grey one. In the middle, a relaxed representation is shown in which, for each encoding, all pending bits (i.e., the bits past position \(\lambda-2\)) are arranged in two dimensions in the additional space of the dynamic layer. On the right, the pending bits have been arranged linearly along the dynamic layer according to a last-in first-out approach. Idle bits are indicated with the symbol -.
Despite the presence of such a delay, when the whole text, or just a window of it, needs to be decoded, the additional work done during the delay is not lost: due to the LIFO distribution of pending bits within the dynamic layer, it may indeed happen that a character at position \(j\) is decoded before a character at position \(i\), with \(i<j\). As we shall see later in this section, this happens when the pending bits of the code \(\rho(y[i])\) are stored after position \(j\).
The decoding procedure is shown in Fig. 3 (on the right). We assume that the procedure is invoked to decode the window \(y[i..j]\) of the text, with \(i\leqslant j\). The procedure takes as input the SFDC representation \(Y\) of the text \(y\) of length \(n\), the starting position, \(i\), and the ending position, \(j\), of the window to be decoded, the root of the Huffman tree, and the number \(\lambda\) of layers adopted for the SFDC representation. Note that if one wants to decode a single character at position \(i\), the procedure needs to be invoked with \(j=i\). Likewise, to decode the entire text, the decoding procedure needs to be invoked with \(i=0\) and \(j=n-1\).
Figure 3: Procedures encode and decode for encoding and decoding a text \(y\) of length \(n\) using the scheme SFDC
Decoding of character \(y[i]\) is done by scanning a descending path on the Huffman tree, which starts at the root of the tree and ends in a leaf. Specifically, by scanning the bits of the encoding \(\rho(y[i])\), we descend into the left subtree, if we find a bit equal to \(0\), or to the right subtree, if we find a bit equal to \(1\). We assume that each node \(x\) in the tree has a Boolean value associated with it, \(x\).leaf, which is set to True in the case where the node \(x\) is a leaf. We further assume that \(x\).left allows access to the left child of \(x\), while \(x\).right allows access to the right child of node \(x\). If \(x\) is a leaf node, then it contains an additional parameter, \(x\).symbol, which allows access to the character of the alphabet associated with the encoding.
Again, in a completely symmetrical manner to the encoding procedure, we maintain a stack to extract the pending bits in the dynamic layer. The stack stores the nodes in the Huffman tree relating to those characters in the text for which decoding has started but it has not been completed yet. Within the stack, the node \(x\) used for decoding the character \(y[p]\) is coupled to the position \(p\) itself, so that the symbol can be positioned in constant time once decoded. Thus the elements of the stack are of the form \(\langle x,p\rangle\).
For simplicity, in this presentation we denote by \(x_{p}\) the node within the Huffman tree used for decoding the character \(y[p]\). Since we use a LIFO strategy for bit arrangements in the dynamic layer, if two nodes \(x_{i}\) and \(x_{j}\) are contained within the stack, with \(1\leqslant i<j\leqslant n\), then node \(x_{j}\) is closer to the top of the stack than \(x_{i}\) is.
Each iteration of the main while loop of the procedure (lines 4-22) explores vertically the \(k\)-th bit of each layer, proceeding from the first layer \(Y_{0}\) towards the last layer \(Y_{D}\), in an attempt to decode the \(k\)-th character of the text, for \(k\geqslant i\). The main loop stops when the entire window, \(y[i..j]\), has been fully decoded. This condition occurs when the character \(y[j]\) has been decoded (i.e. \(y[j]\neq\)Nil) and the stack is empty, a condition which ensures that all characters preceding position \(j\) in the window have already been decoded (line 4).
Each iteration of the main while loop is divided into two parts: the first part (lines 5-14), which is executed only if \(k<n\), explores the Huffman tree, starting from the root and proceeding towards the leaves, in the hope of arriving directly at the decoding of the \(k\)-th character (this happens when \(|\rho(y[k])|\leqslant\lambda\)). If a leaf is reached at this stage, the corresponding symbol is transcribed to position \(y[k]\) (line 12), otherwise the node \(x_{k}\) remains on hold and is placed on the top of the stack (line 13). The second part of the loop (lines 15-21) scans, if necessary (namely when \(S\neq\emptyset\)), the bit located in the dynamic layer \(Y_{D}\) of the representation and updates the node \(x_{p}\) at the top of the stack accordingly. If the node \(x_{p}\) reaches the position of a leaf, the corresponding symbol is transcribed to the position \(y[p]\) and the node is deleted from the stack.
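To make the interplay between the fixed layers and the dynamic layer concrete, the following Python sketch mirrors the window-decoding procedure just described. It is only an illustration, not the pseudocode of Fig. 3: the layout with \(\lambda\) fixed layers plus one dynamic layer, the minimal `Node` class, and the use of a FIFO queue in place of the stack are assumptions of this sketch, and pending bits spilling into the window from characters preceding position \(i\) are ignored.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Node:
    # minimal Huffman-tree node: a leaf carries a symbol, an internal node two children
    left: "Node" = None
    right: "Node" = None
    symbol: str = None

    @property
    def leaf(self):
        return self.symbol is not None

def sfdc_decode(fixed, dyn, i, j, root, lam):
    """Decode the window y[i..j]; fixed is a list of lam bit vectors, dyn is the dynamic layer."""
    n = len(dyn)
    out = {}                                   # text position -> decoded symbol
    pending = deque()                          # FIFO of (node, position) not yet fully decoded
    k = i
    while not (j in out and not pending):
        if k < n:
            node, h = root, 0
            while h < lam and not node.leaf:   # descend using the vertical bits of the fixed layers
                node = node.left if fixed[h][k] == 0 else node.right
                h += 1
            if node.leaf:
                out[k] = node.symbol
            else:
                pending.append((node, k))      # decoding of y[k] continues in the dynamic layer
        if pending and k < n:
            node, p = pending.popleft()        # the oldest pending character owns the bit dyn[k]
            node = node.left if dyn[k] == 0 else node.right
            if node.leaf:
                out[p] = node.symbol
            else:
                pending.appendleft((node, p))
        k += 1
    return out
```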
The time complexity required to decode a single character \(y[i]\) is \(\mathcal{O}(|\rho(y[i])|+d)\), while decoding a text window \(y[i..j]\) requires \(\mathcal{O}(\sum_{k=i}^{j}|\rho(y[k])|+d)\), where \(d\) is the decoding delay. In Section 6, we prove that the expected value of the delay is, in the worst case, equal to \(\mathcal{O}((F_{\sigma-\lambda+3}-3)/F_{\sigma+1})\), which is less than \(1.0\) for \(\sigma\geqslant 4\).
### Computing the Average Decoding Delay
The decoding delay is a key feature for the use of SFDC in practical applications, since the expected access time to text characters is connected to it. This value depends on the distribution of characters within the text and on the number \(\lambda\) of layers used for representation. Ideally, the SFDC should use the minimum number of layers to allow the expected value of the decoding delay to fall below a certain user-defined bound.
Since we do not know the frequency of the characters within the text nor their distribution, it is not possible to define a priori the number of layers that is most suitable for our application. However, it is possible to compute the expected decoding delay value by simulating the construction of the SFDC on the text for a given number of layers. Should this value be above the bound, we would repeat the computation with a higher number of layers, until we reach a value of \(\lambda\) that guarantees an acceptable delay.
Fig. 4 shows the procedure compute-delay used for computing the average decoding delay of the SFDC of a text \(y\) of length \(n\) using \(\lambda\) layers. The procedure simulates the construction of the representation SFDC for the text \(y\), without actually realising it. The instruction flow of the procedure closely follows that used for encoding. However, in contrast to the encoding procedure, the stack maintains pairs of the type \(\langle p,h\rangle\), where \(p\) is the position in the text of the character \(y[p]\) whose pending bits are still waiting to be placed in the dynamic layer, while \(h\) is the position of the next bit in \(\rho(y[p])\) that is to be placed in the dynamic layer.
The average value of the delay, identified by the variable \(d\), is initialised to \(0\). During the \(i\)-th iteration, when a bit needs to be placed in the dynamic layer, a \(\langle p,h\rangle\) pair is extracted from the stack (provided it is not empty) and the bit \(\rho(y[p])[h]\) is placed at position \(i\) of the dynamic layer. If \(h+1\) is still less than \(|\rho(y[p])|\), then the pair \(\langle p,h+1\rangle\) is placed again at the top of the stack. Conversely, if the last pending bit of \(y[p]\) has been placed, then the delay value \(d\) is increased by the amount \(i-p\), which corresponds to the decoding delay of the character \(y[p]\). At the end of the procedure, the value \(d\) is divided by \(n\) in order to obtain the average value of the decoding delay.
Assuming that the \(\lambda\) layers of the SFDC can be initialised at \(0^{n}\) in constant time, the complexity of procedure compute-delay is equal to \(\mathcal{O}(N)\), where we recall that \(N=\sum_{i=0}^{n-1}|\rho(y[i])|\).
Figure 4: Procedure compute-delay for computing the average decoding delay of the SFDC of a text \(y\) of length \(n\) using \(\lambda\) layers. The procedure takes as input also the mapping \(\rho\) and the number \(\lambda\) of fixed layers.
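The same simulation can be written compactly in Python; the sketch below is illustrative only, and assumes \(\lambda\) fixed layers plus one dynamic layer, a FIFO queue in place of the stack, and a mapping `rho` from characters to their Huffman codes given as strings of '0'/'1' bits.

```python
from collections import deque

def sfdc_compute_delay(y, rho, lam):
    """Simulate the SFDC construction of y with lam fixed layers and return the average decoding delay."""
    pending = deque()                  # FIFO of (position p, index h of the next pending bit of rho(y[p]))
    total_delay = 0
    for i, ch in enumerate(y):
        if len(rho[ch]) > lam:         # the first lam bits fit vertically at position i,
            pending.append((i, lam))   # the remaining ones wait for slots of the dynamic layer
        if pending:                    # one dynamic-layer slot becomes available at position i
            p, h = pending.popleft()
            if h + 1 < len(rho[y[p]]):
                pending.appendleft((p, h + 1))
            else:
                total_delay += i - p   # the last pending bit of y[p] has been placed at position i
    return total_delay / len(y)
```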
## 5 Applications to Text Processing
Existing state-of-the-art compression schemes encode data by extensive and convoluted references between pieces of information, leading to strong compression guarantees, but often making it difficult to efficiently perform compressed text processing. Although recent developments have moved towards designing more computation-friendly compression schemes, achieving strong compression while still allowing for efficient computation, few solutions remain competitive against algorithms operating on plain texts.
In contrast, as we have just reported above, the SFDC encoding arranges textual data in a simple two-dimensional structure so that text processing may proceed both _horizontally_, by reading bits (or bit blocks) along a binary vector in a given layer, and _vertically_, by moving from a given layer to the next one while reconstructing the code of a character (or of a group of characters). Thanks to such a two-dimensional structure, the SFDC offers many favourable and interesting features that make it particularly suitable for use in text processing applications.
First of all, it naturally allows _parallel computation_ on textual data. Since a string is partitioned in \(\lambda\) independent binary vectors, it is straightforward to entrust the processing of each bit vector to a different processor, provided that the corresponding \(\lambda\) outputs are then blended.
It is also well suited for _parallel accessing of multiple data_. A single computer word maintains, indeed, partial information about \(w\) contiguous characters. For instance, assume we are interested in searching all occurrences of the character a in a given text \(y\), where \(\rho(\texttt{a})=0001011\). If, while inspecting the \(k\)-th block of the first layer, it is discovered that \(Y_{0}[kw\,..\,kw+w-1]=1^{w}\), then such a block can immediately be ignored, as we are sure it does not contain any occurrence of the character, whose code begins with a \(0\) bit. Although we can access only a fraction (namely a single bit) of each character code in the block, such information can be processed in constant time, also by exploiting the intrinsic parallelism of bitwise operations, which allows one to cut down the horizontal processing by a factor up to \(w\).
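As a small illustration of this kind of word-level parallelism, the sketch below assumes that the first layer has been packed into \(w\)-bit machine words, one per block of \(w\) consecutive text positions; the function name and layout are hypothetical.

```python
def candidate_blocks(layer0_words, w=64):
    # layer0_words: the first layer packed into w-bit integers, one integer per block.
    # A block whose w bits are all equal to 1 cannot contain a character whose code
    # begins with a 0 bit (such as a, with rho(a) = 0001011), so it is skipped
    # with a single word comparison.
    all_ones = (1 << w) - 1
    return [k for k, word in enumerate(layer0_words) if word != all_ones]
```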
In addition, it also allows _adaptive accesses_ to textual data. This means that processing may not always need to go all the way in depth along the layers of the representation, as in favourable cases accesses can stop already at the initial layers of the representation of a given character. For instance, assume as above to be interested in searching all occurrences of the character a in the text \(y\). If it is discovered that \(Y_{0}[i]=1\) while inspecting character \(y[i]\), then such a position can immediately be marked as a mismatch, and so it will not be necessary to scan the other layers. This feature may allow one, under suitable conditions, to cut down the vertical data processing by a factor up to \(\lambda\).
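A corresponding sketch of the adaptive (early-exit) vertical check, under the same illustrative layout of the layers:

```python
def code_matches_at(fixed, i, code):
    # Compare the first min(len(code), len(fixed)) bits of code (a '0'/'1' string)
    # against the vertical bits stored at position i, stopping at the first mismatch,
    # so that deeper layers are never inspected on unfavourable positions.
    for h, bit in zip(range(len(fixed)), code):
        if fixed[h][i] != int(bit):
            return False
    return True   # pending bits in the dynamic layer, if any, still need a separate check
```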
The SFDC scheme turns out to be also well suited for _cache-friendly accesses_ to textual data. Indeed, the cache-hit rate is very crucial for good performance, as each cache miss results in fetching data from primary memory (or, worse, from secondary memory), which takes considerable time. In comparison, reading data from the cache typically takes only a handful of cycles. According to the principle of spatial locality, if a particular vector location is referenced at a particular time, then it is expected that nearby positions will be referenced in the near future. Thus, in present-day computers, it is common to attempt to guess the size of the area around the current position for which it is worthwhile to prepare for faster accesses for subsequent references. Assume, for instance, that such a size is of \(k\) positions, so that, after a cache miss, when a certain block of a given array is accessed, the next \(k\) blocks are also stored in cache memory. Then, under the standard text representation, we have at most \(k\) supplementary text accesses before the next cache miss, whereas in our bit-layers representation such a number may grow by a factor up to \(\lambda\), resulting, under suitable conditions, in faster accesses to text characters.
Finally, we note that the proposed representation has the advantageous feature of _blind matching_, a technique that allows comparison between strings even without them being decoded. Since SFDC encoding is designed so that two identical strings are encoded in two memory configurations that are (almost) identical, the verification of equality between strings can be limited to the blind comparison of the blocks involved in the representation.
Assume, for example, that we want to check the occurrence of a pattern \(x\) of length \(m\) at position \(j\) of a text \(y\) of length \(n\), i.e., we want to check whether \(x[0\,..\,m-1]=y[j\,..\,j+m-1]\). Since the idle bits within the fixed layers are set to zero, we know that the pattern has a match at position \(j\) of the text if and only if the configuration of the fixed layers of the text, at the positions concerned, matches that of the fixed layers of the pattern, i.e., if \(Y_{h}[j\,..\,j+m-1]=X_{h}[0\,..\,m-1]\). This allows the comparison to be made without decoding the characters of the two strings, but simply comparing the blocks involved in the representation.
If the comparison of the fixed layers finds a match, it remains to compare the pending bits arranged within the dynamic layer. However, we observe that the FIFO distribution of pending bits ensures that, if the characters in the pattern match those in the text window, then the configuration of the pending bits turns out to be the same, with the sole exception of those positions in the dynamic layer that remained idle (set to 0) in the pattern representation. Such positions in the text may instead be filled by bits (set to 1) of some character's codes preceding the window. As a consequence, in case one does not wish to decode the window for the occurrence check, it would be enough to add an additional _blind-match_ layer to the pattern representation in which the bits are all set to 1, except for those positions corresponding to idle bits. At the cost of \(\mathcal{O}(m)\) additional bits, this layer would allow the last dynamic layer to be compared in _blind_ mode, effectively saving a huge amount of time during text processing.
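The following sketch shows one possible reading of the blind comparison described above; the list-based layout, the function signature and the explicit blind-match mask layer are illustrative assumptions.

```python
def blind_match(text_fixed, text_dyn, j, pat_fixed, pat_dyn, blind_mask):
    m = len(pat_dyn)
    # Fixed layers: idle bits are set to 0 in both representations, so at a true
    # occurrence the text slices must coincide exactly with the pattern layers.
    if any(T[j:j + m] != P for T, P in zip(text_fixed, pat_fixed)):
        return False
    # Dynamic layer: compare only where the blind-match layer marks a meaningful bit
    # (1 = bit of the pattern's own pending codes, 0 = idle in the pattern).
    return all(not blind_mask[k] or text_dyn[j + k] == pat_dyn[k] for k in range(m))
```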
As a case study, in Appendix A we describe in more detail how this new technique can be adapted to the problem of exact string matching, and propose a solution to the problem capable of searching directly on SFDC encoded strings. In Section 8, we show how such a solution is surprisingly faster than any other algorithm operating on plain texts. Although a number of solutions to the string-matching problem capable of operating directly on encoded texts can be found in the literature (see for instance [7, 12, 23]), none of these solutions exceeds in terms of running time the more efficient algorithms operating on plain texts. From this point of view, the solution proposed in this work represents a breakthrough in the field of Computation-Friendly Compression.
## 6 Complexity Issues
In this section, we propose a theoretical analysis of the performance of the new SFDC encoding, in terms of expected values of decoding delay and average number of idle bits. The first value is related to the average access time, while the second value gives us an estimate of the additional space used by the encoding with respect to the minimum number of bits required to compress the sequence.
In our analysis, we will make some simplifying assumptions that, however, do not significantly affect the results. When necessary, in order to support the reasonableness of our arguments, we will compare theoretical results with experimentally obtained data, reproducing the same conditions as in the theoretical analysis.
### Fibonacci Numbers and Some Useful Identities
Before entering into the details of our analysis, we state and prove two useful identities related to the Fibonacci numbers, which will be used throughout this section.
The well-known Fibonacci number series is as follows:
\[F_{0}=0;\ F_{1}=1;\ F_{n+2}=F_{n}+F_{n+1},\ n\geqslant 0.\]
The following identities hold for every \(n\geqslant 0\):
\[\sum_{i=0}^{n}F_{i} =F_{n+2}-1\,, \tag{1}\] \[\sum_{i=0}^{n}iF_{i} =nF_{n+2}-F_{n+3}+2\,. \tag{2}\]
Both identities can be proved by induction on \(n\).
As for the well-known identity (1), we have \(\sum_{i=0}^{0}F_{i}=0=F_{0+2}-1\) for the base case and
\[\sum_{i=0}^{n+1}F_{i}=\sum_{i=0}^{n}F_{i}+F_{n+1}=F_{n+2}-1+F_{n+1}=F_{(n+1)+2}-1\]
for the inductive step.
Concerning (2), we have \(\sum_{i=0}^{0}iF_{i}=0=0\cdot F_{0+2}-F_{0+3}+2\) for the base case and
\[\sum_{i=0}^{n+1}iF_{i} =\sum_{i=0}^{n}iF_{i}+(n+1)F_{n+1}\] \[=\big{(}nF_{n+2}-F_{n+3}+2\big{)}+(n+1)F_{n+1}\] \[=(n+1)\big{(}F_{n+1}+F_{n+2}\big{)}-F_{n+2}-F_{n+3}+2\] \[=(n+1)F_{(n+1)+2}-F_{(n+1)+3}+2\]
for the inductive step.
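Both identities are easy to check numerically; the following throwaway Python verification (not part of the encoding itself) tests them for the first few values of \(n\).

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(15):
    F = [fib(i) for i in range(n + 4)]
    assert sum(F[:n + 1]) == F[n + 2] - 1                                      # identity (1)
    assert sum(i * F[i] for i in range(n + 1)) == n * F[n + 2] - F[n + 3] + 2  # identity (2)
```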
### Texts with Fibonacci Character Frequencies
Let \(\Sigma\coloneqq\{c_{0},c_{1},\ldots,c_{\sigma-1}\}\) be an alphabet of size \(\sigma\), and assume that the characters in \(\Sigma\) are ordered in non-decreasing order with respect to their frequency in a given text \(y\) (of length \(n\)), namely \(f(c_{i})\leqslant f(c_{i+1})\) holds, for \(0\leqslant i<\sigma-1\), where \(f\colon\Sigma\to\mathbb{N}^{+}\) is the frequency function.
In order to keep the number of bits used by our encoding scheme as low as possible, it is useful to choose the number \(\lambda\) of layers as close as possible to the expected value
\[e_{\rho}\coloneqq\frac{1}{n}\sum_{i=0}^{n-1}|\rho(y[i])|\]
of the length of the Huffman encodings of the characters in the text \(y\). In particular, if we set \(\lambda=\lceil e_{\rho}\rceil\), it is guaranteed that the number of idle bits, equal to \(n(\lambda-e_{\rho})\), is the minimum possible. As already observed, should the decoding delay prove to be too high in practical applications, the value of \(\lambda\) can be increased.
The optimal case for the application of the SFDC method, in terms of decoding delay, is when all the characters of the alphabet have the same frequency. In this circumstance, in fact, the character Huffman encodings have length comprised between \(\lfloor\log\sigma\rfloor\) and \(\lceil\log\sigma\rceil\).4 Hence, in this case it suffices to use a number of layers equal to \(\lambda=\lceil\log\sigma\rceil\) to obtain a guaranteed direct access to each character of the text, hence a decoding delay equal to \(0\).
Footnote 4: More precisely there are \(2\sigma-2^{\lfloor\log\sigma\rfloor+1}\) characters with encodings of length \(\lfloor\log\sigma\rfloor+1\) and \(2^{\lfloor\log\sigma\rfloor+1}-\sigma\) characters with encodings of length \(\lfloor\log\sigma\rfloor\).
On the other side of the spectrum, the worst case occurs when the Huffman tree, in its canonical configuration, is completely unbalanced. In this case, in fact, the difference between the maximum length of the Huffman encodings and the expected value \(e_{\rho}\), namely \(\left(\max_{c\in\Sigma}|\rho(c)|\right)-e_{\rho}\), is as large as possible, thus causing the maximum possible increase of the average decoding delay. In addition, we observe that the presence of several characters in the text whose encoding length is less than \(e_{\rho}\) does not affect positively the expected value of the delay, since pending bits are placed exclusively in the dynamic layer. Rather, the presence of such _short-code_ characters increases the number of idle bits of the encoding.
We call such a completely unbalanced tree a _degenerate tree_. Among all the possible texts whose character frequencies induce a degenerate Huffman coding tree, the ones that represent the worst case for our encoding scheme are the texts in which the frequency differences \(f(c_{i})-f(c_{i-1})\), for \(i=1,2,\ldots,\sigma-1\), are minimal. It is not hard to verify that such a condition occurs when the frequencies follow a Fibonacci-like distribution of the following form
\[f(c_{i})\coloneqq\begin{cases}1&\text{if }i=0\\ F_{i}&\text{if }1\leqslant i\leqslant\sigma-1\,.\end{cases} \tag{3}\]
(or any multiple of it).
Thus, let \(y\) be a random text over \(\Sigma\) with the frequency function (3) (so that, by (1), \(|y|=F_{\sigma+1}\)), and let \(f^{r}\colon\Sigma\to[0,1]\) be the relative frequency function of the text \(y\), where
\[f^{r}(c_{i})\coloneqq\frac{f(c_{i})}{F_{\sigma+1}},\qquad\text{for }i=0,1, \ldots,\sigma-1.\]
Assume also that \(\rho\colon\Sigma\to\{0,1\}^{+}\) is the canonical degenerate Huffman encoding of \(\Sigma\) relative to the frequency function (3), with
\[|\rho(c_{i})|=\begin{cases}\sigma-1&\text{if }i=0\\ \sigma-i&\text{if }1\leqslant i\leqslant\sigma-1.\end{cases}\]
Lemma 1: _The expected encoding length \(E[|\rho(y[i])|]\) by \(\rho\) of the characters in \(y\) is equal to \(\frac{F_{\sigma+3}-3}{F_{\sigma+1}}\)._
Proof: Using (1) and (2), we have:
\[E[|\rho(y[i])|] =\sum_{i=0}^{\sigma-1}f^{r}(c_{i})\cdot|\rho(c_{i})| =\sum_{i=0}^{\sigma-1}\frac{f(c_{i})}{F_{\sigma+1}}\cdot|\rho(c_{i})|\] \[=\frac{1}{F_{\sigma+1}}\cdot\left(\sum_{i=1}^{\sigma-1}(\sigma-i)F_{i}+\sigma-1\right)\] \[=\frac{1}{F_{\sigma+1}}\cdot\left(\sigma\cdot\sum_{i=1}^{\sigma-1}F_{i}-\sum_{i=1}^{\sigma-1}i\,F_{i}+\sigma-1\right)\] \[=\frac{1}{F_{\sigma+1}}\cdot\left(\sigma(F_{\sigma+1}-1)-(\sigma-1)F_{\sigma+1}+F_{\sigma+2}-2+\sigma-1\right)\] \[=\frac{1}{F_{\sigma+1}}\cdot\left(F_{\sigma+1}+F_{\sigma+2}-3\right)\] \[=\frac{F_{\sigma+3}-3}{F_{\sigma+1}}.\]
By recalling that \(\underset{\sigma\rightarrow\infty}{\lim}F_{\sigma+1}/F_{\sigma}=\phi\), where \(\phi=\frac{1}{2}(1+\sqrt{5})\) is the golden ratio, we have
\[\underset{\sigma\rightarrow\infty}{\lim}E[|\rho(y[i])|]=\underset{\sigma \rightarrow\infty}{\lim}\frac{F_{\sigma+3}-3}{F_{\sigma+1}}=\phi^{2}=\frac{1 }{2}(3+\sqrt{5})\approx 2.618\,.\]
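As a sanity check, the closed form of Lemma 1 can be compared against a direct computation of the expected code length; the small helper below is self-contained and only illustrative.

```python
def expected_code_length(sigma):
    F = [0, 1]
    while len(F) < sigma + 4:
        F.append(F[-1] + F[-2])
    freq = [1] + [F[i] for i in range(1, sigma)]                   # frequency function (3)
    length = [sigma - 1] + [sigma - i for i in range(1, sigma)]    # degenerate Huffman code lengths
    direct = sum(f * l for f, l in zip(freq, length)) / F[sigma + 1]
    closed = (F[sigma + 3] - 3) / F[sigma + 1]
    return direct, closed

# e.g. expected_code_length(10) gives (2.584..., 2.584...); with lambda = 5 this leaves
# about 5 - 2.58 = 2.42 idle bits per element, as reported in Table 2.
```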
As a consequence of the above, the expected number of idle bits in a SFDC encoding using \(\lambda\) layers can be estimated by the following formula:
\[\lambda-\frac{F_{\sigma+3}-3}{F_{\sigma+1}}.\]
**Idle Bits (Theoretical)**

| \(\lambda\setminus\sigma\) | 10 | 20 | 30 |
|---|---|---|---|
| 5 | 2.42 | 2.38 | 2.38 |
| 6 | 3.42 | 3.38 | 3.38 |
| 7 | 4.42 | 4.38 | 4.38 |
| 8 | 5.42 | 5.38 | 5.38 |

**Idle Bits (Experimental)**

| \(\lambda\setminus\sigma\) | 10 | 20 | 30 |
|---|---|---|---|
| 5 | 2.29 | 2.26 | 2.27 |
| 6 | 3.29 | 3.27 | 3.27 |
| 7 | 4.30 | 4.29 | 4.29 |
| 8 | 5.30 | 5.31 | 5.31 |

Table 2: Expected number of idle bits for each text element, in a SFDC encoding of a text over an alphabet with Fibonacci frequencies, comparing theoretical values against values experimentally computed. Values are shown for \(5\leqslant\lambda\leqslant 8\) and \(\sigma\in\{10,20,30\}\).
Table 2 shows the expected number of idle bits for each text element, in a SFDC encoding of a text over an alphabet with Fibonacci frequencies, comparing theoretical values against values experimentally computed. To obtain the experimental values, we used artificially generated texts of 100 MB in which the characters of the alphabet occur with the Fibonacci frequencies of (3).
It is easy to verify how, as the size of the alphabet \(\sigma\) increases, the function quickly converges to the value \(\lambda-2.618\). Consequently, if we assume that the value \(\lambda\) represents a constant implementation-related parameter, the total space used by SFDC for encoding a sequence of \(n\) elements is equal to \(N+\mathcal{O}\left(n\left(\lambda-\frac{F_{\sigma+3}-3}{F_{\sigma+1}}\right) \right)=N+\mathcal{O}(n)\), where \(N\coloneqq\sum_{i=0}^{\sigma-1}f(c_{i})\cdot|\rho(c_{i})|\).
### Expected Decoding Delay
On the basis of what was shown above, we now estimate the expected value of the decoding delay, i.e., the number of additional characters that need to be decoded in order to obtain the full encoding of a character of the text.
Assume that the \(i\)-th character of the text is \(y[i]=c_{k}\), where \(0\leqslant k<\sigma\) and \(0\leqslant i<n\). We want to estimate the position \(j\geqslant i\) at which the last pending bit of the encoding \(\rho(y[i])\) is placed, so the decoding delay of character \(y[i]\) will be \(j-i\).
Let us assume that the number of layers of the representation is equal to \(\lambda\). If the length of the encoding of \(y[i]\) falls within the number \(\lambda\) of layers, namely \(|\rho(y[i])|\leqslant\lambda\), then the character can be decoded in a single cycle and the delay is \(0\). On the other hand, if \(|\rho(y[i])|>\lambda\), then \(|\rho(y[i])|-\lambda\) bits remain to be read, all arranged within the dynamic layer.
Since in the average case the length of the encoding of any text character can be approximated to the value \(2.618\), if we assume to use a fixed number of layers \(\lambda\geqslant 4\) we expect that the pending bits of \(\rho(y[i])\) will be placed on the average in the \(|\rho(y[i])|-\lambda\) positions past position \(i\). The average delay in this case can be estimated by \(|\rho(y[i])|-\lambda\).
Thus, for an alphabet of \(\sigma\) characters with frequency function (3), the expected delay of our Fibonacci encoding with \(\lambda\) layers (where \(4\leqslant\lambda\leqslant\sigma-1\)) can be estimated by the following summation:
\[\sum_{i=0}^{\sigma-\lambda-1}f^{r}(c_{i})\cdot(|\rho(c_{i})|-\lambda).\]
We claim that
\[\sum_{i=0}^{\sigma-\lambda-1}f^{r}(c_{i})\cdot(|\rho(c_{i})|-\lambda)=\frac{F _{\sigma-\lambda+3}-3}{F_{\sigma+1}}. \tag{4}\]
Indeed,
\[\sum_{i=0}^{\sigma-\lambda-1}f^{r}(c_{i})\cdot(|\rho(c_{i})|-\lambda)\] \[\qquad=\frac{1}{F_{\sigma+1}}\cdot\sum_{i=0}^{\sigma-\lambda-1}f_{ i}\cdot(|\rho(c_{i})|-\lambda)\] \[\qquad=\frac{1}{F_{\sigma+1}}\cdot\left(\sum_{i=1}^{\sigma- \lambda-1}F_{i}\cdot(\sigma-\lambda-i)+\sigma-\lambda-1\right)\] \[\qquad=\frac{1}{F_{\sigma+1}}\cdot\left((\sigma-\lambda)\cdot \sum_{i=1}^{\sigma-\lambda-1}F_{i}-\sum_{i=1}^{\sigma-\lambda-1}iF_{i}+\sigma- \lambda-1\right)\] \[\qquad=\frac{(\sigma-\lambda)(F_{\sigma-\lambda+1}-1)-(\sigma- \lambda-1)F_{\sigma-\lambda+1}+F_{\sigma-\lambda+2}-2+\sigma-\lambda-1}{F_{ \sigma+1}}\] \[\qquad=\frac{F_{\sigma-\lambda+1}+F_{\sigma-\lambda+2}-3}{F_{ \sigma+1}}\] \[\qquad=\frac{F_{\sigma-\lambda+3}-3}{F_{\sigma+1}}.\]
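A quick numeric check of (4); this self-contained helper is only illustrative. For \(\sigma=10\) and \(\lambda=5\) it returns approximately \(0.202\), matching the corresponding entry of Table 3.

```python
def expected_delay(sigma, lam):
    # Evaluate (F_{sigma-lam+3} - 3) / F_{sigma+1}, the expected decoding delay of (4).
    F = [0, 1]
    while len(F) < sigma + 4:
        F.append(F[-1] + F[-2])
    return (F[sigma - lam + 3] - 3) / F[sigma + 1]
```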
Table 3 reports the delay in decoding a single character in a text over an alphabet with Fibonacci frequencies using SFDC, comparing values resulting from the identity (4) against values computed experimentally. To obtain the experimental values, we generated texts of 100 MB whose character frequency is the Fibonacci frequency (3). For each such text \(y\), the average delay value was experimentally obtained by accessing all the characters in the text in random order, computing the corresponding delays, and dividing the sum of the values so obtained by the total number of characters in \(y\).
**Average Delay (Theoretical)**

| \(\lambda\setminus\sigma\) | 10 | 15 | 20 | 25 | 30 |
|---|---|---|---|---|---|
| 5 | 0.20 | 0.23 | 0.24 | 0.24 | 0.24 |
| 6 | 0.11 | 0.14 | 0.15 | 0.15 | 0.15 |
| 7 | 0.06 | 0.09 | 0.09 | 0.09 | 0.09 |
| 8 | 0.02 | 0.05 | 0.06 | 0.06 | 0.06 |

**Average Delay (Experimental)**

| \(\lambda\setminus\sigma\) | 10 | 15 | 20 | 25 | 30 |
|---|---|---|---|---|---|
| 5 | 0.31 | 0.37 | 0.37 | 0.39 | 0.39 |
| 6 | 0.15 | 0.20 | 0.18 | 0.17 | 0.21 |
| 7 | 0.07 | 0.11 | 0.11 | 0.11 | 0.11 |
| 8 | 0.02 | 0.06 | 0.06 | 0.07 | 0.06 |

Table 3: The average delay in decoding a single character in a text over an alphabet with Fibonacci frequencies using SFDC. Theoretical values compared with experimental values. Values are tabulated for \(5\leqslant\lambda\leqslant 8\) and \(\sigma\in\{10,15,20,25,30\}\).
## 7 An Even More Succinct Variant
In this section, we show how it is possible to achieve, by foregoing some advantageous aspects of the SFDC encoding, a variant that is able to reduce both the expected value of the delay and, under suitable conditions, the total amount of consumed space. The new variant, which we will refer to as \(\gamma\)-SFDC, would then provide an even more succinct representation of the text, while still allowing direct access to the characters in the encoded sequence.
The main idea behind the new variant is to distribute the pending bits of those characters whose encoding length exceeds the number of fixed layers in any available position within all layers, and not exclusively within the dynamic layer. In other words, the \(\gamma\)-SFDC encoding no longer distinguishes between the roles of the fixed layers and the dynamic layer, and arranges the pending bits using a FIFO strategy to take advantage of all those positions that remain idle in any layer.
Provided that the number of idle bits within the fixed layers is enough to maintain all the pending bits that would have been arranged within the dynamic layer, it would be possible to reduce the number of layers in the representation by one, so that \(\gamma\)-SFDC would be more succinct than the standard variant.
On the other hand, as can easily be observed, for the same number of fixed layers, the \(\gamma\) variant of the SFDC offers a lower expected delay since the pending bits of a character encoding also find their place within the fixed layers. Thus, on average, decoding of a given character is resolved by accessing fewer additional characters.
Fig. 5 depicts the \(\gamma\)-SFDC encoding of the string Compression with \(\lambda=6\) and \(\lambda=5\) layers, respectively. The pending bits have been arranged in all available positions of the representation using a last-in first-out approach. Observe that the pending bits of the code \(\rho(y[0])\) are placed within the fixed layers, decreasing the value of the character's decoding delay by 3.
It is to be noted that one of the negative aspects of this variant is the lack of all those positive features that make SFDC particularly suitable for application in text-processing problems. In fact, the effect of the new arrangement of pending bits is that the memory configurations of the encodings of two sub-strings of the text can vary significantly, despite the fact that they are the same. This can occur, for example, when available positions within the fixed layers are occupied by bits of character encodings preceding the sub-string.
Figure 5: Two \(\gamma\)-SFDC representations of the string Compression with 6 and 5 layers, respectively. In both cases the pending bits have been arranged in all available positions of the representation using a last-in first-out approach. Idle bits are indicated by the symbol -.
The encoding procedure of the \(\gamma\)-SFDC is shown in Fig. 6 (on the left). The \(i\)-th iteration of the main while loop of line 3 vertically explores position \(i\) of the \(\lambda\) layers of the representation. If \(i<n\) is the position of the character \(y[i]\) of the text, then the bits of the encoding \(\rho(y[i])\) are all inserted at the top of the stack, in reverse order. Then the procedure arranges the bits in the stack, extracting them one by one, within the layers, from the first layer \(\widehat{Y}_{0}\) to the last layer \(\widehat{Y}_{D}\). If during the exploration of the layer \(\widehat{Y}_{k}\), with \(0\leqslant k<\lambda\), the stack is empty, the \(i\)-th bit of the layers from \(\widehat{Y}_{k}\) to \(\widehat{Y}_{D}\) will remain idle.
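A compact Python sketch of this arrangement is given below; the bit-level stack and the list-of-lists layout of the layers are illustrative choices and not the actual implementation of Fig. 6, and it is assumed that \(\lambda\) is large enough for the stack to be empty when the text ends.

```python
def gamma_sfdc_encode(y, rho, lam):
    """Arrange the codes rho(y[i]) over lam layers, letting pending bits occupy any idle slot."""
    n = len(y)
    layers = [[0] * n for _ in range(lam)]
    stack = []                                  # pending bits, most recent character first (LIFO)
    for i, ch in enumerate(y):
        for bit in reversed(rho[ch]):           # push in reverse so the bits pop in code order
            stack.append(int(bit))
        for h in range(lam):                    # fill the lam vertical slots at position i
            if not stack:
                break                           # the remaining slots at this position stay idle
            layers[h][i] = stack.pop()
    return layers
```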
The decoding procedure is shown in Fig. 6 (on the right). Also in this case we assume that the procedure is invoked to decode a window \(y[i\,..\,j]\) of the text, with \(i\leqslant j\). The procedure takes as input the \(\gamma\)-SFDC representation \(Y\) of a text \(y\) of length \(n\), the starting position \(i\) and the ending position \(j\) of the window to be decoded, the root of the Huffman tree, and the number \(\lambda\) of layers adopted in the \(\gamma\)-SFDC representation.
As in the previous case, the decoding of the character \(y[i]\) is done by scanning a descending path on the Huffman tree, starting at the root and ending in a leaf of the tree. A stack is maintained containing the nodes in the Huffman tree related to those characters in the text for which decoding has started but it has not yet been completed. The position \(p\) of the node \(x_{p}\) (the node used for decoding the character \(y[p]\)) are coupled and pushed within the stack.
Each iteration of the main while loop of the procedure explores vertically the \(k\)-th bit of each layer, proceeding from the first layer \(Y_{0}\) towards the last layer \(Y_{D}\), in the attempt to decode the \(k\)-th character of the text, for \(k\geqslant i\). The main loop stops when the entire window \(y[i\,..\,j]\) has been fully decoded. This condition occurs when the character \(y[j]\) has been decoded (i.e., \(y[j]\neq\textsf{Nil}\)) and the stack is empty, a condition which ensures that all the characters preceding position \(j\) in the window have already been decoded (line 4).
Figure 6: Procedures \(\gamma\)-encode and \(\gamma\)-decode for encoding and decoding a text \(y\) of length \(n\) using the scheme \(\gamma\)-SFDC.
At the beginning of each iteration of the main while loop, the node \(x_{k}=\mathit{root}\) is pushed into the stack. Subsequently, a nested while loop is executed, which goes up along the \(k\)-th position of all layers. At the \(h\)-th step, a node is extracted from the stack and is updated, based on the bit \(\widehat{Y}_{h}[k]\). If after the update the node has become a leaf, then a new symbol is decoded, otherwise it is re-entered into the stack for the next iteration. The loop continues until the last layer has been reached or until the stack empties.
Finally, Figure 7 presents the procedure \(\gamma\)-compute-delay used for the preliminary computation of the minimum value of \(\lambda\) that guarantees that the decoding delay falls below a given bound. As in the case of the standard variant, the procedure takes as input the string \(y\) to be encoded and the number \(\lambda\) of layers used for the representation. The purpose of the procedure is to simulate the construction of the SFDC encoding in order to compute and output the average value of the decoding delay.
It is straightforward to verify that the three procedures described above achieve an \(\mathcal{O}(N)\) time complexity, as in the case of the standard variant.
### Expected Decoding Delay
In this section, we derive a simple estimate of the expected value of the decoding delay for the \(\gamma\)-variant of our encoding presented above.
Assume again that the \(i\)-th character of the text is \(y[i]=c_{k}\), where \(0\leqslant k<\sigma\) and \(0\leqslant i<n\). In order to compute the decoding delay for character \(y[i]\), it is necessary to guess the position \(j\geqslant i\) at which the last pending bit of the encoding \(\rho(y[i])\) is placed, so that the decoding delay will be \(j-i\).
Figure 7: Procedure \(\gamma\)-compute-delay for computing the average decoding delay of the \(\gamma\)-SFDC of a text \(y\) of length \(n\) using \(\lambda\) layers. The procedure takes as input also the mapping \(\rho\) and the number \(\lambda\) of fixed layers
Let \(\lambda\) be the number of layers of the representation. As noted earlier, if the length of the encoding of \(y[i]\) falls within the layers, namely if \(|\rho(y[i])|\leqslant\lambda\), then the decoding of the character can be performed in a single cycle and the delay is \(0\). Otherwise, if \(|\rho(y[i])|>\lambda\), then \(|\rho(y[i])|-\lambda\) bits remain to be read.
In the average case, the length of the encoding of a text character can be approximated to the value \(e_{\rho}\approx 2.618\). If we assume to use a fixed number of layers \(\lambda\geqslant 4\), we expect that the pending bits of \(\rho(y[i])\) will be placed in the \(\lambda-e_{\rho}\) bits left unused by characters following position \(i\). Thus, for an alphabet of \(\sigma\) characters, we estimate the expected delay for the \(\gamma\)-SFDC variant of our Fibonacci encoding with \(\lambda\) layers (where \(4\leqslant\lambda\leqslant\sigma-1\)) by way of the following summation:
\[\sum_{i=0}^{\sigma-\lambda-1}f^{r}(c_{i})\cdot\left\lceil\frac{|\rho(c_{i})|- \lambda}{\lambda-e_{\rho}}\right\rceil,\]
where \(e_{\rho}\coloneqq E[|\rho(y[i])|]\). Specifically, the following inequalities hold:
\[\frac{F_{\sigma-\lambda+3}-3}{\lambda F_{\sigma+1}-F_{\sigma+3}+3}\leqslant \sum_{i=0}^{\sigma-\lambda-1}f^{r}(c_{i})\left\lceil\frac{|\rho(c_{i})|- \lambda}{\lambda-e_{\rho}}\right\rceil<\frac{F_{\sigma-\lambda+3}-3}{\lambda F _{\sigma+1}-F_{\sigma+3}+3}+\frac{F_{\sigma-\lambda+1}}{F_{\sigma+1}} \tag{5}\]
Since we plainly have
\[\sum_{i=0}^{\sigma-\lambda-1}f^{r}(c_{i})\cdot\frac{|\rho(c_{i})|-\lambda}{ \lambda-e_{\rho}}\leqslant\sum_{i=0}^{\sigma-\lambda-1}f^{r}(c_{i})\cdot\left \lceil\frac{|\rho(c_{i})|-\lambda}{\lambda-e_{\rho}}\right\rceil<\sum_{i=0}^{ \sigma-\lambda-1}f^{r}(c_{i})\cdot\frac{|\rho(c_{i})|-\lambda}{\lambda-e_{\rho }}+\sum_{i=0}^{\sigma-\lambda-1}f^{r}(c_{i})\]
and \(\sum_{i=0}^{\sigma-\lambda-1}f^{r}(c_{i})=\frac{F_{\sigma-\lambda+1}}{F_{ \sigma+1}}\), to prove (5) it is enough to show that
\[\sum_{i=0}^{\sigma-\lambda-1}f^{r}(c_{i})\cdot\frac{|\rho(c_{i})|-\lambda}{ \lambda-e_{\rho}}=\frac{F_{\sigma-\lambda+3}-3}{\lambda F_{\sigma+1}-F_{ \sigma+3}+3} \tag{6}\]
holds, which we do next.
We have:
\[\sum_{i=0}^{\sigma-\lambda-1}f^{r}(c_{i})\cdot\frac{|\rho(c_{i})|-\lambda}{\lambda-e_{\rho}}=\sum_{i=0}^{\sigma-\lambda-1}\frac{f(c_{i})}{F_{\sigma+1}}\cdot\frac{|\rho(c_{i})|-\lambda}{\lambda-e_{\rho}}\] \[\qquad=\frac{1}{F_{\sigma+1}}\cdot\left(\sum_{i=1}^{\sigma-\lambda-1}F_{i}\cdot\frac{\sigma-\lambda-i}{\lambda-e_{\rho}}+\frac{\sigma-\lambda-1}{\lambda-e_{\rho}}\right)\] \[\qquad=\frac{1}{F_{\sigma+1}}\cdot\left(\sum_{i=1}^{\sigma-\lambda-1}F_{i}\cdot\frac{(\sigma-\lambda-i)F_{\sigma+1}}{\lambda F_{\sigma+1}-F_{\sigma+3}+3}+\frac{(\sigma-\lambda-1)F_{\sigma+1}}{\lambda F_{\sigma+1}-F_{\sigma+3}+3}\right)\] \[\qquad=\sum_{i=1}^{\sigma-\lambda-1}F_{i}\cdot\frac{\sigma-\lambda-i}{\lambda F_{\sigma+1}-F_{\sigma+3}+3}+\frac{\sigma-\lambda-1}{\lambda F_{\sigma+1}-F_{\sigma+3}+3}\] \[\qquad=\frac{1}{\lambda F_{\sigma+1}-F_{\sigma+3}+3}\cdot\left((\sigma-\lambda)\cdot\sum_{i=1}^{\sigma-\lambda-1}F_{i}-\sum_{i=1}^{\sigma-\lambda-1}iF_{i}+\sigma-\lambda-1\right)\] \[\qquad=\frac{(\sigma-\lambda)(F_{\sigma-\lambda+1}-1)-(\sigma-\lambda-1)F_{\sigma-\lambda+1}+F_{\sigma-\lambda+2}-2+\sigma-\lambda-1}{\lambda F_{\sigma+1}-F_{\sigma+3}+3}\] \[\qquad=\frac{F_{\sigma-\lambda+1}+F_{\sigma-\lambda+2}-3}{\lambda F_{\sigma+1}-F_{\sigma+3}+3}\] \[\qquad=\frac{F_{\sigma-\lambda+3}-3}{\lambda F_{\sigma+1}-F_{\sigma+3}+3},\]
proving (6), and in turn (5).
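For completeness, a small self-contained helper evaluating the two bounds of inequality (5); it is only illustrative.

```python
def gamma_expected_delay_bounds(sigma, lam):
    # Lower and upper bounds of (5) for the expected decoding delay of gamma-SFDC.
    F = [0, 1]
    while len(F) < sigma + 4:
        F.append(F[-1] + F[-2])
    lower = (F[sigma - lam + 3] - 3) / (lam * F[sigma + 1] - F[sigma + 3] + 3)
    upper = lower + F[sigma - lam + 1] / F[sigma + 1]
    return lower, upper
```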
Table 4 shows (on the left) the values of the expected delay obtained from equation (5) for \(5\leqslant\lambda\leqslant 8\) and \(\sigma\in\{10,20,30\}\). These values can be compared with the average delay obtained experimentally on texts over alphabets with Fibonacci frequencies (on the right). It can be observed that, given the number of layers \(\lambda\), the function (5) has an increasing trend as the size of the alphabet varies. However, the function has a horizontal asymptote whose limit is reached very quickly.
**Average Delay (Theoretical)**

| \(\lambda\setminus\sigma\) | 10 | 20 | 30 |
|---|---|---|---|
| 5 | 0.39 | 0.42 | 0.42 |
| 6 | 0.17 | 0.19 | 0.19 |
| 7 | 0.09 | 0.10 | 0.10 |
| 8 | 0.05 | 0.05 | 0.06 |

**Average Delay (Experimental)**

| \(\lambda\setminus\sigma\) | 10 | 20 | 30 |
|---|---|---|---|
| 5 | 0.29 | 0.37 | 0.38 |
| 6 | 0.10 | 0.15 | 0.16 |
| 7 | 0.06 | 0.08 | 0.08 |
| 8 | 0.03 | 0.04 | 0.04 |

Table 4: The average delay for decoding a single character in a text over an alphabet with Fibonacci frequencies using \(\gamma\)-SFDC. Theoretical values compared with experimental values. Values are tabulated for \(5\leqslant\lambda\leqslant 8\) and \(\sigma\in\{10,20,30\}\).
## 8 Experimental Evaluation on Real Data
The SFDC offers a practical approach to the problem of compressed representation allowing direct (and fast) access to the symbols of the sequence. In addition to the more natural application of the compressed representation of a sequence of symbols (such as natural texts, biological sequences, source codes, etc.), this new representation can be used successfully in many applications where direct access to the elements of a compressed sequence of integers is required, a case which occurs frequently in the design of compressed data structures such as suffix trees, suffix arrays, and inverted indexes, to name a few.
In this section, we show experimentally how the SFDC scheme offers a competitive alternative to other encoding schemes that support direct access. Specifically, we compare SFDC with DACs and Wavelet Trees, which are among the most recent and effective representations of variable-length codes allowing for direct access. In [3], where a more in-depth experimental analysis is conducted, it is possible to see how DACs compare to the other main succinct representations, in terms of space and access times.
Our experiments were performed on 5 real data sequences and on an LCP sequence. The real data sequences are text files of size 100 MB from the Pizza&Chili corpus ([http://pizzachili.dcc.uchile.cl](http://pizzachili.dcc.uchile.cl)), and specifically an XML file, two English texts, a protein sequence, and a DNA sequence. The last dataset has been obtained by computing the Longest Common Prefix (LCP) array of the XML file. We recall that the LCP array is a central data structure in stringology and text indexing [18]. Given a text \(y\) of length \(n\) and the set of all its suffixes, sorted in lexicographic order, the LCP array stores, for each suffix, the length of the longest common prefix between that suffix and its predecessor. Most LCP values are small, but some can be very large. Hence, a variable-length encoding scheme is a good solution to represent this sequence of integers.
Following the notation proposed in [3], we denote by dblp and lcp.dblp the XML file and the LCP array obtained from the XML file, which contains bibliographic information on major computer science journals and proceedings. We denote by english the English text containing different English text files with slightly different character distributions, while we denote by bibble a highly repetitive English text obtained by concatenating 25 times a file containing the 4MB King James version of the Bible. Finally, we denote by protein and dna the protein and DNA texts, containing protein sequences and gene DNA sequences consisting, respectively, of uppercase letters of the 20 amino acids and uppercase letters A,G,C,T, plus a few occurrences of other special characters.
| Text | \(\sigma\) | Max\(\{c_{i}\}\) | Avg\(\{c_{i}\}\) | Max\(\{\lvert\rho(y[i])\rvert\}\) | Avg\(\{\lvert\rho(y[i])\rvert\}\) |
|---|---|---|---|---|---|
| protein | 25 | 90 | 75.92 | 11 | 4.22 |
| dblp | 96 | 126 | 86.55 | 21 | 5.26 |
| english | 94 | 122 | 89.53 | 20 | 4.59 |
| bibble | 90 | 122 | 91.24 | 17 | 4.88 |
| dna | 16 | 89 | 72.24 | 12 | 2.21 |
| lcp.dblp | 1,085 | 1,084 | 44.45 | 27 | 6.82 |

Table 5: Some interesting information about the dataset used in our experimental results. All sequences are of \(104,857,600\) elements.
Some interesting information about this dataset is shown in Table 5. Specifically, we report: the size \(\sigma\) of the alphabet, the largest numerical value (Max\(\{c_{i}\}\)) contained within the sequence, and the average value (Avg\(\{c_{i}\}\)) contained therein. We also report the maximum length and average length (Max\(\{|\rho(y[i])|\}\) and Avg\(\{|\rho(y[i])|\}\), respectively) of the Huffman code generated for each sequence.
All experiments have been executed locally on a MacBook Pro with 4 Cores, a 2 GHz Intel Core i7 processor, 16 GB RAM 1600 MHz DDR3, 256 KB of L2 Cache, and 6 MB of Cache L3. We compiled with gcc version 13.0.0 and the option -O3. For the implementation of the DACs we used the codes available at [http://lbd.udc.es/research/DACS/](http://lbd.udc.es/research/DACS/). For the implementation of the Wavelet Trees (WT) we use the Huffman tree shaped variant of WT [17] which turned out to be the most effective among the WT implementations available in the Succinct Data Structure Library (SDSL). All the implementations of the encodings presented in this paper and all the datasets are available in the Google Drive repository, accessible at this link.
### Space Consumption and Decoding Times
Table 6 reports experimental results obtained by comparing our SFDC variants against the DACs on the six datasets described above. We compared the encoding schemes in terms of: average space for text element (Space), in number of bits; full decoding time (Decode), in seconds; average access time for element (Access), in microseconds; average decoding delay (Delay), in number of characters. Solutions with the minimum value of \(\lambda\) for which an average decoding delay less than 1 is obtained have been highlighted in light grey.
Concerning the space consumed by the representation, i.e., the average number of bits used for each element of the sequences, the SFDC variants offer very competitive performance by using, essentially, only the bits connected to the \(\lambda\) layers used by the representation. When \(\lambda\leqslant 6\) this value is always below the space used by the DACs and the WTs, and often continues to be so even for \(\lambda=7\). It is important to note that DACs and WTs already offer the best performance when compared to other representation methods [3], and this makes SFDC encoding one of the best solutions in terms of space consumption. The only downside arises in the case of lcp.dblp, where up to 12 layers are required to achieve a delay below the bound, which is almost double the space required by the DACs. However, we observe that acceptable, if not optimal, results are already obtained with 10 layers, whereas DACs require nearly 8 bits per element.
A particularly favourable case is that of the dna dataset, where a delay value below the threshold 1.00 is obtained with only 3 bits per element. The dataset in fact presents a particularly uniform distribution of the 4 characters A, C, G and T that make up a genomic sequence. However, the presence of some special characters outside the 4 main elements of the alphabet forces the encoding to use 3 layers. The number of bits per element is however very close to the minimum limit of 2.21 bits.
Regarding the time consumed to decode the whole file, we observe that the times required by SFDC are often slightly above the times required by DACs for the same operation. However, they remain very comparable and, in some cases, offer better performance. In particular, this is the case of DNA sequences.
The access time to the elements of the compressed sequence was computed by accessing all the characters of the text and measuring, for each of them, the time required for decoding. However, the characters were accessed in random order in order to avoid possible advantages due to cache
memory information and the possibility that the effect due to delay could favour the SFDC variants. Experimental results show that the access times obtained by the SFDC variants are comparable with those obtained by the WTs, but significantly higher than those obtained by DACs, being almost an order of magnitude larger. However, it should be noticed that SFDC has to deal with the delay, and that it offers a representation based on plain Huffman coding (as in the case of WTs), which is not designed for decoding speed. Adapting the
SFDC to other more efficient representations, based on block decoding, would also significantly decrease the direct access time to the characters in the sequence. Beyond that, access times are often below a microsecond, and therefore they remain acceptable for any practical application.
The delay shown in the cases tested is very often below the threshold of a single character, and the value decreases very quickly for increasing \(\lambda\) values, reaching in some cases the desirable limit of \(0.0\). As may be expected, the delay values offered by the \(\gamma\)-variant of SFDC are always below the values offered by the standard variant under the same conditions. However, this does not always translate into a higher speed in direct access.
The experimental results also show that the SFDC scheme suffers particularly in cases where the text contains portions in which the character frequency diverges significantly from the general frequency of the text. This is the case with the dataset english, which consists of the union of several English language texts with slightly different character frequencies and which contain portions in which infrequent characters are found in contiguous sequences (e.g., long sequences of characters written in capital letters). On the other hand, this negative case does not arise when decoding highly repetitive texts such as that of the dataset bible, for which access times and delay values are significantly lower.
This weakness in coding suggests the need to design SFDC variants that are able to overcome the points in the sequence where the number of pending bits increases significantly. Such solutions, the discussion of which goes beyond the scope of this work, can be diverse: ranging from splitting the whole sequence into sub-sequences, based on frequency variation, to be encoded separately; to the adoption of dynamic encoding techniques such as dynamic Huffman codes [25]; to the introduction of special cases in which the encoding flow is temporarily interrupted to give the stack a chance to empty itself, thus reducing the expected decoding delay.
### Some Experimental Results on Text Processing
In this section, we present some experimental results that allow us to evaluate the effectiveness of SFDC encoding in the field of text processing. We evaluate the performance of a string-matching algorithm adapted to operate directly on SFDC-encoded texts, comparing it with algorithms designed to work on standard representations. Specifically, we compare the SFDC adaptation of the Skip-Search algorithm (SFDC-SS) presented in Appendix A against the original solution implemented using \(q\)-grams, for \(q\in\{1,2,4,6,8\}\).
The algorithms have been compared in terms of their running times, excluding preprocessing times, on the 100MB data sets dna, protein, and english.
We used two different implementations of the SFDC-SS algorithm, both implemented using the standard variant of the encoding SFDC, designed to work with 8-bit blocks and with 16-bit blocks, respectively. For the dataset dna, we tested the two algorithms using the value \(\lambda=3\), while for the datasets english and protein we tested the algorithms using the value \(\lambda=5\).
For completeness, we also included in our experimental comparison the Weak-Factor-Recognition (WFR) algorithm [4] implemented with \(q\)-grams, for \(q\in\{1,2,4,6,8\}\), and the Backward-Range-Automaton Matcher (BRAM) [10], implemented using \(q\)-grams, with \(q\in\{1,2,4,6\}\). The WFR and the BRAM algorithms are among the most recent and effective solutions available in the literature [4, 10] for the exact string matching problem working on the standard representation. In our experimental comparison, we did not include any solutions adapted to work with other direct-access coding schemes, as they are not designed for this type of task, resulting in excessively high execution times.
In the experimental evaluation, patterns of length \(m\) have been randomly extracted from the sequences, with \(m\) ranging over the set of values \(\{2^{i}\ |\ 4\leqslant i\leqslant 10\}\). For each experiment, the mean over the processing speed (expressed in GigaBytes per second) of 1000 runs has been reported.
Table 7 presents the experimental results obtained from our comparison. Since many of the algorithms compared were implemented and tested in different variants, for simplicity we only present the search speed offered by the best variant. Furthermore, the last row of each sub-table presents the speed-up offered by the new variants compared to the previous ones.
Although the new variants operate on encoded texts, they perform best in each of the cases analysed. As is understandable, the variant operating on words of length 8 offers the best performance for short patterns (\(m\leqslant 32\)), whereas, when the pattern length increases, the 16-bit variant obtains significantly better performance.
**dna dataset**

| \(m\) | 16 | 32 | 64 | 128 | 256 | 512 | 1024 |
|---|---|---|---|---|---|---|---|
| SKIP-SEARCH | 0.36 | 0.42 | 0.48 | 0.49 | 0.49 | 0.54 | 0.55 |
| WFR | 0.32 | 0.39 | 0.43 | 0.45 | 0.49 | 0.55 | 0.57 |
| BRAM | 0.31 | 0.39 | 0.46 | 0.46 | 0.51 | 0.56 | 0.56 |
| SFDC-SS\((3,8)\) | **0.63** | **1.22** | 1.72 | 2.13 | 2.94 | 4.00 | 5.00 |
| SFDC-SS\((3,16)\) | 0.06 | 0.81 | **1.79** | **3.70** | **5.00** | **7.69** | **11.11** |
| _speed-up_ | 1.77 | 2.90 | 3.71 | 7.59 | 9.85 | 13.77 | 19.33 |

**protein dataset**

| \(m\) | 16 | 32 | 64 | 128 | 256 | 512 | 1024 |
|---|---|---|---|---|---|---|---|
| SKIP-SEARCH | 0.40 | 0.46 | 0.51 | 0.51 | 0.53 | 0.55 | 0.56 |
| WFR | 0.34 | 0.42 | 0.48 | 0.49 | 0.51 | 0.53 | 0.52 |
| BRAM | 0.37 | 0.43 | 0.48 | 0.48 | 0.49 | 0.54 | 0.57 |
| SFDC-SS\((5,8)\) | **0.62** | **1.16** | 1.61 | 2.08 | 2.86 | 3.85 | 4.76 |
| SFDC-SS\((5,16)\) | 0.07 | 0.90 | **2.00** | **4.00** | **7.69** | **14.29** | **16.67** |
| _speed-up_ | 1.54 | 2.52 | 3.96 | 7.80 | 14.62 | 26.14 | 29.33 |

**english dataset**

| \(m\) | 16 | 32 | 64 | 128 | 256 | 512 | 1024 |
|---|---|---|---|---|---|---|---|
| SKIP-SEARCH | 0.44 | 0.47 | 0.53 | 0.51 | 0.53 | 0.56 | 0.63 |
| WFR | 0.37 | 0.43 | 0.50 | 0.48 | 0.52 | 0.55 | 0.63 |
| BRAM | 0.36 | 0.45 | 0.47 | 0.52 | 0.51 | 0.56 | 0.63 |
| SFDC-SS\((5,8)\) | **0.55** | **0.95** | 1.35 | 1.75 | 2.50 | 3.23 | 4.00 |
| SFDC-SS\((5,16)\) | 0.07 | 0.82 | **1.39** | **3.03** | **6.67** | **9.09** | **11.11** |
| _speed-up_ | 1.25 | 2.04 | 2.61 | 5.79 | 12.67 | 16.09 | 17.56 |

Table 7: Experimental results obtained by comparing SFDC-based string matching algorithms against 3 standard string matching algorithms. Results are expressed in Gigabytes per second. Best results have been bold-faced.
The possibility of processing the string in blind mode allows a significantly higher processing speed than that offered by solutions operating on plain texts, allowing the SFDC-SS algorithm to achieve impressive speed-ups for very long patterns. Specifically, it is up to 19 times faster than the best algorithm operating on plain text in the case of the dna dataset, up to 17 times faster in the case of the english dataset, and up to 29 times faster in the case of the protein dataset.
## 9 Conclusions
In this paper, we have introduced a data reorganisation technique that, when applied to variable-length codes, allows easy, direct, and fast access to any character of the encoded text, in time proportional to the length of its code-word, plus an additional overhead that is constant in many practical cases. The importance of this result is related to the need for random access to variable-length codes required by various applications, particularly in compressed data structures. Our method, besides being extremely simple to translate into a computer program and efficient in terms of space and time, has the surprising feature of being computation-friendly. In fact, it turns out to be particularly suitable for applications in text processing problems. Although our evaluation was limited to the case of exact string matching, one of the basic problems in text processing, the techniques presented in this work can be applied to many other problems. We have also demonstrated experimentally that our coding model is comparable to the most efficient methods found in the literature, thus providing a proof of concept of the practical value of the idea.
Among the weaknesses of the new model, we have highlighted its particular sensitivity to the case in which the input text has a character frequency that is subject to considerable variation along its length. In such cases, the expected decoding delay can increase significantly. Our future work will aim to identify specific techniques that are able to eliminate, or at least mitigate, this limitation.
|
2301.00055 | A robust Bayesian latent position approach for community detection in
networks with continuous attributes | The increasing prevalence of multiplex networks has spurred a critical need
to take into account potential dependencies across different layers, especially
when the goal is community detection, which is a fundamental learning task in
network analysis. We propose a full Bayesian mixture model for community
detection in both single-layer and multi-layer networks. A key feature of our
model is the joint modeling of the nodal attributes that often come with the
network data as a spatial process over the latent space. In addition, our model
for multi-layer networks allows layers to have different strengths of
dependency in the unique latent position structure and assumes that the
probability of a relation between two actors (in a layer) depends on the
distances between their latent positions (multiplied by a layer-specific
factor) and the difference between their nodal attributes. Under our prior
specifications, the actors' positions in the latent space arise from a finite
mixture of Gaussian distributions, each corresponding to a cluster. Simulated
examples show that our model outperforms existing benchmark models and exhibits
significantly greater robustness when handling datasets with missing values.
The model is also applied to a real-world three-layer network of employees in a
law firm. | Zhumengmeng Jin, Juan Sosa, Shangchen Song, Brenda Betancourt | 2022-12-30T21:06:00Z | http://arxiv.org/abs/2301.00055v3 | A Bayesian latent position approach for community detection in single- and multi-layer networks with continuous attributes
###### Abstract
The increasing prevalence of multiplex networks has spurred a critical need to take into account potential dependencies across different layers, especially when the goal is community detection, which is a fundamental learning task in network analysis. We propose a full Bayesian mixture model for community detection in both single-layer and multi-layer networks. A key feature of our model is the joint modeling of the nodal attributes that often come with the network data as a spatial process over the latent space. In addition, our model for multi-layer networks allows layers to have different strengths of dependency in the unique latent position structure and assumes that the probability of a relation between two actors (in a layer) depends on the distances between their latent positions (multiplied by a layer-specific factor) and the difference between their nodal attributes. Under our prior specifications, the actors' positions in the latent space arise from a finite mixture of Gaussian distributions, each corresponding to a cluster. Simulated examples show that our model performs favorably compared to the existing ones. The model is also applied to a real three-layer network of employees in a law firm.
+
Footnote †: _Key words and phrases._ multiplex network, community detection, latent position model, mixture model, spatial process, visualization
## 1 Introduction
Network data conveniently describes the relationships between actors in complex systems and is ubiquitous in many statistical applications, including finance, social science, criminology, biology, epidemiology, and computer science, among others. Understanding the relationships between actors can aid domain experts.
For instance, in epidemiology, people in a certain area can be portrayed in a contact network that can be studied to detect infectious disease outbreaks. In criminology, communications between terrorists form a terrorist network, helping intelligence agencies to better counter terrorism.
Many models have been developed for the inference of networks over the past decades (e.g., Erdos and Renyi, 1959, Frank and Strauss, 1986), among which the broad class of latent space models is one of the most widely used (see, e.g., Sosa, 2021 for an exhaustive review). Suppose the network under study has \(N\) actors, then under latent space models, there are \(N\) independent and identically distributed (i.i.d.) latent variables \(z_{1},\ldots,z_{N}\), one for each actor. Under a mild exchangeability assumption in Hoff (2007), results in Aldous (1985) and Hoover (1982) show that edge variables \(y_{i,j}\) depend on latent variables through a symmetric function \(\gamma(z_{i},z_{j})\) that is meant to capture any pattern in the network beyond any known covariate information.
Many well-known models fall into the category of latent space models, which can be divided into two cases depending on whether the latent variables are discrete or continuous (Matias and Robin, 2014). For instance, stochastic block models (Nowicki and Snijders, 2001, Wang and Wong, 1987) - hereafter SBM - are special cases of latent space models with discrete latent variables \(z_{i}\in\{1,2,\ldots,K\}\). When the latent variables are assumed to be continuous, an important approach is the class of latent position models (LPM) proposed by Hoff et al. (2002), on which the model in this paper is built. In its basic formulation, the LPM models the edge variables \(y_{i,j}\) as conditionally independent given the distance between latent variables, \(\gamma(z_{i},z_{j})=-\|z_{i}-z_{j}\|\), which naturally accounts for transitivity effects through the latent space (typically a Euclidean \(K\)-dimensional space for a predetermined \(K\)) where \(z_{i}\) lives. Later on, Handcock et al. (2007) proposed an extension of Hoff et al.'s LPM, namely the latent position cluster model (LPCM), by imposing a Gaussian mixture prior on the latent positions to perform clustering tasks. Krivitsky et al. (2009) further extended Handcock et al.'s model by adding the random sender and receiver effects proposed by Hoff (2005). Other formulations of \(\gamma(\cdot,\cdot)\) can be found in Schweinberger and Snijders (2003), Hoff (2005, 2009), Athreya et al. (2017), Minhas et al. (2019), among others.
Besides edge information of a network, extra information like node and edge attributes and different types of edges are often available, and should ideally be leveraged for inference. Typical ways to incorporate attributes in a network model include: (1) modeling the network as a function of the attributes (see, e.g., Hoff et al., 2002, Hoff, 2005); (2) modeling the attributes as a function of the network (Guha and Rodriguez, 2021); (3) jointly modeling the network and attributes (Linkletter, 2007, Kim and Leskovec, 2012, Fosdick and Hoff, 2015, Ciminelli et al., 2019). The first approach is arguably the most common approach to incorporate covariates into the model, but we consider an approach of joint modeling proposed by Ciminelli et al.
[2019], namely the social network spatial model (SNSM), where the authors modeled the edges \(y_{i,j}\) as conditionally independent given \(\|z_{i}-z_{j}\|\) and the distance between the continuous node attributes \(\|x_{i}-x_{j}\|\), and the node attributes are further modeled as a spatial process over the latent space. Note that, unlike the first two approaches, joint modeling does not require the network or the attributes to be fully observed; hence one can predict missing network and attribute data (if any). In addition, it improves model fitting by capturing the dependence structure between the latent variables and the attributes (when such dependency exists), as we will see in Section 3.
We propose a full hierarchical Bayesian model that builds on Ciminelli et al.'s SNSM. Instead of using a Gaussian distribution as the prior for the latent positions as in Ciminelli et al. [2019], we impose a Gaussian mixture prior as in Handcock et al. [2007], so that our model can also capture the group structure in the network. Detecting communities or clusters among actors in a network is an important task in network analysis and has spurred the development of many models and algorithms, among which the SBM has motivated a particularly active line of research on community detection (see, e.g., Lee and Wilkinson [2019] for a review). However, the SBM may not fit well when many actors fall between clusters [Hoff et al., 2002]. We will compare our model with an SBM that incorporates covariates as fixed effects (i.e., modeling the edge variables as a function of latent classes and covariates [Leger, 2016]), which we call a covariate-assisted stochastic block model (CSBM). We will show that our model provides improved model fitting while producing clustering results similar to those of the CSBM.
We also propose an extension of our model to multi-layer network settings. Multi-layer networks can generally be categorized into two cases: cross-sectional networks that have different types of connections (e.g., social networks of friendship, coworker-ship, etc.) and time-varying networks where the same type of connections are measured over time (e.g., a trade network that changes over time). We consider a type of cross-sectional multi-layer network where each layer has a common set of actors. Substantial work has been done on latent space models for cross-sectional multi-layer networks that take a Bayesian approach (see, e.g., Gollini and Murphy, 2016; Salter-Townshend and McCormick, 2017; D'Angelo et al., 2019; Sosa and Betancourt, 2022; Durante and Dunson, 2018; Wang et al., 2019; MacDonald et al., 2020). In extending our model to the multiple networks setting, we adopt the approach in Sosa and Betancourt [2022] in a parsimonious way, where latent positions are assumed to be the same for all layers, but the strength of borrowing such latent structure information is allowed to be different across different layers. Note that, the original model in Sosa and Betancourt [2022] assumed different latent positions for different layers and had an additional hierarchy on the hyperparameters. The specification of our model is given in the next section.
The remainder of the paper is organized as follows. Section 2 contains general background on the spatial
process and introduces the proposed model (for single- and multi-layer network settings), which we call the latent position joint mixture model (LPJMM) in the rest of the paper. In addition, prior specification, identifiability issues, and inference are also discussed in that section. Several simulation studies are conducted in Section 3, where LPJMM is compared with Handcock et al.'s LPCM, Ciminelli et al.'s SNSM, and the CSBM in single-layer settings, and the model is also evaluated in multi-layer settings. In Section 4, we apply LPJMM to a real-world multi-layer network data set. Finally, we conclude with some discussion in Section 5.
## 2 Models
We first review the LPM introduced in Hoff et al. (2002), and then build upon it with a spatial process to allow for joint modeling of the network and the nodal attributes, and with a finite Gaussian mixture distribution for latent positions to allow for clustering.
Consider a binary single-layer network with \(N\) actors. Denote its adjacency matrix as \(\mathbf{Y}=(y_{i,j})\in\{0,1\}^{N\times N}\), where \(y_{i,j}=1\) if actors \(i\) and \(j\) are connected, and \(y_{i,j}=0\) if they are not. Suppose the network data comes with a one-dimensional nodal attribute \(x_{i}\) for each actor, and denote the covariate vector as \(\mathbf{x}=(x_{i})\in\mathbb{R}^{N}\). The LPM assumes that each actor \(i\) has an unobserved latent position \(z_{i}\) in a \(K\)-dimensional Euclidean space, the so-called latent space, for some \(K\in\mathbb{N}\). Let \(\mathbf{z}=(z_{i})\in\mathbb{R}^{N\times K}\); the LPM then models each edge \(y_{i,j}\) as conditionally independent given distances between the nodal attributes as well as distances between the latent positions, via logistic regression. In our model, however, we use the probit link instead of the logistic link. The analysis of probit regression models can often be facilitated by a Gibbs sampler constructed using the data augmentation approach that introduces latent variables with truncated normal distributions (Albert and Chib, 1993). (See also Sosa and Betancourt (2022) for a discussion on the choice of link functions.) Specifically, for \(i,j\in\{1,\ldots,N\}\) and \(i\neq j\),
\[y_{i,j}\mid\mathbf{z},\mathbf{x},a,b,\theta\stackrel{{\text{ ind}}}{{\sim}}\mathrm{Ber}\big{(}\Phi(a+b|x_{i}-x_{j}|-\theta\|z_{i}-z_{j}\|)\big{)}\,, \tag{1}\]
where \(a,b\in\mathbb{R}\) and \(\theta\in\mathbb{R}^{+}\), \(\mathrm{Ber}(p)\) is a Bernoulli distribution that takes the value 1 with probability \(p\), \(\|\cdot\|\) is the Euclidean norm on \(\mathbb{R}^{K}\), and \(\Phi(\cdot)\) is the cumulative distribution function of the standard normal distribution. Note that we impose a factor \(\theta\) on the distance between latent positions, which differs from Hoff et al. (2002) and Krivitsky et al. (2009). Although \(\theta\) is unidentifiable in single-layer networks, it plays a non-trivial role in multi-layer network settings (introduced in Section 2.1). We defer a detailed discussion of \(\theta\) to Section 2.4.
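To make the data-augmentation idea mentioned above concrete, the following is a minimal R sketch of the truncated-normal augmentation step of Albert and Chib (1993) for the probit tier; it is illustrative only (the function name is ours, `eta` stands for the matrix of linear predictors \(a+b|x_{i}-x_{j}|-\theta\|z_{i}-z_{j}\|\), and `Y` for the adjacency matrix), and it is not the sampler actually used for LPJMM, which relies on JAGS (Section 2.4).

```r
# One Albert-Chib augmentation step: draw latent w_ij ~ N(eta_ij, 1) truncated
# to (0, Inf) when y_ij = 1 and to (-Inf, 0) when y_ij = 0, via the inverse CDF.
sample_w <- function(eta, Y) {
  lo <- ifelse(Y == 1, pnorm(-eta), 0)  # lower bound of the uniform on the CDF scale
  hi <- ifelse(Y == 1, 1, pnorm(-eta))  # upper bound of the uniform on the CDF scale
  u  <- matrix(runif(length(eta), lo, hi), nrow = nrow(eta))
  eta + qnorm(u)                        # truncated-normal draws
}
```

In practice one would guard against the numerical extremes where `pnorm(-eta)` underflows to 0 or 1.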
To allow for joint modeling of the network and the nodal attributes, we model the nodal attributes as a spatial process over the latent space \(\mathbb{R}^{K}\). Hence, the nodal attributes are treated as random variables indexed by the latent positions, and the distance between two such random variables is given by the distance between their corresponding latent positions. As in Ciminelli et al. (2019), we specify the spatial process as a Gaussian process that is stationary with mean \(\beta\) and isotropic (see Banerjee et al., 2015 for definitions). In this case, the process is completely defined by its covariance function \(\mathrm{Cov}(d)\), where \(d\) is the distance between the latent positions indexing two random variables in the Gaussian process. In particular, we specify \(\mathrm{Cov}(d)\) with an exponential kernel, that is,
\[\mathrm{Cov}(d)=\begin{cases}\tau^{2}+\sigma^{2},&\text{if $d=0$;}\\ \sigma^{2}\exp(-\phi d),&\text{if $d>0$,}\end{cases}\]
where \(\tau\geq 0\), \(\sigma>0\) and \(\phi>0\). It is well-known that such a covariance structure is _valid_, i.e., the covariance matrix for any finite collection of random variables in the process is positive definite (Banerjee et al., 2015). Let \(M(\mathbf{z},\phi)=(m_{ij})\in\mathbb{R}^{N\times N}\), where \(m_{ij}=\exp(-\phi\|z_{i}-z_{j}\|)\), and let \(I_{N}\) denote the \(N\)-dimensional identity matrix; then the Gaussian process of the nodal attributes is constructed as follows,
\[\mathbf{x}\mid\mathbf{z},\beta,\sigma,\tau,\phi\sim\mathrm{N}_{N}(\beta \mathbf{1}_{N},\,\sigma^{2}M(\mathbf{z},\phi)+\tau^{2}I_{N}), \tag{2}\]
where \(\mathrm{N}_{d}\) is a \(d\)-dimensional multivariate normal distribution for some dimension \(d\in\{2,3,\dots\}\), and \(\mathbf{1}_{N}\) is an \(N\)-dimensional vector with all 1s.
As in Handcock et al. (2007) and Krivitsky et al. (2009), we impose a Gaussian mixture distribution on the latent positions, which allows us to cluster actors into different groups. Suppose the Gaussian mixture distribution has a predetermined number of components \(H<\infty\); then
\[z_{i}\mid\mathbf{\omega},\mathbf{\mu},\mathbf{\kappa}\stackrel{{\text{ind}}} {{\sim}}\sum_{h=1}^{H}\omega_{h}\mathrm{N}_{K}(\mu_{h},\,\kappa_{h}^{2}I_{K})\,, \tag{3}\]
where \(\mathbf{\omega}=\{\omega_{1},\dots,\omega_{H}\}\), \(\mathbf{\mu}=\{\mu_{1},\dots,\mu_{H}\}\), \(\mathbf{\kappa}=\{\kappa_{1},\dots,\kappa_{H}\}\). Note that \(\mu_{h}\) is a \(K\)-dimensional mean vector where \(h\in\{1,\dots,H\}\), and \(\omega_{h}\) is the probability that an actor belongs to the \(h\)-th group such that \(\omega_{h}\in(0,1)\) and \(\sum_{h=1}^{H}\omega_{h}=1\).
In single-layer network settings, the model is given by Eqs. (1) to (3). Under our model, the nodal attributes of two actors whose latent positions are close are more likely to be similar, according to the exponential covariance structure. If \(b<0\) (\(b>0\)), actors with similar attributes are more (less) likely to be connected. When \(b=0\), the nodal attributes do not affect the distribution of the network directly (but they still have an indirect impact on the network through the latent positions by Eq. (2)).
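For illustration, the single-layer generative model of Eqs. (1) to (3) can be simulated with a short R sketch. The constants below mirror those used later in Section 3.1; the mixture means and within-cluster spread are arbitrary choices for illustration only.

```r
set.seed(1)
N <- 100; K <- 2; H <- 5
# latent positions from a mixture of H Gaussians (Eq. (3))
mu <- matrix(rnorm(H * K, sd = 2), H, K)
g  <- sample(1:H, N, replace = TRUE)
z  <- mu[g, ] + matrix(rnorm(N * K, sd = 0.4), N, K)
# nodal attributes from the Gaussian process over the latent space (Eq. (2))
phi <- 0.5; sigma2 <- 1; tau2 <- 0.3; beta <- 0
D     <- as.matrix(dist(z))
Sigma <- sigma2 * exp(-phi * D) + tau2 * diag(N)
x     <- beta + as.numeric(t(chol(Sigma)) %*% rnorm(N))
# edges from the probit tier (Eq. (1))
a <- 5; b <- -2; theta <- 2.72
P <- pnorm(a + b * abs(outer(x, x, "-")) - theta * D)
Y <- matrix(rbinom(N * N, 1, P), N, N)
Y[lower.tri(Y)] <- t(Y)[lower.tri(Y)]  # symmetrize the adjacency matrix
diag(Y) <- 0
```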
### An extension to multi-layer networks.
Our model can also be extended to multi-layer network settings in the following way. Suppose we have \(L\) layers \(\mathbf{Y}_{1},\ldots,\mathbf{Y}_{L}\) in the network, where all layers are defined over the same set of actors. We assume the same latent positions \(\mathbf{z}\) for all layers but allow the strength of borrowing such latent structure information to be different by imposing layer-specific factors \(\theta_{\ell}\) for \(\ell\in\{1,\ldots,L\}\). Our model in multi-layer settings is then presented as follows
\[y_{i,j,\ell} \mid\mathbf{z},\mathbf{x},a_{\ell},b_{\ell},\theta_{\ell} \stackrel{{\text{ind}}}{{\sim}}\operatorname{Ber} \bigl{(}\Phi(a_{\ell}+b_{\ell}|x_{i}-x_{j}|-\theta_{\ell}\|z_{i}-z_{j}\|)\bigr{)}\,, \tag{4}\] \[\mathbf{x} \mid\mathbf{z},\beta,\sigma,\tau,\phi \sim\mathrm{N}_{N}(\beta\mathbf{1}_{N},\,\sigma^{2}M(\mathbf{z}, \phi)+\tau^{2}I_{N})\,,\] (5) \[z_{i} \mid\boldsymbol{\omega},\boldsymbol{\mu},\boldsymbol{\kappa} \stackrel{{\text{i.i.d.}}}{{\sim}}\sum_{h=1}^{H} \omega_{h}\mathrm{N}_{K}(\mu_{h},\,\kappa_{h}^{2}I_{K})\,, \tag{6}\]
where \(y_{i,j,\ell}\) is the edge variable between actors \(i\) and \(j\) in layer \(\ell\in\{1,\ldots,L\}\), \(a_{\ell},b_{\ell}\) and \(\theta_{\ell}\) are layer-specific parameters. Note that Eqs. (5) and (6) are the same as Eqs. (2) and (3). Fig. 1 shows a directed acyclic graph (DAG) representation of the model given by Eqs. (4) to (6).
Figure 1: DAG representation of the LPJMM in multi-layer settings.
### Prior specification
We take a Bayesian approach to estimate the model parameters. We work with the multi-layer formulation, which contains the single-layer model as a special case: a Bayesian version of the model given by Eqs. (4) to (6) is formed by placing prior distributions on the unknown parameters \(a_{\ell},b_{\ell},\theta_{\ell}\), \(\beta\), \(\sigma\), \(\tau\), \(\phi\), \(\boldsymbol{\omega}\), \(\mu_{h}\), \(\kappa_{h}\), for \(\ell\in\{1,\ldots,L\}\) and \(h\in\{1,\ldots,H\}\). In the model we consider, these parameters are assumed _a priori_ independent. For the parameters in the probit regression tier specified by Eq. (4), the priors are as follows:
\[a_{\ell}\stackrel{{\mathrm{i.i.d.}}}{{\sim}}\mathrm{N}(m_{a}, \nu_{a}^{2})\,,\qquad b_{\ell}\stackrel{{\mathrm{i.i.d.}}}{{\sim} }\mathrm{N}(m_{b},\nu_{b}^{2})\,,\qquad\theta_{\ell}\stackrel{{ \mathrm{i.i.d.}}}{{\sim}}\mathrm{Gamma}(\lambda_{1},\lambda_{2})\,.\]
The priors for the parameters in the spatial process tier as given in Eq. (5) are given as follows:
\[\beta\sim\mathrm{N}(0,\nu_{\beta}^{2})\,,\qquad\sigma^{2}\sim\mathrm{InvG}( \eta_{1},\eta_{2})\,,\qquad\tau^{2}\sim\mathrm{InvG}(\xi_{1},\xi_{2})\,,\qquad \phi\sim\mathrm{U}(u_{1},u_{2})\,.\]
Finally, we put the following priors on the rest of the parameters:
\[\boldsymbol{\omega}\sim\mathrm{Dir}(\alpha)\,,\qquad\mu_{h}\stackrel{{ \mathrm{i.i.d.}}}{{\sim}}\mathrm{N}_{K}(m_{\mu},\nu_{\mu}^{2}I_{K})\,, \qquad\kappa_{h}^{2}\stackrel{{\mathrm{i.i.d.}}}{{\sim}} \mathrm{InvG}(\gamma_{1},\gamma_{2})\,.\]
Note that \(m_{a}\), \(\nu_{a}\), \(m_{b}\), \(\nu_{b}\), \(\lambda_{1}\), \(\lambda_{2}\), \(\nu_{\beta}\), \(\eta_{1}\), \(\eta_{2}\), \(\xi_{1}\), \(\xi_{2}\), \(u_{1}\), \(u_{2}\), \(\alpha\), \(m_{\mu}\), \(\nu_{\mu}\), \(\gamma_{1}\) and \(\gamma_{2}\) are user-specified hyperparameters, and \(\mathrm{Gamma}(\cdot,\cdot)\), \(\mathrm{InvG}(\cdot,\cdot)\), \(\mathrm{U}(\cdot,\cdot)\), and \(\mathrm{Dir}(\cdot)\) denote the Gamma, Inverse-Gamma, uniform, and Dirichlet distributions, respectively.
### Posterior distribution and model estimation
As is standard in Bayesian estimation of mixture models (see, e.g., Diebolt and Robert (1994)), we define a new variable \(g_{i}\) that serves as the missing data of group membership of actor \(i\) whose distribution depends on \(\boldsymbol{\omega}\). In particular, \(g_{i}=h\) if actor \(i\) belongs to the \(h\)-th group. The joint density of \((z_{i},g_{i})\) given \(\boldsymbol{\omega}\), \(\boldsymbol{\mu}\) and \(\boldsymbol{\kappa}\) is then given by
\[\prod_{h=1}^{H}\left\{\omega_{h}\frac{1}{\sqrt{2\pi\kappa_{h}^{2}}}\exp\Big{(} -\frac{1}{2\kappa_{h}^{2}}\|z_{i}-\mu_{h}\|^{2}\Big{)}\right\}^{\mathrm{I}_{\{g _{i}=h\}}}\,,\]
where the indicator function \(\mathrm{I}_{\{g_{i}=h\}}=1\) if \(g_{i}=h\), and \(\mathrm{I}_{\{g_{i}=h\}}=0\) otherwise. Let \(\mathbf{g}=(g_{i})_{i=1}^{N}\) be the group membership for all actors and \(\mathcal{L}(\cdot)\) be the law of a random variable. Then the posterior distribution of \(\mathbf{z}\), \(\mathbf{g}\)
and the parameters upon which priors are specified in Section 2.2 is given by
\[\Pi(\mathbf{z},\mathbf{g},a_{1},\ldots,a_{L},b_{1},\ldots,b_{L}, \theta_{1},\ldots,\theta_{L},\beta,\tau^{2},\sigma^{2},\phi,\boldsymbol{\omega}, \boldsymbol{\mu},\boldsymbol{\kappa}\mid\mathbf{Y}_{1},\ldots,\mathbf{Y}_{L}, \mathbf{x})\] \[\propto \bigg{\{}\prod_{\ell=1}^{L}\mathcal{L}(\mathbf{Y}_{\ell}\mid \mathbf{z},\mathbf{x},a_{\ell},b_{\ell},\theta_{\ell})\bigg{\}}\mathcal{L}( \mathbf{x}\mid\mathbf{z},\sigma,\tau,\phi)\mathcal{L}(\mathbf{z},\mathbf{g} \mid\boldsymbol{\omega},\boldsymbol{\mu},\boldsymbol{\kappa})\bigg{\{}\prod_{ \ell=1}^{L}\mathcal{L}(a_{\ell})\mathcal{L}(b_{\ell})\mathcal{L}(\theta_{ \ell})\bigg{\}}\] \[\times\mathcal{L}(\beta)\mathcal{L}(\sigma^{2})\mathcal{L}(\tau^{ 2})\mathcal{L}(\phi)\mathcal{L}(\boldsymbol{\omega})\mathcal{L}(\boldsymbol{ \mu})\mathcal{L}(\boldsymbol{\kappa})\,.\]
Note that the posterior distribution has dimension \(NK+N+3L+3H+4\), and the corresponding posterior density is as follows,
\[\pi(\mathbf{z},\mathbf{g},a_{1},\ldots,a_{L},b_{1},\ldots,b_{L},\theta_{1},\ldots,\theta_{L},\beta,\tau^{2},\sigma^{2},\phi,\boldsymbol{\omega},\boldsymbol{\mu},\boldsymbol{\kappa}\mid\mathbf{Y}_{1},\ldots,\mathbf{Y}_{L},\mathbf{x})\] \[\propto \prod_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{N}\prod_{\ell=1}^{L}\big{[}\Phi(a_{\ell}+b_{\ell}|x_{i}-x_{j}|-\theta_{\ell}\|z_{i}-z_{j}\|)\big{]}^{y_{i,j,\ell}}\big{[}1-\Phi(a_{\ell}+b_{\ell}|x_{i}-x_{j}|-\theta_{\ell}\|z_{i}-z_{j}\|)\big{]}^{1-y_{i,j,\ell}}\] \[\times|\sigma^{2}M(\mathbf{z},\phi)+\tau^{2}I_{N}|^{-\frac{1}{2}}\exp\Big{(}-\frac{1}{2}(\mathbf{x}-\beta\mathbf{1})^{\intercal}\big{(}\sigma^{2}M(\mathbf{z},\phi)+\tau^{2}I_{N}\big{)}^{-1}(\mathbf{x}-\beta\mathbf{1})\Big{)}\] \[\times\prod_{i=1}^{N}\prod_{h=1}^{H}\bigg{\{}\frac{\omega_{h}}{\sqrt{\kappa_{h}^{2}}}\exp\Big{(}-\frac{1}{2\kappa_{h}^{2}}\|z_{i}-\mu_{h}\|^{2}\Big{)}\bigg{\}}^{\mathrm{I}_{\{g_{i}=h\}}}\] \[\times\exp\Big{(}-\frac{1}{2\nu_{a}^{2}}\sum_{\ell=1}^{L}(a_{\ell}-m_{a})^{2}-\frac{1}{2\nu_{b}^{2}}\sum_{\ell=1}^{L}(b_{\ell}-m_{b})^{2}\Big{)}\prod_{\ell=1}^{L}\theta_{\ell}^{\lambda_{1}-1}\exp(-\lambda_{2}\theta_{\ell})\] \[\times\exp\Big{(}-\frac{\beta^{2}}{2\nu_{\beta}^{2}}\Big{)}(\sigma^{2})^{-\eta_{1}-1}(\tau^{2})^{-\xi_{1}-1}\exp\Big{(}-\frac{\eta_{2}}{\sigma^{2}}-\frac{\xi_{2}}{\tau^{2}}\Big{)}\mathrm{I}_{\{\phi\in[u_{1},u_{2}]\}}\] \[\times\prod_{h=1}^{H}\bigg{\{}\omega_{h}^{\alpha_{h}-1}\mathrm{I}_{\{\sum_{h=1}^{H}\omega_{h}=1\}}\exp\Big{(}-\frac{1}{2\nu_{\mu}^{2}}\|\mu_{h}-m_{\mu}\|^{2}\Big{)}(\kappa_{h}^{2})^{-\gamma_{1}-1}\exp\Big{(}-\frac{\gamma_{2}}{\kappa_{h}^{2}}\Big{)}\bigg{\}}.\]
### Inference and identifiability of parameters
Note that the posterior distribution is highly intractable; hence we must resort to Markov chain Monte Carlo (MCMC) methods for inference on the model parameters. A Markov chain of the parameters is generated via the program "Just Another Gibbs Sampler" (JAGS), which is accessed from R (R Core Team, 2021) through the rjags package (Plummer, 2022).
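For reference, the sampling workflow takes roughly the following form in rjags. This is a sketch only: `lpjmm_model` stands for a JAGS model string encoding Eqs. (4) to (6), which is not reproduced here, and the data and monitored variable names are merely indicative (the burn-in, retained iterations, and thinning match the settings used in Sections 3 and 4).

```r
library(rjags)
# 'lpjmm_model' is a placeholder for the JAGS model string; Y, x, etc. as in Section 2
jags_data <- list(Y = Y, x = x, N = N, K = 2, H = 5)
fit <- jags.model(textConnection(lpjmm_model), data = jags_data, n.chains = 1)
update(fit, n.iter = 20000)  # burn-in
post <- coda.samples(fit,
                     variable.names = c("z", "g", "a", "b", "theta",
                                        "beta", "sigma2", "tau2", "phi"),
                     n.iter = 10000, thin = 10)
```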
Several parameters are not identifiable in our model. Firstly, the latent positions enter the likelihood only through their pairwise distances, and these distances appear only multiplied by the factors \(\theta_{\ell}\) (or, in the covariance of the attributes, by \(\phi\)); consequently, the scales of the \(\theta_{\ell}\)s, \(\phi\), and \(\mathbf{z}\) are not separately identifiable, and the posterior is invariant to scaling, reflection, rotation, and translation of the latent positions \(\mathbf{z}\). (Note that Hoff et al., 2002 and Krivitsky et al., 2009 did not have the \(\theta_{\ell}\)s, hence their posterior is not invariant to the scaling of latent positions.) Although the \(\theta_{\ell}\)s are not identifiable and do not affect the model fitting, in multi-layer settings their ratios \(\theta_{\ell 1}/\theta_{\ell 2}\) still provide valid information on the layers' relative strength of borrowing information from the latent space.
Although \(\mathbf{z}\) is unidentifiable, one can still make inferences on the latent positions and find a reasonable estimate for \(\mathbf{z}\) through a post-processing step, which we now describe. Similar to the definition in Hoff et al. (2002), we define the equivalence class of \(\mathbf{z}\in\mathbb{R}^{N\times K}\), denoted \([\mathbf{z}]\), to be the set of positions that are equivalent to \(\mathbf{z}\) under scaling, reflection, rotation, and translation. Given a fixed reference position \(\mathbf{z}_{ref}\), a position \(\mathbf{z}_{*}\) is found in \([\mathbf{z}]\) such that \(\mathbf{z}_{*}=\arg\,\min_{\mathbf{z}^{\prime}\in[\mathbf{z}]}\mathrm{tr}\big{(}(\mathbf{z}_{ref}-\mathbf{z}^{\prime})^{\intercal}(\mathbf{z}_{ref}-\mathbf{z}^{\prime})\big{)}\), which is the so-called Procrustes transformation. In simulation studies, \(\mathbf{z}_{ref}\) is naturally chosen to be the true latent position configuration, while in practical applications we can use the last iteration of the Markov chain of latent positions as the reference. The Procrustes transformation is performed for each iteration of the Markov chain of latent positions \(\{\mathbf{z}_{n}\}\), and an estimate for \(\mathbf{z}\) is taken as the mean of the Procrustes-transformed \(\{\mathbf{z}_{n}\}\).
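A minimal R implementation of this Procrustes post-processing is sketched below (the function name is ours, and `z_draws`/`z_ref` are illustrative object names for the posterior draws and the reference configuration).

```r
# Align a latent-position draw Z to a reference configuration Z_ref by
# translation, rotation/reflection, and scaling (orthogonal Procrustes).
procrustes_align <- function(Z, Z_ref) {
  mZ <- colMeans(Z); mR <- colMeans(Z_ref)
  Zc <- sweep(Z, 2, mZ); Rc <- sweep(Z_ref, 2, mR)
  s  <- svd(t(Zc) %*% Rc)
  Q  <- s$u %*% t(s$v)        # optimal rotation/reflection
  sc <- sum(s$d) / sum(Zc^2)  # optimal scaling
  sweep(sc * Zc %*% Q, 2, mR, "+")
}

# z_draws: list of N x K posterior draws; the post-processed estimate is their
# average after alignment to z_ref:
# z_hat <- Reduce(`+`, lapply(z_draws, procrustes_align, Z_ref = z_ref)) / length(z_draws)
```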
As is typical in Bayesian mixture models, the label-switching problem for the group membership \(\mathbf{g}\) is another source of non-identifiability; that is, the posterior is invariant under permutations of the cluster labels. Many algorithms have been proposed to obtain a single clustering estimate based on the MCMC sample of the group membership \(\{\mathbf{g}_{n}\}\), including an optimization method (which we call "MaxPEAR" hereafter) in Fritsch and Ickstadt (2009) that finds a clustering maximizing the posterior expected adjusted Rand index, an optimization method ("MinBinder") in Lau and Green (2007) that minimizes Binder's loss function, and a greedy algorithm ("GreedyEPL") in Rastelli and Friel (2018) that aims to minimize the variation of information, among others. These approaches may generate different clustering estimates, and to get a better understanding of the model performance, all of the aforementioned algorithms (MaxPEAR, MinBinder and GreedyEPL) are used to assess the model. Estimates based on these approaches are found using the packages GreedyEPL (Rastelli, 2021) and mcclust (Fritsch, 2022).
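A sketch of how the MaxPEAR and MinBinder estimates can be obtained with the mcclust package from the retained draws of \(\mathbf{g}\) is given below; `g_draws` and `g_true` are illustrative object names (a matrix of label draws and a known clustering, respectively).

```r
library(mcclust)
# g_draws: matrix of group-membership draws, one retained MCMC iteration per row
psm         <- comp.psm(g_draws)   # posterior similarity matrix
g_maxpear   <- maxpear(psm)$cl     # maximizes the posterior expected adjusted Rand index
g_minbinder <- minbinder(psm)$cl   # minimizes Binder's loss
# adjusted Rand index against a known clustering (used in the simulation studies)
arandi(g_maxpear, g_true)
```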
## 3 Simulation
Two simulation studies are carried out in this section to evaluate our model. A single-layer network is considered in the first simulation, where we compare LPJMM with three other models designed only for single-layer networks, namely the LPCM of Handcock et al. (2007), the SNSM of Ciminelli et al. (2019), and the CSBM of Leger (2016); the SNSM is also implemented using the rjags package, and the LPCM and CSBM are implemented using the latentnet (Krivitsky and Handcock, 2022) and sbm (Chiquet et al., 2022) packages, respectively. The specifications for these models can be found in Appendix A. Model assessment includes how well a model can recover the group membership and the latent position configuration, and a goodness-of-fit test using network summaries including density, transitivity, and the assortativity coefficient with respect to the group membership \(\mathbf{g}\) (see Kolaczyk and Csardi, 2020 for definitions). We also evaluate our model by how accurately it can estimate certain model parameters. The second simulation is conducted in a two-layer network setting, where the performance of our model can be further evaluated by how well it recovers the ratio \(\theta_{1}/\theta_{2}\), which reflects differences in each layer's dependency on the latent position structure.
### Simulation 1: a single-layer network
Consider a single-layer network (i.e., \(L=1\)) with \(N=100\) actors generated as follows. Firstly, generate latent positions \(\mathbf{z}\) from a mixture of \(H=5\) multivariate normal distributions, and then generate attributes \(\mathbf{x}\) jointly from a multivariate normal distribution with mean \(\beta\mathbf{1}_{N}=\mathbf{0}\) and covariance matrix given by \(\mathrm{Cov}(d)\) in Section 2 where \(\phi=0.5\), \(\tau^{2}=0.3\), \(\sigma^{2}=1\). Finally, the network data is generated according to Eq. (1) with \(a=5\), \(b=-2\), and \(\theta=2.72\). See Fig. 2 for a visualization of the simulated network. The network is fairly sparse with a density equal to \(0.1531\), and shows fairly strong transitivity and assortative mixing with coefficients \(0.5049\) and \(0.5512\) respectively.
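These network summaries can be computed with the igraph package, as in the sketch below (assuming an adjacency matrix `Y` and group labels `g` as in the earlier simulation sketch; the exact values obtained will depend on the simulated mixture components).

```r
library(igraph)
net <- graph_from_adjacency_matrix(Y, mode = "undirected")
edge_density(net)                                       # density
transitivity(net, type = "global")                      # global transitivity
assortativity_nominal(net, types = g, directed = FALSE) # assortativity w.r.t. group membership
```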
As for the prior specifications, we set \(m_{a}=m_{b}=0\), and \(\nu_{a}^{2}=\nu_{b}^{2}=9\) to allow a wide range of values for \(a\) and \(b\). Let \(\theta\sim\mathrm{Gamma}(1,1)\) so that \(\theta\) has mean 1. An almost flat prior is imposed on \(\beta\) by setting \(\nu_{\beta}=10^{4}\). The same uniform prior \(\mathrm{U}(0,1)\) as in Ciminelli et al. (2019) is specified for \(\phi\). We suggest the sum of the prior means of \(\tau^{2}\) and \(\sigma^{2}\) to be on the same scale as the sample variance of \(\mathbf{x}\), and here we use \(\sigma^{2}\sim\mathrm{InvG}(2,1)\) and \(\tau^{2}\sim\mathrm{InvG}(2,1)\). Let \(\alpha=\mathbf{1}\) so that the prior on \(\boldsymbol{\omega}\) is a flat Dirichlet distribution. Following the heuristics in Sosa and Betancourt (2022), we specify \(\mu_{h}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathrm{N}_{K}(\mathbf{0 },2/3I_{K})\) and \(\kappa_{h}^{2}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathrm{InvG}(3,2/3)\) so that \(\mathrm{var}(z_{ij}|g_{i})=1\).
Figure 2: Left: A visualization of the network based on the true latent position and color indicates group membership \(\mathbf{g}\). Right: Heatmap of the adjacency matrix (where actors are reordered according to \(\mathbf{g}\)).
Note that the latent space dimension \(K\) and the number of clusters \(H\) in the model need to be prespecified along with the priors. We take \(K\) to be the true dimension of the latent space (i.e., \(K=2\)), since this facilitates model assessment by allowing visualizations of the estimated latent positions. One could also select the \(K\) with the smallest Watanabe-Akaike Information Criterion (WAIC), as in Sosa and Betancourt (2022). However, WAIC and other information criteria like the Deviance Information Criterion (DIC) are not helpful in choosing the number of clusters \(H\). We noticed that the model assessment is significantly worse when \(H\) is chosen to be smaller than the truth, whereas model assessments are similar among models whose \(H\) is at least as large as the truth. A comparison of the model assessment for different specified \(H\) is given in Appendix B. From the comparison, we can also see that when \(H\) is specified to be larger, the number of clusters in the estimated group membership \(\hat{\mathbf{g}}\) also tends to be larger. Therefore, we suggest choosing \(H\) to be the largest number of groups that one is willing to accept, and in this example we choose \(H=5\).
We then fit LPJMM using MCMC sampling with \(20\,000\) burn-in iterations and a further \(10\,000\) iterations which are kept for posterior analysis. The Markov chain mixes reasonably well and shows no signs of lack of convergence (see Appendix C for the traceplot of the log-likelihood chain).
\begin{table}
\begin{tabular}{c c c c} \hline \hline & LPJMM & LPCM & CSBM \\ \hline MaxPEAR & 0.737 (5) & 0.707 (4) & – \\ MinBinder & 0.712 (11) & 0.748 (10) & – \\ GreedyEPL & 0.664 (4) & 0.688 (4) & – \\ Variational-EM & – & – & 0.707 (6) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Adjusted Rand indices corresponding to different estimation methods for group membership. Numbers in the parentheses represent numbers of estimated groups.
Figure 3: (A): Color indicates the true group membership \(\mathbf{g}\). (B)-(D): Color indicates the estimated group memberships \(\hat{\mathbf{g}}\) of LPJMM, LPCM and CSBM respectively. Positions of the points in all plots are true latent positions \(\mathbf{z}\).
To evaluate a model's ability to recover the group membership, we first find clustering estimates using the optimization algorithms (i.e., MaxPEAR, MinBinder and GreedyEPL) mentioned in Section 2.4. The adjusted Rand index is then calculated for each clustering estimate. Note that SNSM does not define clusters, therefore we only compare the adjusted Rand index between LPJMM, LPCM, and CSBM. Since the sbm package takes a non-Bayesian approach that uses a Variational-EM algorithm to find a point estimate of the group membership \(\mathbf{g}\), optimization methods like MaxPEAR are not necessary to analyze results from CSBM. The results shown in Table 1 suggest that these three models have similar ability to recover the group membership, with the Rand indices of LPJMM under the MaxPEAR and MinBinder algorithms being higher than the Rand index (\(0.707\)) under the CSBM model. A visualization of the estimated clusters based on the true latent positions is given in Fig. 3. Also, notice that the MinBinder algorithm tends to overestimate the number of clusters in the network.
To further compare the ability of LPJMM and LPCM to recover the latent position configuration, we find an estimate of the latent positions as follows. Firstly, we perform the Procrustes transformation on \(\mathbf{z}_{n}\) for each iteration \(n\), and then take the estimate \(\hat{\mathbf{z}}\) of \(\mathbf{z}\) to be the average of the transformed \(\mathbf{z}_{n}\). We then calculate the Euclidean distance between the estimated latent position \(\hat{z}_{i}\) (the \(i\)-th row of \(\hat{\mathbf{z}}\)) and the true latent position \(z_{i}\) (the \(i\)-th row of \(\mathbf{z}\)) for each actor \(i\), and use the sum of these distances over all actors to quantify the similarity between the estimated and true latent position configurations. The results, shown in Table 2, suggest that the two models recover the latent positions similarly well. Plots of the estimated latent positions under LPJMM and LPCM can be found in Appendix D, which also suggest estimated configurations of \(\mathbf{z}\) consistent with Table 2.
Following Sosa and Betancourt (2022), we assess whether the models fit well in the sense of reproducing a variety of summary statistics, which are calculated from a collection of simulated networks generated as follows. For LPJMM and SNSM, a network is simulated at every \(10\)th retained iteration using the parameters from that iteration. For LPCM and CSBM, \(1000\) networks are simulated using their respective packages. Then, for each model, we calculate the density, transitivity, and assortativity coefficient (where applicable) with respect to the true group membership for each simulated network. Boxplots of these summary statistics are given in Fig. 4, and their averages for each model are given in Table 3. Note that our model
\begin{table}
\begin{tabular}{l c c c} \hline \hline & LPCM & LPJMM & LPJMM \\ & & (simulation 1) & (simulation 2) \\ \hline Sum of Euclidean distances & 23.08 & 28.06 & 12.20 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Sum of distances between the estimated and true latent positions.
appropriately captures these structural features of the network data, while LPCM tends to overestimate transitivity in the network, and both SNSM and CSBM tend to underestimate both transitivity and assortativity in the network.
### Simulation 2: a two-layer network
Continuing with the parameter setup of simulation 1 and using its generated network as the first layer (i.e., \(a_{1}=5\), \(b_{1}=-2\), \(\theta_{1}=2.72\)), we generate a second layer of the network with \(a_{2}=3\), \(b_{2}=1\), \(\theta_{2}=4\). As in simulation 1, we fit LPJMM with \(K=2\) and \(H=5\) and evaluate the model's ability to recover the group membership using the adjusted Rand indices of the clustering estimates. The results are given in Table 4, which shows clustering estimates similar to those in simulation 1, where only one layer is considered. However, the sum of Euclidean distances between the estimated and true latent positions of all actors (see Table 2) in simulation 2 is \(12.20\), a significant improvement over \(28.06\) in simulation 1. The plot of the estimated latent position configuration is given in Fig. 5 (B), which visualizes the model's recovery of the latent positions and group membership.
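For completeness, the second layer above can be generated along the same lines as the earlier single-layer sketch (illustrative only; it reuses `x`, `D`, and `N` from that sketch).

```r
a2 <- 3; b2 <- 1; theta2 <- 4
P2 <- pnorm(a2 + b2 * abs(outer(x, x, "-")) - theta2 * D)
Y2 <- matrix(rbinom(N * N, 1, P2), N, N)
Y2[lower.tri(Y2)] <- t(Y2)[lower.tri(Y2)]  # symmetrize
diag(Y2) <- 0
```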
We also carry out the goodness-of-fit test as in simulation 1 and the result is given in Table 5, which shows that LPJMM captures these structural features accurately, and the result for layer 1 is similar to the result in simulation 1.
Figure 4: Boxplots of summary statistics for each model. Red dotted lines indicate the true values for network characteristics respectively.
\begin{table}
\begin{tabular}{c c c c c c} \cline{2-6} & true value & LPJMM & LPCM & SNSM & CSBM \\ \hline density & 0.1531 & 0.1539 & 0.1530 & 0.1504 & 0.1499 \\ transitivity & 0.5049 & 0.5144 & 0.5467 & 0.4027 & 0.3776 \\ assortativity & 0.5512 & 0.5468 & 0.5475 & 0.4811 & 0.4954 \\ \hline \end{tabular}
\end{table}
Table 3: Means of the summary statistics of the simulated networks for each model in simulation 1.
Recall that \(\theta_{1}\) and \(\theta_{2}\) are of no direct interest since they are not identifiable. However, we are still interested in the ratio \(\theta_{1}/\theta_{2}\) since it reflects the relative strength with which each layer borrows information from the latent space. Although \(a_{\ell}\) and \(b_{\ell}\) are of no direct interest either, we pay attention to their signs, especially that of \(b_{\ell}\), because different signs of \(b_{\ell}\) imply different interpretations of the effect of the attributes, as discussed in Section 2. We also assess the model's ability to estimate the parameters \(\beta,\tau^{2}\), and \(\sigma^{2}\) using posterior means and 95% credible intervals. The results are given in Table 6. Overall, LPJMM recovers the true values of these model parameters well, except for \(\tau^{2}\) and \(\sigma^{2}\). Both LPJMM and SNSM tend to underestimate \(\sigma^{2}\) and overestimate \(\tau^{2}\). That is, the covariance of the attributes tends to be underestimated, and although \(\tau^{2}\) is slightly overestimated, the variance of the attributes (\(\tau^{2}+\sigma^{2}\)) still tends to be underestimated.
## 4 Real data analysis
In this section, we consider a three-layer network data set collected by Lazega (2001) from a corporate law firm in New England between \(1988\) and \(1991\). This network describes three types of relationships (namely, advice, friendship, and coworker contacts) between 71 lawyers in the firm. Several actor attributes are also collected: age, gender, seniority (years with the firm), office (located in Boston, Hartford, or Providence), practice (litigation or corporate law), law school attended (Harvard or Yale, University of Connecticut, or other universities), and status (partner or associate). A principal component
\begin{table}
\begin{tabular}{l c c c} & MaxPEAR & MinBinder & GreedyEPL \\ \hline adjusted Rand index & 0.748 (6) & 0.753 (12) & 0.662 (4) \\ \hline \end{tabular}
\end{table}
Table 4: Adjusted Rand indices corresponding to different estimation methods for group membership in simulation 2. Numbers in the parentheses represent numbers of estimated groups.
Figure 5: (A): Points are plotted based on true latent position \(\mathbf{z}\) and true group membership \(\mathbf{g}\). (B): Points are plotted using the estimated latent positions in simulation 2, and color represents the estimated group membership using the MaxPEAR method.
analysis (PCA) is performed on the age and seniority attributes, and the first principal component explains 89% of the variance, which is unsurprising since age and seniority are highly correlated (correlation coefficient \(0.78\)). We take the first principal component as the attribute \(\mathbf{x}\) and let \(H=8\), since this is the largest number of clusters we expect in the network. The model is then fitted to the network using the same prior and Markov chain setup as in Section 3.
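The construction of the attribute is a one-liner in R; in the sketch below, `age` and `seniority` are hypothetical names for the corresponding attribute vectors.

```r
pc <- prcomp(cbind(age, seniority), center = TRUE, scale. = TRUE)
summary(pc)        # proportion of variance explained by each component
x  <- pc$x[, 1]    # first principal component, used as the nodal attribute
```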
Our study of the Lazega network aims to understand how the three types of relations can be explained by the fitted model. We first visualize the estimated latent positions \(\mathbf{z}\), colored by the different categorical attributes (gender, office, practice, law school, and status), in Fig. 6. As we can see from these plots, the estimated positions \(\mathbf{z}\) are well separated by office (especially the Boston and Hartford offices) and by practice. Comparing these plots with \(\mathbf{z}\) colored by the MaxPEAR and GreedyEPL clustering estimates of \(\mathbf{g}\) in Fig. 7, we see that both estimates roughly cluster the lawyers into three groups: lawyers in the Hartford office, litigation lawyers in the Boston or Providence offices, and corporate lawyers in the Boston or Providence offices.
Plots of adjacency matrices of the three layers (where lawyers are grouped by the MaxPEAR estimate of \(\mathbf{g}\)) and their corresponding networks are given in Fig. 8, where we could see that the coworker network shows
\begin{table}
\begin{tabular}{c c c c} & true value & posterior mean & 95\% credible interval \\ \hline \(\theta_{1}/\theta_{2}\) & 0.680 & 0.653 & (0.579, 0.721) \\ \(a_{1}\) & 5 & 5.01 & (4.719, 5.262) \\ \(a_{2}\) & 3 & 3.25 & (2.976, 3.572) \\ \(b_{1}\) & -2 & -1.919 & (-2.053, -1.766) \\ \(b_{2}\) & 1 & 1.058 & (0.901, 1.252) \\ \(\beta\) & 0 & -0.047 & (-1.027, 1.01 ) \\ \(\tau^{2}\) & 0.3 & 0.409 & (0.261, 0.592) \\ \(\sigma^{2}\) & 1 & 0.642 & (0.230, 1.684) \\ \hline \end{tabular}
\end{table}
Table 6: Posterior means and 95% credible intervals.
\begin{table}
\begin{tabular}{c c c c} & & true value & mean \\ \hline density & layer 1 & 0.1531 & 0.1535 \\ & layer 2 & 0.1024 & 0.1023 \\ transitivity & layer 1 & 0.5049 & 0.5088 \\ & layer 2 & 0.5477 & 0.5546 \\ assortativity & layer 1 & 0.5512 & 0.5466 \\ & layer 2 & 0.6923 & 0.6890 \\ \hline \end{tabular}
\end{table}
Table 5: Means of the summary statistics in simulation 2.
the strongest clustering pattern with respect to the estimated groups, while the advice network shows the weakest such pattern; this can also be seen from the relative ratios of the \(\theta_{\ell}\)s in Table 7. This means that lawyers from the same office and with the same practice are more likely to become coworkers and friends, whereas whom they seek advice from depends much less on office and practice. Furthermore, we can deduce from the posteriors of \(b_{\ell}\) in Table 7 that these lawyers tend to seek advice from people of similar age (or seniority), since the posterior estimate of \(b_{1}\) is negative, while lawyers of different ages (or seniority) are more likely to become friends and coworkers. This conclusion is in line with the assortativity coefficients with respect to the nodal attribute (lawyer's age) given in Table 8.
## 5 Discussion
This paper presents a latent position model that extends LPCM of Handcock et al. (2007) and SNSM of Ciminelli et al. (2019) to jointly model network data and the nodal attributes and perform model-based clustering. By jointly modeling the network and the attributes, we are able to describe how the attributes
Figure 6: Points in all plots are drawn based on the estimated latent positions \(\mathbf{z}\), and are colored based on their categories in gender, office, practice, law school, and status.
change over the network and to explain how relations may be influenced by the attributes. LPJMM also provides an extension to multi-layer network settings under the assumption that all layers share the same latent position structure but borrow this latent structure information with different strengths. We applied our method to two simulated networks, one with a single layer and another with two layers, and found that our model gives satisfactory fits to these two data sets and is competitive in terms of goodness-of-fit and group detection compared with SNSM, LPCM, and CSBM. The model is also applied to a three-layer real network data set, from which we are able to draw reasonable conclusions.
We have suggested choosing the number of groups \(H\) to be the largest number of groups that one is willing to accept in the network because we have found that varying the number of groups has almost no impact on the model fit and prediction outcome as long as it is in a reasonable range. One could also fit the CSBM to the network first, and choose \(H\) based on its estimated number of groups. One problem we have not addressed in the paper is of choosing the dimension of the latent space. This can be done by using Bayesian model selection like WAIC as in Sosa and Betancourt (2022).
Our model could be extended in several ways. Firstly, other extensions of our model to multi-layer settings could be considered. For example, Sosa and Betancourt (2022) assumed conditionally independent layer
\begin{table}
\begin{tabular}{l c c} & posterior mean & 95\% credible interval \\ \hline \(\theta_{1}/\theta_{2}\) & 0.3229 & (0.2352, 0.4152) \\ \(\theta_{1}/\theta_{3}\) & 0.2035 & (0.1479, 0.2606) \\ \(\theta_{2}/\theta_{3}\) & 0.6319 & (0.5536, 0.7198) \\ \(b_{1}\) & -0.0986 & (-0.1401, -0.0579) \\ \(b_{2}\) & 0.0708 & (0.0263, 0.1137) \\ \(b_{3}\) & 0.133 & (0.0854, 0.186) \\ \hline \end{tabular}
\end{table}
Table 7: Posterior means and 95% credible intervals.
Figure 7: Points are plotted using the estimated latent positions and color indicates the estimated group membership using MaxPEAR and GreedyEPL methods respectively.
specific latent positions, whereas MacDonald et al. (2022) assumed that the latent position of an actor in all layers is \((d_{0}+d_{1})\)-dimensional, where the first \(d_{0}\) components of the latent position are the same across all layers and only the last \(d_{1}\) components are layer-specific. Secondly, instead of assigning a user-specified number of groups \(H\) to the model, we could learn the number of groups by using a Bayesian nonparametric approach with a Dirichlet process prior on the community memberships (see, e.g., Amini et al., 2019).
LPJMM could also be extended to leverage multivariate covariates. So far, we have limited ourselves to modeling univariate nodal attributes that are approximately Gaussian. For continuous nodal attributes with more than one dimension, we have used the first principal component from the principal component analysis. To take full advantage of high-dimensional nodal attributes, one could use multivariate spatial process modeling to replace Eq. (2). Other extensions of more sophisticated spatial modeling include spatiotemporal modeling of attributes for time-varying networks, which would help to describe changes in actors over time.
Figure 8: Upper: Heatmaps of the adjacency matrices (where lawyers are reordered according to the MaxPEAR estimate of \(\mathbf{g}\)). Lower: A visualization of the three layers based on the estimated \(\mathbf{z}\) and color indicates the MaxPEAR estimate of \(\mathbf{g}\).
\begin{table}
\begin{tabular}{l l l l} & advice & friendship & coworker \\ \hline assortativity & 0.2536 & -0.1107 & -0.1224 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Assortativity coefficients with respect to lawyer’s age.
## Appendix A Model Specifications for SNSM, LPCM and CSBM
Note that the original SNSM in Ciminelli et al. (2019) uses the logit link. In order to make a fair comparison, we also use the probit link in SNSM as in LPJMM. The model specification for SNSM used in this paper is given as follows:
\[y_{i,j} \mid\mathbf{z},\mathbf{x},a,b\stackrel{{\text{ind}}}{{\sim}}\operatorname{Ber}\bigl{(}\Phi(a+b|x_{i}-x_{j}|-\|z_{i}-z_{j}\|)\bigr{)}\,,\] \[\mathbf{x} \mid\mathbf{z},\beta,\sigma,\tau,\phi\sim\mathrm{N}_{N}(\beta\mathbf{1}_{N},\,\sigma^{2}M(\mathbf{z},\phi)+\tau^{2}I_{N})\,,\]
and the priors are set to be the same as the priors in LPJMM (if possible). To be specific,
\[z_{i}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathrm{N}_{2}(\mathbf{0},\,I_{2})\,,\quad\beta\sim\mathrm{N}(0,10^{4})\,,\quad\sigma^{2}\sim\mathrm{ Inv}\mathrm{G}(2,1)\,,\quad\tau^{2}\sim\mathrm{Inv}\mathrm{G}(2,1)\,,\quad\phi\sim \mathrm{U}(0,1)\,,\]
and the priors on the parameters in the probit regression tier are given by:
\[a\sim\mathrm{N}(0,9)\,,\quad b\sim\mathrm{N}(0,9)\,.\]
SNSM in this paper is implemented using JAGS.
The model specification for LPCM (see Handcock et al., 2007) is given as the follows,
\[y_{i,j} \mid\mathbf{z},\mathbf{x},\beta_{0},\beta_{1}\stackrel{{\text{ind}}}{{\sim}}\operatorname{Ber}\bigl{(}\operatorname{logit}^{-1}\bigl{(}\beta_{0}^{\intercal}x_{i,j}-\beta_{1}\|z_{i}-z_{j}\|\bigr{)}\bigr{)}\,,\] \[z_{i} \mid\boldsymbol{\omega},\boldsymbol{\mu},\boldsymbol{\kappa}\stackrel{{\text{i.i.d.}}}{{\sim}}\sum_{h=1}^{5}\omega_{h}\mathrm{N}_{K}(\mu_{h},\,\kappa_{h}^{2}I_{K})\,,\]
and we use the default priors given in the latentnet package for prior specifications.
We first introduce several notations before presenting CSBM in Leger (2016). Suppose there are \(Q\) groups in the network. Denote the \(N\times Q\) group membership matrix as \(\boldsymbol{Z}=\{Z_{iq}\}\), and \(Z_{iq}=1\) if actor \(i\) belongs to group \(q\), \(Z_{iq}=0\) if otherwise. It is assumed that an actor can only belong to one group. The model specification for CSBM is given as follows,
\[y_{i,j}\mid Z_{i},Z_{j},\mathbf{x},\beta\stackrel{{\text{ind}}}{{\sim}}\operatorname{Ber}\bigl{(}\operatorname{logit}^{-1}\bigl{(}m_{q_{i},q_{j}}+\beta^{\intercal}x_{i,j}\bigr{)}\bigr{)}\,,\]
where \(Z_{i}\) is the \(i\)-th row of \(\boldsymbol{Z}\), \(q_{i}\) is the group membership for actor \(i\) and the group effect \(m_{q_{i},q_{j}}\in\mathbb{R}\).
## Appendix B Comparing model performances for different numbers of groups
We conduct a comparison of LPJMM with different \(H\in\{3,4,\ldots,9\}\) using the data set in simulation 1. Table 9 presents the adjusted Rand indices; the results are similar for models that assume \(H\) to be equal to or larger than the true number of groups (which is \(5\) in this example). However, the adjusted Rand indices for all three estimates are significantly smaller when the model assumes \(H\) to be smaller than \(5\). Also, notice that the estimated number of groups increases with \(H\). Visualizations of how the adjusted Rand indices and the estimated number of groups change with \(H\) are given in Fig. 9.
The goodness-of-fit test outlined in Section 3 is also carried out here to compare the means of several summary statistics, which are plotted in Fig. 10. As we can see from the plots, the model's fit is not affected by the choice of \(H\) even for \(H\) smaller than the actual number of clusters in the network.
\begin{table}
\begin{tabular}{c c c c} \(H\) & MaxPEAR & MinBinder & GreedyEPL \\ \hline
3 & 0.4067 (3) & 0.4008 (5) & 0.4321 (3) \\
4 & 0.4882 (3) & 0.4977 (6) & 0.6521 (4) \\
5 & 0.7374 (5) & 0.7115 (11) & 0.6635 (4) \\
6 & 0.7237 (6) & 0.7442 (20) & 0.7134 (4) \\
7 & 0.7449 (7) & 0.6624 (25) & 0.7313 (4) \\
8 & 0.7422 (8) & 0.6674 (25) & 0.7293 (8) \\
9 & 0.7056 (12) & 0.7041 (25) & 0.7043 (11) \\ \hline \end{tabular}
\end{table}
Table 9: Adjusted Rand indices of different estimates under LPJMM with different \(H\). Numbers in the parentheses denote the numbers of estimated groups.
Figure 9: Left: Adjusted rand indices of the clustering estimates found by using the MaxPear, MinBinder, and GreedyEPL methods. Right: Estimated number of groups using the three methods.
## Appendix C Traceplots of log-likelihood
The traceplots of the log-likelihood (after thinning the Markov chain every 10 iterations) in simulation studies and real applications in Sections 3 and 4 are given in Fig. 11.
## Appendix D Visualizations of results from LPJMM and LPCM
Visualizations of the estimated latent positions and estimated group membership using the MaxPEAR, MinBinder, and GreedyEPL methods under LPJMM and LPCM are shown in Figs. 12 and 13 respectively. |
2309.09135 | Characterizations of Stability via Morse Limit Sets | Subgroup stability is a strong notion of quasiconvexity that generalizes
convex cocompactness in a variety of settings. In this paper, we characterize
stability of a subgroup by properties of its limit set on the Morse boundary.
Given $H<G$, both finitely generated, $H$ is stable exactly when all the limit
points of $H$ are conical, or equivalently when all the limit points of $H$ are
horospherical, as long as the limit set of $H$ is a compact subset of the Morse
boundary for $G$. | Jacob Garcia | 2023-09-17T02:25:11Z | http://arxiv.org/abs/2309.09135v1 | # Characterizations of stability via Morse limit sets
###### Abstract.
Subgroup stability is a strong notion of quasiconvexity that generalizes convex cocompactness in a variety of settings. In this paper, we characterize stability of a subgroup by properties of its limit set on the Morse boundary. Given \(H<G\), both finitely generated, \(H\) is stable exactly when all the limit points of \(H\) are conical, or equivalently when all the limit points of \(H\) are horospherical, as long as the limit set of \(H\) is a compact subset of the Morse boundary for \(G\).
## 1. Introduction
An important class of Kleinian groups is the class of convex cocompact groups. These are exactly the subgroups \(H\) whose orbit in \(\mathbb{H}^{3}\) is convex cocompact. Additionally, these groups admit compact Kleinian manifolds, and every infinite-order element of a convex cocompact group is loxodromic. We highlight some of the other interesting properties of convex cocompact groups in the following theorem.
**Theorem 1.1**.: _([19, 20]) A Kleinian group \(H<\text{Iso}^{+}(\mathbb{H}^{3})\cong PSL_{2}(\mathbb{C})\) is called convex cocompact if one of the following equivalent conditions holds:_
1. \(H\) _acts cocompactly on the convex hull of its limit set_ \(\Lambda H\)_._
2. _Any_ \(H\)_-orbit in_ \(\mathbb{H}^{3}\) _is quasiconvex._
3. _Every limit point of_ \(H\) _is conical._
4. \(H\) _acts cocompactly on_ \(\mathbb{H}^{3}\cup\Omega\)_, where_ \(\Omega=\partial\mathbb{H}^{3}\setminus\Lambda H\)_._
However other more recent versions of this relationship have been shown. Swenson showed a generalization of this theorem for Gromov hyperbolic groups equipped with their visual boundaries [21], and there has been recent interest in generalizing these relationships beyond the setting of word-hyperbolic groups. For example, convex cocompact subgroups of mapping class groups acting on Teichmuller space, equipped with the Thurston compactification, have been characterized by Farb-Mosher [14] and Hamenstadt [15] as exactly the subgroups which determine Gromov hyperbolic surface group extensions. There has also been recent work done in this direction for subgroups of \(Out(F_{n})\), relating convex cocompact subgroups to hyperbolic extensions of free groups [16, 17, 18].
There has also been interest in creating generalizations which are applicable to any finitely generated group. An important generalization comes from [16], where Durham and Taylor introduced _stability_ (see Definition 2.23) to characterize convex cocompact subgroups of a mapping class group in a way which is intrinsic to the geometry of the mapping class group and which, in fact, generalizes the notion of convex cocompactness to any finitely generated group. The concept of stability was later generalized to that of a _strongly quasiconvex subgroup_, introduced in [19]. We note that a subgroup is stable when it is undistorted and strongly quasiconvex.
In the Kleinian, hyperbolic, and mapping class group settings, convex cocompactness is characterized by properties of the limit set on an appropriate boundary [18]. For an arbitrary finitely generated group, it is possible to construct a (quasi-isometry invariant) boundary called the Morse boundary, which was introduced by Cordes in [18] and expanded by Cordes and Hume in [16]. A generalization of convex cocompactness developed by Cordes and Durham, called _boundary convex cocompactness_ (see Definition 2.24), uses both the Morse boundary and stability to generalize item (1) of Theorem 1.1, see [18].
The purpose of this paper is to fully generalize item (3) of Theorem 1.1 to the setting of finitely generated groups, thereby answering [18, Question 1.15]. In fact, we additionally generalize some other characterizations from the hyperbolic setting found in [21]. We summarize the results of this paper in the following theorem:
**Theorem 1.2**.: _Let \(H\) be a finitely generated group acting by isometries on a proper geodesic metric space \(X\). The following are equivalent:_
1. _Any_ \(H\)_-orbit in_ \(X\) _is a stable embedding of_ \(H\to X\)_._
2. \(H\) _acts boundary convex cocompactly on_ \(X\)_._
3. _Every point in_ \(\Lambda H\) _is a conical limit point of_ \(H\)_,_ \(\Lambda H\neq\emptyset\)_, and_ \(\Lambda H\) _is a compact subset of the Morse boundary of_ \(H\)_._
4. _Every point in_ \(\Lambda H\) _is a horospherical limit point of_ \(H\)_,_ \(\Lambda H\neq\emptyset\)_, and_ \(\Lambda H\) _is a compact subset of the Morse boundary of_ \(H\)_._
**Remark 1.3**.: The result \((1)\Leftrightarrow(2)\) is found in the main theorem of [1]. We show \((3)\Rightarrow(4)\) in a combination of Proposition 3.4 and Theorem 3.8, using methods similar to [12]. We show \((4)\Rightarrow(2)\) in Theorem 4.2, by first showing that non-cobounded actions on the weak convex hull of \(\Lambda H\) admit a sequence of points \(p_{n}\) which diverge quickly from the orbit (see Lemma 4.1), but then showing that the \(p_{n}\) converge to an element of \(\Lambda H\), which ultimately contradicts the conical assumption. We give an alternate proof of \((2)\Rightarrow(3)\) in Proposition 4.4 which does not use the main theorem from [1].
A limit point in \(\Lambda H\) is conical if the limit point is accumulated by the orbit in a strong way: every geodesic ray representing the limit point gets boundedly close to the orbit. See Definitions 2.1 and 3.2. In general, a geodesic ray which is constructed from geodesic segments \([x,hx]\) need not stay close to the orbit of \(H\) even when \(H\) is stably embedded in the hyperbolic setting. For an example, see [12, Lemma 3]. A limit point in \(\Lambda H\) is horospherical if it is accumulated by the orbit in a similar way: every horoball around a geodesic ray representing the limit point intersects the orbit, see Definition 3.2.
We take a moment to provide a broad overview of stability in the recent literature. In addition to the results for the mapping class group mentioned above in [13, 1, 14, 15], it is also known that infinite index Morse subgroups of the mapping class group exactly coincide with stable subgroups [14], and stable subgroups of mapping class groups (and more generally, stable subgroups of Morse local-to-global groups) have interesting combination theorems [15]. Stability has also been studied in the context of Morse local-to-global groups [11], relatively hyperbolic groups [1], and hierarchically hyperbolic groups [1, 1]. It is also known that stable subgroups have finite height [13].
Comparing Theorem 1.2 to Theorem 1.1, we see that Theorem 1.1 involves a cocompact action on a domain of discontinuity which has no counterpart in Theorem 1.2. This is because the standard methods for showing this property rely on the fact that the (Gromov-)hyperbolic boundary of a word hyperbolic group is a compactification, so that finding the requisite compact set needed for a cocompact action boils down to finding an appropriate closed subset. In contrast, the Morse boundary usually does not compactify the underlying group; in fact, the Morse boundary compactifies a finitely generated group \(H\) if and only if \(H\) is word hyperbolic, see [11, Theorem 3.10] and [1, Lemma 4.1]. This leads to an open question:
**Open Question 1**.: Does there exist an appropriate classification of boundary convex cocompactness via an appropriate action on a domain of discontinuity analog?
For other properties in Theorem 1.2, we are able to address the need for some compactness in the Morse Boundary by assuming that the limit set of the group, \(\Lambda H\), is compact. See Definition 2.16 and Corollary 2.18. It is not possible to remove the compactness condition in either point \((3)\) or \((4)\) of Theorem 1.2. For example, consider the group \(G=\mathbb{Z}^{2}*\mathbb{Z}*\mathbb{Z}=\langle a,b\rangle*\langle c\rangle* \langle d\rangle\) with subgroup \(H=\langle a,b,c\rangle\). As discussed in [1, Remark 1.8], \(H\) is isometrically embedded and convex in \(G\), and so every point of \(\Lambda H\) is conical with respect to \(H\). In fact all rays representing a point in \(\Lambda H\) travel through \(H\) infinitely often. However \(H\) is not hyperbolic, so \(H\) is not stable. See [1, Section 1.2] for a complete discussion.
### Applications to Mapping Class Groups
Convex cocompact subgroups of mapping class groups have been well studied, see [13, 14], but in particular conical limit point characterizations have been analyzed before. Let \(S\) be a finite type surface, \(\operatorname{Mod}(S)\) its associated mapping class group, and let \(\mathcal{T}(S)\) be its associated Teichmuller space. In [14, Theorem 1.2], it is shown that a subgroup \(H\) of \(\operatorname{Mod}(S)\) is convex cocompact if and only if all the limit points of \(H\) in the Thurston compactification of \(\mathcal{T}(S)\) are conical. A straightforward combination of Theorem 1.2 and [1, Theorem 1.18] gives the following direct comparison, which uses the intrinsic geometry of \(\operatorname{Mod}(S)\) instead of the geometry of \(\mathcal{T}(S)\).
**Theorem 1.4**.: _Let \(S\) be a finite type surface, let \(H<\operatorname{Mod}(S)\) be finitely generated. Then \(H\) is a convex-cocompact subgroup of \(\operatorname{Mod}(S)\) if, and only if, every point in \(\Lambda H\) is a conical limit point of \(H\curvearrowright\operatorname{Mod}(S)\), \(\Lambda H\neq\emptyset\), and \(\Lambda H\) is compact in the Morse boundary of \(\operatorname{Mod}(S)\)._
This theorem, combined with the above result of [12], gives the following immediate corollary, which shows that conicality is a strong condition in the setting of mapping class groups:
**Corollary 1.5**.: _Let \(S\) be a finite type surface, and let \(H<\operatorname{Mod}(S)\) be finitely generated. The following are equivalent:_
1. _Every limit point of_ \(H\) _in the Morse boundary of_ \(\operatorname{Mod}(S)\) _is a conical limit point of_ \(H\curvearrowright\operatorname{Mod}(S)\) _and_ \(\Lambda H\) _is compact._
2. _Every limit point of_ \(H\) _in the Thurston compactification of_ \(\mathcal{T}(S)\) _is a conical limit point of_ \(H\curvearrowright\mathcal{T}(S)\)_._
We also show that there exists a natural \(\operatorname{Mod}(S)\)-equivariant map from \(\operatorname{Mod}(S)\) to \(\mathcal{T}(S)\) which sends conical limit points of \(H<\operatorname{Mod}(S)\) in the Morse boundary of \(\operatorname{Mod}(S)\) to conical limit points of \(H\) in the Thurston compactification of \(\mathcal{T}(S)\). This directly proves the implication \((1)\Rightarrow(2)\) in Corollary 1.5 without requiring results of [12], and in fact, does not require \(H\) to be a convex cocompact subgroup. See Theorem 5.2 for details.
### Acknowledgements
I would like to thank my advisor Matthew Gentry Durham for his guidance and support during this project. Thanks to Elliot Vest for many conversations and for his comments on an earlier draft of this paper.
## 2. Background
Here we collect some necessary material for the rest of the paper. We refer the reader to [1] for many definitions and for needed background material, in particular for the construction of the visual boundary in a \(\delta\)-hyperbolic space. We begin by recalling some relevant definitions and setting some notation.
**Definition 2.1**.: Let \((X,d)\) be a proper, geodesic metric space, and let \(\alpha:[a,\infty)\to X\) and \(\beta:[b,\infty)\to X\) be two geodesic rays.
* Given a closed set \(S\subseteq X\), we define the _closest point projection to_ \(S\) as \(\pi_{S}(x)=\{s\in S:d(s,x)=d(S,x)\}\).
* We say \(\alpha\) and \(\beta\)_\(N\)-asymptotically fellow-travel_, denoted by \(\alpha\sim_{N}\beta\), if there exists \(T\in\mathbb{R}\) so that whenever \(t\geq T\), we have \(d(\alpha(t),\beta(t))\leq N\).
* Given \(A\subset X\) and \(K\geq 0\), we denote the _\(K\)-neighborhood_ of \(A\) by \(\mathcal{N}_{K}(A)=\{x\in X:d(x,A)\leq K\}\).
In addition, if \(X\) is \(\delta\)-hyperbolic we have the following definitions from [10] (a short illustrative example follows the list):
* We denote the _horoball about_\(\alpha\) by \(H(\alpha)\) and define it as \(H(\alpha)=\bigcup\{\beta([b,\infty)):\beta\sim_{6\delta}\alpha,\ b\geq a\}\).
* We denote the _funnel about_\(\alpha\) by \(F(\alpha)\) and define it as \(F(\alpha)=\{x\in X:d(x,\alpha)\leq d(\pi_{\alpha}(x),\alpha(a))\}\).
* Given a point \(x\) in the visual boundary of \(X\) and a subset \(A\subseteq X\), we say \(x\) is a _horospherical limit point of \(A\)_ if, for every geodesic ray \(\alpha\) with \(\alpha(\infty)=x\), we have \(H(\alpha)\cap A\neq\emptyset\).
* Given a point \(x\) in the visual boundary of \(X\) and a subset \(A\subseteq X\), we say \(x\) is a _funnelled limit point of \(A\)_ if, for every geodesic ray \(\alpha\) with \(\alpha(\infty)=x\), we have \(F(\alpha)\cap A\neq\emptyset\).
* Given a point \(x\) in the visual boundary of \(X\) and a subset \(A\subseteq X\), we say \(x\) is a _conical limit point of \(A\)_ if there exists \(K>0\) such that, for every geodesic ray \(\alpha\) with \(\alpha(\infty)=x\), we have \(\mathcal{N}_{K}(\alpha)\cap A\neq\emptyset\).
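For a quick illustration of these notions, consider the simplest toy example: let \(X\) be a simplicial tree (so that we may take \(\delta=0\)), let \(\alpha:[0,\infty)\to X\) be a geodesic ray, and let \(A=\alpha([0,\infty))\). Any geodesic ray \(\beta\) with \(\beta(\infty)=\alpha(\infty)\) eventually coincides with \(\alpha\), so \(H(\beta)\), \(F(\beta)\), and \(\mathcal{N}_{K}(\beta)\) for any \(K>0\) all contain a subray of \(\alpha\) and hence meet \(A\); thus \(\alpha(\infty)\) is simultaneously a horospherical, funnelled, and conical limit point of \(A\).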
We present here for completeness a relaxed version of a claim in [10, pg 125] which shows that every horoball of a geodesic ray contains a funnel of an equivalent geodesic ray in a \(\delta\)-hyperbolic space.
**Lemma 2.2**.: _Let \((X,d)\) be a proper, geodesic, \(\delta\)-hyperbolic metric space, and let \(\alpha:[0,\infty)\to X\) be a geodesic ray. Define \(\alpha^{\prime}:[0,\infty)\to X\) by \(\alpha^{\prime}(t)=\alpha(t+6\delta)\). Then \(F(\alpha^{\prime})\subseteq H(\alpha)\)._
Proof.: See Figure 2.1. Let \(p\in F(\alpha^{\prime})\). Construct a geodesic ray \(\beta:[b,\infty)\to X\) such that \(\beta\sim_{6\delta}\alpha\) and \(\beta(b)=p\) (for details on the existence of such a geodesic ray, we refer to [1, pg 427-428]). Let \(q\in\pi_{\alpha^{\prime}}(p)\) such that \(d(\alpha(0),q)=\min\{d(\alpha(0),x):x\in\pi_{\alpha^{\prime}}(p)\}\), i.e., so that \(q\) is the point in \(\pi_{\alpha^{\prime}}(p)\) closest to \(\alpha(0)\). Notice that since \(p\in F(\alpha^{\prime})\) we have that \(d(p,q)\leq d(q,\alpha^{\prime}(0))\). Choose \(T\geq 6\delta\) so that \(q\in[\alpha^{\prime}(0),\alpha^{\prime}(T)]\) and so that for all \(t\geq T\), we have \(d(\alpha(t),\beta(t))<6\delta\). Then
\[T-b=d(\beta(T),p)\leq d(\beta(T),\alpha(T))+d(\alpha(T),q)+d(q,p)\leq 6\delta+d( \alpha^{\prime}(T-6\delta),q)+d(q,\alpha^{\prime}(0))=6\delta+(T-6\delta).\]
This shows that \(b\geq 0\), and so \(p=\beta(b)\in H(\alpha)\).
We also include the complementary statement that every funnel of a geodesic ray contains a horoball of an equivalent geodesic ray.
**Lemma 2.3**.: _([12, Lemma 5]) Let \((X,d)\) be a proper, geodesic, \(\delta\)-hyperbolic metric space, and let \(\alpha:[0,\infty)\to X\) be a geodesic ray. Define \(\alpha^{\prime}:[0,\infty)\to X\) by \(\alpha^{\prime}(t)=\alpha(t+12\delta)\). Then \(H(\alpha^{\prime})\subseteq F(\alpha)\). _
The combination of Lemmas 2.2 and 2.3 give the following relationship, which was originally stated as a corollary in [12].
**Corollary 2.4**.: _In a proper, geodesic, \(\delta\)-hyperbolic metric space, the funneled limit points are exactly the horospherical limit points. _
In a proper, geodesic, \(\delta\)-hyperbolic metric space, any two geodesic rays \(\alpha\) and \(\beta\) with \(d_{Haus}(\alpha,\beta)<\infty\) can be re-parameterized so that they asymptotically fellow-travel, for a fellow-travelling constant depending only on \(\delta\)[12, Lemma 4]. This is an important part of the definition of a horoball in Definition 2.1. In order to define a horoball in a non-hyperbolic space, it will be necessary to develop an analogue of this fact for Morse rays. We begin by recalling the definition.
**Definition 2.5**.: _([12, Definition 1.3])_ A (quasi)-geodesic \(\gamma\) in a metric space is called \(N\)-Morse, where \(N\) is a function \([1,\infty)\times[0,\infty)\to[0,\infty)\), if for any \((K,C)\)-quasi-geodesic \(\varphi\) with endpoints on \(\gamma\), we have \(\varphi\subset\mathcal{N}_{N(K,C)}(\gamma)\). We call the function \(N\) a _Morse gauge_. We say \(\gamma\) is _Morse_ if there exists a Morse gauge \(N\) so that \(\gamma\) is \(N\)-Morse.
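To calibrate this definition, we recall two standard examples (included here only as illustration). In the Euclidean plane \(\mathbb{R}^{2}\), no geodesic line is Morse: for each \(R>0\), the "tent" path from \((-R,0)\) to \((R,0)\) through \((0,R)\), parameterized by arc length, satisfies
\[d_{path}(p,q)\leq\sqrt{2}\,d(p,q)\quad\text{for all points }p,q\text{ on the tent},\]
so it is a \((\sqrt{2},0)\)-quasi-geodesic with endpoints on the line \(y=0\), yet its apex lies at distance \(R\) from that line; since \(R\) is arbitrary, no single constant \(N(\sqrt{2},0)\) can work. By contrast, in a \(\delta\)-hyperbolic space every geodesic is \(N\)-Morse for a Morse gauge \(N\) depending only on \(\delta\) (compare Remark 3.3).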
**Definition 2.6**.: _([12, 12])_ Given a Morse gauge \(N\) and a basepoint \(\mathfrak{o}\in X\), the \(N\)_-Morse stratum_, denoted \(X^{N}_{\mathfrak{o}}\), is defined as the set of all points \(x\) such that \([\mathfrak{o},x]\) is an \(N\)-Morse geodesic. Each such stratum is \(\delta\)-hyperbolic for \(\delta\) depending only on \(N\)[12, Proposition 3.2], and thus has a well defined visual boundary, which we denote as \(\partial X^{N}_{\mathfrak{o}}\). If \(\mathcal{M}\) is the set of all Morse gauges, then there is a natural partial order on \(\mathcal{M}\): \(N\leq N^{\prime}\) if \(N(K,C)\leq N^{\prime}(K,C)\) for all \(K\) and \(C\). Note the natural inclusion \(\partial X^{N}_{\mathfrak{o}}\hookrightarrow\partial X^{N^{\prime}}_{ \mathfrak{o}}\) is continuous whenever \(N\leq N^{\prime}\) by [12, Corollary 3.2]. We define the _Morse boundary based at \(\mathfrak{o}\)_ as
\[\partial X_{\mathfrak{o}}=\varinjlim_{\mathcal{M}}\partial X^{N}_{\mathfrak{o}}\]
with the induced direct limit topology. Given a Morse geodesic ray \(\alpha\), we denote the associated point in \(\partial X_{\mathfrak{o}}\) as \(\alpha(\infty)\).
Figure 2.1. Diagram for Lemma 2.2
**Remark 2.7**.: Often when studying the Morse boundary, the basepoint is suppressed from the notation, as the Morse boundary is basepoint independent [11, Proposition 2.5]. However, we will often make use of the basepoint explicitly in the arguments to come, thus we keep it in the notation.
The following fact states that subrays of Morse rays are also Morse. This will be especially useful in Section 3, as many of the arguments which describe the relationships between horoballs, funnels, and cones require restriction to a subray, as illustrated in the proof of Lemma 2.2.
**Lemma 2.8**.: _([16, Lemma 3.1]) Let \(X\) be a geodesic metric space. Let \(\alpha:I\to X\) be an \(N\)-Morse \((\lambda,\epsilon)\)-quasi-geodesic where \(I\) is an interval of \(\mathbb{R}\). Then for any interval \(I^{\prime}\subseteq I\), the \((\lambda,\epsilon)\)-quasi-geodesic \(\alpha^{\prime}=\alpha|_{I^{\prime}}\) is \(N^{\prime}\)-Morse where \(N^{\prime}\) depends only on \(\lambda,\epsilon\), and \(N\). _
We now present a combination of statements which will show that, given one Morse ray and another ray which fellow-travels with the first, then eventually the fellow-travelling constant is determined only by the Morse gauge of the first ray.
**Proposition 2.9**.: _([11, Proposition 2.4]) Let \(X\) be a geodesic metric space. Let \(\alpha:[0,\infty)\to X\) be an \(N\)-Morse geodesic ray. Let \(\beta:[0,\infty)\to X\) be a geodesic ray such that \(d(\alpha(t),\beta(t))<K\) for \(t\in[A,A+D]\) for some \(A\in[0,\infty)\) and \(D\geq 6K\). Then for all \(t\in[A+2K,A+D-2K]\), \(d(\alpha(t),\beta(t))<4N(1,2N(5,0))+2N(5,0)+d(\alpha(0),\beta(0))\). _
The proof of Proposition 2.9, as presented in [11], shows the following additional facts:
**Corollary 2.10**.: _Let \(X\) be a geodesic metric space. Let \(\alpha:[0,\infty)\to X\) be an \(N\)-Morse geodesic ray. Let \(\beta:[0,\infty)\to X\) be a geodesic ray such that \(d(\alpha(t),\beta(t))<K\) for \(t\in[A,A+D]\) for some \(A\in[0,\infty)\) and \(D\geq 6K\). Then there exist \(x,y\in[0,A+2K]\) such that \(d(\alpha(x),\beta(y))<N(5,0)\). _
**Corollary 2.11**.: _([11, Corollary 2.6]) Let \(X\) be a geodesic metric space. Let \(\alpha:[0,\infty)\to X\) be an \(N\)-Morse geodesic ray. Let \(\beta:[0,\infty)\to X\) be a geodesic ray such that \(d(\alpha(t),\beta(t))<K\) for all \(t\in[0,\infty)\) (i.e. \(\beta(\infty)=\alpha(\infty)\)). Then for all \(t\in[2K,\infty)\), \(d(\alpha(t),\beta(t))<\max\{4N(1,2N(5,0))+2N(5,0),8N(3,0)\}+d(\alpha(0),\beta (0))\). _
Combining Corollaries 2.10 and 2.11, we get the following generalization of [1, Chapter 3, Lemma 3.3].
**Proposition 2.12**.: _Let \(X\) be a geodesic metric space. Let \(\alpha:[0,\infty)\to X\) be an \(N\)-Morse geodesic ray. Let \(\beta:[0,\infty)\to X\) be a geodesic ray such that \(d(\alpha(t),\beta(t))<K\) for all \(t\in[0,\infty)\) (i.e. \(\beta(\infty)=\alpha(\infty)\)). Then there exists \(T_{1},T_{2}>0\) such that for all \(t\in[0,\infty)\), \(d(\alpha(T_{1}+t),\beta(T_{2}+t))<\max\{4N(1,2N(5,0))+2N(5,0),8N(3,0)\}+N(5,0)\)._
Proof.: By Corollary 2.10, there exists \(x,y\geq 0\) so that \(d(\alpha(x),\beta(y))<N(5,0)\). Define \(\alpha^{\prime}(t)=\alpha(x+t)\) and \(\beta^{\prime}(t)=\beta(y+t)\), and note in particular that \(\alpha^{\prime}(0)=\alpha(x)\) and \(\beta^{\prime}(0)=\beta(y)\). Applying Corollary 2.11 to \(\alpha^{\prime}\) and \(\beta^{\prime}\) produces the desired result.
For convenience, we will denote \(\delta_{N}=\max\{4N(1,2N(5,0))+2N(5,0),8N(3,0)\}+N(5,0)\). Using this notation, Proposition 2.12 leads to the following generalization of [12, Lemma 4].
**Corollary 2.13**.: _Let \(X\) be a geodesic metric space. Let \(\alpha:[0,\infty)\to X\) be an \(N\)-Morse geodesic ray. Let \(\beta:[0,\infty)\to X\) be a geodesic ray such that \(\beta(\infty)=\alpha(\infty)\). Then there exists \(a\in\mathbb{R}\) and an isometry \(\rho:[a,\infty)\to[0,\infty)\) so that \(\alpha\sim_{\delta_{N}}\beta\circ\rho\)._
Proof.: Apply Proposition 2.12 to find \(T_{1},T_{2}>0\) so that for all \(t\in[0,\infty)\), \(d(\alpha(T_{1}+t),\beta(T_{2}+t))<\delta_{N}\). Then let \(\rho:[a,\infty)\to[0,\infty)\) be the unique isometry such that \(\rho(T_{1})=T_{2}\).
**Proposition 2.14**.: _Suppose \(\alpha:[a,\infty)\to X\) is an \(N\)-Morse geodesic ray and \(\beta:[b,\infty)\to X\) is a geodesic ray such that \(\beta\sim_{\delta_{N}}\alpha\) and \(\alpha(a)=\beta(b)\). Then \(\beta\) is \(M\)-Morse where \(M\) depends only on \(N\)._
Proof.: It suffices to show that \(d_{Haus}(\alpha,\beta)\leq K\) where \(K\geq 0\) depends only on \(N\). Choose \(T>0\) so that \(d(\alpha(t),\beta(t))\leq\delta_{N}\) for all \(t\geq T\). Note that \([\beta(b),\beta(t)]\ast[\beta(t),\alpha(t)]\) is a \((1,2\delta_{N})\) quasi-geodesic, so by [11, Lemma 2.1], \(d_{Haus}([\alpha(a),\alpha(t)],[\beta(b),\beta(t)]\ast[\beta(t),\alpha(t)])\leq L\) for some \(L\) depending only on \(N\). But since \(l([\alpha(t),\beta(t)])\leq\delta_{N}\), we have \(d_{Haus}([\alpha(a),\alpha(t)],[\beta(b),\beta(t)])\leq L+\delta_{N}\). Since this holds for all \(t\geq T\), we conclude that \(d_{Haus}(\alpha,\beta)\leq L+\delta_{N}\), which depends only on \(N\).
The above statement leads to the following generalization, which is very similar to [12, Lemma 2.8]. This statement will be useful for showing a generalization of Corollary 2.4, since our horoballs and funnels will be restricted to a single Morse stratum. See Theorem 3.8.
**Proposition 2.15**.: _Suppose \(x\in\partial X_{\mathfrak{o}}^{N}\) for a Morse gauge \(N\). Then any geodesic ray \(\alpha:[a,\infty)\to X\) with \(\alpha(\infty)=x\) is \(M\)-Morse, where \(M\) depends only on \(N\) and the Morse gauge of \([\alpha(a),\mathfrak{o}]\)._
Proof.: See Figure 2.2. Let \(\beta:[b,\infty)\to X\) be \(N\)-Morse with \(\beta(b)=\mathfrak{o},\beta(\infty)=\alpha(\infty)\), and let \(N^{\prime}\) be the Morse gauge of \([\alpha(a),\beta(b)]\). For each \(n\in\mathbb{N}\), let \(\gamma_{n}=[\alpha(a),\beta(b+n)]\). Note that \(\beta|_{[b,b+n]}\) is Morse for some Morse gauge depending only on \(N\) by Lemma 2.8, and so by [12, Lemma 2.3], \(\gamma_{n}\) is \(N^{\prime\prime}\)-Morse for \(N^{\prime\prime}\) depending only on \(\max\{N,N^{\prime}\}\). Then via a straightforward generalization of [12, Lemma 2.10], there exists an \(N^{\prime\prime}\)-Morse geodesic ray \(\gamma\) with \(\gamma_{n}\to\gamma\) (uniformly on compact sets) and \(\gamma(\infty)=\beta(\infty)\). Then Proposition 2.14 shows that \(\alpha\) is Morse for an appropriate Morse gauge.
### Limit Sets and Weak Convex Hulls
We now introduce limit sets and weak convex hulls, and give some useful properties that these sets have. We use these constructions to turn subsets of \(X\) into subsets of the Morse boundary, and vice versa.
**Definition 2.16**.: ([12, Definition 3.2]) Let \(X\) be a proper, geodesic metric space and let \(A\subseteq X\). The _limit set of_ \(A\), denoted as \(\Lambda A\), is the set of points \(x\in\partial X_{\mathfrak{o}}\) such that, for some Morse gauge \(N\), there exists a sequence of points \((a_{k})\subset A\cap X_{\mathfrak{o}}^{N}\) such that \([\mathfrak{o},a_{k}]\) converges (uniformly on compact sets) to a geodesic ray \(\alpha\) with \(\alpha(\infty)=x\). (Note \(\alpha\) is \(N\)-Morse by [12, Lemma 2.10].) In the case where \(H\) acts properly by isometries on \(X\), we use \(\Lambda H\) to denote the limit set of \(H\mathfrak{o}\).
**Remark 2.17**.: By [12, Lemma 3.3], \(\Lambda H\) can be defined as the limit set of _any_ orbit of \(H\); we merely choose the orbit \(H\mathfrak{o}\) for convenience and simplicity in future arguments.
We also prove a small fact about limit sets, which is similar to [12, Lemma 4.1, Proposition 4.2].
**Corollary 2.18**.: _Let \(X\) be a proper, geodesic metric space and suppose \(A\subseteq X\). If \(\Lambda A\subseteq\partial X_{\mathfrak{o}}^{N}\) for some Morse gauge \(N\), then \(\Lambda A\) is compact._
Proof.: By [12, Proposition 3.12], this is equivalent to the condition that \(\Lambda A\) is closed.
**Remark 2.19**.: By Corollary 2.18 and by [12, Lemma 4.1], the requirement that \(\Lambda H\) is compact is equivalent to the requirement that \(\Lambda H\) is contained in the boundary of a single Morse stratum.
**Definition 2.20**.: ([13, 12]) Let \(X\) be a proper, geodesic metric space, and let \(A\subseteq X\cup\partial X_{\mathfrak{o}}\). Then the _weak convex hull_ of \(A\), denoted \(WCH(A)\), is the union of all geodesic (segments, rays, or lines) of \(X\) which have both endpoints in \(A\).
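As a simple example of both constructions (standard, and included here only for illustration), let \(X\) be the Cayley graph of the free group \(F_{2}=\langle a,b\rangle\), a tree, with basepoint \(\mathfrak{o}\) the identity vertex, and let \(H=\langle a\rangle\) act by left multiplication. The orbit \(H\mathfrak{o}=\{a^{n}:n\in\mathbb{Z}\}\) lies along the bi-infinite geodesic labeled by powers of \(a\), so
\[\Lambda H=\{a^{+\infty},a^{-\infty}\}\quad\text{and}\quad WCH(\Lambda H)=\text{the axis of }a.\]
Since \(X\) is a tree, every geodesic is Morse for a single gauge, so \(\Lambda H\) is a compact subset of the boundary of a single Morse stratum, and \(H\) acts coboundedly on \(WCH(\Lambda H)\); this is precisely the behavior captured by Definitions 2.23 and 2.24 below.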
We take a moment to highlight some nice interactions between the weak convex hull of a compact limit set with the Morse boundary.
Figure 2.2. Diagram for Proposition 2.15
**Lemma 2.21**.: _([10, Proposition 4.2]) Let \(X\) be a proper geodesic metric space and let \(A\subseteq X\) such that \(\Lambda A\subseteq\partial X^{N}_{\mathfrak{o}}\) for some Morse gauge \(N\). Then there exists a Morse gauge \(N^{\prime}\), depending only on \(N\), such that \(WCH(\Lambda A)\subset X^{N^{\prime}}_{\mathfrak{o}}\). _
**Lemma 2.22**.: _Let \(X\) be a proper geodesic metric space and let \(A\subseteq X\) such that \(\Lambda A\subseteq\partial X^{N}_{\mathfrak{o}}\) for some Morse gauge \(N\). Then \(\Lambda(WCH(\Lambda A))\subseteq\Lambda A\)._
Proof.: We may assume \(|\Lambda A|>1\). Let \(x\in\Lambda(WCH(\Lambda A))\). By Definition 2.16, there exists \(x_{n}\in WCH(\Lambda A)\) such that \([\mathfrak{o},x_{n}]\) converges to a geodesic ray \(\gamma\) with \(\gamma(\infty)=x\). We show that there exists \(K>0\) so that for all \(n\) there exists \(a_{n}\) with \([\mathfrak{o},x_{n}]\subseteq\mathcal{N}_{K}([\mathfrak{o},a_{n}])\). Thus, (a subsequence of) the geodesics \([\mathfrak{o},a_{n}]\) converge to a geodesic ray \(\alpha:[0,\infty)\to X\) with \(\alpha(0)=\mathfrak{o}\), and \(\alpha(\infty)=\gamma(\infty)=x\) and so \(x\in\Lambda A\). It remains to find \(K\) so that \([\mathfrak{o},x_{n}]\subseteq\mathcal{N}_{K}([\mathfrak{o},a_{n}])\).
Fix \(n\). Since \(x_{n}\in WCH(\Lambda A)\), \(x_{n}\in\eta\) where \(\eta:(-\infty,\infty)\to X\) is a geodesic with \(\eta(\pm\infty)\in\Lambda A\). So, by Definition 2.16, there exists \(a_{k}^{+},a_{k}^{-}\in A\cap X^{N}_{\mathfrak{o}}\) so that \([\mathfrak{o},a_{k}^{+}]\) and \([\mathfrak{o},a_{k}^{-}]\) converge to geodesics \(\beta^{+}\) and \(\beta^{-}\), respectively, with \(\beta^{+}(\infty)=\eta(\infty)\) and \(\beta^{-}(-\infty)=\eta(-\infty)\). Since \(\Lambda A\subseteq\partial X^{N}_{\mathfrak{o}}\), the triangle \(\eta\cup\beta^{+}\cup\beta^{-}\) is \(L\)-slim for \(L\) depending only on \(N\) by [10, Proposition 3.6], and as \(x_{n}\in\eta\), there exists \(y\in\beta^{+}\cup\beta^{-}\) so that \(d(x_{n},y)\leq L\). Without loss of generality, assume \(y\in\beta^{+}\). Since \([\mathfrak{o},a_{k}^{+}]\) converges to \(\beta^{+}\) uniformly on compact sets, choose \(m\) large enough so that \(d(y,[\mathfrak{o},a_{m}^{+}])\leq 1\). Let \(z\in[\mathfrak{o},a_{m}^{+}]\) so that \(d(y,z)\leq 1\). See Figure 2.3.
Note that the concatenation \([\mathfrak{o},x_{n}]*[x_{n},z]\) is a \((1,L+1)\)-quasi-geodesic with endpoints on \([\mathfrak{o},a_{m}^{+}]\). Since \([\mathfrak{o},a_{m}^{+}]\) is \(N\)-Morse, we have that \([\mathfrak{o},x_{n}]\subseteq[\mathfrak{o},x_{n}]*[x_{n},z]\subseteq\mathcal{ N}_{N(1,L+1)}([\mathfrak{o},a_{m}^{+}])\). Since \(K:=N(1,L+1)\) did not depend on the choice of \(n\), this completes the proof.
Finally, we finish this section by stating the definitions of stability and boundary convex cocompactness here for reference.
**Definition 2.23**.: _([10, Definition 1.3])_ If \(f:X\to Y\) is a quasi-isometric embedding between geodesic metric spaces, we say \(X\) is a _stable_ subspace of \(Y\) if there exists a Morse Gauge \(N\) such that every pair of points in \(X\) can be connected by an \(N\)-Morse quasigeodesic in \(Y\); we call \(f\) a _stable embedding_.
If \(H<G\) are finitely generated groups, we say \(H\) is _stable in_\(G\) if the inclusion map \(i:H\hookrightarrow G\) is a stable embedding.
**Definition 2.24**.: _([10, Definition 1.4])_ We say that \(H\) acts _boundary convex cocompactly_ on \(X\) if the following conditions hold:
1. \(H\) acts properly on \(X\),
2. \(\Lambda H\) is nonempty and compact,
3. The action of \(H\) on \(WCH(\Lambda H)\) is cobounded.
## 3. Limit point characterizations in the Morse Boundary
The goal of this section is to show that, given a set \(A\subseteq X\), if \(x\in\partial X_{\mathfrak{o}}\) is a Morse conical limit point of \(A\), then \(x\) is a Morse horospherical limit point of \(A\). This was first shown in the hyperbolic case in [11]; here we generalize this fact to the setting of proper geodesic metric spaces. We begin by introducing definitions which generalize horospheres and funnels for Morse rays.
Figure 2.3. Diagram for Lemma 2.22
**Definition 3.1** (Horoballs, Funnels).: Let \(X\) be a proper, geodesic metric space and let \(\mathfrak{o}\in X\) be some designated point. Let \(\alpha:[a,\infty)\to X\) be an \(N^{\prime}\)-Morse geodesic ray, and let \(N\) be some, potentially different, Morse gauge. We define the _\(N\)-Morse horoball around \(\alpha\) based at \(\mathfrak{o}\)_ as
\[H^{N}_{\mathfrak{o}}(\alpha)=\{x\in X^{N}_{\mathfrak{o}}\ |\ \exists\beta:[b, \infty)\to X\text{ with }\beta\sim_{\delta_{N^{\prime}}}\alpha\text{ and }b\geq a\text{ and }\beta(b)=x\}.\]
We define the _\(N\)-Morse funnel around \(\alpha\) based at \(\mathfrak{o}\)_ as
\[F^{N}_{\mathfrak{o}}(\alpha)=\{x\in X^{N}_{\mathfrak{o}}\ |\ d(x,\pi_{\alpha}(x)) \leq d(\alpha(a),\pi_{\alpha}(x))\}.\]
Comparing these definitions to Definition 2.1 shows that a Morse horoball is a horoball about a Morse geodesic intersected with an appropriate Morse stratum, and similarly, a Morse funnel is a funnel about a Morse geodesic intersected with an appropriate Morse stratum. The following three definitions classify points on the Morse boundary by asking if every horoball, funnel, or cone intersects a given subset of \(X\).
**Definition 3.2**.: Let \(X\) be a proper, geodesic metric space and let \(\mathfrak{o}\in X\) be some designated point. Let \(A\subset X\).
* We say that \(x\in\partial X_{\mathfrak{o}}\) is a _Morse horospherical limit point of \(A\)_ if for every Morse geodesic \(\alpha\) with \(\alpha(\infty)=x\), there exists a Morse gauge \(N\) such that \(H^{N}_{\mathfrak{o}}(\alpha)\cap A\neq\emptyset\).
* We say that \(x\in\partial X_{\mathfrak{o}}\) is a _Morse funnelled limit point of \(A\)_ if for every Morse geodesic \(\alpha\) with \(\alpha(\infty)=x\), there exists a Morse gauge \(N\) such that \(F^{N}_{\mathfrak{o}}(\alpha)\cap A\neq\emptyset\).
* We say that \(x\in\partial X_{\mathfrak{o}}\) is a _Morse conical limit point of \(A\)_ if there exists \(K>0\) such that, for every Morse geodesic \(\alpha\) with \(\alpha(\infty)=x\), we have that \(\mathcal{N}_{K}(\alpha)\cap A\neq\emptyset\).
**Remark 3.3**.: Notice that, in the case where \(X\) is a \(\delta\)-hyperbolic space, these definitions agree with the definitions given in Definition 2.1, as every geodesic in a \(\delta\)-hyperbolic space is \(N\)-Morse for \(N\) depending only on \(\delta\). In light of this, we will use "conical limit point" instead of "Morse conical limit point" for the rest of this paper, except in cases where the difference between these definitions causes confusion. We similarly reduce "Morse horospherical limit point" and "Morse funnelled limit point" to "horospherical limit point" and "funnelled limit point," respectively.
We now begin proving the new implications found in Theorem 1.2. We will first show that every conical limit point of \(A\) is a funnelled limit point of \(A\), and then we will show that the funnelled limit points of \(A\) exactly coincide with the horospherical limit points of \(A\). These arguments generalize the arguments found in [10].
**Proposition 3.4**.: _Let \(X\) be a proper, geodesic metric space and let \(\mathfrak{o}\in X\). Let \(A\subseteq X\). If \(x\in\partial X_{\mathfrak{o}}\) is a conical limit point of \(A\), then \(x\) is a funnelled limit point of \(A\)._
Proof.: See Figure 3.1. Let \(x\in\partial X_{\mathfrak{o}}\) be a conical limit point of \(A\subseteq X\). Let \(\alpha:[0,\infty)\to X\) be an \(N\)-Morse geodesic with \(\alpha(\infty)=x\). By Lemma 2.8, there exists a Morse gauge \(M\) so that every geodesic subray of \(\alpha\) is \(M\)-Morse. Thus, by Definition 3.2, there exists \(K\geq 0\) so that the \(K\)-neighborhood of every subray of \(\alpha\) meets \(A\).
Now define \(\alpha^{\prime}=\alpha|_{[3K,\infty)}\), and let \(a\in A\) such that \(a\in\mathcal{N}_{K}(\alpha^{\prime})\). Then note that \(d(a,\pi_{\alpha}(a))\leq d(a,\pi_{\alpha^{\prime}}(a))\leq K\), and so \(d(\pi_{\alpha}(a),\pi_{\alpha^{\prime}}(a))\leq 2K\). By the triangle inequality, \(\pi_{\alpha}(a)\subseteq\alpha|_{[K,\infty)}\). Therefore, \(d(\pi_{\alpha}(a),a)\leq K=d(\alpha(0),\alpha(K))\leq d(\alpha(0),\pi_{ \alpha^{\prime}}(a))\). It remains to show that \(a\in X^{N^{\prime}}_{\mathfrak{o}}\) for a Morse gauge \(N^{\prime}\) which is independent of the choice of \(a\in A\).
Let \(L=d(\mathfrak{o},\alpha(0))\), and let \(p\in\pi_{\alpha^{\prime}}(a)\), and note that \(d(p,a)\leq K\). Thus, \([\mathfrak{o},\alpha(0)]\) and \([p,a]\) are both \(N^{\prime\prime}\)-Morse depending only on \(\max\{K,L\}\), and \([\mathfrak{o},p]\) is \(N^{\prime\prime\prime}\)-Morse depending only on \(N\) by Lemma 2.8. Since \([\mathfrak{o},a]\) is one side of a quadrilateral whose other three sides are \(\max\{N^{\prime\prime},N^{\prime\prime\prime}\}\)-Morse, \([\mathfrak{o},a]\) is \(N^{\prime}\)-Morse where \(N^{\prime}\) does not depend on choice of \(a\in A\) by [10, Lemma 2.3].
Our next goal is to show that the funnelled limit points of \(A\) coincide with the horospherical limit points of \(A\). Towards this end, we show that, given a point \(x\) in a horoball of a subray, the projection of \(x\) to the subray is coarsely the same as the projection to the base ray.
**Lemma 3.5**.: _Suppose \(\alpha\) is an \(N\)-Morse geodesic ray and let \(\alpha^{\prime}\) be a subray. Suppose \(x\in H^{N^{\prime}}_{\mathfrak{o}}(\alpha^{\prime})\). If \(\alpha(\infty)\in\partial X^{N^{\prime\prime}}_{\mathfrak{o}}\), then \(d_{Haus}(\pi_{\alpha}(x),\pi_{\alpha^{\prime}}(x))\leq K\), where \(K\geq 0\) depends only on \(N\), \(N^{\prime}\), and \(N^{\prime\prime}\)._
Proof.: See Figure 3.2. Let \(\alpha:[0,\infty)\to X\) be an \(N\)-Morse geodesic ray and let \(\alpha^{\prime}=\alpha|_{[a,\infty)}\) for some \(a\geq 0\). By Lemma 2.8, \(\alpha^{\prime}\) is \(M\)-Morse for \(M\) depending only on \(N\). Let \(x\in H^{N^{\prime\prime}}_{\mathfrak{o}}(\alpha^{\prime})\), thus there exists \(\beta:[b,\infty)\to X\) a geodesic ray with \(b\geq a\), \(\beta(b)=x\), and \(\beta\sim_{\delta_{M}}\alpha^{\prime}\). By Proposition 2.15, \(\beta\) is \(M^{\prime}\)-Morse for \(M^{\prime}\) depending only on \(N\), \(N^{\prime}\), and \(N^{\prime\prime}\). We note that if \(\pi_{\alpha}(x)\subseteq\alpha^{\prime}\), then \(\pi_{\alpha}(x)=\pi_{\alpha^{\prime}}(x)\). So, we assume that \(\pi_{\alpha}(x)\not\subseteq\alpha^{\prime}\). We shall show that in this case, \(d(x,\pi_{\alpha}(x))\) and \(d(x,\pi_{\alpha^{\prime}}(x))\) are both bounded above by an appropriate constant, and this gives the desired result.
Let \(p\in\pi_{\alpha}(x)\setminus\alpha^{\prime}\), and let \(q\in\pi_{\alpha^{\prime}}(x)\). Without loss of generality, let \(T\) be large enough so that \(q\in[\alpha^{\prime}(a),\alpha^{\prime}(T)]\) and \(d(\alpha^{\prime}(T),\beta(T))\leq\delta_{N}\).
Put \(\gamma=[\beta(b),\alpha^{\prime}(a)]*[\alpha^{\prime}(a),\alpha^{\prime}(T)]*[ \alpha^{\prime}(T),\beta(T)]\), and note that \(\gamma\) is a \((3,4\delta_{N})\) quasi-geodesic. Thus there exists \(w\in[\beta(b),\beta(T)]\) and \(L\geq 0\) such that \(d(\alpha^{\prime}(a),w)\leq L\), where \(L\) depends only on \(M^{\prime}\) by [17, Lemma 2.1]. Notice now that \(|d(\alpha^{\prime}(a),\alpha^{\prime}(T))-d(w,\beta(T))|\leq\delta_{N}+L\). However, since \(b\geq a\) and \(w\in[\beta(b),\beta(T)]\), we know \(|d(\alpha^{\prime}(a),\alpha^{\prime}(T))-d(w,\beta(T))|=d(\alpha^{\prime}(a),\alpha^{\prime}(T))-d(w,\beta(T))\). But then by the definition of the nearest point projection and the triangle inequality, we have
\[d(x,p)\leq d(x,q)\leq d(x,\alpha^{\prime}(a))\leq d(x,w)+d(w,\alpha^{\prime}( a))=d(x,\beta(T))-d(w,\beta(T))+d(w,\alpha^{\prime}(a))\]
\[=d(\alpha^{\prime}(b),\alpha^{\prime}(T))-d(w,\beta(T))+d(w,\alpha^{\prime}( a))\leq d(\alpha^{\prime}(a),\alpha^{\prime}(T))-d(w,\beta(T))+L\leq\delta_{N}+L+L.\]
Therefore, \(d(\pi_{\alpha}(x),x)\) and \(d(\pi_{\alpha^{\prime}}(x),x)\) are both bounded above by \(\delta_{N}+2L\), which is a constant depending only on \(N\), \(N^{\prime}\), and \(N^{\prime\prime}\), as desired.
Figure 3.1. Diagram for Lemma 3.4
Figure 3.2. Diagram for Lemma 3.5
We're now ready to show that Morse funnelled limit points are exactly Morse horospherical limit points. We proceed using the same overall strategy as the one found in [10], by showing direct generalizations of Lemma 2.2 and Lemma 2.3 for the Morse case.
**Proposition 3.6**.: _Let \(x\in\partial X_{\mathfrak{o}}^{N}\). Let \(\alpha:[0,\infty)\to X\) be an \(N^{\prime}\)-Morse geodesic with \(\alpha(\infty)=x\). Then for every Morse gauge \(N^{\prime\prime}\), there exists \(T\geq 0\) such that, for any subray \(\alpha^{\prime}\) of \(\alpha\) with \(d(\alpha(0),\alpha^{\prime})\geq T\), we have \(H^{N^{\prime\prime}}_{\mathfrak{o}}(\alpha^{\prime})\subseteq F^{N^{\prime \prime}}_{\mathfrak{o}}(\alpha)\)._
Proof.: See Figure 3.3. Let \(\alpha^{\prime}=\alpha|_{[a,\infty)}\) be a subray of \(\alpha\). By Lemma 2.8, \(\alpha^{\prime}\) is \(M\)-Morse where \(M\) depends only on \(N^{\prime}\). Let \(y\in H^{N^{\prime\prime}}_{\mathfrak{o}}(\alpha^{\prime})\). Thus there exists a geodesic ray \(\beta:[b,\infty)\to X\) such that \(b\geq a\), \(\beta(b)=y\), and \(\beta\sim_{\delta_{M}}\alpha^{\prime}\). Note that \(\beta\) is \(M^{\prime}\)-Morse where \(M^{\prime}\) depends only on \(N\), \(N^{\prime}\), and \(N^{\prime\prime}\) by Proposition 2.15. Choose \(z\in\pi_{\alpha}(y)\) such that \(d(\alpha(0),z)=d(\alpha(0),\pi_{\alpha}(y))\), i.e., so that \(z\) is closest to \(\alpha(0)\). By Lemma 3.5, there exists \(p\in\pi_{\alpha^{\prime}}(y)\) so that \(d(z,p)\leq L\) for some \(L\) depending only on \(N\), \(N^{\prime}\), and \(N^{\prime\prime}\). Choose \(t\) large enough so that \(d(\alpha^{\prime}(t),\beta(t))=d(\alpha(t),\beta(t))\leq\delta_{M}\) and \(p,\alpha(b)\in[\alpha(a),\alpha(t)]\). Note that \([y,p]*[p,\alpha(t)]*[\alpha(t),\beta(t)]\) is a \((3,4\delta_{M})\)-quasi-geodesic, thus there exists \(q\in[\beta(b),\beta(t)]\) and \(\lambda\geq 0\) such that \(d(p,q)\leq\lambda\) where \(\lambda\) depends only on \(M^{\prime}\) by [16, Lemma 2.1]. It suffices to show that \(d(y,z)\leq d(\alpha(0),z)\).
Using the triangle inequality and the definition of \(\pi_{\alpha}\), we find
\[d(y,z) \leq d(y,p)\leq d(y,q)+d(q,p)\leq d(y,q)+\lambda=d(y,\beta(t))-d( q,\beta(t))+\lambda=d(\alpha(b),\alpha(t))-d(q,\beta(t))+\lambda\] \[\leq d(\alpha(a),\alpha(t))-d(p,\alpha(t))+\lambda+\delta_{M}+ \lambda=d(\alpha(a),p)+2\lambda+\delta_{M}\leq d(\alpha(a),z)+L+2\lambda+ \delta_{M}.\]
So, if \(a\geq L+2\lambda+\delta_{M}\), we have \(d(y,z)\leq d(\alpha(a),z)+L+2\lambda+\delta_{M}\leq d(\alpha(a),z)+d(\alpha( 0),\alpha(a))=d(\alpha(0),z)\).
**Proposition 3.7**.: _Let \(x\in\partial X_{\mathfrak{o}}^{N}\). Let \(\alpha:[0,\infty)\to X\) be an \(N^{\prime}\)-Morse geodesic with \(\alpha(\infty)=x\). Suppose \(S=\delta_{N^{\prime}}\). Define \(\alpha^{\prime}=\alpha|_{[S,\infty)}\). Then \(F^{N^{\prime\prime}}_{\mathfrak{o}}(\alpha^{\prime})\subseteq H^{N^{\prime \prime}}_{\mathfrak{o}}(\alpha)\) for any Morse gauge \(N^{\prime\prime}\)._
Proof.: Let \(y\in F^{N^{\prime\prime}}_{\mathfrak{o}}(\alpha^{\prime})\). By definition, \(d(y,\pi_{\alpha^{\prime}}(y))\leq d(\alpha(S),\pi_{\alpha^{\prime}}(y))\). Let \(p\in\pi_{\alpha^{\prime}}(y)\) such that \(d(\alpha(S),p)=d(\alpha(S),\pi_{\alpha^{\prime}}(y))\), i.e., let \(p\) be the element of \(\pi_{\alpha^{\prime}}(y)\) which is closest to \(\alpha(S)\). Then \(d(y,p)\leq d(\alpha(S),p)\). Construct \(\beta:[b,\infty)\to X\) such that \(\beta(b)=y\) and \(\beta\sim_{\delta_{N^{\prime}}}\alpha\). We want to show that \(b\geq 0\). Choose \(T\geq 0\) so that \(d(\beta(T),\alpha(T))\leq\delta_{N^{\prime}}\). Then \(T-b=d(y,\beta(T))\leq d(y,p)+d(p,\alpha(T))+d(\alpha(T),\beta(T))\leq d( \alpha(S),p)+d(p,\alpha(T))+\delta_{N^{\prime}}=d(\alpha(S),\alpha(T))+ \delta_{N^{\prime}}=d(\alpha(0),\alpha(T))-d(\alpha(0),\alpha(S))+\delta_{N^{ \prime}}=T-S+\delta_{N^{\prime}}=T\). In summary, \(T-b\leq T\), but this immediately shows that \(0\leq b\), as desired.
Figure 3.3. Diagram for Lemma 3.6
**Theorem 3.8**.: _Let \(x\in\partial X_{\mathfrak{o}}\). Then \(x\) is a Morse horospherical limit point of \(A\subseteq X\) if and only if \(x\) is a Morse funnelled limit point of \(A\)._
Proof.: This is a direct consequence of Propositions 3.6 and 3.7.
## 4. Limit Set Conditions For Stability
In this section, we show that the horospherical limit point condition, combined with the limit set being compact, is enough to show that the group action on the weak convex hull is cobounded. The main idea behind this argument is to show the contrapositive: when the group action is not cobounded, then geodesic rays in the space eventually end up very far from the orbit of the group. We begin by showing the following helpful fact, which states that if a group acts non-coboundedly on the weak convex hull of its compact limit set, then there exists a sequence of points \(p_{n}\) in the weak convex hull that "maximally avoids" the orbit.
**Lemma 4.1** (Sliding Spheres).: _Suppose \(X\) is a proper geodesic metric space and suppose that \(H\) acts properly on \(X\) by isometries. Assume that \(\Lambda H\neq\emptyset\), every point of \(\Lambda H\) is a conical limit point of \(H\mathfrak{o}\), and that \(\Lambda H\subseteq\partial X_{\mathfrak{o}}^{N}\) for some Morse gauge \(N\). If the action \(H\curvearrowright WCH(\Lambda H)\) is not cobounded, there exists a sequence of points \(p_{n}\in WCH(\Lambda H)\) such that, for sufficiently large \(n\), \(B_{n}(p_{n})\cap H\mathfrak{o}=\emptyset\) and \(\mathfrak{o}\in B_{n+1}(p_{n})\)._
Proof.: Let \(K>0\) be the conical limit point constant. Let \(n\in\mathbb{N}\) with \(n>K+1\). By [11, Corollary 5.8], we may assume that \(\Lambda H\) has at least two distinct points. Since \(H\curvearrowright WCH(\Lambda H)\) is not cobounded, there exists \(p\in WCH(\Lambda H)\) with \(d(p,H\mathfrak{o})>n\). By definition, \(p\in\gamma\) for some bi-infinite geodesic \(\gamma\) with \(\gamma(\pm\infty)\in\Lambda H\). Since \(\Lambda H\subseteq\partial X_{\mathfrak{o}}^{N}\), we have by [10, Proposition 4.2] that \(\gamma\) is is Morse for some Morse gauge depending only on \(N\). Since every point in \(\Lambda H\) is a conical limit point of \(H\mathfrak{o}\), there exists \(h^{\prime}\in H\) such that \(d(h^{\prime}\mathfrak{o},\gamma)<K\). Put \(q\in\pi_{\gamma}(h^{\prime}\mathfrak{o})\).
We may assume that \(\gamma(s)=q\) and \(\gamma(s^{\prime})=p\) with \(s<s^{\prime}\). Let \(A=\{r\in[s,s^{\prime}]\ :\ n<d(\gamma(r),H\mathfrak{o})\}\). (Equivalently, one may define \(A=\{r\in[s,s^{\prime}]\ :\ B_{n}(\gamma(r))\cap H\mathfrak{o}=\emptyset\}\).) Note that \(s^{\prime}\in A\). Put \(t=\inf A\). By the definition of \(t\), we have \(n\leq d(\gamma(t),H\mathfrak{o})\). See Figure 4.1. We now claim that \(d(\gamma(t),H\mathfrak{o})<n+1\).
Suppose for contradiction that \(n+1\leq d(\gamma(t),H\mathfrak{o})\). By the triangle inequality, \(n\leq d(\gamma(t-1),H\mathfrak{o})\). So if \(t-1\in[s,s^{\prime}]\), then \(t-1\in A\), contradicting \(t=\inf A\). Thus \(t-1\not\in[s,s^{\prime}]\). Therefore, \(t\in[s,s+1]\), and so \(n+1\leq d(\gamma(t),H\mathfrak{o})\leq d(\gamma(t),h^{\prime}\mathfrak{o})\leq d(\gamma(t),q)+d(q,h^{\prime}\mathfrak{o})=d(\gamma(t),\gamma(s))+d(q,h^{\prime}\mathfrak{o})\leq 1+K\leq n\), a contradiction.
Figure 3.4. Diagram for Lemma 3.7
Therefore, there exists \(h\in H\) such that \(h\mathfrak{o}\in B_{n+1}(\gamma(t))\), but \(B_{n}(\gamma(t))\cap H\mathfrak{o}=\emptyset\). Put \(p_{n}=h^{-1}(\gamma(t))\). By [12, Lemma 3.3], \(h\gamma\) is a bi-infinite Morse geodesic with endpoints in \(\Lambda H\), and since the action of \(H\) on \(X\) is by isometries, \(B_{n}(p_{n})\cap H\mathfrak{o}=\emptyset\), and \(\mathfrak{o}\in B_{n+1}(p_{n})\), as desired.
We now prove that (4) implies (2) in the language of Theorem 1.2. We show that, if the action is not cobounded on the weak convex hull, then using Lemma 4.1 we can find a sequence of points which maximally avoid the orbit of \(H\), but by Arzela-Ascoli, geodesics connecting these points to the basepoint eventually fellow-travel a ray which has orbit points that at most linearly diverge from the ray.
**Theorem 4.2**.: _Suppose \(X\) is a proper geodesic metric space and suppose \(H\) acts properly on \(X\) by isometries. Assume that \(\Lambda H\neq\emptyset\), every point of \(\Lambda H\) is a horospherical limit point of \(H\mathfrak{o}\), and that there exists a Morse gauge \(N\) such that \(\Lambda H\subset\partial X_{\mathfrak{o}}^{N}\). Then the action of \(H\curvearrowright WCH(\Lambda H)\) is cobounded._
Proof.: For contradiction, assume that the action \(H\curvearrowright WCH(\Lambda H)\) is not cobounded. By Lemma 4.1, there exists a sequence of points \(p_{n}\in WCH(\Lambda H)\) such that, for all \(n>K\), \(B_{n}(p_{n})\cap H\mathfrak{o}=\emptyset\), and \(\mathfrak{o}\in B_{n+1}(p_{n})\). Let \(\gamma_{n}:[0,d(\mathfrak{o},p_{n})]\to X\) be a geodesic with \(\gamma_{n}(0)=\mathfrak{o}\) and \(\gamma_{n}(d(\mathfrak{o},p_{n}))=p_{n}\). Notice that since \(\Lambda H\subset\partial X_{\mathfrak{o}}^{N}\), we have that \(\gamma_{n}\) is \(N^{\prime}\)-Morse for some \(N^{\prime}\) depending only on \(N\). Construct a subsequence \(\gamma_{n_{i}}\) such that \(\gamma_{n_{i}}\) converges, uniformly on compact subsets, to an \(N^{\prime}\)-Morse geodesic ray \(\gamma\) with \(\gamma(0)=\mathfrak{o}\).
By construction and by Lemma 2.22, \(\gamma(\infty)\in\Lambda(WCH(\Lambda H))\subseteq\Lambda H\). So, by [12, Corollary 2.6], there exists an \(N\)-Morse geodesic ray \(\alpha\) with \(\alpha(0)=\mathfrak{o}\) and \(d(\alpha(t),\gamma(t))<D\) for all \(t\geq 0\), where \(D\geq 0\) is a constant that depends only on \(N\). Let \(T=2D+4\), and put \(\alpha^{\prime}=\alpha|_{[T,\infty)}\). Since \(\alpha^{\prime}(\infty)\in\Lambda H\), it is a horospherical limit point of \(H\mathfrak{o}\) by assumption, and so by Theorem 3.8, \(\alpha^{\prime}(\infty)\) is a funnelled limit point of \(H\mathfrak{o}\). Thus there exists \(h\in H\) so that \(h\mathfrak{o}\in F_{\mathfrak{o}}^{N}(\alpha^{\prime})\). Let \(t_{0}=\min\{s:\alpha^{\prime}(s)\in\pi_{\alpha^{\prime}}(h\mathfrak{o})\}\). Since the sequence \(\gamma_{n_{i}}\) converges uniformly on compact sets to \(\gamma\), we may choose \(n\) large enough so that \(d(\gamma_{n}(t_{0}),\gamma(t_{0}))\leq 1\). See Figure 4.2.
By the triangle inequality we have that \(d(\gamma_{n}(t_{0}),\alpha^{\prime}(t_{0}))\leq D+1\), and thus \(|d(0,\gamma_{n}(t_{0}))-d(0,\alpha(t_{0}))|\leq D+1\). Also, by construction we have that \(d(h\mathfrak{o},\alpha^{\prime}(t_{0}))\leq d(\alpha(T),\alpha(t_{0}))\). Therefore we have
\[d(p_{n},h\mathfrak{o})\leq d(p_{n},\gamma_{n}(t_{0}))+d(\gamma_{n}(t_{0}),\alpha^{\prime}(t_{0}))+d(\alpha^{\prime}(t_{0}),h\mathfrak{o})\leq d(p_{n},\gamma_{n}(t_{0}))+(D+1)+d(\alpha(T),\alpha(t_{0}))\]
\[=d(\mathfrak{o},p_{n})-d(\mathfrak{o},\gamma_{n}(t_{0}))+(D+1)+d(\mathfrak{o},\alpha(t_{0}))-d(\mathfrak{o},\alpha(T))\leq(n+1)+(D+1)+(D+1)-(2D+4)\leq n-1.\]
However, this contradicts the assumption that \(B_{n}(p_{n})\cap H\mathfrak{o}=\emptyset\).
We now present an alternate definition of a conical limit point which agrees with Definition 3.2 in the case where \(\Lambda A\) is compact, and which requires us to consider only the geodesic rays which emanate from the given basepoint. By Corollary 2.18 and by [12, Lemma 4.1], the requirement that \(\Lambda H\) is compact is equivalent to the requirement that \(\Lambda H\) is contained in the boundary of a single Morse stratum.
**Proposition 4.3**.: _Let \(X\) be a proper, geodesic metric space. Let \(Y\subseteq X\). Suppose \(\Lambda Y\neq\emptyset\). Then the following are equivalent:_
1. \(x\in\partial X_{\mathfrak{o}}\) _is a conical limit point of_ \(Y\)_._
2. _There exists_ \(K>0\) _such that, for every_ \(N\)_-Morse geodesic ray_ \(\alpha:[0,\infty)\to X\) _with_ \(\alpha(0)=\mathfrak{o}\) _and_ \(\alpha(\infty)=x\)_, and for every_ \(T>0\)_, there exists_ \(y\in Y\) _such that_ \(y\in\mathcal{N}_{K}(\alpha^{\prime})\)_, where_ \(\alpha^{\prime}:[0,\infty)\to X\) _is defined by_ \(\alpha^{\prime}(t)=\alpha(t+T)\)_._
Figure 4.1. Diagram for Lemma 4.1. We can think of this proof as sliding the ball on the right towards the left until it is “up against” the orbit \(H\mathfrak{o}\), such as the ball centered at \(\gamma(t)\).
Proof.: Showing that (1) implies (2) is a direct consequence of Lemma 2.8 and Definition 3.2.
Instead assume (2). Let \(\beta:[b,\infty)\to X\) be an \(N^{\prime}\)-Morse ray with \(\beta(\infty)=x\). Let \(\alpha:[0,\infty)\to X\) be an \(N\)-Morse geodesic ray with \(\alpha(0)=\mathfrak{o}\) and \(\alpha(\infty)=x\). Without loss of generality, by Corollary 2.13 there exists \(T>0\) such that \(d(\alpha(t),\beta(t))<\delta_{N}\) for all \(t>T\). Put \(\alpha^{\prime}:[0,\infty)\to X\) via \(\alpha^{\prime}(t)=\alpha(t+T)\). By hypothesis, there exists \(y\in Y\) such that \(y\in\mathcal{N}_{K}(\alpha^{\prime})\). Let \(s\in[0,\infty)\) be such that \(d(\alpha^{\prime}(s),y)<K\), so via the triangle inequality we have \(d(\beta(s+T),y)\leq d(\beta(s+T),\alpha(s+T))+d(\alpha^{\prime}(s),y)\leq\delta_{N}+K\). Thus, \(y\in\mathcal{N}_{K+\delta_{N}}(\beta)\), which shows (1).
We conclude this section by showing that (2) \(\Rightarrow\) (3) for Theorem 1.2, which was first shown in [1, Corollary 1.14]; here, however, we present a direct proof that does not rely on [1, Theorem 1.1].
**Proposition 4.4**.: _Let \(X\) be a proper geodesic metric space and let \(H\) be a finitely generated group of isometries of \(X\) such that the orbit map \(H\to X\) via \(h\mapsto h\mathfrak{o}\) is a stable mapping. If there exists a Morse gauge \(N\) so that \(\Lambda H\subseteq\partial X_{\mathfrak{o}}^{N}\), then every \(x\in\Lambda H\) is a conical limit point of \(H\mathfrak{o}\)._
Proof.: Let \(x\in\Lambda H\), and let \(\alpha:[0,\infty)\to X\) be an \(N\)-Morse geodesic ray with \(\alpha(\infty)=x\), \(\alpha(0)=\mathfrak{o}\). Let \(\alpha^{\prime}=\alpha|_{[a,\infty)}\) be a subray of \(\alpha\). Notice that \(\alpha^{\prime}\) is \(N^{\prime}\)-Morse where \(N^{\prime}\) depends only on \(N\) by Lemma 2.8. By Proposition 4.3, it suffices to show that there exists some \(K\geq 0\), depending only on \(N^{\prime}\) and \(H\), so that \(H\mathfrak{o}\cap\mathcal{N}_{K}(\alpha^{\prime})\neq\emptyset\).
Since \(H\) is a stable subgroup of isometries on \(X\), we have that for any \(h\in H\), there exists a \((\lambda,\lambda)\)-quasi-geodesic \(\gamma\) from \(\mathfrak{o}\) to \(h\mathfrak{o}\) such that, for any \(p\in\gamma\), \(B_{2\lambda}(p)\cap H\mathfrak{o}\neq\emptyset\). (To find such a path \(\gamma\), take a geodesic in a Cayley graph for \(H\) and embed it into \(X\) by extending the orbit map along appropriate geodesic segments.)
Now, since \(x\in\Lambda H\), there exists a sequence \(h_{n}\in H\) such that the sequence of geodesic segments, \(\beta_{n}=[\mathfrak{o},h_{n}\mathfrak{o}]\), converges (uniformly on compact subsets) to a geodesic ray \(\beta:[b,\infty)\to X\) with \(\beta(\infty)=x\) and \(\beta(b)=\mathfrak{o}\). Since \(H\) is a stable group of isometries, \(\beta_{n}\) is \(N^{\prime\prime}\)-Morse by Definition 2.23. Up to potentially re-parameterizing \(\beta\), there exists \(T>a\) so that \(d(\beta(T),\alpha(T))<\delta_{N}\) by Corollary 2.13.
Since \(\beta_{n}\) converges to \(\beta\) uniformly on \(\overline{B_{T+1}(\mathfrak{o})}\), the ball of radius \(T+1\) centered at \(\mathfrak{o}\), there exists \(n\in\mathbb{N}\) and \(p\in\beta_{n}\) so that \(d(\beta(T),p)<1\). Since \(\gamma_{n}\) is an \((\lambda,\lambda)\)-quasi-geodesic with endpoints on \(\beta_{n}\), there exists \(q\in\gamma_{n}\) so that \(d(p,q)\leq N^{\prime\prime}(\lambda,\lambda)\). Finally, there exists \(h\in H\) so that \(d(h\mathfrak{o},q)\leq\lambda\).
Therefore by the triangle inequality, \(d(\alpha(T),h\mathfrak{o})\leq d(\alpha(T),\beta(T))+d(\beta(T),p)+d(p,q)+d(q, h\mathfrak{o})\leq\delta_{N}+1+N^{\prime\prime}(\lambda,\lambda)+2\lambda\). As \(\alpha(T)\in\alpha^{\prime}\), this completes the proof.
## 5. Applications to Teichmuller Space
We conclude by illustrating applications to the above work in the setting of Teichmuller space for a finite type surface \(S\). We begin by setting some notation. Let \(\operatorname{Mod}(S)\) denote the mapping class group of \(S\) and let \(\mathcal{T}(S)\) denote the associated Teichmuller space. We will denote the set of projective measured foliations on \(S\) by \(\operatorname{PMF}(S)\). The Thurston compactification of Teichmuller space is \(\overline{\mathcal{T}(S)}=\mathcal{T}(S)\cup\operatorname{PMF}(S)\).
Figure 4.2. Diagram for Theorem 4.2
We take a moment to restate Corollary 1.5 using the above notation:
**Corollary 5.1**.: _(Restatement of Corollary 1.5.) Let \(H\) be a finitely generated subgroup of \(\operatorname{Mod}(S)\). The following are equivalent:_
1. _Every element of_ \(\Lambda H\subset\partial\operatorname{Mod}(S)\) _is a conical limit point of_ \(H\curvearrowright\operatorname{Mod}(S)\) _and_ \(\Lambda H\) _is compact (in the Morse boundary of_ \(\operatorname{Mod}(S)\)_)._
2. _Every element of_ \(\Lambda H\subset\operatorname{PMF}(S)\) _is a conical limit point of_ \(H\curvearrowright\mathcal{T}(S)\)_._
By work of Cordes, \(\partial\operatorname{Mod}(S)\) is homeomorphic to \(\partial\mathcal{T}(S)\) (where \(\partial\) refers to the Morse boundary) [11, Theorem 4.12], and there exists a natural continuous injective map \(h_{\infty}:\partial\mathcal{T}(S)\hookrightarrow\operatorname{PMF}(S)\)[11, Proposition 4.14]. Keeping this in mind, we denote the continuous inclusion \(f_{\infty}:\partial\operatorname{Mod}(S)\hookrightarrow\operatorname{PMF}(S)\). The purpose of this section is to prove the following theorem.
**Theorem 5.2**.: _Let \(H\) be a subgroup of \(\operatorname{Mod}(S)\), and let \(x_{\infty}\in\Lambda H\subseteq\partial\operatorname{Mod}(S)\) be a conical limit point of \(H\curvearrowright\operatorname{Mod}(S)\). Then \(f_{\infty}(x_{\infty})\in\operatorname{PMF}(S)\) is a conical limit point of \(H\curvearrowright\mathcal{T}(S)\)._
**Remark 5.3**.: This theorem directly proves \((1)\Rightarrow(2)\) of Corollary 1.5.
Our proof of Theorem 5.2 uses several of the tools developed in [11], so we take a moment to recall the construction and definitions presented therein and in [10]. The _curve graph_, denoted \(\mathcal{C}(S)\), is a locally infinite simplicial graph whose vertices are isotopy classes of simple closed curves on \(S\). We join two vertices with an edge if there exist representatives from each class that are disjoint.
A set of (pairs of) curves \(\mu=\{(\alpha_{1},\beta_{1}),(\alpha_{2},\beta_{2}),\ldots,(\alpha_{m},\beta_{ m})\}\) is called a complete clean _marking_ of \(S\) if the \(\{\alpha_{1},\ldots,\alpha_{m}\}\) forms a pants decomposition of \(S\), if each \(\alpha_{i}\) is disjoint from \(\beta_{j}\) whenever \(i\neq j\), and if each \(\alpha_{i}\) intersects \(\beta_{i}\) once if the surface filled by \(\alpha_{i}\) and \(\beta_{i}\) is a one-punctured torus. (Otherwise, \(\alpha_{i}\) and \(\beta_{i}\) will intersect twice, and the filling surface is a four-punctured sphere.) We call \(\{\alpha_{1},\ldots,\alpha_{m}\}\) the _base_ of \(\mu\) and we call \(\beta_{i}\) the _transverse curve to \(\alpha_{i}\) in \(\mu\)_. For the sake of completeness, we also define the marking graph, \(\mathcal{M}(S)\), although the definition is not needed in this paper. \(\mathcal{M}(S)\) is the simplicial graph whose vertices are markings as defined above, and two markings are joined by an edge if they differ by an elementary move. The marking graph \(\mathcal{M}(S)\) is quasi-isometric to the mapping class group \(\operatorname{Mod}(S)\), see [10].
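For orientation, recall the standard count: a finite type surface of genus \(g\) with \(n\) punctures (with \(3g-3+n\geq 1\)) admits pants decompositions consisting of
\[m=3g-3+n\]
curves, so, for example, a complete clean marking of a closed genus-two surface consists of \(m=3\) pairs \((\alpha_{i},\beta_{i})\).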
For each \(\sigma\in\mathcal{T}(S)\) there is a _short marking_, which is constructed inductively by picking the shortest curves in \(\sigma\) for the base and repeating for the transverse curves. Now define a map \(\Upsilon:\mathcal{M}(S)\to\mathcal{T}(S)\) by taking a marking \(\mu\) to the region in the \(\epsilon\)-thick part of \(\mathcal{T}(S)\), denoted \(\mathcal{T}_{\epsilon}(S)\), where \(\mu\) is a short marking in that region. As stated in [11], it is a well known fact that \(\Upsilon\) is a coarsely well defined map which is coarsely Lipschitz. We take a moment to prove that this map is coarsely equivariant.
**Lemma 5.4**.: _Let \(\Upsilon:\mathcal{M}(S)\to\mathcal{T}(S)\) be as above, and let \(H<\operatorname{Mod}(S)\) be finitely generated. Then there exists a constant \(K\geq 0\) such that, for any marking \(\mu\in\mathcal{M}\) and for any \(h\in H\),_
\[d_{\mathcal{T}(S)}(h\Upsilon(\mu),\Upsilon(h\mu))\leq K.\]
Proof.: Let \(\mu=\{(\alpha_{1},\beta_{1}),\ldots,(\alpha_{m},\beta_{m})\}\in\mathcal{M}(S)\) and \(h\in H\) be arbitrary. Let \(\sigma\in\mathcal{T}(S)\) so that \(\mu\) is a short marking on \(\sigma\). (Equivalently, let \(\sigma=\Upsilon(\mu)\).) Since the action of \(H\) on \(\mathcal{T}(S)\) permutes the lengths of curves, the length of each pair \((\alpha_{i},\beta_{i})\) with respect to \(\sigma\) is the same as the length of the pair \((h\alpha_{i},h\beta_{i})\) with respect to \(h\sigma\). Therefore as \(\mu\) was a short marking for \(\sigma\), this shows that \(h\mu\) is a short marking for \(h\sigma=h\Upsilon(\mu)\). However, by definition of \(\Upsilon\), \(h\mu\) is also a short marking for \(\Upsilon(h\mu)\). As \(\Upsilon\) was a coarsely well defined function, this shows that \(d_{\mathcal{T}(S)}(h\Upsilon(\mu),\Upsilon(h\mu))\leq K\) for some \(K\geq 0\), as desired.
We now prove Theorem 5.2, using the above lemma and several tools from [11] to show that points in conical neighborhoods in \(\mathcal{M}(S)\) end up in conical neighborhoods of \(\mathcal{T}(S)\).
Proof.: Fix \(\mu_{0}\in\mathcal{M}(S)\). Let \(x\in\partial\mathcal{M}(S)_{\mu_{0}}\) be a conical limit point of \(H\mu_{0}\). Put \(\sigma_{0}=\Upsilon(\mu_{0})\). We shall show that \(f_{\infty}(x)\) is a conical limit point of \(H\sigma_{0}\) by verifying the condition in Proposition 4.3. Let \(T\geq 0\) be arbitrary, and let \(\lambda:[0,\infty)\to\mathcal{T}(S)\) be an arbitrary Morse geodesic ray with \(\lambda(0)=\sigma_{0}\) and \(\lambda(\infty)=f_{\infty}(x)\).
Let \(\alpha:\mathbb{N}\to\mathcal{M}(S)\) be an \(N\)-Morse geodesic with \(\alpha(0)=\mu_{0}\) and \(\alpha(\infty)=x\). By [11, Lemma 4.9], \(\Upsilon(\alpha)\) is an \(N^{\prime}\)-Morse \((A,B)\)-quasi-geodesic, for some \(A\), \(B\), and \(N^{\prime}\) depending only on \(N\). Put \(\beta=\Upsilon(\alpha)\). Notice that \(\beta(0)=\sigma_{0}\) and, by the construction of \(f_{\infty}\), we have \(\beta(\infty)=f_{\infty}(x)\). (For details on the construction of \(f_{\infty}\), we refer to [11], specifically Proposition 4.11, Theorem 4.12, and Proposition 4.14.)
Now let \(\gamma_{n}=[\sigma_{0},\beta(n)]\). Then each \(\gamma_{n}\) is \(N^{\prime\prime}\)-Morse for \(N^{\prime\prime}\) depending on \(N\), and by Arzela-Ascoli and [12, Lemma 2.10], a subsequence of the \(\gamma_{n}\) converges to a geodesic ray \(\gamma\) which is \(N^{\prime\prime}\)-Morse, and by [11, Lemma 4.9], \(\beta\) is bounded Hausdorff distance from \(\gamma\), where the bound only depends on \(N\). Say that \(d_{Haus}(\beta,\gamma)\leq K_{1}\) for \(K_{1}\geq 0\). By [13, Corollary 2.6], \(d_{Haus}(\gamma,\lambda)\leq K_{2}\) where \(K_{2}\geq 0\) depends only on \(N\). Choose \(S\geq 0\) so that, for all \(s\geq S\), \(d_{\mathcal{T}(S)}(\beta(s),\lambda|_{[T,\infty)})\leq K_{1}+K_{2}\).
By Proposition 4.3, there exists \(L\geq 0\) where, for all \(r\geq 0\), \(d_{\mathcal{M}(S)}(h\mu_{0},\alpha|_{[r,\infty)})\leq L\) for some \(h\in H\). Since \(\beta=\Upsilon(\alpha)\) and \(\Upsilon\) is coarsely Lipschitz, there exists \(K_{3}\geq 0\) and \(h\in H\) so that \(d_{\mathcal{T}(S)}(\Upsilon(h\mu_{0}),\beta|_{[S,\infty)})\leq K_{3}\). Let \(s_{0}\in[S,\infty)\) so that \(d_{\mathcal{T}(S)}(\Upsilon(h\mu_{0}),\beta(s_{0}))\leq K_{3}\). By Lemma 5.4, there exists \(K_{4}\geq 0\) such that \(d_{\mathcal{T}(S)}(\Upsilon(h\mu_{0}),h\Upsilon(\mu_{0}))\leq K_{4}\).
By the triangle inequality, we have
\[d_{\mathcal{T}(S)}(h\sigma_{0},\lambda|_{[T,\infty)})\leq d(h\Upsilon(\mu_{0} ),\Upsilon(h\mu_{0}))+d(\Upsilon(h\mu_{0}),\beta(s_{0}))+d(\beta(s_{0}), \lambda|_{[T,\infty)})\leq K_{4}+K_{3}+K_{2}+K_{1}\]
By Proposition 4.3, \(\lambda(\infty)=f_{\infty}(x)\) is a conical limit point of \(H\sigma_{0}\).
|
2309.06212 | Long-term drought prediction using deep neural networks based on
geospatial weather data | The problem of high-quality drought forecasting up to a year in advance is
critical for agriculture planning and insurance. Yet, it is still unsolved with
reasonable accuracy due to data complexity and aridity stochasticity. We tackle
drought data by introducing an end-to-end approach that adopts a
spatio-temporal neural network model with accessible open monthly climate data
as the input.
Our systematic research employs diverse proposed models and five distinct
environmental regions as a testbed to evaluate the efficacy of the Palmer
Drought Severity Index (PDSI) prediction. Key aggregated findings are the
exceptional performance of a Transformer model, EarthFormer, in making accurate
short-term (up to six months) forecasts. At the same time, the Convolutional
LSTM excels in longer-term forecasting. | Alexander Marusov, Vsevolod Grabar, Yury Maximov, Nazar Sotiriadi, Alexander Bulkin, Alexey Zaytsev | 2023-09-12T13:28:06Z | http://arxiv.org/abs/2309.06212v6 | # Long Term Drought Prediction using Deep Neural Networks based on Geospatial Weather Data
###### Abstract
The accurate prediction of drought probability in specific regions is crucial for informed decision-making in agricultural practices. In particular, for long-term decisions it is important to make predictions for one year ahead. However, forecasting this probability presents challenges due to the complex interplay of various factors within the region of interest and neighboring areas. In this study, we propose an end-to-end solution based on various spatio-temporal neural networks to address this issue. Considered models focus on predicting the drought intensity based on Palmer Drought Severity Index (PDSI) for subregions of interest, leveraging intrinsic factors and insights from climate models to enhance drought predictions.
Comparative evaluations demonstrate the superior accuracy of Convolutional LSTM and Transformer models compared to baseline Gradient Boosting and Logistic Regression solutions. These two models achieved impressive \(ROC\ AUC\) scores from 0.90 to 0.70 for forecast horizons from 1 to 6 months, outperforming the baselines. The Transformer showed superiority for shorter horizons, while ConvLSTM took the lead for longer ones; we therefore recommend selecting the model according to the forecast horizon in long-term drought forecasting.
To ensure the broad applicability of considered models, we conduct extensive validation across regions worldwide, considering different environmental conditions. Moreover, we also run several ablation and sensitivity studies to challenge our findings and provide additional information on an appropriate way to solve the problem at hand.
keywords: weather, climate, drought forecasting, deep learning, long-term forecasting
## 1 Introduction
Drought forecasting is one of the crucial problems nowadays [1]. Indeed, monitoring and forecasting droughts hold significant importance due to their high occurrence across diverse terrains [2]. These natural climate events are costly and have far-reaching impacts on populations and various economic sectors [3]. Additionally, it is worth noting that global climate change will likely increase the probability of droughts [4]. So, drought forecasting is an essential but complicated task since the drought's beginning, ending, and duration are often hard to predict [5].
A quantitative measure of drought is needed to solve the task described above. However, it is well known that drought depends on many other climatic factors such as temperature and precipitation [6]. Hence, drought measures (indices) can be calculated in many ways. Two of the most fundamental and widely known indices [7] are the Standardized Precipitation Index (SPI) [8] and the Palmer Drought Severity Index (PDSI) [9]. In our study, we use monthly PDSI for several reasons. First of all, our forecasting horizon is quite long - 12 months - and, according to recent work [10], PDSI is a good choice for long-term forecasting. Also, PDSI has a long recording history and allows us to account for the effect of global warming, which is a significant concern nowadays [11].
More and more machine learning algorithms are being applied to drought forecasting tasks. Regarding classic approaches, in [5] prominent stochastic models, AutoRegressive Integrated Moving Average (ARIMA) and multiplicative Seasonal AutoRegressive Integrated Moving Average (SARIMA), were exploited to predict drought via the SPI index. Multivariate regression taking a history of PDSI and several other global climate indexes as input was used to predict the PDSI index in South-Central Oklahoma [10]. The gradient boosting algorithm has shown effectiveness for geospatial data [12; 13] and can handle the imbalanced problems [14] encountered in drought prediction. Besides, this algorithm has proved its power in drought classification in Turkish regions [15]. The authors of [16] used a logistic regression model for a binary drought classification task via the SPI index.
Moving on to deep learning methods, the authors in [17] proposed to use a recursive multistep neural network and direct multistep neural network for
drought forecasting via the SPI index. One of the most prominent approaches for time series data is the Recurrent Neural Network (RNN) [18], and RNNs are often better than classical approaches [19]. Indeed, research [20] shows that Long Short-Term Memory (LSTM) [21] outperforms ARIMA for long-term forecasting of the SPI index, although ARIMA performs quite well for short-term prediction. To exploit the advantages of ARIMA and LSTM, the authors of [22] proposed a hybrid ARIMA-LSTM model. The approaches described above used only historical (temporal) information about the data. Nevertheless, besides temporal information, the data has spatial dependencies. One of the best-known methods to account for both temporal and spatial information is ConvLSTM [23]. This method applies to many tasks, such as earthquake prediction [24]. The authors of [7] used ConvLSTM for short-term drought forecasting (8 days) of satellite-based drought indices - the Scaled Drought Condition Index (SDCI) and SPI. Another idea is to use Transformer architectures. Initially developed for Natural Language Processing (NLP) tasks [25], these models are now widely used across a wide variety of domains, e.g., processing video frames in Computer Vision (CV) [26]. Consequently, adopting attention-based architectures for spatio-temporal modeling is a natural idea. Two of the best-known modern weather transformers are EarthFormer [27] and FourCastNet [28]. These models solve different regression tasks (e.g., precipitation nowcasting) and show superior performance for various spatio-temporal problems. We adopted these architectures for our task.
The applications of deep learning models in conjunction with geospatial data extend beyond drought forecasting and can be effectively employed in diverse weather scenarios. Unlike computationally intensive numerical weather prediction methods based on solving partial differential equations, recent research by [29] demonstrates the enhanced capabilities of transformer-based models in delivering accurate medium-range predictions for various upper-air and surface variables. Furthermore, [30] successfully combined multiple deep-learning models to rectify short-term wind predictions derived from conventional numerical algorithms. In the realm of meteorological forecasting complexity, [31] harnessed the power of Generative Adversarial Networks (GANs) and Variational Autoencoder GANs (VAE-GANs) to generate spatial forecasts of precipitation in the UK. Similarly, in the context of the contiguous United States, [32] showcased the superior performance of a 3D convolutional network in short-term precipitation predictions. The existing literature highlights the strengths of deep learning models for generating short-term and
medium-term forecasts, spanning from durations of several hours to several days or even weeks. This diverse range of applications underscores the adaptability and potential of deep learning in enhancing meteorological prediction methodologies.
Besides weather forecasting problems, deep learning models for geospatial data have found many applications. To begin with, this type of data could be transformed into images and then used to forecast traffic with CNNs, as in [33]. Naturally, one would exploit spatial dependencies to predict epidemic propagation, as in the case of the Madrid region and COVID-19 in the paper [34]. More recently, [35] reviewed various machine learning methods for forecasting dengue risk, including CNNs and RNNs. Next, in the paper by [36], geospatial LSTMs were exploited to predict invasive species outbreaks. Finally, various convolutional and recurrent neural network architectures were successfully exploited in crop yield forecasts, thus immediately providing economic value for the agricultural sector.
Our paper considers different classic (gradient boosting, logistic regression) and deep-learning (ConvLSTM, EarthFormer, and FourCastNet) methods for long-term drought forecasting using PDSI values. The main contributions of our research are:
1. Deep learning models solve the problem of medium (up to 12 months) drought forecasting. We formulate the problem as the prediction of the probability of a drought.
2. _Medium-term_ forecasting (up to 6 months ahead) is best handled by **EarthFormer**, while _long-term_ forecasting is better suited to **ConvLSTM**. We therefore recommend using them accordingly.
3. Surprisingly, we observed pretty good performance of Logistic regression and Gradient Boosting models, especially in _Short-term_ forecasting, if correct features are used.
Detailed explanations and intuition for the achieved results are given in Section 4.1. The key novelties of our work are the following:
1. We adopted and explored traditional spatio-temporal methods (ConvLSTM, gradient boosting, logistic regression) as well as modern approaches (EarthFormer, FourCastNet). To our knowledge, we are the first to use these models for long-term forecasting of PDSI values.
2. Unlike many previous works, which tried to predict the PDSI value directly (a regression task), we treated drought forecasting as a binary classification task. We chose \(-2\) as the threshold according to [10]: all PDSI values below \(-2\) are interpreted as a drought. Thus, a model reports the probability of a drought, making it more suitable for estimating financial risk for particular regions.
3. We provided extensive experiments using freely accessible data from diverse regions and continents.
## 2 Data
We have used publicly available geospatial data from Google Earth Engine [37]. Specifically, for obtaining the PDSI data, we employed the Terra-Climate Monthly dataset [38]. Our PDSI data encompasses a comprehensive range of climatic values spanning from 1958 to 2022, covering the entirety of Earth. To test the consistency of considered models, we look at regions across continents and various climate zones - from the state of Missouri and Eastern Europe to India. The considered regions are depicted in Figure 1. Their characteristics are given in Table 1.
Figure 1: Regions chosen for PDSI forecast

The input data is downloaded as a tif file and later restructured as a 3D tensor, with one dimension representing the time scale (monthly intervals), while the other two dimensions correspond to the spatial coordinates (x and y) of the grid. The resolution of the regional dataset varies, with grid dimensions ranging from 30 by 60 up to 128 by 192 (the spatial resolution of a single cell of TerraClimate data is roughly 5 km), accommodating different levels of granularity and spatial detail.
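The following is a minimal sketch (not the authors' code) of how such a tensor can be assembled; the file name and the one-band-per-month layout are assumptions made for illustration.

```python
# Hedged sketch: read a multi-band GeoTIFF (one band per month, an assumed layout)
# into a (time, y, x) tensor of monthly PDSI values.
import numpy as np
import rasterio

with rasterio.open("pdsi_region.tif") as src:   # hypothetical file name
    pdsi = src.read().astype(np.float32)        # shape: (T months, H, W)

print(pdsi.shape)                               # e.g. (754, 128, 192)
```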
## 3 Methods
We compare deep learning approaches, including Convolutional LSTM and novel transformer models, such as FourCastNet from Nvidia and Earth-Former from Amazon, with classic methods, including baseline model, gradient boosting, and logistic regression.
### Baseline
As a global baseline and a coherence check, we took the most prevalent class from train data and compared it with actual targets from the test subset. We also checked a rolling baseline - i.e., the most frequent class from recent history (from 6 to 24 months), but the results were almost indistinguishable from the global baseline, so we kept them out of our tables and graphs.
### Basic methods: Logistic regression and Gradient boosting
Neither logistic regression nor gradient boosting can work with the raw gridded data as it is. Hence, we created a data module that treats each grid cell as an individual value and transforms our task into a typical time series forecasting problem. To benefit from spatial correlations, we incorporate values from neighboring cells. For example, if we consider a 3x3 neighborhood, this adds eight additional time series. It is important to note that for "edge" cells, some of the neighboring cells may contain all zeros due to data limitations.
\begin{table}
\begin{tabular}{l c c c c} \hline Region & Span, & \% of normal & \% of drought & Spatial \\ & months & PDSI \(\geq-2\) & PDSI \(\leq-2\) & dimensions, km \\ \hline Missouri, USA & 754 & 74.91 & 25.09 & 416x544 \\ Madhya Pradesh, India & 754 & 70.66 & 29.34 & 512x768 \\ Goias, Brazil & 754 & 68.97 & 31.03 & 640x640 \\ Northern Kazakhstan & 742 & 68.70 & 31.30 & 256x480 \\ Eastern Europe & 742 & 66.28 & 33.72 & 352x672 \\ \hline \end{tabular}
\end{table}
Table 1: Regions’ summary statistics
Logistic Regression. Logistic regression is usually the natural choice for tasks with linear dependence or as a baseline model. Recent research [39] shows that linear models are a good choice for time series.
Gradient boosting. We adopted gradient boosting of decision trees, implemented using the well-established library XGBoost [40]. XGBoost, renowned for its speed and efficiency across a wide range of predictive modeling tasks, has consistently been favored by data science competition winners. It operates as an ensemble of decision trees, with new trees aimed at rectifying errors made by existing trees in the model. Trees are successively added until no further improvements can be made. A sketch of the per-cell feature construction used by these two models is given below.
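The sketch below illustrates the per-cell setup described above; the function name, array shapes, toy data, and the use of scikit-learn's logistic regression are our assumptions for illustration, not the authors' code.

```python
# Hedged sketch: each grid cell becomes an independent sample whose features are
# its own PDSI history plus the histories of its 3x3 neighbourhood (zero-padded
# at the edges), with a binary drought label several months ahead.
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_cell_samples(pdsi, history=12, horizon=6, threshold=-2.0):
    """pdsi: array of shape (T, H, W) with monthly PDSI values."""
    T, H, W = pdsi.shape
    padded = np.pad(pdsi, ((0, 0), (1, 1), (1, 1)))   # zeros around edge cells
    X, y = [], []
    for t in range(history, T - horizon):
        for i in range(H):
            for j in range(W):
                # history of the 3x3 neighbourhood, flattened into one feature vector
                patch = padded[t - history:t, i:i + 3, j:j + 3]
                X.append(patch.reshape(-1))
                # binary drought label `horizon` months ahead
                y.append(float(pdsi[t + horizon - 1, i, j] <= threshold))
    return np.asarray(X), np.asarray(y)

# toy random data standing in for a real PDSI grid
rng = np.random.default_rng(0)
pdsi = rng.normal(0.0, 2.0, size=(120, 10, 12))
X, y = build_cell_samples(pdsi)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])   # drought probabilities for three cells
```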
### Transformer-based methods
FourCastNet. FourCastNet (short for Fourier ForeCasting Neural Network) is a weather forecasting model developed by authors from Nvidia [28]. It constitutes part of their Modulus Sym deep learning framework for solving various applied physics tasks. It combines the Adaptive Fourier Neural Operator (AFNO) from [41] with a Vision Transformer, and the model is both computationally efficient, reducing spatial mixing complexity to \(O(N\log N)\) where \(N\) is the sequence length, and memory efficient. In the original paper, the authors produced high-resolution short-term forecasts of wind speed and precipitation. We modified the last layer of the model to switch it from a regression to a classification task and evaluated it on the long-term forecasting problem.
EarthFormer. Vanilla transformers have \(O(N^{2})\) complexity, and consequently it is hard to apply them to spatio-temporal weather data because of its large dimensionality. The authors of [27] propose a "divide and conquer" method: they divide the original data into non-intersecting parts (called cuboids) and apply the self-attention mechanism to each cuboid separately in parallel. Such an approach significantly reduces complexity. The authors introduced EarthFormer for regression tasks, so we adopted it for a classification problem.
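Since both transformer models were originally built for regression, the sketch below shows the kind of head replacement we understand this switch to involve; the `Backbone` stand-in, layer sizes, and toy data are our assumptions and do not reproduce the actual FourCastNet or EarthFormer code.

```python
# Hedged, generic sketch: keep a spatio-temporal backbone unchanged and replace the
# final projection with a 1-channel logit map trained with binary cross-entropy.
import torch
import torch.nn as nn

class DroughtClassifierHead(nn.Module):
    def __init__(self, backbone: nn.Module, feat_channels: int):
        super().__init__()
        self.backbone = backbone                      # any model returning (B, C, H, W) features
        self.head = nn.Conv2d(feat_channels, 1, 1)    # per-cell drought logit

    def forward(self, x):
        return self.head(self.backbone(x)).squeeze(1)  # (B, H, W) logits

backbone = nn.Sequential(nn.Conv2d(12, 16, 3, padding=1), nn.ReLU())  # toy stand-in
model = DroughtClassifierHead(backbone, feat_channels=16)
x = torch.randn(2, 12, 32, 48)                        # 12 months of history as input channels
target = (torch.rand(2, 32, 48) > 0.7).float()        # toy binary drought labels
loss = nn.BCEWithLogitsLoss()(model(x), target)
loss.backward()
```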
### ConvLSTM
This model draws inspiration from the work presented in the paper by Kail et al. (2021) [24], and it represents a modified version of the Convolutional LSTM architecture proposed by Shi et al. (2015) [23]. To capture temporal dependencies, we adopt Recurrent Neural Networks (RNNs).
Specifically, we employ Long Short-Term Memory (LSTM), a type of RNN that utilizes an additional state cell to facilitate long-term memory retention. We extend the traditional one-dimensional hidden states of LSTM to two-dimensional feature maps, enabling grid-to-grid transformations, which are crucial for our task. We process grids of PDSI values with various dimensions, ranging from 9x16 to 40x200.
To exploit spatial dependencies among drought severity values, we incorporate Convolutional Neural Networks (CNNs), well-suited for image and two-dimensional signal processing, including geospatial data. This approach combines the strengths of both RNN and CNN architectures by passing information through the RNN component as a feature map obtained using CNN operations. This integration allows us to capture and utilize temporal and spatial information within the following architecture.
Details of architectureConvolutional LSTM follows the pipeline:
1. We represent data as a sequence of grids: for each cell, we specify a value of PDSI for a particular month; the input grid at each time moment has dimension \(grid_{h}\times grid_{w}\) (varying from 50x50 to 200x200 for different regions of interest).
2. We pass the input grid through a convolutional network to create an embedding of grid dimensionality size with 16 channels. As an output of LSTM at each time moment, we have a hidden representation (short-term memory) of size \(h\times grid_{h}\times grid_{w}\), cell (long-term memory) representation of a similar size, and the output of size \(h\times grid_{h}\times grid_{w}\).
3. We transform the output to the size \(1\times grid_{h}\times grid_{w}\) using convolution \(1\times 1\) to receive probabilities of the drought for each cell as a final prediction (or to \(k\times grid_{h}\times grid_{w}\), where \(k>2\) is a number of classes of drought conditions that we are trying to predict). As an additional hyperparameter, we vary the forecasting horizon - i.e., we can forecast PDSI for the next month or \(k\)-th month.
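The following is a minimal PyTorch sketch of this grid-to-grid pipeline; it is our reconstruction for illustration (simplified single-cell ConvLSTM, toy shapes), not the authors' implementation.

```python
# Hedged sketch: a convolutional encoder produces a 16-channel feature map, a ConvLSTM
# cell carries 2D hidden/cell states over time, and a 1x1 convolution maps the last
# hidden state to per-cell drought logits.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g            # long-term memory update
        h = o * torch.tanh(c)        # short-term memory / output
        return h, c

class DroughtConvLSTM(nn.Module):
    def __init__(self, hid_ch=16):
        super().__init__()
        self.encoder = nn.Conv2d(1, hid_ch, 3, padding=1)   # grid embedding, 16 channels
        self.cell = ConvLSTMCell(hid_ch, hid_ch)
        self.decoder = nn.Conv2d(hid_ch, 1, 1)              # 1x1 conv -> drought logits

    def forward(self, seq):                                  # seq: (B, T, H, W) of PDSI
        B, T, H, W = seq.shape
        h = seq.new_zeros(B, self.cell.hid_ch, H, W)
        c = torch.zeros_like(h)
        for t in range(T):
            h, c = self.cell(self.encoder(seq[:, t:t + 1]), h, c)
        return self.decoder(h).squeeze(1)                    # (B, H, W) logits

model = DroughtConvLSTM()
logits = model(torch.randn(2, 12, 30, 60))                   # 12 months of history
probs = torch.sigmoid(logits)                                # per-cell drought probability
```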
## 4 Results
Description of experiment. For a binary classification problem, we set the drought threshold at \(-2\), which can be justified by the PDSI bins in Table 2; it is adjusted in further experiments. For each cell in a considered region, the problem to solve is a binary classification problem, with the true label indicating whether PDSI falls below \(-2\). By construction, the model outputs the probability of a serious drought.
For validation purposes, we have divided our dataset into train (\(70\%\)) and test (\(30\%\)) subsets for all five world regions (Missouri, Northern Kazakhstan, Madhya Pradesh, Eastern Europe, and Goias). The model is trained on the training subset of the data, and the quality metrics are calculated on an independent test subset. The validation is out-of-time: the test subset follows the training subset in time.
Evaluation procedure. We use the \(ROC\,AUC\), \(PR\,AUC\), and \(F1\) scores to evaluate the models. For early stopping and hyperparameter optimization during validation, we use the \(ROC\,AUC\) score. All reported scores are medians: concatenating the spatial predictions over time yields a temporal vector at every grid cell, from which we compute a single score per cell; this gives a grid of metrics, of which we take the median. Higher values of all scores correspond to better models. The \(ROC\,AUC\) score has a perfect value of \(1\), while a random prediction scores \(0.5\). A sketch of this per-cell evaluation is given below.
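The sketch below illustrates this per-cell median ROC AUC computation; the array shapes and toy data are our assumptions for illustration.

```python
# Hedged sketch: per-cell labels are obtained by thresholding PDSI at -2, ROC AUC is
# computed from the temporal vector of each cell over the test period, and the median
# over the grid is reported.
import numpy as np
from sklearn.metrics import roc_auc_score

def median_cell_roc_auc(pred_probs, pdsi_true, threshold=-2.0):
    """pred_probs, pdsi_true: arrays of shape (T_test, H, W)."""
    labels = (pdsi_true <= threshold).astype(int)
    T, H, W = labels.shape
    scores = np.full((H, W), np.nan)
    for i in range(H):
        for j in range(W):
            y = labels[:, i, j]
            if y.min() != y.max():                    # ROC AUC needs both classes
                scores[i, j] = roc_auc_score(y, pred_probs[:, i, j])
    return np.nanmedian(scores), scores

rng = np.random.default_rng(0)
pdsi_true = rng.normal(0.0, 2.0, size=(100, 8, 10))   # toy ground truth
pred_probs = rng.random((100, 8, 10))                 # toy predicted probabilities
median_auc, auc_map = median_cell_roc_auc(pred_probs, pdsi_true)
print(round(median_auc, 3))
```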
Compared methods. For our main experiments, we have evaluated the performance of the baseline (most frequent class from historical data), XGBoost (gradient boosting of decision trees), LogReg (logistic regression), Convolutional LSTM (ConvLSTM), FourCastNet (transformer model from Nvidia), and EarthFormer (transformer model from Amazon).

XGBoost and LogReg are two basic algorithms often used as strong baselines. The last three are variants of neural networks that have shown strong performance in various geospatial problems. They represent the two dominating architectures in geospatial modeling: ConvLSTM is a combination of recurrent and convolutional neural networks, while FourCastNet and EarthFormer are transformer-based models.
### Main results
Analysis of results. Our findings indicate that the ConvLSTM model outperformed the competitors, including the gradient boosting tree algorithm, achieving an impressive \(ROC\,AUC\) score of \(0.9\) for a one-month prediction. Notably, ConvLSTM exhibits only a gradual decline in performance, reaching \(0.6-0.65\) as the forecasting horizon extends from \(1\) month to \(12\) months. The standard gradient boosting approach initially yielded a similar \(ROC\,AUC\) score of \(0.9\) but sharply dropped to \(0.5\) as the forecasting horizon extended. EarthFormer outperforms the other approaches for shorter horizons but falls short of ConvLSTM at longer horizons of 9-12 months. These results are visually depicted in Figure 2. Additionally, we present the results for six-month prediction by regions in Figure 4, where the variation in scores for different geographies across models is visible.
_Why does the transformer fail in long-term prediction?_ Our explanation for this behavior is the permutation invariance of the attention mechanism. Despite positional encoding, transformers cannot effectively extract temporal information from long input sequences. Since a long input sequence is essential for long-term forecasting, transformers do not perform well here. Similar results for different time-series tasks were observed in [39]. ConvLSTM, in contrast, naturally extracts temporal information via its LSTM component.
_Why is logistic regression impressive?_ Since the gradient boosting results are almost identical to those of logistic regression, we discuss only the logistic regression performance. First of all, the power of linear models was already shown in [39], where they beat modern Transformer architectures on almost all datasets. In our experiments, linear models are worse than the other models for _long-term_ prediction, but on the _short-term_ scale we see quite comparable results. We tried different history lengths, but our results show that it is enough to take only the history element nearest to the forecast horizon. Our intuition is that the nearest future values are closely (and roughly linearly) related to the most recent history. For example, PDSI in July is close to PDSI in August but far from PDSI in December. Hence, linear models are good at _short-term_ predictions but poor at _long-term_ forecasting.
\begin{table}
\begin{tabular}{l l} \hline PDSI value & Drought severity class \\ \hline
4.00 and above & Extreme wet spell \\
3.00-3.99 & Severe wet spell \\
2.00-2.99 & Moderate wet spell \\
1.00-1.99 & Mild wet spell \\ -1.00 to 0.99 & Normal \\ -1.00 to -1.99 & Mild dry spell \\ -2.00 to -2.99 & Moderate dry spell \\ -3.00 to -3.99 & Severe dry spell \\ -4.00 and below & Extreme dry spell \\ \hline \end{tabular}
\end{table}
Table 2: Classification of various PDSI values, source [42]
\begin{table}
\begin{tabular}{r r r r r r} \hline \hline Horizon, months & 1 & 3 & 6 & 9 & 12 \\ \hline \multicolumn{5}{l}{Median ROC AUC:} \\ \hline Baseline & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 \\ LogReg & 0.886 & 0.774 & 0.640 & 0.546 & 0.518 \\ XGBoost & 0.878 & 0.754 & 0.628 & 0.568 & 0.542 \\ FourCastNet & 0.881 & 0.711 & 0.624 & 0.561 & 0.536 \\ EarthFormer & **0.948** & **0.840** & 0.690 & 0.556 & 0.480 \\ ConvLSTM & 0.887 & 0.802 & **0.693** & **0.650** & **0.617** \\ \hline \multicolumn{5}{l}{Median PR AUC:} \\ \hline Baseline & 0.355 & 0.354 & 0.354 & 0.355 & 0.356 \\ LogReg & 0.766 & 0.598 & 0.470 & 0.394 & 0.360 \\ XGBoost & 0.752 & 0.574 & 0.44 & 0.39 & 0.372 \\ FourCastNet & 0.776 & 0.546 & 0.455 & 0.402 & 0.382 \\ EarthFormer & **0.880** & 0.650 & 0.514 & 0.438 & 0.362 \\ ConvLSTM & 0.772 & **0.689** & **0.565** & **0.505** & **0.452** \\ \hline \multicolumn{5}{l}{Median F1:} \\ \hline Baseline & 0.645 & 0.646 & **0.646** & **0.645** & **0.644** \\ LogReg & **0.846** & **0.698** & 0.480 & 0.226 & 0.094 \\ XGBoost & 0.836 & 0.674 & 0.480 & 0.366 & 0.292 \\ FourCastNet & 0.831 & 0.603 & 0.460 & 0.366 & 0.314 \\ EarthFormer & 0.816 & 0.604 & 0.448 & 0.226 & 0.178 \\ ConvLSTM & 0.784 & **0.698** & 0.600 & 0.558 & 0.543 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Median Metrics vs Forecast Horizon, binary classification; best values are in bold, second best are underlined
Figure 2: Median metrics: ROC-AUC, PR-AUC, F1 from left to right for different forecast horizons averaged over five considered regions, binary drought classification
### Predictions and errors for a particular region
To assess the performance of the Convolutional LSTM algorithm (which proved to be the most stable and promising for drought forecasting), we focused on the state of Missouri, where we run several ablation studies. As an illustration, the spatial distribution of \(ROC\ AUC\) scores is depicted in Figure 3. Notably, we observed a non-uniform distribution of \(ROC\ AUC\) values across the cells within the region. The standard deviation of the scores is substantial, and individual values range from those close to random predictors (\(ROC\ AUC=0.6\)) to near-perfect scores approaching 1.0. This variability highlights the diverse predictive capabilities of our algorithm across different spatial locations within Missouri.
\begin{table}
\begin{tabular}{r c c c c c} \hline \hline Region & Northern & Eastern & Madhya & Goias & Missouri \\ & Kazakhstan & Europe & Pradesh & & \\ \hline median ROC AUC: & & & & & \\ \hline Baseline & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 \\ LogReg & 0.60 & 0.63 & 0.61 & 0.61 & **0.75** \\ XGBoost & 0.59 & 0.64 & 0.59 & 0.61 & 0.71 \\ FourCastNet & 0.60 & 0.55 & 0.65 & 0.61 & 0.69 \\ EarthFormer & 0.54 & **0.69** & **0.84** & **0.65** & 0.73 \\ ConvLSTM & **0.71** & 0.68 & 0.71 & 0.60 & **0.75** \\ \hline median PR AUC: & & & & & \\ \hline Baseline & 0.37 & 0.43 & 0.35 & 0.39 & 0.23 \\ LogReg & 0.46 & **0.67** & 0.24 & **0.55** & 0.43 \\ XGBoost & 0.46 & 0.63 & 0.20 & 0.54 & 0.37 \\ FourCastNet & 0.46 & 0.55 & 0.36 & 0.54 & 0.37 \\ EarthFormer & 0.47 & 0.22 & **0.77** & 0.52 & **0.59** \\ ConvLSTM & **0.55** & 0.65 & 0.57 & 0.54 & 0.50 \\ \hline median F1: & & & & & \\ \hline Baseline & 0.63 & 0.57 & 0.65 & **0.61** & **0.77** \\ LogReg & 0.42 & 0.62 & 0.35 & 0.41 & 0.60 \\ XGBoost & 0.45 & 0.58 & 0.30 & 0.54 & 0.53 \\ FourCastNet & 0.42 & 0.39 & 0.46 & 0.51 & 0.52 \\ EarthFormer & 0.03 & 0.18 & **0.80** & 0.55 & 0.65 \\ ConvLSTM & **0.65** & **0.64** & 0.64 & 0.52 & 0.56 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Median Metrics vs Region, binary classification, 6 months horizon; best values are in bold, second best are underlined
#### 4.2.1 Performance Evaluation for Cropped Region
Description of experiment. As is typical with ROC AUC maps, the worst predictions are found on the edges and some corners. We have observed that this behavior is consistent regardless of the region being studied. Consequently, making predictions for a larger region and cropping out the desired region of interest may be advantageous. We conducted a study to test this hypothesis for the map above, and the results are shown in Table 5.
Analysis of results. Based on the findings of this specific experiment, we deduce that cropping approximately 40-50% of the initially selected region maximizes our score. In other words, it is advisable to choose a region that is initially 1.6-2 times larger than the target region. However, the precise amount of zoom required for optimal results has to be determined through further experiments.
Figure 3: Spatial distribution of ROC AUC for 6 month forecast, Missouri, ConvLSTM
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Percent of map cropped & 0 & 10 & 20 & 30 & 40 \\ median ROC AUC & 0.7525 & 0.7592 & 0.7665 & 0.7749 & 0.7834 \\ \hline Percent of map cropped & 50 & 60 & 70 & 80 & 90 \\ median ROC AUC & 0.7886 & 0.7899 & 0.7880 & 0.7838 & 0.7825 \\ \hline \hline \end{tabular}
\end{table}
Table 5: ROC AUC score vs crop percentage, 6 month forecast, ConvLSTM model for Missouri
Next, we conducted several similar experiments to investigate how the model predictions change with the decrease in the total squared area. For this, we took the same geographic region, Missouri, and examined various combinations of history length and forecast horizon. We trained a new checkpoint for each variant of history length, forecast horizon, and region area (varying from the entire state to about a quarter of it).
The results are summarized in Table 6 and presented in more detail in Tables 8, 9, 10, and 11. We observed that predicting on a smaller area with a model pre-trained on a larger area generally works better. However, the degree of improvement is marginal, usually not exceeding 0.5-1%. Figure 4 presents the evolution of the spatial maps.
convex [44], and Gradient boosting is an ensemble per se [15]. The obtained ensembles can also be used to assess the uncertainty of predictions by machine learning models, improving the decision-making process [45].
## 5 Conclusion
Droughts have emerged as severe natural disasters that significantly impact the economy, influencing sectors such as agriculture and the well-being of populations. The frequency and intensity of droughts have been amplified worldwide due to climate change, as exemplified by the summer of 2022 in the Northern Hemisphere.
Accurate prediction of future drought occurrences and adequate preparation for their consequences are crucial in mitigating the adverse impacts of climate change. Our research has shown substantial improvements in forecasting the Palmer Drought Severity Index (PDSI) by combining convolutional and recurrent neural networks. This Convolutional LSTM model surpasses the performance of standard algorithms, such as gradient boosting, and even more advanced transformer models for long-term predictions.
By enhancing our understanding and predictive abilities related to droughts, we can proactively address the challenges of climate change. We hope that the insights gained from this study will contribute to better preparedness and mitigation strategies to alleviate the adverse effects of droughts on the economy and society.
|
2309.12501 | Knowledge Graph Embedding: An Overview | Many mathematical models have been leveraged to design embeddings for
representing Knowledge Graph (KG) entities and relations for link prediction
and many downstream tasks. These mathematically-inspired models are not only
highly scalable for inference in large KGs, but also have many explainable
advantages in modeling different relation patterns that can be validated
through both formal proofs and empirical results. In this paper, we make a
comprehensive overview of the current state of research in KG completion. In
particular, we focus on two main branches of KG embedding (KGE) design: 1)
distance-based methods and 2) semantic matching-based methods. We discover the
connections between recently proposed models and present an underlying trend
that might help researchers invent novel and more effective models. Next, we
delve into CompoundE and CompoundE3D, which draw inspiration from 2D and 3D
affine operations, respectively. They encompass a broad spectrum of techniques
including distance-based and semantic-based methods. We will also discuss an
emerging approach for KG completion which leverages pre-trained language models
(PLMs) and textual descriptions of entities and relations and offer insights
into the integration of KGE embedding methods with PLMs for KG completion. | Xiou Ge, Yun-Cheng Wang, Bin Wang, C. -C. Jay Kuo | 2023-09-21T21:52:42Z | http://arxiv.org/abs/2309.12501v1 | # Knowledge Graph Embedding: An Overview
###### Abstract
Many mathematical models have been leveraged to design embeddings for representing Knowledge Graph (KG) entities and relations for link prediction and many downstream tasks. These mathematically-inspired models are not only highly scalable for inference in large KGs, but also have many explainable advantages in modeling different relation patterns that can be validated through both formal proofs and empirical results. In this paper, we make a comprehensive overview of the current state of research in KG completion. In particular, we focus on two main branches of KG embedding (KGE) design: 1) distance-based methods and 2) semantic matching-based methods. We discover the connections between recently proposed models and present an underlying trend that might help researchers invent novel and more effective models. Next, we delve into CompoundE and CompoundE3D, which draw inspiration from 2D and 3D affine operations, respectively. They encompass a broad spectrum of techniques including distance-based and semantic-based methods. We will also discuss an emerging approach for KG completion which leverages pre-trained language models (PLMs) and textual descriptions of entities and relations and offer insights into the integration of KGE embedding methods with PLMs for KG completion.
Keywords: Knowledge Graph Embedding, Knowledge Graph Completion, Link Prediction.
## 1 Introduction
Knowledge Graphs (KGs) serve as vital repositories of information for many real-world applications and services, including search engines, virtual assistants, knowledge discovery, and fraud detection. The construction of KGs primarily involves domain expert curation or the automated extraction of data from vast web corpora. Despite the precision achieved by machine learning models in entity and relation extraction, they can introduce errors during KG construction. Furthermore, due to the inherent incompleteness of entity information, KG embedding (KGE) techniques come into play to identify missing relationships between entities. Over the past decade, there has been a surge in interest with the creation of various KGE models as evidenced in Figure 1. As such, it will be valuable to have an overview of extant KGE models to compare their similarities and differences, as well as a summary of research resources such as public KGs, benchmarking datasets, and leaderboards. In this paper, we will give a comprehensive overview of previous developments in KGE models. In particular, we will focus on distance-based and semantic matching KGE models. In recent development of KGE models, we have observed an interesting trend of combining different geometric transformations to improve the performance of existing KGE models. Basic transformations, including translation, rotation, scaling, reflection, and shear, are simple yet very powerful tools for representing relations between entities in KG. In this paper, we will also present how these tools can be combined to come up with more powerful models.
### Background
KG finds vast and diverse applications. It enables swift retrieval of structured data about target entities during user searches. For instance, when you search for a well-known person, place, or popular topic on Google, the Google Knowledge Panel, shown in Figure 2, accompanies search results, providing quick insights into the subject of interest. The data source for the Knowledge Panel is the Google KG, launched in 2012, initially derived from Freebase, an open-source KG acquired by Google in 2010, and later augmented with data from sources like Wikidata.
Figure 1: Timeline of Knowledge Graph Embedding models.
Figure 2: Illustration of Knowledge Panel from Google Search.
KG information integration extends to various domains, with E-commerce companies creating user and product KGs and merging them with other KGs to gain business intelligence. Hospitals and clinics employ KGs to share patient medical conditions, especially for patients relocating to different areas. Financial institutions use KGs to track illegal activities such as money laundering. Moreover, KGs serve as essential information sources for AI-powered virtual assistants like Siri, Alexa, and Google Assistant. Natural Language Understanding (NLU) algorithms analyze dialogs, extracting keywords to locate relevant KG subgraphs. By traversing these subgraphs, the assistants generate sensible responses using Natural Language Generation models. KGs also find applications in music recommendation systems, event forecasting based on temporal KGs, and more.
KG representation learning has been a subject of intensive research in recent years and remains a foundational challenge in Artificial Intelligence (AI) and Data Engineering. KGs are composed of triples, denoted as \((h,r,t)\), where \(h\) and \(t\) denote head and tail entities, while \(r\) signifies the connecting relation. For instance, the statement "Los Angeles is located in the USA" is encapsulated as the triple (Los Angeles, **isLocatedIn**, USA). KGE plays a critical role in a range of downstream applications, including multihop reasoning [57, 58], KG alignment [59, 60, 61], and entity classification [62, 63].
The evaluation of KGE models often revolves around the link prediction task, assessing their ability to predict \(t\) given \(h\) and \(r\), or \(h\) given \(r\) and \(t\). The effectiveness of KGE models is determined by how closely their predictions align with the ground truth. Designing effective KGE models presents several challenges. First, KGE needs to be scalable since real-world KGs often contain millions of entities and relations. Designing embeddings that scale efficiently to handle large graphs is a significant challenge. Second, KGs are typically incomplete and subject to continuous updates. It is desirable for embedding models to handle missing data and capture temporal dynamics and the history of relationships. Third, embeddings must also be expressive enough to capture the complexity of real-world relationships, such as 1-to-N, N-to-1, N-to-N, antisymmetric, transitive, and hierarchical relations and multi-modal data. Fourth, some entities and relations are rare. Embeddings should handle the long-tail distribution of data effectively.
We have collected a list of survey papers as shown in Table 1. Among these surveys, [64, 65, 66, 67, 68, 69, 70, 71] focus on discussing different embedding models, whereas [72, 70, 73] discuss the use of KG for reasoning and different applications. [74] elucidates the evolution of KGs and the reasons for their invention in a historical context. [72] summarizes methods for the creation, enrichment, quality assessment, refinement, and publication of KGs, and provides an overview of prominent open KGs and enterprise KGs. [73] also discusses the advantages and disadvantages of using KGs as background knowledge in the context of Explainable Machine Learning. However, none of these papers discuss the intrinsic connections between different distance-based embedding models that use geometric transformations. We believe that this paper will be helpful to the research community by providing a unique perspective on this topic.
\begin{table}
\begin{tabular}{c|c|p{142.3pt}|p{142.3pt}|p{142.3pt}} \hline
**No.** & **Year** & **Title** & **Author** & **Venue** \\ \hline
1 & 2013 & Representation Learning: A Review and New Perspectives [64] & Yoshua Bengio, Aaron C. Courville, Pascal Vincent. & IEEE Transactions on Pattern Analysis and Machine Intelligence \\ \hline
2 & 2016 & A Review of Relational Machine Learning for Knowledge Graphs [65] & Maximilian Nickel, Kevin Murphy, Volker Tresp, Evgeniy Gabrilovich. & Proceedings of the IEEE \\ \hline
3 & 2017 & Knowledge Graph Embedding: A Survey of Approaches and Applications [66] & Quan Wang, Zhendong Mao, Bin Wang, Li Guo. & IEEE Transactions on Knowledge and Data Engineering \\ \hline
4 & 2018 & A Comprehensive Survey of Graph Embedding: Problems, Techniques, and Applications. [67] & HongYun Cai, Vincent W. Zheng, Kevin Chen-Chuan Chang. & IEEE Transactions on Knowledge and Data Engineering \\ \hline
5 & 2020 & A review: Knowledge Reasoning over Knowledge Graph. [68] & Xiaojun Chen, Shengbin Jia, Yang Xiang. & Expert Systems with Applications \\ \hline
6 & 2021 & Knowledge Graphs [74] & Claudio Gutierrez, Juan F. Sequeda. & Communications of the ACM \\ \hline
7 & 2021 & Knowledge Graph Embedding for Link Prediction: A Comparative Analysis [69] & Rossi, Andrea and Barbosa, Denilson and Firmani, Donatella and Matinata, Antonio and Merialdo, Paolo. & ACM Transactions on Knowledge Discovery from Data \\ \hline \end{tabular}
\end{table}
Table 1: Survey Papers.
### Our Contributions
Our main contributions in this paper can be summarized as follows.
* We review different KGE models, focusing on distance-based and semantic matching models.
* We collect relevant resources for KG research, including previously published survey papers, major open sourced KGs, and KG benchmarking datasets for link prediction, as well as link prediction performance on some of the most popular datasets.
* We discover the connections between recently published KGE models and propose CompoundE and CompoundE3D, which follow this direction of thought.
* We discuss recent work that leverages neural network models such as graph neural networks and pretrained language models, and how embedding-based models can be combined with neural network-based models.
The rest of this paper is organized as follows. In Section 2, we first introduce existing KGE models in both distance-based and semantic matching-based categories. We also discuss a number of commonly used loss functions and their suitability for different types of KGE scoring functions. In Section 3, we present CompoundE, followed by CompoundE3D, which unifies all distance-based KGE models that use affine operations. In Section 4, we summarize a list of open sourced KGs, popular benchmarking datasets for performance evaluation, and the evaluation metrics used in these datasets. We also provide the recent leaderboard for some popular datasets. In Section 5, we discuss existing neural network-based models for KG completion and emerging directions that use pretrained language models. Finally, in Section 6, we make concluding remarks.
## 2 Existing Models
KGE models are often categorized based on scoring functions and tools applied to model entity-relation interactions and representations. In this paper, we mainly discuss two major classes, namely 1) distance-based models, and 2) semantic matching models.
### Distance-based Models
Distance-based scoring function is one of the most popular strategies for learning KGE. The intuition behind this strategy is that relations are modeled as transformations to place head entity vectors in the proximity of their corresponding tail entity vectors or vice versa. For a given triple \((h,r,t)\), the goal is to minimize the distance between \(h\) and \(t\) vectors after the transformation introduced by \(r\).
TransE [1] is one of the first KGE models that interpret relations between entities as translation operations in vector space. Let \(\mathbf{h},\mathbf{r},\mathbf{t}\in\mathbb{R}^{d}\) denote the embedding for head, relation, and tail of a triple, respectively. TransE scoring function is defined as:
\[f_{r}(h,t)=\|\mathbf{h}+\mathbf{r}-\mathbf{t}\|_{p}, \tag{1}\]
where \(p=1\) or \(2\) denote 1-Norm or 2-Norm, respectively. However, this efficient model has difficulty modeling complex relations such as 1-N, N-1, N-N, symmetric and transitive relations. Many later works attempt to overcome this shortcoming. For example, TransH [2] projects entity embedding onto relation-specific hyperplanes so that complex relations can be modeled by the translation embedding model. Formally, let \(\mathbf{w}_{r}\) be the normal vector to a relation-specific hyperplane, then the head and tail representation in the hyperplane can be written as,
\[\mathbf{h}_{\perp}=\mathbf{h}-\mathbf{w}_{r}^{\top}\mathbf{h}\mathbf{w}_{r}, \quad\mathbf{t}_{\perp}=\mathbf{t}-\mathbf{w}_{r}^{\top}\mathbf{t}\mathbf{w}_{ r}. \tag{2}\]
The projected representations are then linked together using the same translation relationship,
\[f_{r}(h,t)=\|\mathbf{h}_{\perp}+\mathbf{r}-\mathbf{t}_{\perp}\|_{2}^{2}. \tag{3}\]
However, this orthogonal projection prevents the model from encoding inverse and composition relations. A similar idea called TransR [3] transforms entities into a relation-specific space instead. The TransR scoring function can be written as,
\[f_{r}(h,t)=\|\mathbf{M}_{r}\mathbf{h}+\mathbf{r}-\mathbf{M}_{r}\mathbf{t}\|_{2}^{2}. \tag{4}\]
However, the relation-specific transformation introduced in TransR requires \(O(kd)\) additional parameters. To save the additional parameters introduced, TransD [4] uses entity projection vectors to populate the mapping matrices, instead of using a dense matrix. TransD reduces the additional parameters from \(O(kd)\) to \(O(k)\). The scoring function can be written as,
\[f_{r}(h,t)=\left\|\left(\mathbf{r}_{p}\mathbf{h}_{p}^{\top}+\mathbf{I}\right)\mathbf{h}+\mathbf{r}-\left(\mathbf{r}_{p}\mathbf{t}_{p}^{\top}+\mathbf{I}\right)\mathbf{t}\right\|_{2}^{2}. \tag{5}\]
\begin{table}
\begin{tabular}{c c c c} \hline Model & Ent. emb. & Rel. emb. & Scoring Function & Space \\ \hline TransE [1] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r}\in\mathbb{R}^{d}\) & \(-\|\mathbf{h}+\mathbf{r}-\mathbf{t}\|_{1/2}\) & \(O(md+nd)\) \\ TransR [3] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r}\in\mathbb{R}^{\mathbf{R}},\mathbf{M}_{r}\in\mathbb{R}^{kd}\) & \(-\|\mathbf{M}_{\mathbf{h}}+\mathbf{r}-\mathbf{M}_{r}\|_{2}^{2}\) & \(O(mdk+nd)\) \\ TransH [2] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r},\mathbf{w}_{r}\in\mathbb{R}^{d}\) & \(-\left\|\left(\mathbf{h}-\mathbf{w}_{r}^{\top}\mathbf{h}_{w}\right)+\mathbf{r} -(\mathbf{t}-\mathbf{w}_{r}^{\top}\mathbf{w}_{r})\right\|_{2}^{2}\) & \(O(md+nd)\) \\ TransA [11] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r}\in\mathbb{R}^{d},\mathbf{M}_{r}\in\mathbb{R}^{kd}\) & \((\|\mathbf{h}+\mathbf{r}-\mathbf{t}\|)^{\top}\mathbf{W}_{r}(\|\mathbf{h}+ \mathbf{r}-\mathbf{t}\|)\) & \(O(md^{2}+nd)\) \\ TransF [8] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r}\in\mathbb{R}^{d}\) & \(\left(\mathbf{h}+\mathbf{r}^{\top}\mathbf{t}+(\mathbf{t}-\mathbf{r})^{\top} \mathbf{h}\right.\) & \(O(md+nd)\) \\ TransD [4] & \(\mathbf{h},\mathbf{h}_{p},\mathbf{t},\mathbf{t}_{p}\in\mathbb{R}^{d}\) & \(\mathbf{r},\mathbf{r}_{p}\in\mathbb{R}^{k}\) & \(-\|\left(\mathbf{r}_{p}\mathbf{h}_{r}^{\top}+\mathbf{I}\right)\mathbf{h}+ \mathbf{r}-\left(\mathbf{r}_{p}\mathbf{t}_{p}^{\top}+\mathbf{I}\right)\mathbf{t }\|_{2}^{2}\) & \(O(mk+nd)\) \\ TransM [76] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r}\in\mathbb{R}^{d},w_{r}\in\mathbb{R}\) & \(-w_{r}\|\mathbf{h}+\mathbf{r}-\mathbf{t}\|_{1/2}\) & \(O(md+nd)\) \\ TransParse [13] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r}\in\mathbb{R}^{k},\mathbf{M}_{r}\left(\theta_{r}\right)\in\mathbb{R}^{kd }\) & \(-\|\mathbf{M}_{r}\left(\theta_{r}\right)\mathbf{h}+\mathbf{r}-\mathbf{M}_{r} \left(\theta_{r}\right)\mathbf{t}\|_{1/2}^{2}\) & \(O(mdk+nd)\) \\ & & & & \(-\|\mathbf{M}_{1}^{\top}\left(\theta_{1}^{\top}\right)\mathbf{h}+\mathbf{r}- \mathbf{M}_{r}^{\top}\left(\theta_{r}^{\top}\right)\mathbf{t}\|_{1/2}^{2}\) & \(O(mdk+nd)\) \\ & & & & \(-\|\mathbf{M}_{1}^{\top}\left(\theta_{1}^{\top}\right)\mathbf{h}+\mathbf{r}- \mathbf{M}_{r}^{\top}\left(\theta_{r}^{\top}\right)\mathbf{t}\|_{1/2}^{2}\) & \(O(md+nd)\) \\ ManifoldE [14] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r}\in\mathbb{R}^{d}\) & \(\left\|\mathcal{M}(h,r,t)-D_{r}^{2}\right\|^{2}\) & \(O(md+nd)\) \\ TorusE [21] & \([\mathbf{h}],[\mathbf{t}]\in\mathbb{T}^{u}\) & \([\mathbf{r}]\in\mathbb{T}^{u}\) & \(\min_{(x,y)\left((\|\mathbf{h}|+|\mathbf{r}|)\times\|\mathbf{t}\|\)}\left\|x-y \right\|_{i}\) & \(O(md+nd)\) \\ RotatE [24] & \(\mathbf{h},\mathbf{t}\in\mathbb{C}^{d}\) & \(\mathbf{r}\in\mathbb{C}^{d}\) & \(-\|\mathbf{h}\in\mathbf{r}-\mathbf{t}\|\) & \(O(md+nd)\) \\ PairRE [45] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r}^{\mathbf{H}},\mathbf{r}^{\top}\in\mathbb{R}^{d}\) & \(-\|\mathbf{h}\in\mathbf{r}^{\top}\mathbf{t}\|\) & \(O(md+nd)\) \\ \hline \end{tabular}
\end{table}
Table 2: Distance-based KGE models
\begin{table}
\begin{tabular}{c c c c c} \hline Model & Ent. emb. & Rel. emb. & Scoring Function & Space \\ \hline RESCAL [77] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{M}_{r}\in\mathbb{R}^{kd,d}\) & \(\mathbf{h}^{\top}\mathbf{M}_{r}\mathbf{t}\) & \(O(md^{2}+nd)\) \\ DistMult [78] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r}\in\mathbb{R}^{d}\) & \(\mathbf{h}^{\top}\mathrm{diag}(\mathbf{r})\mathbf{t}\) & \(O(md+nd)\) \\ HolE [79] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r}\in\mathbb{R}^{d}\) & \(\mathbf{r}^{\top}(\mathbf{h}\,\mathbf{r}\,\mathbf{t})\) & \(O(md+nd)\) \\ ANALOXY [80] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{M}_{r}\in\mathbb{R}^{kd,d}\) & \(\mathbf{h}^{\top}\mathbf{M}_{r}\mathbf{t}\) & \(O(md+nd)\) \\ Complex [81] & \(\mathbf{h},\mathbf{t}\in\mathbb{C}^{d}\) & \(\mathbf{r}\in\mathbb{C}^{d}\) & \(\mathrm{Re}\left((\mathbf{r},\mathbf{h},\mathbf{\bar{t}})\right)\) & \(O(md+nd)\) \\ SimpleE [82] & \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) & \(\mathbf{r},\mathbf{r}^{\prime}\in\mathbb{R}^{d}\) & \(\frac{1}{2}\left((\mathbf{h},\mathbf{r}
With the same goal of saving additional parameters, TranSparse enforces the transformation matrix to be a sparse matrix. The scoring function can be written as,
\[f_{r}(h,t)=\left\|\mathbf{M}_{r}\left(\theta_{r}\right)\mathbf{h}+\mathbf{r}- \mathbf{M}_{r}\left(\theta_{r}\right)\mathbf{t}\right\|_{1/2}^{2}, \tag{6}\]
where \(\theta_{r}\in[0,1]\) is the sparse degree for the mapping matrix \(\mathbf{M}_{r}\). Variants of TranSparse [13] include separate mapping matrices for head and tail. TransM [76] assigns different weights to complex relations for better encoding power. TransMS [27] attempts to consider multidirectional semantics using nonlinear functions and linear bias vectors. TransF [8] mitigates the burden of relation projection by explicitly modeling the basis of projection matrices. ITransF [16] makes use of concept projection matrices and sparse attention vectors to discover hidden concepts within relations.
In recent years, researchers expand their focus to spaces other than Euclidean geometry. TorusE [87] projects embedding in an n-dimensional torus space, where \([\mathbf{h}],[\mathbf{r}],[\mathbf{t}]\in\mathbb{T}^{n}\) denotes the projected representation of head, relation, tail. TorusE models relational translation in Torus space by optimizing the objective as follows.
\[\min_{(x,y)\in([h]+[r])\times[t]}\|x-y\|_{i}. \tag{7}\]
Multi-Relational Poincare model (MuRP) [22] embeds KG entities in a Poincare ball of hyperbolic space. It transforms entity embeddings using relation-specific Mobius matrix-vector multiplication and Mobius addition. The negative curvature introduced by hyperbolic space is empirically better in capturing the hierarchical structure in KGs. However, MuRP has difficulty encoding relation patterns and only uses a constant curvature. RotH [28] improves over MuRP by introducing a relation-specific curvature.
RotatE [24] models entities in the complex vector space and interprets relations as rotations instead of translations. Formally, let \(\mathbf{h},\mathbf{r},\mathbf{t}\in\mathbb{C}^{d}\) denote the representation of head, relation, and tail of a triple in the complex vector space. The RotatE scoring function can be defined as,
\[f_{r}(h,t)=\left\|\mathbf{h}\circ\mathbf{r}-\mathbf{t}\right\|. \tag{8}\]
The self-adversarial negative sampling strategy also contributes to RotatE's significant performance improvement compared to its predecessors. Quite a few models attempt to extend RotatE. MRotatE adds an entity rotation constraint to the optimization objective to handle multifold relations. HAKE rewrites the rotation formula in polar coordinates and separates the scoring function into two components, that is, the phase component and the modulus component. The scoring function of HAKE can be written as,
\[f_{r}(h,t)=d_{r,m}(\mathbf{h},\mathbf{t})+\lambda d_{r,p}(\mathbf{h},\mathbf{ t}), \tag{9}\]
where
\[d_{r,p}(\mathbf{h},\mathbf{t})=\|\sin((\mathbf{h}_{p}+\mathbf{r}_{p}-\mathbf{ t}_{p})/2)\|_{1}, \tag{10}\]
and
\[d_{r,m}(\mathbf{h},\mathbf{t})=\|\mathbf{h}_{m}\circ((\mathbf{r}_{m}+\mathbf{ r}_{m}^{\prime})/(1-\mathbf{r}_{m}^{\prime}))-\mathbf{t}_{m}\|_{2}. \tag{11}\]
This modification leads to better modeling capability of hierarchical structures in KG. Rotate3D performs quaternion rotation in 3D space and enables the model to encode non-commutative relations. Rot-Pro extends RotatE by transforming entity embeddings using an orthogonal projection that is also idempotent. This change enables Rot-Pro to model transitive relations. PairRE also tries to improve over RotatE. Instead of rotating the head to match the tail, PairRE [45] performs transformations on both head and tail. The scoring function can be defined as,
\[f_{r}(h,t)=\|\mathbf{h}\circ\mathbf{r}^{\mathbf{H}}-\mathbf{t}\circ\mathbf{r} ^{\mathbf{T}}\|, \tag{12}\]
where \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) are the head and tail entity embeddings, \(\mathbf{r}^{\mathbf{H}},\mathbf{r}^{\mathbf{T}}\in\mathbb{R}^{d}\) are relation-specific weight vectors for the head and tail vectors, respectively, and \(\circ\) is an elementwise product. In fact, this elementwise multiplication is simply a scaling operation. One advantage of PairRE compared to previous models is that it is capable of modeling subrelation structures in KG. LinearRE [40] is a similar model but adds a translation component between the scaled head and tail embeddings. The transformation strategy can still be effective by adding it to the entity embeddings involved in relation rotation. SFPR [88] introduces a semantic filter which includes a scaling and a shift component. HousE [89] and ReflectE [55] model relations as Householder reflections. UltraE [90] unifies Euclidean and hyperbolic geometry by modeling each relation as a pseudo-orthogonal transformation that preserves the Riemannian bilinear form. On the other hand, RotL [42] investigates the necessity of introducing hyperbolic geometry in learning KGE and proposes two more efficient Euclidean-space KGE models while retaining the advantage of flexible normalization.
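To make this family of scoring functions concrete, the following is a small illustrative sketch (our own, with assumed tensor shapes and the convention that higher scores mean more plausible triples) of three representative distance-based scores: TransE (Eq. 1), RotatE (Eq. 8), and PairRE (Eq. 12).

```python
# Hedged sketch of batched distance-based scoring functions in PyTorch.
import torch

def transe_score(h, r, t, p=1):
    # -||h + r - t||_p
    return -torch.norm(h + r - t, p=p, dim=-1)

def rotate_score(h_re, h_im, r_phase, t_re, t_im):
    # relation as a rotation in the complex plane: -||h o r - t||
    r_re, r_im = torch.cos(r_phase), torch.sin(r_phase)
    re = h_re * r_re - h_im * r_im - t_re
    im = h_re * r_im + h_im * r_re - t_im
    return -torch.sqrt(re ** 2 + im ** 2).sum(dim=-1)

def pairre_score(h, r_head, r_tail, t):
    # elementwise scaling of both head and tail: -||h o r^H - t o r^T||
    return -torch.norm(h * r_head - t * r_tail, p=1, dim=-1)

d = 8
h, r, t = torch.randn(4, d), torch.randn(4, d), torch.randn(4, d)
print(transe_score(h, r, t).shape)        # torch.Size([4])
print(rotate_score(h, h, r, t, t).shape)  # toy real/imaginary parts
print(pairre_score(h, r, t, t).shape)
```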
### Semantic Matching Models
Another related idea of developing KGE models is to measure the semantic matching score. RESCAL [77] adopts a bilinear scoring function as the objective in solving a three-way rank-\(r\) matrix factorization problem. Formally, let \(\mathbf{h},\mathbf{t}\in\mathbb{R}^{d}\) denote the head and tail embedding and \(\mathbf{M}_{r}\in\mathbb{R}^{d\times d}\) is the representation for relation. Then, the RESCAL scoring function can be defined as,
\[f_{r}(h,t)=\mathbf{h}^{\top}\mathbf{M}_{r}\mathbf{t}. \tag{13}\]
However, one obvious limitation of this approach is that it uses a dense matrix to represent each relation, which requires an order of magnitude more parameters compared to those using vectors. DistMult [78] reduces free parameters by enforcing the relation embedding matrix to be diagonal. Let \(\mathbf{r}\in\mathbb{R}^{d}\) be the relation vector. Then, \(\text{diag}(\mathbf{r})\in\mathbb{R}^{d\times d}\) is the diagonal matrix constructed from \(\mathbf{r}\). Then, the DistMult scoring function can be written as,
\[f_{r}(h,t)=\mathbf{h}^{\top}\text{diag}(\mathbf{r})\mathbf{t}. \tag{14}\]
However, because the diagonal matrix is symmetric, it has difficulty modeling antisymmetric relations. ANALOGY [80] has the same scoring function as RESCAL but instead it attempts to incorporate antisymmetric configurations by imposing two regularization constraints: 1) \(\mathbf{M}_{r}\mathbf{M}_{r}^{\top}=\mathbf{M}_{r}^{\top}\mathbf{M}_{r}\), which requires the relation matrix to be normal; 2) \(\mathbf{M}_{r}\mathbf{M}_{r^{\prime}}=\mathbf{M}_{r^{\prime}}\mathbf{M}_{r}\), which requires the relation matrices to commute. HolE [79] introduces circular correlation between head and tail vectors, which can be interpreted as a compressed tensor product to capture richer interactions. The HolE scoring function can be written as,
\[f_{r}(h,t)=\mathbf{r}^{\top}(\mathbf{h}\star\mathbf{t}). \tag{15}\]
ComplEx [81] extends the bilinear product score to the complex vector space so as to model antisymmetric relations more effectively. Formally, let \(\mathbf{h},\mathbf{r},\mathbf{t}\in\mathbb{C}^{d}\) be the head, relation, tail complex vectors, and \(\mathbf{\overline{t}}\) denote the complex conjugate of the \(\mathbf{t}\). The ComplEx scoring function can be defined as,
\[f_{r}(h,t)=\mathrm{Re}(\langle\mathbf{r},\mathbf{h},\mathbf{\overline{t}} \rangle). \tag{16}\]
where \(\langle\cdot,\cdot,\cdot\rangle\) denotes trilinear product, and \(\mathrm{Re}(\cdot)\) means taking the real part of a complex value. However, relation compositions remain difficult for ComplEx to encode. SimplE [82] models inverse of relations with an enhanced version of Canonical Polyadic decomposition. The scoring function of SimplE is defined as,
\[f_{r}(h,t)=\frac{1}{2}\left(\langle\mathbf{h},\mathbf{r},\mathbf{t}\rangle+ \langle\mathbf{t},\mathbf{r}^{\prime},\mathbf{h}\rangle\right). \tag{17}\]
TuckER [84] extends the semantic matching model to 3D tensor factorization of the binary tensor representation of KG triples. The scoring function is defined as,
\[f_{r}(h,t)=\mathcal{W}\times_{1}\mathbf{h}\times_{2}\mathbf{r}\times_{3} \mathbf{t}. \tag{18}\]
QuatE [23] and DualE [51] extend from the complex representation to the hypercomplex representation with 4 degrees of freedom to gain more expressive rotational capability. Let \(Q_{h},W_{r},Q_{t}\in\mathbb{H}^{k}\) be the representations of head, relation, and tail in quaternion space, of the form \(Q=a+b\mathbf{i}+c\mathbf{j}+d\mathbf{k}\). Then the QuatE scoring function is defined as,
\[f_{r}(h,t)=Q_{h}\otimes W_{r}^{\sphericalangle}\cdot Q_{t}. \tag{19}\]
Specifically, the normalization of relation vector in quaternion space is defined as,
\[W_{r}^{\sphericalangle}(p,q,u,v)=\frac{W_{r}}{|W_{r}|}=\frac{a_{r}+b_{r}\mathbf{i }+c_{r}\mathbf{j}+d_{r}\mathbf{k}}{\sqrt{a_{r}^{2}+b_{r}^{2}+c_{r}^{2}+d_{r}^ {2}}}. \tag{20}\]
And the Hamiltonian product in quaternion space is computed as,
\[Q_{h}\otimes W_{r}^{\sphericalangle} =(a_{h}\circ p-b_{h}\circ q-c_{h}\circ u-d_{h}\circ v) \tag{21}\] \[+(a_{h}\circ q+b_{h}\circ p+c_{h}\circ v-d_{h}\circ u)\mathbf{i}\] \[+(a_{h}\circ u-b_{h}\circ v+c_{h}\circ p+d_{h}\circ q)\mathbf{j}\] \[+(a_{h}\circ v+b_{h}\circ u-c_{h}\circ q+d_{h}\circ p)\mathbf{k}.\]
And the inner product in quaternion space is computed as,
\[Q_{1}\cdot Q_{2}=\langle a_{1},a_{2}\rangle+\langle b_{1},b_{2}\rangle+\langle c _{1},c_{2}\rangle+\langle d_{1},d_{2}\rangle. \tag{22}\]
However, one disadvantage of these models is that they require very high-dimensional spaces to work well, and therefore it is difficult to scale them to large KGs. CrossE introduces crossover interactions to better represent the bidirectional interactions between entities and relations. The scoring function of CrossE is defined as,
\[f_{r}(h,t)=\sigma\left(\tanh\left(\mathbf{c}_{r}\circ\mathbf{h}+\mathbf{c}_{r} \circ\mathbf{h}\circ\mathbf{r}+\mathbf{b}\right)\mathbf{t}^{\top}\right), \tag{23}\]
where the relation-specific interaction vector \(\mathbf{c}_{r}\) is obtained by looking up the interaction matrix \(\mathbf{C}\in\mathbb{R}^{n_{r}\times d}\). DihEdral [83] constructs elements in a dihedral group using rotation and reflection operations over a 2D symmetric polygon, which gives the model an advantage in encoding relation composition. SEEK [86] and AutoSF [91] identify the underlying similarities among popular KGEs and propose automatic frameworks for designing new bilinear scoring functions while also unifying many previous models. However, the search space of AutoSF is computationally intractable, and it is difficult to know whether one configuration will be better than another unless the model is trained and tested on the dataset. Therefore, the AutoSF search can be time-consuming.
### Loss Functions
The loss function is an important part of KGE learning: it is designed to effectively distinguish valid triples from negative samples, and the ultimate goal of optimizing it is to rank valid triples as high as possible. In the early days of KGE learning, the margin-based ranking loss was widely adopted. The pairwise max-margin loss can be formally defined as,
\[L_{R}=\sum_{\begin{subarray}{c}(h,r,t)\in\mathcal{G}\\ (h^{\prime},r,t^{\prime})\in\mathcal{G}^{\prime}\end{subarray}}\max(0,\gamma+ f_{r}(h,t)-f_{r}(h^{\prime},t^{\prime})), \tag{24}\]
where \((h,r,t)\) denotes a ground-truth triple from the set of all valid triples \(\mathcal{G}\), and \((h^{\prime},r,t^{\prime})\) denotes a negative sample from the set of corrupted triples \(\mathcal{G}^{\prime}\). \(\gamma\) is the margin parameter, which specifies how different \(f_{r}(h,t)\) and \(f_{r}(h^{\prime},t^{\prime})\) should be at optimum. In fact, a similar loss function is applied to optimize the multiclass Support Vector Machine (SVM) [92]. Both distance-based embedding models, such as TransE, TransH, TransR, and TransD, and semantic matching-based models, such as LFM, NTN, and SME, have successfully leveraged this loss function. [18] proposes a Limit-based scoring loss to limit the score of positive triples so that the translation relation in positive triples can be guaranteed. The Limit-based score can be defined as,
\[L_{RS}=\sum_{\begin{subarray}{c}(h,r,t)\in\mathcal{G}\\ (h^{\prime},r,t^{\prime})\in\mathcal{G}^{\prime}\end{subarray}}\{[\gamma+f_{ r}(h,t)-f_{r}(h^{\prime},t^{\prime})]_{+}+\lambda[f_{r}(h,t)-\mu]_{+}\}. \tag{25}\]
More recently, a Double Limit Scoring Loss is proposed by [93] to independently control the golden triplets' scores and negative samples' scores. It can be defined as,
\[L_{SS}=\sum_{\begin{subarray}{c}(h,r,t)\in\mathcal{G}\\ (h^{\prime},r,t^{\prime})\in\mathcal{G}^{\prime}\end{subarray}}\{[f_{r}(h,t) -\mu_{p}]_{+}+\lambda[\mu_{n}-f_{r}(h^{\prime},t^{\prime})]_{+}\}, \tag{26}\]
where \(\mu_{n}>\mu_{p}>0\). This loss function intends to encourage low distance score for positive triplets and high distance scores for negative triplets. We can also trace the usage of a similar contrastive loss from Deep Triplet Network [94] for different image classification tasks.
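Before moving on, a minimal NumPy sketch of the margin-based family in Eqs. (24)-(25) may help make the notation concrete (toy distance scores, lower meaning more plausible; this is an illustration rather than any particular system's implementation).

```python
import numpy as np

def margin_ranking_loss(pos, neg, gamma=1.0):
    """Pairwise max-margin loss, Eq. (24); pos/neg are distance-style scores."""
    return np.sum(np.maximum(0.0, gamma + pos - neg))

def limit_based_loss(pos, neg, gamma=1.0, mu=0.5, lam=1.0):
    """Limit-based loss, Eq. (25): additionally caps the score of positive triples."""
    ranking = np.maximum(0.0, gamma + pos - neg)
    limit = np.maximum(0.0, pos - mu)
    return np.sum(ranking + lam * limit)

pos = np.array([0.3, 0.8, 1.2, 0.5])   # distances of valid triples
neg = np.array([1.5, 0.9, 1.0, 2.0])   # distances of corrupted triples
print(margin_ranking_loss(pos, neg), limit_based_loss(pos, neg))
```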
Self adversarial negative sampling was proposed in RotatE [24] and can be defined as,
\[L_{SANS}=-\log\sigma(\gamma-f_{r}(h,t))-\sum_{i=1}^{n}p(h^{\prime}_{i},r,t^{ \prime}_{i})\log\sigma(f_{r}(h^{\prime}_{i},t^{\prime}_{i})-\gamma). \tag{27}\]
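Eq. (27) leaves the adversarial weights \(p(h^{\prime}_{i},r,t^{\prime}_{i})\) implicit; in RotatE they are a softmax over the negative samples with a temperature \(\alpha\), so that harder negatives (smaller distances) receive larger weights. A sketch under that assumed convention:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_adversarial_loss(pos_dist, neg_dists, gamma=6.0, alpha=1.0):
    """Self-adversarial negative sampling loss of Eq. (27).

    `pos_dist` is the distance of the valid triple, `neg_dists` the distances
    of n corrupted triples.  Harder negatives (smaller distance) receive larger
    softmax weights -- the assumed RotatE convention.
    """
    weights = np.exp(-alpha * neg_dists)
    weights = weights / weights.sum()              # p(h'_i, r, t'_i)
    pos_term = -np.log(sigmoid(gamma - pos_dist))
    neg_term = -np.sum(weights * np.log(sigmoid(neg_dists - gamma)))
    return pos_term + neg_term

print(self_adversarial_loss(2.0, np.array([4.0, 7.0, 9.0])))
```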
Cross entropy or negative log-likelihood of logistic models are often used in semantic matching models where a product needs to be computed. The negative log-likelihood loss function can be defined as,
\[L_{CE}=\sum_{(h,r,t)\in\mathcal{G}\cup\mathcal{G}^{\prime}}\log\left\{1+\exp[-y_{(h,r,t)}\cdot f_{r}(h,t)]\right\}. \tag{28}\]
Binary cross entropy or Bernoulli negative log-likelihood of logistic is also a popular loss function which can be defined as,
\[L_{BCE}=-\frac{1}{N_{e}}\sum_{i=1}^{N_{e}}\left(y_{i}\log p_{i}+(1-y_{i})\log(1-p_{i})\right). \tag{29}\]
The binary cross entropy loss is more suitable for neural network-based models such as ConvE. Although TuckER is a semantic matching-based model, it also uses binary cross entropy as its loss function because its implementation is similar to that of a neural network.
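The two likelihood-based losses in Eqs. (28)-(29) can be sketched as follows (a minimal illustration; labels \(y\in\{-1,+1\}\) for the logistic form and \(y\in\{0,1\}\) for binary cross entropy):

```python
import numpy as np

def logistic_loss(scores, labels):
    """Negative log-likelihood of the logistic model, Eq. (28); labels in {-1, +1}."""
    return np.sum(np.log1p(np.exp(-labels * scores)))

def binary_cross_entropy(probs, labels):
    """Binary cross entropy, Eq. (29); labels in {0, 1}, probs = sigmoid(scores)."""
    return -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

scores = np.array([2.1, -0.4, 1.3, -3.0])
labels_pm = np.array([1, -1, 1, -1])          # +1 valid triple, -1 corrupted
probs = 1.0 / (1.0 + np.exp(-scores))
labels_01 = (labels_pm + 1) // 2
print(logistic_loss(scores, labels_pm), binary_cross_entropy(probs, labels_01))
```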
### Connections between Distance-based KGE Models
Another interesting finding is the connection between different recent embedding models. In recent years, there has been a trend toward designing more effective KGEs using affine transformations. In fact, the models are related to each other, and many of the new embedding models emerge from models that use only the most fundamental operations, such as translation (TransE), rotation (RotatE), and scaling (PairRE). We illustrate the connections between recent KGE models in Figure 3. For example, MuRE is a Euclidean version of MuRP, a popular hyperbolic-space KGE model proposed in [22]. In MuRE, a diagonal matrix \(\mathbf{R}\in\mathbb{R}^{d\times d}\) is applied to the head entity, and a translation vector \(\mathbf{r}\in\mathbb{R}^{d}\) is applied to the tail entity. The scoring function can be written as,
\[\phi(e_{s},r,e_{o})=-d(\mathbf{Re}_{s},\mathbf{e}_{o}+\mathbf{r})^{2}+b_{s}+b_ {o}, \tag{30}\]
where \(b_{s}\) and \(b_{o}\) are the entity-specific bias terms for the head and tail entities, respectively. This essentially combines translation and scaling operations, and MuRP implements a similar idea in hyperbolic space. Similarly, in [28], RotE and RotH are baseline models proposed in Euclidean space and hyperbolic space, respectively, that essentially apply 2D rotation operators to the head entity embedding in the translational distance scoring function. The RotE scoring function can be defined as,
\[s(h,r,t)=d(\text{Rot}(\mathbf{\Theta})\mathbf{e}_{h}+\mathbf{r}_{r},\mathbf{e }_{t})+b_{h}+b_{t} \tag{31}\]
Figure 3: Connections between different knowledge graph embedding models

RefE and RefH can be derived similarly by applying 2D reflection operators. More recently, [95] combines translation and scaling operations with both distance-based models (TransE, RotatE) and semantic matching models (DistMult, ComplEx); performance improvement in link prediction can be observed following this approach. SFBR [88] applies semantic filters to distance-based and semantic matching-based models. One of the more effective MLP-based filters has diagonal weight matrices, which essentially apply scaling and translation operations to the entity embeddings. STaR [96] focuses on designing bilinear product matrices of semantic matching-based models by inserting scaling and translation components in order to enable the KGE to handle complex relation types such as 1-N, N-1, and N-N relations.
A similar trend has also been observed in quaternion-space KGE, which was first proposed by [23]. DualE [51] introduces translation operations and combines them with quaternion rotations to encode additional relation patterns, including multiplicity. BiQUE [52] further includes hyperbolic rotations and scaling operations to better model hierarchical semantics. 3H-TH adds quaternion rotation to the hyperbolic embedding model MuRP to further improve link prediction performance.
Quite a few models also leverage the scaling operation, which was first demonstrated by PairRE [45] to have good performance in link prediction. Both LinearRE [40] and TranSHER [97] introduce translation vectors into the scaling-based scoring functions, but give different interpretations. LinearRE treats triple encoding as a linear regression problem, whereas TranSHER frames it as a translating distance model on a hyper-ellipsoid surface. Inspired by the idea of compounding affine operations for image manipulation, CompoundE [98] further includes rotation operators to encode non-commutative relation compositions. Both ReflectE [55] and HousE [89] encode relations with Householder reflections, which have an intrinsic symmetry property. ReflectE further explores the effect of combining translation vectors to also model antisymmetric relations in KGs. On the other hand, HousE evaluates the effect of having a sequence of Householder reflections on the link prediction performance. However, ATTH [28] was in fact the first work that introduced reflection operations in KGE. Additional operators have also been applied to existing KGEs to encode temporal information. For instance, TeRo [99] applies a 2D rotation to the head and tail entity embeddings in TransE to encode time-specific information in temporal KG quadruples. Similarly, RotateQVS [100] adds a time-specific translation vector to the entity embedding in Rotate3D [30], which leverages quaternion rotation. CompoundE3D [101] is inspired by the evolution from RotatE to Rotate3D and proposes a KGE that unifies all geometric operators, including translation, scaling, rotation, reflection, and shear, in 3D space. Apart from proposing a unified framework, CompoundE3D also suggests a beam-search-based procedure for finding the optimal KGE design for each KG dataset.
## 3 Unified Framework for Distance-based KGE with Affine Transformations: CompoundE and CompoundE3D
In this section, we will introduce the detailed formulation of CompoundE, followed by CompoundE3D. For both CompoundE and CompoundE3D, we have three forms of scoring functions, namely
* CompoundE-Head \[f_{r}(h,t)=\|\mathbf{M_{r}}\cdot\mathbf{h}-\mathbf{t}\|,\] (32)
* CompoundE-Tail \[f_{r}(h,t)=\|\mathbf{h}-\mathbf{\hat{M}_{r}}\cdot\mathbf{t}\|,\] (33)
* CompoundE-Complete \[f_{r}(h,t)=\|\mathbf{M_{r}}\cdot\mathbf{h}-\mathbf{\hat{M}_{r}}\cdot\mathbf{t}\|,\] (34)
where \(\mathbf{M_{r}}\) and \(\mathbf{\hat{M}_{r}}\) are geometric operators to be designed. We first discuss the CompoundE operators, which include translation, scaling, and 2D rotation operations.
### CompoundE Operators
First, we think it is helpful to formally introduce some definitions in group theory.
**Definition 3.1**.: _The special orthogonal group is defined as_
\[\mathbf{SO}(n)=\bigg{\{}\mathbf{A}\bigg{|}\mathbf{A}\in\mathbf{GL_{n}}( \mathbb{R}),\mathbf{A}^{\top}\mathbf{A}=\mathbf{I},\det(\mathbf{A})=1\bigg{\}}. \tag{35}\]
**Definition 3.2**.: _The special Euclidean group is defined as_
\[\mathbf{SE}(n)=\bigg{\{}\mathbf{A}\bigg{|}\mathbf{A}=\begin{bmatrix}\mathbf{ R}&\mathbf{v}\\ \mathbf{0}&1\end{bmatrix},\mathbf{R}\in\mathbf{SO}(n),\mathbf{v}\in\mathbb{R} ^{n}\bigg{\}}\,. \tag{36}\]
**Definition 3.3**.: _The affine group is defined as_
\[\mathbf{Aff}(n)=\left\{\mathbf{M}\Big{|}\mathbf{M}=\begin{bmatrix}\mathbf{A}&\mathbf{v} \\ \mathbf{0}&1\end{bmatrix},\mathbf{A}\in\mathbf{GL}_{n}(\mathbb{R}),\mathbf{v}\in \mathbb{R}^{n}\right\}. \tag{37}\]
By comparing Eqs. (36) and (37), we see that \(\mathbf{SE}(n)\) is a subset of \(\mathbf{Aff}(n)\).
Without loss of generality, consider \(n=2\). If \(\mathbf{M}\in\mathbf{Aff}(2)\), we have
\[\mathbf{M}=\begin{bmatrix}\mathbf{A}&\mathbf{v}\\ \mathbf{0}&1\end{bmatrix},\mathbf{A}\in\mathbb{R}^{2\times 2},\mathbf{v}\in \mathbb{R}^{2}. \tag{38}\]
The 2D translational matrix can be written as
\[\mathbf{T}=\begin{bmatrix}1&0&v_{x}\\ 0&1&v_{y}\\ 0&0&1\end{bmatrix}, \tag{39}\]
while the 2D rotational matrix can be expressed as
\[\mathbf{R}=\begin{bmatrix}\cos(\theta)&-\sin(\theta)&0\\ \sin(\theta)&\cos(\theta)&0\\ 0&0&1\end{bmatrix}. \tag{40}\]
It is easy to verify that both belong to the special Euclidean group (i.e., \(\mathbf{T}\in\mathbf{SE}(2)\) and \(\mathbf{R}\in\mathbf{SE}(2)\)). On the other hand, the 2D scaling matrix is of the form
\[\mathbf{S}=\begin{bmatrix}s_{x}&0&0\\ 0&s_{y}&0\\ 0&0&1\end{bmatrix}. \tag{41}\]
It does not belong to the special Euclidean group but to the affine group with \(n=2\) (i.e., \(\mathbf{S}\in\mathbf{Aff}(2)\)).
Compounding translation and rotation operations, we can get a transformation in the special Euclidean group,
\[\mathbf{T}\cdot\mathbf{R} =\begin{bmatrix}1&0&v_{x}\\ 0&1&v_{y}\\ 0&0&1\end{bmatrix}\begin{bmatrix}\cos(\theta)&-\sin(\theta)&0\\ \sin(\theta)&\cos(\theta)&0\\ 0&0&1\end{bmatrix} \tag{42}\] \[=\begin{bmatrix}\cos(\theta)&-\sin(\theta)&v_{x}\\ \sin(\theta)&\cos(\theta)&v_{y}\\ 0&0&1\end{bmatrix}\in\mathbf{SE}(2).\]
Yet, if we add the scaling operation, the compound will belong to the affine group. One such compound operator can be written as
\[\mathbf{T}\cdot\mathbf{R}\cdot\mathbf{S}=\begin{bmatrix}s_{x}\cos(\theta)&-s _{y}\sin(\theta)&v_{x}\\ s_{x}\sin(\theta)&s_{y}\cos(\theta)&v_{y}\\ 0&0&1\end{bmatrix}\in\mathbf{Aff}(2). \tag{43}\]
When \(s_{x}\neq 0\) and \(s_{y}\neq 0\), the compound operator is invertible. It can be written in form of
\[\mathbf{M}^{-1}=\begin{bmatrix}\mathbf{A}^{-1}&-\mathbf{A}^{-1}\mathbf{v}\\ \mathbf{0}&1\end{bmatrix}. \tag{44}\]
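The compounding in Eqs. (39)-(44) is easy to check numerically. The following sketch (toy parameter values) builds the homogeneous translation, rotation, and scaling matrices, composes them, and verifies the closed form of Eq. (43) and the explicit inverse of Eq. (44).

```python
import numpy as np

def translation(vx, vy):                       # Eq. (39)
    return np.array([[1.0, 0, vx], [0, 1.0, vy], [0, 0, 1.0]])

def rotation(theta):                           # Eq. (40)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def scaling(sx, sy):                           # Eq. (41)
    return np.diag([sx, sy, 1.0])

vx, vy, theta, sx, sy = 0.5, -0.2, np.pi / 6, 1.3, 0.7
M = translation(vx, vy) @ rotation(theta) @ scaling(sx, sy)

# Closed form of Eq. (43)
c, s = np.cos(theta), np.sin(theta)
M_closed = np.array([[sx * c, -sy * s, vx],
                     [sx * s,  sy * c, vy],
                     [0, 0, 1.0]])
assert np.allclose(M, M_closed)

# Explicit inverse of Eq. (44), valid whenever sx, sy != 0
A, v = M[:2, :2], M[:2, 2]
M_inv = np.eye(3)
M_inv[:2, :2] = np.linalg.inv(A)
M_inv[:2, 2] = -np.linalg.inv(A) @ v
assert np.allclose(M @ M_inv, np.eye(3))
```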
In actual implementation, a high-dimensional relation operator can be represented as a block diagonal matrix in the form of
\[\mathbf{M}_{\mathbf{r}}=\mathbf{diag}(\mathbf{O}_{\mathbf{r},1},\mathbf{O}_{ \mathbf{r},2},\ldots,\mathbf{O}_{\mathbf{r},n}), \tag{45}\]
where \(\mathbf{O}_{\mathbf{r},i}\) is the compound operator at the \(i\)-th stage. We can multiply \(\mathbf{M}_{\mathbf{r}}\cdot\mathbf{v}\) in the following manner,
\[\left[\begin{array}{c|c|c|c}\mathbf{O}_{r,1}&0&\ldots&0\\ \hline 0&\mathbf{O}_{r,2}&\ldots&0\\ \hline\vdots&\vdots&\ddots&\vdots\\ \hline 0&0&\ldots&\mathbf{O}_{r,n}\\ \end{array}\right]\left[\begin{array}{c}x_{1}\\ y_{1}\\ x_{2}\\ y_{2}\\ \vdots\\ x_{n}\\ y_{n}\end{array}\right] \tag{46}\]
where \(\mathbf{v}=[x_{1},y_{1},x_{2},y_{2},\ldots,x_{n},y_{n}]^{T}\) is a \(2n\)-dimensional entity vector that is split into multiple 2D subspaces.
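In practice the block-diagonal matrix of Eq. (46) is not materialized; the entity vector is reshaped into \(n\) two-dimensional subvectors and each block is applied independently. A possible vectorized sketch is shown below, where each block is kept in Cartesian form (scaled rotation plus a separate additive translation); this is one way to realize Eq. (46), not necessarily the reference implementation.

```python
import numpy as np

def apply_compound(entity, thetas, scales, trans):
    """Apply a block-diagonal CompoundE operator (Eq. 46) to an entity vector.

    entity : (2n,) vector split into n two-dimensional subspaces
    thetas : (n,) rotation angles, scales : (n, 2), trans : (n, 2)
    The i-th block is the Cartesian form of T_i . R_i . S_i.
    """
    xy = entity.reshape(-1, 2)                         # (n, 2)
    c, s = np.cos(thetas), np.sin(thetas)
    rot = np.stack([np.stack([c, -s], -1),
                    np.stack([s,  c], -1)], axis=1)    # (n, 2, 2)
    scaled = xy * scales                               # S_i
    rotated = np.einsum('nij,nj->ni', rot, scaled)     # R_i
    return (rotated + trans).reshape(-1)               # T_i

n = 3
rng = np.random.default_rng(2)
h = rng.normal(size=2 * n)
out = apply_compound(h, rng.uniform(0, np.pi, n),
                     rng.normal(size=(n, 2)), rng.normal(size=(n, 2)))
print(out.shape)   # (6,)
```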
### CompoundE3D
#### 3.2.1 Translation
Component \(\mathbf{T}\in\mathbf{SE}(3)\), illustrated by Fig. 4a, is defined as
\[\mathbf{T}=\begin{bmatrix}1&0&0&v_{x}\\ 0&1&0&v_{y}\\ 0&0&1&v_{z}\\ 0&0&0&1\end{bmatrix}, \tag{47}\]
#### 3.2.2 Scaling
Component \(\mathbf{S}\in\mathbf{Aff}(3)\), illustrated by Fig. 4b, is defined as
\[\mathbf{S}=\begin{bmatrix}s_{x}&0&0&0\\ 0&s_{y}&0&0\\ 0&0&s_{z}&0\\ 0&0&0&1\end{bmatrix}, \tag{48}\]
#### 3.2.3 Rotation
Component \(\mathbf{R}\in\mathbf{SO}(3)\), illustrated by Fig. 4c, is defined as
\[\mathbf{R}=\mathbf{R}_{z}(\alpha)\mathbf{R}_{y}(\beta)\mathbf{R}_{x}(\gamma)= \begin{bmatrix}a&b&c&0\\ d&e&f&0\\ g&h&i&0\\ 0&0&0&1\end{bmatrix}, \tag{49}\]
Figure 4: Composing different geometric operations in the 3D subspace.
where
\[\begin{split} a&=\cos(\alpha)\cos(\beta),\\ b&=\cos(\alpha)\sin(\beta)\sin(\gamma)-\sin(\alpha) \cos(\gamma),\\ c&=\cos(\alpha)\sin(\beta)\cos(\gamma)+\sin(\alpha) \sin(\gamma),\\ d&=\sin(\alpha)\cos(\beta),\\ e&=\sin(\alpha)\sin(\beta)\sin(\gamma)+\cos(\alpha) \cos(\gamma),\\ f&=\sin(\alpha)\sin(\beta)\cos(\gamma)-\cos(\alpha) \sin(\gamma),\\ g&=-\sin(\beta),\\ h&=\cos(\beta)\sin(\gamma),\\ i&=\cos(\beta)\cos(\gamma).\end{split} \tag{50}\]
This general 3D rotation operator is the result of compounding yaw, pitch, and roll rotations. They are, respectively, defined as
* Yaw rotation component: \[\mathbf{R}_{z}(\alpha)=\begin{bmatrix}\cos(\alpha)&-\sin(\alpha)&0&0\\ \sin(\alpha)&\cos(\alpha)&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix},\] (51)
* Pitch rotation component: \[\mathbf{R}_{y}(\beta)=\begin{bmatrix}\cos(\beta)&0&\sin(\beta)&0\\ 0&1&0&0\\ -\sin(\beta)&0&\cos(\beta)&0\\ 0&0&0&1\end{bmatrix},\] (52)
* Roll rotation component: \[\mathbf{R}_{x}(\gamma)=\begin{bmatrix}1&0&0&0\\ 0&\cos(\gamma)&-\sin(\gamma)&0\\ 0&\sin(\gamma)&\cos(\gamma)&0\\ 0&0&0&1\end{bmatrix}.\] (53)
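The entries \(a\)-\(i\) of Eq. (50) can be verified by composing the three components above. The sketch below works with the \(3\times 3\) Cartesian blocks (dropping the homogeneous row and column), uses the standard right-handed pitch convention so that the product reproduces Eq. (50), and also confirms that the result lies in \(\mathbf{SO}(3)\).

```python
import numpy as np

def Rz(a):  # yaw, Eq. (51)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def Ry(b):  # pitch, Eq. (52)
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1.0, 0], [-s, 0, c]])

def Rx(g):  # roll, Eq. (53)
    c, s = np.cos(g), np.sin(g)
    return np.array([[1.0, 0, 0], [0, c, -s], [0, s, c]])

alpha, beta, gamma = 0.3, -0.7, 1.1
R = Rz(alpha) @ Ry(beta) @ Rx(gamma)

# Entries of Eq. (50)
ca, sa = np.cos(alpha), np.sin(alpha)
cb, sb = np.cos(beta), np.sin(beta)
cg, sg = np.cos(gamma), np.sin(gamma)
R_closed = np.array([
    [ca * cb, ca * sb * sg - sa * cg, ca * sb * cg + sa * sg],
    [sa * cb, sa * sb * sg + ca * cg, sa * sb * cg - ca * sg],
    [-sb,     cb * sg,               cb * cg],
])
assert np.allclose(R, R_closed)
assert np.allclose(R @ R.T, np.eye(3))   # R is in SO(3)
```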
#### 3.2.4 Reflection
Component \(\mathbf{F}\in\mathbf{O}(3)\), illustrated by Fig. 4d, is defined as
\[\mathbf{F}=\begin{bmatrix}1-2n_{x}^{2}&-2n_{x}n_{y}&-2n_{x}n_{z}&0\\ -2n_{x}n_{y}&1-2n_{y}^{2}&-2n_{y}n_{z}&0\\ -2n_{x}n_{z}&-2n_{y}n_{z}&1-2n_{z}^{2}&0\\ 0&0&0&1\end{bmatrix}. \tag{54}\]
The above expression is derived from the Householder reflection, \(\mathbf{F}=\mathbf{I}-\mathbf{2nn^{T}}\). In the 3D space, \(\mathbf{n}\) is a 3-D unit vector that is perpendicular to the reflecting hyper-plane, \(\mathbf{n}=[n_{x},n_{y},n_{z}]\).
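The Householder form is likewise easy to verify numerically: the sketch below draws a random unit normal, builds \(\mathbf{F}=\mathbf{I}-2\mathbf{n}\mathbf{n}^{\top}\), and checks that it is orthogonal, involutory, has determinant \(-1\), and flips the normal vector.

```python
import numpy as np

rng = np.random.default_rng(3)
n = rng.normal(size=3)
n /= np.linalg.norm(n)                      # unit normal of the reflecting plane

F = np.eye(3) - 2.0 * np.outer(n, n)        # Householder reflection, Eq. (54)

assert np.allclose(F @ F, np.eye(3))        # reflecting twice is the identity
assert np.allclose(F @ F.T, np.eye(3))      # orthogonal
assert np.isclose(np.linalg.det(F), -1.0)   # improper rotation (det = -1)
assert np.allclose(F @ n, -n)               # the normal vector is flipped
```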
#### 3.2.5 Shear
Component \(\mathbf{H}\in\mathbf{Aff}(3)\), illustrated by Fig. 4e, is defined as
\[\mathbf{H}=\mathbf{H}_{yz}\mathbf{H}_{xz}\mathbf{H}_{xy}=\begin{bmatrix}1&\text{Sh}_{x}^{y}&\text{Sh}_{x}^{z}&0\\ \text{Sh}_{y}^{x}&1&\text{Sh}_{y}^{z}&0\\ \text{Sh}_{z}^{x}&\text{Sh}_{z}^{y}&1&0\\ 0&0&0&1\end{bmatrix}. \tag{55}\]
The shear operator is the result of compounding three operators: \(\mathbf{H}_{yz}\), \(\mathbf{H}_{xz}\), and \(\mathbf{H}_{xy}\). They are mathematically defined as
\[\mathbf{H}_{yz}=\begin{bmatrix}1&0&0&0\\ \text{Sh}_{y}^{x}&1&0&0\\ \text{Sh}_{z}^{x}&0&1&0\\ 0&0&0&1\end{bmatrix}, \tag{56}\] \[\mathbf{H}_{xz}=\begin{bmatrix}1&\text{Sh}_{x}^{y}&0&0\\ 0&1&0&0\\ 0&\text{Sh}_{z}^{y}&1&0\\ 0&0&0&1\end{bmatrix}, \tag{57}\] \[\mathbf{H}_{xy}=\begin{bmatrix}1&0&\text{Sh}_{x}^{z}&0\\ 0&1&\text{Sh}_{y}^{z}&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix}. \tag{58}\]
Matrix \(\mathbf{H}_{yz}\) has a physical meaning - the shear transformation that shifts the \(y\)- and \(z\)- components by a factor of the \(x\) component. Similar physical interpretations are applied to \(\mathbf{H}_{xz}\) and \(\mathbf{H}_{xy}\).
The above transformations can be cascaded to yield a compound operator; e.g.,
\[\mathbf{O}=\mathbf{T}\cdot\mathbf{S}\cdot\mathbf{R}\cdot\mathbf{F}\cdot \mathbf{H}, \tag{59}\]
In the actual implementation, we use the operator's representation in regular Cartesian coordinates instead of homogeneous coordinates. Furthermore, a high-dimensional relation operator can be represented as a block diagonal matrix in the form of
\[\mathbf{M}_{\mathbf{r}}=\text{\bf diag}(\mathbf{O}_{\mathbf{r},1},\mathbf{O}_ {\mathbf{r},2},\ldots,\mathbf{O}_{\mathbf{r},n}), \tag{60}\]
where \(\mathbf{O}_{\mathbf{r},\mathbf{i}}\) is the compound operator at the \(i\)-th stage. We can multiply \(\mathbf{M}_{\mathbf{r}}\cdot\mathbf{v}\) in the following manner,
\[\begin{bmatrix}\mathbf{O}_{r,1}&0&\ldots&0\\ \hline 0&\mathbf{O}_{r,2}&\ldots&0\\ \hline\vdots&\vdots&\ddots&\vdots\\ \hline 0&0&\ldots&\mathbf{O}_{r,n}\\ \end{bmatrix}\begin{bmatrix}x_{1}\\ y_{1}\\ z_{1}\\ y_{2}\\ z_{2}\\ \vdots\\ \hline x_{n}\\ y_{n}\\ z_{n}\end{bmatrix} \tag{61}\]
where \(\mathbf{v}=[x_{1},y_{1},z_{1},x_{2},y_{2},z_{2},\ldots,x_{n},y_{n},z_{n}]^{T}\) is a \(3n\)-dimensional entity vector that is split into multiple 3D subspaces.
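Putting the pieces together, the cascade of Eq. (59) and the block-diagonal application of Eqs. (60)-(61) can be sketched as follows. For brevity each \(3\times 3\) Cartesian block composes scaling, a yaw rotation, a reflection, and a single shear entry, with the translation added separately; this is an illustration with toy parameters, not the reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_block(rng):
    """One Cartesian block A_i = S . R_z . F . H plus its translation v_i."""
    S = np.diag(rng.uniform(0.5, 1.5, size=3))                   # scaling, Eq. (48)
    a = rng.uniform(-np.pi, np.pi)
    Rz = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1.0]])                                  # yaw rotation, Eq. (51)
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    F = np.eye(3) - 2.0 * np.outer(n, n)                          # reflection, Eq. (54)
    H = np.eye(3)
    H[0, 1] = 0.1 * rng.normal()                                  # one shear entry of Eq. (55)
    return S @ Rz @ F @ H, rng.normal(size=3)

def apply_compound3d(entity, blocks):
    """Apply the block-diagonal operator of Eq. (61) to a 3n-dimensional entity vector."""
    xyz = entity.reshape(-1, 3)
    return np.concatenate([A @ p + v for (A, v), p in zip(blocks, xyz)])

n_blocks = 4
blocks = [random_block(rng) for _ in range(n_blocks)]
h = rng.normal(size=3 * n_blocks)
t = rng.normal(size=3 * n_blocks)
dist = np.linalg.norm(apply_compound3d(h, blocks) - t)   # CompoundE-Head distance, Eq. (32)
print(dist)
```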
## 4 Dataset and Evaluation
### Open Knowledge Graphs and Benchmarking Datasets
Table 4 collects a set of KGs that are available for public access. We list these KGs according to the time of their first release and provide the number of entities and the number of facts to indicate the scale and size of each KG. Among these public KGs, WordNet [102] is the oldest; it was first created as a lexical database of English words. Words are connected with semantic relations including Synonymy, Antonymy, Hyponymy, Meronymy, Troponymy, and Entailment. There are similarities and differences between the KGs. For example, both DBpedia [103] and YAGO [104] are derived from information in Wikipedia infoboxes. However, YAGO also includes information from GeoNames that contains spatial information. In addition, DBpedia, YAGO, and Wikidata [105] contain multilingual information. Both ConceptNet [106] and OpenCyc contain a wide range of commonsense concepts and relations.
Table 5 contains commonly used benchmarking datasets. These datasets differ in size, and each covers a different domain. For example, UMLS contains biomedical concepts from the Unified Medical Language System.
Similarly, OGB-biokg is also a biomedical KG but with a larger size. Kinship contains relationships between members of a tribe, and Countries contains relationships between countries and regions. As indicated by the dataset names, many of them are subsets of the public KGs described above. CoDEx is extracted from Wikidata but comes in three versions, each with a different degree and density. Among these datasets, FB15K-237 and WN18RR are the most widely adopted datasets for performance benchmarking since inverse relations are removed to avoid the test leakage problem.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline
**Name** & **Year** & **\# Entities** & **\# Facts** & **Source** & **Highlights** & **Access Link** \\ \hline WordNet & 1980s & 155K & 207K & Curated by experts & Lexical database of semantic relations between words & [https://wordnet.princeton.edu](https://wordnet.princeton.edu) \\ \hline ConceptNet & 1999 & 44M & 2.4B & Crowdsourced human data and structured feeds & Commonsense concepts and relations & [https://conceptnet.io](https://conceptnet.io) \\ \hline Freebase & 2007 & 44M & 2.4B & Crowdsourced human data structured feeds & One of the first public KBs/KGs & [https://developers.google.com/freebase/](https://developers.google.com/freebase/) \\ \hline DBpedia & 2007 & 4.3M & 70M & Automatically extracted structured data from Wikipedia & Multilingual, Cross-Domain & [https://dbpedia.org/sparql](https://dbpedia.org/sparql) \\ \hline YAGO & 2008 & 10M & 120M & Derived from Wikipedia, WordNet, and GeoNames & Multilingual, Fine grained entity types & [https://yago-knowledge.org/sparql](https://yago-knowledge.org/sparql) \\ \hline Wikidata & 2012 & 50M & 500M & Crowdsourced human curation & Collaborative, Multilingual, Structured & [https://query.wikidata.org](https://query.wikidata.org) \\ \hline OpenCyc & 2001 & 2.4M & 240k & Created by domain experts & Commonsense concepts and relations & [http://www.cyc.com](http://www.cyc.com) \\ \hline NELL & 2010 & - & 50M & Extracted from Clueweb09 corpus (1B web pages) & Each fact is given a confidence score. 2.8M beliefs has high confidence score. & [http://rtw.ml.cmu.edu/rtw/](http://rtw.ml.cmu.edu/rtw/) \\ \hline \end{tabular}
\end{table}
Table 4: Major Open Knowledge Graphs.
\begin{table}
\begin{tabular}{c|c c|c c c|c|c} \hline \hline
**Dataset** & \multicolumn{6}{c}{**Statistics**} & \multicolumn{1}{c}{**Remarks**} \\ \cline{2-7} & **\#Ent** & **\#Rel** & **\#Train** & **\#Valid** & **\#Test** & **Avg. Deg.** & \\ \hline Kinship & 104 & 26 & 8,544 & 1,068 & 1,074 & 82.15 & Information about complex relational structure among members of a tribe. \\ \hline UMLS & 135 & 49 & 5,216 & 652 & 661 & 38.63 & Biomedical relationships between categorized concepts of the Unified Medical Language System. \\ \hline Countries & 272 & 2 & 1,111 & 24 & 24 & 4.35 & Relationships between countries, regions, and subregions. \\ \hline \hline \end{tabular}
\end{table}
Table 5: Knowledge Graph Benchmarking Datasets for Link Prediction.
### Evaluation Metrics and Leaderboard
The link prediction performance of KGE models is typically evaluated using the Mean Reciprocal Rank (MRR) and Hits@\(k\) metrics. The MRR is the average of the reciprocal ranks of the ground truth entities. The Hits@\(k\) is the fraction of test triples for which the ground truth entity is ranked among the top \(k\) candidates. The MRR and Hits@\(k\) metrics are defined as follows:
* The MRR is calculated as: \[\text{MRR}=\frac{1}{|D|}\sum_{i\in D}\frac{1}{\text{Rank}_{i}},\] (62) where \(|D|\) is the number of test triples, and \(\text{Rank}_{i}\) is the rank of the ground truth entity in the list of top candidates for the \(i\)th test triple.
* The Hits@k is calculated as: \[\text{Hits@}k=\frac{1}{|D|}\sum_{i\in D}\mathds{1}\{\text{Rank}_{i}\leq k\},\] (63)
where \(\mathds{1}\{\cdot\}\) is the indicator function.
Higher MRR and Hits@\(k\) values indicate better model performance, since they mean that the model is more likely to rank the ground truth entity higher in the list of top candidates, and to rank it among the top \(k\) candidates, respectively. To avoid unfairly penalizing the model when other known valid entities are ranked above the ground truth, the filtered rank is typically used to evaluate the link prediction performance of KGE models. The filtered rank is the rank of the ground truth entity in the list of top candidates, but only considering candidates that would result in unseen triples. In addition to the MRR and Hits@\(k\) metrics, other evaluation metrics for KG completion include Mean Average Precision (MAP), Precision@\(k\), etc. The choice of evaluation metric depends on the specific application. For example, if the goal is to rank the top entities for a given triple, then the MRR or Hits@\(k\) metrics may be more appropriate. If the goal is to find all entities that are related to a given entity, then the MAP or Precision@\(k\) metrics may be more appropriate.
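A compact sketch of the filtered-rank evaluation behind Eqs. (62)-(63) is given below: for each test query, the scores of other known valid entities are masked out before the rank of the ground-truth entity is computed (toy scores; higher score is assumed to mean more plausible).

```python
import numpy as np

def filtered_metrics(scores, truth_idx, known_idx, ks=(1, 3, 10)):
    """Filtered MRR and Hits@k (Eqs. 62-63) for a batch of test queries.

    scores    : (n_queries, n_entities) plausibility scores (higher is better)
    truth_idx : (n_queries,) index of the ground-truth entity
    known_idx : list of index arrays of other known valid entities to filter out
    """
    ranks = []
    for s, t, known in zip(scores, truth_idx, known_idx):
        s = s.copy()
        s[known] = -np.inf                       # filter other valid triples
        rank = 1 + np.sum(s > s[t])              # rank of the ground truth
        ranks.append(rank)
    ranks = np.array(ranks)
    mrr = np.mean(1.0 / ranks)
    hits = {k: np.mean(ranks <= k) for k in ks}
    return mrr, hits

rng = np.random.default_rng(5)
scores = rng.normal(size=(4, 100))
truth = np.array([3, 17, 42, 8])
known = [np.array([5]), np.array([], dtype=int),
         np.array([1, 2]), np.array([], dtype=int)]
print(filtered_metrics(scores, truth, known))
```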
We also show a comparison of the Hits@10 performance of recently published works for link prediction in Fig. 5. The figure includes the results on the FB15k, FB15k-237, WN18, WN18RR, and YAGO3-10 datasets. While KGE models such as MEIM [107] remain competitive, many of the pretrained language model (PLM) approaches such as SimKGC [108] and LMKE [109] top the leaderboards. In the next section, we will discuss this emerging trend of using PLM for KG completion.
## 5 Emerging Direction
### Neural Network Models for Knowledge Graph Completion
Before discussing the PLM approach, it is worthwhile to introduce neural network models for KG completion, since PLMs also belong to this line of approach and the logic for training and inference between these models is similar. A Multilayer Perceptron (MLP) [110] is used to measure the likelihood of unseen triples for link prediction. NTN [111] adopts a bilinear tensor neural layer to model interactions between entities and relations of triples. ConvE [112] reshapes and stacks the head entity and the relation vector to form 2D-shaped data, applies Convolutional Neural Networks (CNNs) to extract features, and uses the extracted features to interact with the tail embedding. R-GCN [113] applies a Graph Convolutional Network (GCN) and considers the neighborhood of each entity equally. CompGCN [114] performs a composition operation over each edge in the neighborhood of a central node. The composed embeddings are then convolved with specific filters representing the original and the inverse relations, respectively. KBGAN [115] optimizes a generative adversarial network to generate the negative samples. KBGAT [116] applies graph attention networks to capture both entity and relation features in any given entity's neighborhood. ConvKB [117] applies 1D convolution on stacked entity and relation embeddings of a triple to extract feature maps and applies a nonlinear classifier to predict the likelihood of the triple. Structure-Aware Convolutional Network (SACN) [118] uses a weighted-GCN encoder and a Conv-TransE decoder to extract the embedding; this synergy successfully leverages graph connectivity structure information. InteractE [119] uses network design ideas including feature permutation, a novel feature reshaping, and circular convolution compared to ConvE and outperforms baseline models significantly. ParamE [120] uses neural networks instead of relation embeddings to model the relational interaction between head and tail entities. MLP, CNN, and gated structure layers are experimented with, and the gated layer turns out to be far more effective than embedding approaches. ReInceptionE [121] applies an Inception network to increase the interactions between head and relation embeddings. A relation-aware attention mechanism in the model aggregates the local neighborhood features and the global entity information. M-DCN [122] adopts a multi-scale dynamic convolutional network to model complex relations such as 1-N, N-1, and N-N relations. A related model called KGBoost is a tree classifier-based method that proposes a novel negative sampling method and uses the XGBoost classifier for link prediction. GreenKGC [123] is a modularized KGC method inspired by Discriminant Feature Learning (DFT) [124; 125], which extracts the most discriminative features from trained embeddings for binary classifier learning.
### Pretrained Language Models for Knowledge Graph Completion
With the advent of large language models (LLMs) in recent years, more and more NLP tasks have been significantly improved by pretrained transformer-based models. Researchers have also started to consider transformer-based models as solutions for KG-related tasks. A general illustration of the transformer-based approach for KG completion is shown in Fig. 6. However, initial results from early papers have not fully demonstrated the effectiveness of language model-based solutions. The PLM approach not only requires significantly more computational resources for training than KGE models, but also has slow inference speed. There are still many issues with the PLM approach that are yet to be solved.
Figure 5: HITS@10 scores of previous KGE models on benchmark datasets. Source: [https://paperswithcode.com/sota](https://paperswithcode.com/sota)

Figure 6: An illustrative example of the transformer-based approach for KG completion.

The key difference between the traditional KGE approach and the PLM-based approach is that the former focuses on local structural information in graphs, whereas the latter relies on PLMs to decide the contextual relatedness between entities' names and descriptions. Textual descriptions of entities are usually available in many KGs such as Freebase, WordNet, and Wikidata. Triples are created from a large corpus of entity descriptions through information extraction. These entity descriptions are often stored in the knowledge base together with the entity entry. These textual descriptions are very useful information, and one can use PLMs to extract textual features from them. PLMs are transformer-based models with many model parameters that are trained over large-scale corpora. PLMs are
known to be able to generate good features for various NLP tasks. One of the famous PLMs is BERT [126]. BERT is a pretrained bidirectional language model that is built based on transformer architecture. Since one of the tasks of BERT pretraining is next-sentence prediction, it naturally generates good features for characterizing whether 2 sentences are closely related. KG-BERT [127] is one of the first models we know of that uses PLMs to extract linguistic features from entity descriptions. It leverages the advantage of BERT, which is trained using next-sentence prediction to determine the association of head entity and tail entity descriptions for link prediction, and also triple classification tasks using the same logic.
ERNIE [128] proposes a transformer-based architecture that leverages lexical, syntactic, and knowledge information simultaneously by encoding textual description tokens through cascaded transformer blocks, as well as concatenating textual features and entity embedding features to achieve information fusion. K-BERT alleviates the knowledge noise issue by introducing soft positions and a visible matrix. StAR [129] and SimKGC [108] both use two separate transformer encoders to extract the textual representations of (head entity, relation) and (tail entity). However, they adopt very different methods to model the interaction between the two encodings. StAR learns from previous NLP literature [130, 131] to apply interactive concatenation of features such as multiplication, subtraction, and the embedding vectors themselves to represent the semantic relationship between the two parts of triples. On the other hand, SimKGC computes the cosine similarity between the two textual encodings. SimKGC also proposes new negative sampling methods, including in-batch negatives, pre-batch negatives, and self negatives, to improve the performance. Graph structural information is also considered in SimKGC to boost the score of entities that appear in the K-hop neighborhood. KEPLER [132] uses the textual descriptions of head and tail entities as initialization for entity embeddings and uses the TransE embedding as a decoder. The masked language modeling (MLM) loss is added to the knowledge embedding (KE) loss for overall optimization. InductivE [133] uses features from pretrained BERT as graph embedding initialization for inductive learning on commonsense KGs (CKGs). Experimental results on CKGs show that fastText features can exhibit performance comparable to that from BERT. BERTRL [134] fine-tunes a PLM by using relation instances and possible reasoning paths in the local neighborhood as training samples. Relation paths in the local neighborhood are known to carry useful information for predicting direct relations between two entities [135, 136, 57]. In BERTRL, each relation path is linearized to a sequence of tokens. The target triple and linearized relation paths are each fed to a pretrained BERT model to produce a likelihood score. The final link prediction decisions are made by aggregating these scores. The basic logic in this work is to perform link prediction through the relation paths around the target triple. KGT5 [137] proposes to use a popular seq2seq transformer model named T5 [138] to pretrain on the link prediction task and perform KGQA. During link prediction training, a textual signal "predict tail" or "predict head" is prepended to the concatenated entity and relation sequence, which is divided by a separation token. This sequence is fed to the encoder, and the decoder's objective is to autoregressively predict the corresponding tail or head based on the textual signal. To perform question answering, the textual signal "predict answer" is prepended to the query, and the decoder is expected to autoregressively generate a corresponding answer to the query. This approach claims to significantly reduce the model size and inference time compared to other models. PKGC [139] proposes a new evaluation metric that is more accurate under an open-world assumption (OWA) setting.
## 6 Conclusion
In conclusion, this paper has provided a comprehensive overview of the current state of research in KGE. We have explored the evolution of KGE models, with a particular focus on two main branches: distance-based methods and semantic matching-based methods. Through our analysis, we have uncovered intriguing connections among recently proposed models and identified a promising trend that combines geometric transformations to enhance the performance of existing KGE models. Moreover, this paper has curated valuable resources for KG research, including survey papers, open KGs, benchmarking datasets, and leaderboard results for link prediction. We have also delved into emerging directions that leverage neural network models, including graph neural networks and PLM, and highlighted how these approaches can be integrated with embedding-based models to achieve improved performance on diverse downstream tasks. In the rapidly evolving field of KG completion, this paper serves as a valuable reference for researchers and practitioners, offering insights into the past developments, current trends, and potential future directions. By providing a unified framework in the form of CompoundE and CompoundE3D, we aim to inspire further innovation in KGE methods and facilitate the construction of more accurate and comprehensive KGs. As the demand for knowledge-driven applications continues to grow, the pursuit of effective KGE models remains a pivotal area of research, and this paper lays the groundwork for future advancements in the field.
## 7 Acknowledgments
The authors acknowledge the Center for Advanced Research Computing (CARC) at the University of Southern California for providing computing resources that have contributed to the research results reported within this publication. URL: [https://carc.usc.edu](https://carc.usc.edu).
|
2309.00105 | Enhanced weak superconductivity in trigonal $γ$-PtBi$_2$ | Electrical resistivity experiments show superconductivity at $T_c = 1.1\,$K
in a high-quality single crystal of trigonal $\gamma$-PtBi$_2$, with an
enhanced critical magnetic field $\mu_0H_{c2}(0) \gtrsim 1.5\,$Tesla and a low
critical current-density $J_c(0)\approx 40\,\mathrm{A}/\mathrm{cm}^2$ at $H=0$.
Both $T_c$ and $H_{c2}(0)$ are the highest reported values for stoichiometric
bulk samples at ambient pressure. We found a weak $H_{c2}$ anisotropy with
$\Gamma=H_{c2}^{ab}/H_{c2}^{c} < 1$, which is unusual among superconductors.
Under a magnetic field, the superconducting transition becomes broader and
asymmetric. Along with the low critical currents, this observation suggests an
inhomogeneous superconducting state. In fact, no trace of superconductivity is
observed through field-cooling--zero-field-cooling magnetization experiments. | J. Zabala, P. Pedrazzini, F. J. Castro, V. F. Correa | 2023-08-31T19:53:47Z | http://arxiv.org/abs/2309.00105v2 | # Enhanced weak superconductivity in trigonal \(\gamma\)-PtBi\({}_{2}\)
###### Abstract
Electrical transport experiments show superconductivity in a high-quality single crystal of trigonal \(\gamma\)-PtBi\({}_{2}\). The critical temperature shows a large dependence on the electrical current and in the limit of very low currents, a \(T_{c}=1.1\) K is observed, while a zero temperature critical field \(B_{c}(0)\)\(\approx\) 1.5 Tesla is estimated. These are the highest superconducting parameters reported (at ambient pressure) in a stoichiometric \(\gamma\)-PtBi\({}_{2}\) bulk sample so far. Under a magnetic field a strict zero resistance state is no longer observed even though an incipient superconducting transition is seen. Such a behavior is most probably associated with very low critical currents and is reminiscent of filamentary superconductivity. The superconducting state is elusive to magnetization measurements discarding a bulk phase down to \(T\) = 0.3 K.
pacs: 74.25.F-, 74.62.Bf, 73.43.Qt

Topological matter (TM) is arguably one of the current hot topics in condensed matter physics. Unlike traditional states of matter, where the physical properties are ultimately set by the inter-particle interactions, in TM the key role is played by symmetry. Though originally restricted to a few specific systems, the range of topological material candidates has grown immensely over the last years [1]. Electronic topological phases range from insulators to superconductors, showing singular surface states that may differ substantially from bulk states [2; 3]. The concurrence of different topological phases in a single system would be a major finding aimed at gaining further insight into TM physics.
PtBi\({}_{2}\) is quite a unique system. It crystallizes in four different polymorphs [4]. Two of them, cubic \(\beta\)-PtBi\({}_{2}\) and trigonal \(\gamma\)-PtBi\({}_{2}\), are topological semimetal candidates. Both phases are metastable at room temperature, so a thermal-quenching procedure is done during the crystal growth to retain those phases [5]. Despite this, high-quality single crystals are obtained with residual resistivities \(\rho_{0}<1\)\(\mu\Omega\)-cm and residual resistivity ratios RRR = \(\rho(300K)/\rho(0)\) of several hundreds [6; 7]. The semimetallic character is confirmed by the low carrier density \(n_{c}\sim 10^{20}\) cm\({}^{-3}\)[6; 8]. Nonetheless, the carrier mobility \(\mu>1\) m\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) is very high [6; 8] due to a combination of a large mean free path \(l_{0}>1\)\(\mu\)m [8; 9] and a low effective mass [10; 11].
Cubic \(\beta\)-PtBi\({}_{2}\) shows one of the largest non-saturating transverse magnetoresistances (MR = \(\left(R(B)-R(0)\right)/R(0)\) = \(1.1\)\(\times\)\(10^{5}\) at temperature \(T\) = 1.8 K and magnetic field \(B\) = 33 T) among semimetals [8]. This behavior is ascribed to nearly compensated electrons and holes. Hall effect measurements and a quadratic MR confirm this scenario [8]. However, the MR becomes linear at high fields, a behavior that is claimed to be associated with Dirac-like cones in the band structure [8]. Angle-resolved photoemission spectroscopy (ARPES) experiments along with density functional theory (DFT) show the existence of Dirac points close to the Fermi surface confirming \(\beta\)-PtBi\({}_{2}\) is a 3D Dirac semimetal [12; 13].
Trigonal \(\gamma\)-PtBi\({}_{2}\) also shows a semimetallic behavior [14] with a large magnetoresistance (MR = 1.5 \(\cdot\)\(10^{3}\) at \(T\) = 2 K and \(B\) = 9 T) [6]. The MR displays an intricate non-saturating sub-quadratic field dependence greatly dependent on the field direction. At some specific angles, the MR is linear [15; 11]. The presence of open orbits [15] and/or Dirac surface states [16] and/or triply degenerate point fermions [11] are among the possible mechanisms to explain this behavior. In any case, carrier compensation does not seem to be the primary source of the large MR.
Both polymorphs show pressure-induced superconductivity (SC) with a critical temperature \(T_{c}\sim\) 2 K, which remains almost unchanged up to 50 GPa. Pressure is detrimental to both the RRR and the MR while no trace of a pressure-induced structural transition is detected [17; 18]. The onset of SC is accompanied by an abrupt increase of the carrier concentration and a sudden drop of the mobilities in \(\beta\)-PtBi\({}_{2}\), while the electron-hole balance still holds [17]. In \(\gamma\)-PtBi\({}_{2}\), a sign change of the Hall coefficient occurs exactly at the onset of SC [18].
Ambient-pressure SC has also been reported in \(\gamma\)-PtBi\({}_{2}\). The nature of the superconducting state, however, is debatable. It was first observed in the electrical resistivity of a bulk single crystal with a \(T_{c}\) = 0.6 K and a critical field \(B_{c}(0)\) = 0.06 T. The superconducting transition is rather broad (\(\sim\) 0.2 K). Rh-doping, though unfavorable for the metallic character due to the disorder introduced, produces higher critical temperatures (\(T_{c}\) = 2.75 K). Nonetheless, the transition remains broad and it is considerably dependent on the electrical current [5].
Local enhancement of SC (\(T_{c}\) = 3.5 K) is observed in point-contact spectroscopy experiments. The origin of such effect seems to be a combination of local pressure and a change in the electron local density of states induced by the point contact [19].
Recently, robust SC (0.3 K \(\leq\)\(T_{c}\)\(\leq\) 0.4 K) has been observed in \(\gamma\)-PtBi\({}_{2}\) thin flakes. Anisotropic critical fields suggest that SC is 2D rather than surface SC. On the
other hand, detailed DFT calculations predict a type-I Weyl semimetal band structure due to a broken inversion symmetry and strong spin-orbit coupling. Were it confirmed, \(\gamma\)-PtBi\({}_{2}\) would be the first type-I Weyl semimetal with superconducting properties [20].
Two stunning results have been disclosed even more recently in \(\gamma\)-PtBi\({}_{2}\). Scanning tunneling microscopy (STM) experiments detect superconducting gaps as large as 20 meV equivalent to critical temperatures far above 100 K. The gaps are robust against a magnetic field \(B\) of several Tesla and a temperature of up to 5 K [21]. On the other hand, ARPES experiments combined with ab-initio calculations identify topological surface Fermi arcs that become superconducting at \(T\sim\) 10 K [22]. However, both results are dependent on the preparation and type of sample surfaces.
The true origin and nature of superconductivity in \(\gamma\)-PtBi\({}_{2}\) remains controversial. No unquestionable evidence supporting either bulk (3D or 2D) or surface SC has been reported so far. Additionally, a detailed study of the dependence of the superconducting state on the sample quality is still missing.
In this work we report on electrical transport experiments in a high-quality \(\gamma\)-PtBi\({}_{2}\) bulk single crystal. A superconducting transition is clearly detected. The critical temperature shows a large dependence on the electrical current and in the limit of very low currents a \(T_{c}\) = 1.1 K is observed. This is the highest \(T_{c}\) reported in a \(\gamma\)-PtBi\({}_{2}\) bulk sample so far. Under a magnetic field a strict zero resistance state is no longer observed even though an incipient superconducting transition is seen. Such a behavior is reminiscent of filamentary superconductivity. Indeed, the SC is elusive to magnetization measurements discarding a bulk phase down to \(T=\) 0.3 K.
Single crystals of \(\gamma\)-PtBi\({}_{2}\) are grown by the self-flux technique. The initial mixture (Pt:Bi = 2.7:97.3; total mass = 9 g) of pure elements (Bi: \(>\) 99.999%; Pt: \(>\) 99.98%) is placed in an alumina Canfield Crucible Set, which is then introduced into a quartz tube. The tube is evacuated and finally sealed after incorporating a small amount of Ar. The mixture is homogenized at 950 \({}^{\circ}\)C for 5 hours, then rapidly cooled (200 \({}^{\circ}\)C/hour) to 360 \({}^{\circ}\)C, followed by a slow ramp (8 \({}^{\circ}\)C/day) to 296 \({}^{\circ}\)C. At this temperature, the residual Bi-rich flux is removed using a centrifuge in a very rapid process, which also acts as a thermal quench. Through this process, platelet-like hexagonal-shaped single crystals are obtained.
A 1.4 \(\times\) 0.7 \(\times\) 0.2 mm\({}^{3}\) crystal labeled as M10 is separated for the magnetotransport experiments (see the left inset of Fig. 1). Energy-dispersive x-ray spectroscopy (EDS) and x-ray diffraction (XRD) scans confirm the correct stoichiometry and trigonal structure of the sample. Figure 1 shows a \(\theta-2\theta\) XRD scan corresponding to the (00l) family of planes. No spurious peaks other than Cu \(K_{\beta}\) reflections are observed. Lattice parameters \(c\) = 6.157(8) A and \(a\) = 6.59(1) A of the hexagonal conventional cell are calculated from (00l) and (h0h) reflections, in fair agreement with previously reported values [5; 11]. The rocking-curve (\(\omega\)-scan) FWHM for the (002) peak is 0.06\({}^{\circ}\) further supporting the good quality of the sample (right inset of Fig. 1).
For the magnetotransport experiments, M10 is cut into a bar geometry using a thin razorblade. The longest length runs along the [120] direction (see left inset of Fig. 1). The sample is then successively exfoliated using scotch tape to a final thickness \(e\) = 19(1) \(\mu\)m along the c-axis. Electrical contacts using EPOTEK H20E silver epoxy in a standard four-probe setup are fabricated. The electrical current flows in the [120] direction. Resistance is measured either with a LR-700 ac resistance bridge or
Figure 1: XRD \(\theta-2\theta\) scan of \(\gamma\)-PtBi\({}_{2}\) M10 crystal corresponding to the (00l) family planes. Left inset: crystal image showing the direction of the crystal axes. Right inset: XRD \(\omega\)-scan scan for the (002) peak.
Figure 2: Temperature dependence of the electrical resistivity at different magnetic fields. Inset: Magnetoresistance at different temperatures. The dashed line is a linear fit to the MR(2K) data above \(B\) = 10 T.
in dc mode using a high-gain EM-A10 preamplifier with a HP3457A multimeter and a Keithley 220 current source.
Figure 2 shows the temperature dependence of the zero-field electrical resistivity from room temperature down to 2 K. A metallic behavior is observed. The extrapolated \(\rho_{0}=0.15(2)\)\(\mu\Omega\cdot\)cm and RRR \(\sim\) 300 are comparable to those obtained from the best samples in the literature [6; 11]. A prominent magnetoresistance (MR(2K,9T) = 41) is also observed at low temperature in Fig. 2. At the lowest temperature, the MR is linear in field above \(B=10\) T (inset of Fig. 2). The MR is temperature independent below 4 K and it is drastically suppressed above 25 K. No Shubnikov-de Haas oscillations are observed for this field direction (\(B\parallel[001]\)). However, they unequivocally show up (not shown here) as the field is tilted away from this direction in agreement with previous reports [20].
Magnetotransport experiments are further extended to dilution fridge temperatures. Results are displayed in Fig. 3. A superconducting transition is clearly observed. The critical temperature (defined as the temperature at 50 % of the resistance in the normal state) is strongly dependent on the applied electrical current \(I\), reaching \(T_{c}=1.1\) K at \(I=0.05\) mA. This current yields an upper limit for the critical current density as low as \(J_{c}=0.6\) A/cm\({}^{2}\), supposing a homogeneous current distribution. The transition width, between the transition onset and the zero-resistance state, is \(\Delta\)T = 0.15 K at \(I=0.05\) mA and broadens systematically as the current is increased, as already observed [5]. Our \(T_{c}\) values, though, are higher and the transitions narrower.
As expected, SC is suppressed by a magnetic field as seen in the main panels of Fig. 3 for two magnetic field directions, \(B\parallel[001]\perp I\) (top panel) and \(B\parallel[120]\parallel I\) (bottom panel). An incipient superconducting transition is detected even for applied fields above 1 T. Two aspects of Fig. 3 are worth mentioning. First, the sizable MR observed in the normal state for \(B\perp I\) which is partially suppressed for \(B\parallel I\) (a small misalignment between \(B\) and \(I\) could explain the surviving MR). Second, and most surprising, is the fact that a strict zero-resistance state is not achieved under a magnetic field, even though an incipient transition is evident. This observation, strongly dependent on the current, is reminiscent of filamentary superconductivity (FSC): non-percolating superconducting _filaments_ or _islands_ embedded in a normal metal matrix [23], as sketched in the inset of Fig. 4. FSC has recently been suggested to occur in the relative trigonal compound PtSe\({}_{2}\)[24], a proposed Dirac semimetal [25]. Unlike PtBi\({}_{2}\), a non-zero resistance state is observed even at zero field in PtSe\({}_{2}\) and the proposed FSC is associated with inhomogeneous strain induced by high-rate thermal-cycles of the samples [24].
Figure 4 shows the phase diagram \(B-T\) for both directions of the magnetic field and at different applied currents. \(B_{c}(T)\) is obviously dependent on the current. Despite this, several things must be remarked. First, the negligible anisotropy between both configurations which is something unexpected in case of conventional surface SC. Particularly, given the laminar shape of the sample, surface SC should be negligible when \(B\parallel[001]\). Second, the upward curvature of \(B_{c}(T)\) close to \(T_{c}\). This behavior can be explained by multiband effects [26; 27] or filamentary superconductivity [28; 29]. Finally, the magnitude of the zero-temperature critical field. An estimate for an upper limit of \(B_{c}(0)\) can be inferred from the resistivity vs. field curves (insets of Fig. 3). The sudden increase of \(\rho(B,T<T_{c})\) at \(B\sim 1\) T signals the SC transition. On the other hand, \(\rho(B,T>T_{c})\) shows a smooth field dependence. Both curves merge at \(B\approx 1.5\) T, which is a good estimate of \(B_{c}(0)\). Along with the observed \(T_{c}\), these are the highest superconducting parameters reported in a stoichiometric \(\gamma\)-PtBi\({}_{2}\) bulk sample so far.
Given that \(\gamma\)-PtBi\({}_{2}\) is a metastable phase, a study of the phase stability under a thermal annealing process is
Figure 3: Very-low-temperature dependence of the electrical resistivity at different magnetic fields and applied currents. A SC transition is observed at \(T_{c}=1.1\) K. Top panel: \(B\parallel[001]\perp I\), ac and dc current. Bottom panel: \(B\parallel[120]\parallel I\), dc current. Insets: Magnetoresistance at two temperatures, above and below \(T_{c}\). The dashed line is an upper limit estimate of the critical field \(B_{c}(0)\) = 1.5 T.
performed using differential scanning calorimetry (DSC), aimed at searching for possible sources of sample inhomogeneities that could explain an eventual FSC. A \(\gamma\)-PtBi\({}_{2}\) single crystal from the same batch is first heated up from room temperature to 450\({}^{\circ}\)C at 5\({}^{\circ}\)C/min. Then it is cooled down to room temperature again along 10 min and finally the first ramp is repeated. The results are shown in Fig. 5 where phase transitions are assigned according to the Pt-Bi binary phase diagram [4].
During the first run, a wide exothermic peak is observed above \(T\approx 200^{\circ}\)C signaling a progressive transition from the metastable \(\gamma\)-phase to the stable orthorhombic \(\alpha\)-phase. At \(T=270^{\circ}\)C a narrow endothermic peak is associated with the equilibrium transition from the \(\alpha\)-phase to the \(\beta\)-phase. It is interesting that the broad exothermic peak extends above this \(\alpha\)-\(\beta\) transformation, indicating that not all the metastable \(\gamma\)-phase has already re-transformed into the corresponding stable phase (either \(\alpha\) or \(\beta\)). Another large endothermic peak is observed around \(T=420^{\circ}\)C corresponding to the equilibrium \(\beta\)-\(\gamma\) transition. During the second run, only the equilibrium transitions are observed implying that the cooling intermediate process is slow enough to allow the successive \(\gamma\)-\(\beta\)-\(\alpha\) re-transformation. A zoom of the low temperature region (inset of Fig. 5) shows that the partial decomposition of the metastable \(\gamma\)-phase begins at temperatures as low as \(T\approx 130^{\circ}\)C (where _run 1_ and _run 2_ curves split apart). These DSC results confirm that a moderate thermal manipulation can introduce inhomogeneities in the \(\gamma\)-PtBi\({}_{2}\) crystals (i.e., stable \(\alpha\)-phase grains), which could induce internal strains eventually giving rise to superconducting filaments [24].
Our magnetotransport experiments are consistent with filamentary superconductivity. In fact, we observe null Meissner screening in magnetization experiments practically discarding bulk superconductivity down to \(T=0.3\) K. However, our experiments cannot rule out the existence of intrinsic 2D in very thin samples or a component of surface SC (in very clean surfaces) as it was proposed in previous works. The need for further macroscopic and spectroscopic studies is thus evident.
The authors gratefully acknowledge N. Haberkorn and J. I. Facio for helpful discussions and a critic reading of the manuscript. Work partially supported by CONICET grant number PIP2021-11220200101796CO, ANPCyT grant number PICT2019-02396, Universidad Nacional de Cuyo (SIIP) grant number 06/C55T1.
|
2309.10316 | Constraining Thermal Emission of Pluto's Haze From Infrared Rotational
Lightcurves | The rotational lightcurves of the Pluto-Charon system were previously
believed to be solely attributed to their surfaces. However, a proposed
scenario of haze cooling \citep{2017Natur.551..352Z} suggests that the
atmospheric haze of Pluto could significantly contribute to mid-infrared
emission, which calls for a revisit of previous analyses. In this study, we
employ a Bayesian retrieval approach to constrain the haze emission from the
rotational lightcurves of the Pluto-Charon system. The lightcurves were
observed by the Spitzer and Herschel telescopes at 24 and 70 $\mu$m, and were
combined with the latest surface albedo maps of Pluto and Charon from the New
Horizons spacecraft. Our results show that including the haze emission is
consistent with all current observations, with the best-fit haze flux around
1.63 mJy. This is in agreement with the composition of Titan-like tholins.
However, the ``surface only" scenario, which excludes the haze contribution,
can still explain the observations. We conclude that the current data at 24
$\mu$m cannot constrain Pluto's haze emission due to the degeneracy with
Charon's surface emission. Regardless, some surface properties of Pluto are
well constrained by the shape of the lightcurves, with a thermal inertia of
approximately 8--10 MKS and a relatively low CH$_4$ emissivity of 0.3--0.5. We
suggest that observations by the JWST telescope at 18 $\mu$m, which can resolve
Pluto from Charon, could directly probe the haze emission of Pluto due to the
low surface emission at that wavelength. | Linfeng Wan, Xi Zhang, Jason D. Hofgartner | 2023-09-19T04:57:16Z | http://arxiv.org/abs/2309.10316v1 | # Constraining Thermal Emission of Pluto's Haze From Infrared Rotational Lightcurves
###### Abstract
The rotational lightcurves of the Pluto-Charon system were previously believed to be solely attributed to their surfaces. However, a proposed scenario of haze cooling (Zhang et al., 2017) suggests that the atmospheric haze of Pluto could significantly contribute to mid-infrared emission, which calls for a revisit of previous analyses. In this study, we employ a Bayesian retrieval approach to constrain the haze emission from the rotational lightcurves of the Pluto-Charon system. The lightcurves were observed by the Spitzer and Herschel telescopes at 24 and 70 \(\mu\)m, and were combined with the latest surface albedo maps of Pluto and Charon from the New Horizons spacecraft. Our results show that including the haze emission is consistent with all current observations, with the best-fit haze flux around 1.63 mJy. This is in agreement with the composition of Titan-like tholins. However, the "surface only" scenario, which excludes the haze contribution, can still explain the observations. We conclude that the current data at 24 \(\mu\)m cannot constrain Pluto's haze emission due to the degeneracy with Charon's surface emission. Regardless, some surface properties of Pluto are well constrained by the shape of the lightcurves, with a thermal inertia of approximately 8-10 MKS and a relatively low CH\({}_{4}\) emissivity of 0.3-0.5. We suggest that observations by the JWST telescope at 18 \(\mu\)m, which can resolve Pluto from Charon, could directly probe the haze emission of Pluto due to the low surface emission at that wavelength.
planets: surface - planets: Pluto - techniques: retrieval
## 1 Introduction
Rotational lightcurves provide insights into the surface properties and inhomogeneities of Pluto and Charon as the surface units rotate in and out of view. While reflection lightcurves mainly reveal the body's albedo and scattering properties, emission lightcurves at infrared and radio wavelengths offer valuable information about the surface and subsurface, such as temperature, thermal inertia, and emissivity. Before the New Horizons flyby, Pluto's surface albedo distribution could only be resolved with poor precision using techniques such as Pluto-Charon mutual events (e.g., Buie et al., 1992; Young & Binzel, 1993; Young et al., 1999, 2001) and direct imaging by HST (e.g., Buie et al., 1997; Stern et al., 1997; Buie et al., 2010, 2010).
Nevertheless, by combining reflection and emission lightcurves from HST, ISO, Spitzer, and Herschel, Lellouch et al. (2000, 2011, 2016) found that Pluto's surface temperature is non-uniform. They also constrained the thermal inertia and emissivity of the surfaces of Pluto and Charon. Multiple solutions have been identified, primarily due to the inability to resolve the Pluto-Charon system and the various choices of Pluto's surface map (Figure 4 in Lellouch et al., 2011). Although radio telescopes such as the Atacama Large Millimeter/submillimeter Array (ALMA), Submillimeter Array (SMA), and Karl G. Jansky Very Large Array (VLA) are able to observe Pluto and Charon individually, the seasonal effect and the mixing of subsurface and surface information at radio wavelengths have complicated data interpretation (Lellouch et al., 2016).
The New Horizons flyby of the Pluto-Charon system in 2015 provided a wealth of new information that calls for revisiting previous rotational lightcurve data. High-definition images of Pluto and Charon obtained by the Long Range Reconnaissance Imager (LORRI, Cheng et al., 2008) and the Multispectral Visible Imaging Camera (MVIC, Reuter et al., 2008) provided detailed spatial distribution of geological units, and further spectral analysis of data by the Linear Etalon Imaging Spectral Array (LEISA, Reuter et al., 2008) revealed complex ice compositions on different surface units composed of N\({}_{2}\), CO, CH\({}_{4}\), H\({}_{2}\)O, etc. It was found that Pluto's surface has complex volatile distributions that are closely related to ice sublimation, gas condensation, and glacial flow (Grundy et al., 2016; Moore et al., 2016). On the other hand, Charon has a relatively uniform surface with a reddish northern polar region (Grundy et al., 2016; Stern et al., 2017). The preliminary surface albedo maps of Pluto and Charon based on New Horizons data, derived by Buratti et al. (2017, 2019), contain much greater detail than the surface maps in previous studies. Therefore, it is important to revisit previous thermal rotational lightcurve data based on the new albedo maps, which is the first motivation of this study.
The second motivation is related to the hazy atmosphere of Pluto. Traditionally, the rotational lightcurve analysis of the Pluto-Charon system was focused only on their surfaces. However, during the New Horizons flyby, it was discovered that Pluto's atmospheric temperature was much colder than previously thought based on the gas-only model. Proposed solutions include cooling from the high concentration of super-saturated water vapor (Strobel and Zhu, 2017) or from the atmospheric haze (Zhang et al., 2017; Wan et al., 2021) that was clearly visible in the New Horizons images (Gladstone et al., 2016). If haze is the main coolant in Pluto's atmosphere, it is predicted that the thermal emission from the haze could significantly contribute to the observed mid-infrared flux, such as that observed at 23.68 (hereafter 24) \(\mu\)m by Spitzer (Lellouch et al., 2011), while the haze contribution might be negligible in the far-infrared wavelengths like 71.42 (hereafter 70) \(\mu\)m by Spitzer (Lellouch et al., 2011) or longer wavelengths by Herschel (Lellouch et al., 2016). This effect is important if the haze particles behave optically similar to Titan-like tholins (Zhang et al., 2017). Recent studies suggested that the haze particles might be primarily composed of hydrocarbon and nitrile ices, which might produce a much smaller cooling effect (Lavvas et al., 2021). A later study found that the ice particles could also explain the cold temperature of Pluto's atmosphere (Wan et al., 2021). In this case, the infrared flux from the icy haze seems much smaller than the Titan-like organic haze, but its contribution to the thermal lightcurve might not be negligible. The large thermal emission from haze could obscure the surface features and greatly reduce the relative amplitude of rotational lightcurve variations in the mid-infrared. As a result, including the haze emission in the rotational lightcurve analysis would not only lead to a better understanding of how the haze affects the derivation of surface properties from observed lightcurves, but also provide an opportunity to constrain the haze emission, which is largely uncertain in atmospheric models due to the lack of constraints from composition and optical
constants for icy particles (Lavvas et al., 2021, Section 2.2 and Discussion in Wan et al., 2021) and non-icy tholin-like materials (Khare et al., 1984; Jovanovic et al., 2020, 2021; Moran et al., 2022).
To better constrain both the surface properties and haze emission, in this study we also introduce a Bayesian retrieval framework that can efficiently evaluate the posterior distribution of parameters. The Bayesian inference technique has been widely used in analyzing remote sensing data on solar system planets (e.g., Zhang et al., 2013; Fan et al., 2019) and exoplanets (e.g., Line et al., 2011, 2013). It has also been used to retrieve the chemical abundances in Pluto's atmosphere from occultation data (Young et al., 2018) as well as the haze distribution like particle sizes (Fan et al., 2022). However, it has not been applied to Pluto's rotational lightcurve analysis before. With detailed surface albedo maps of Pluto and Charon now available, using Bayesian inference might provide useful constraints on the posterior distribution of the derived surface properties and haze emission.
We analyze three rotational lightcurves of the Pluto-Charon system in the infrared, selecting data from Spitzer at 24 and 70 \(\mu\)m in 2004 (Lellouch et al., 2011), as well as data from Herschel at 70 \(\mu\)m in 2012 (Lellouch et al., 2016). We select the 24 \(\mu\)m data as the haze emissions could be significant at this wavelength (Zhang et al., 2017), while the 70 \(\mu\)m data was selected as haze emissions are not important at this wavelength (as explained in Section 2.3). By using both channels, we wish to differentiate the contribution of haze in the lightcurves. We include the Herschel data because of its better quality compared to the Spitzer data. However, we do not include the Herschel data at sub-mm wavelengths (such as 500 \(\mu\)m, Lellouch et al., 2016) due to the significant subsurface contribution and seasonal effects that could complicate the haze flux retrieval analysis.
Our paper is organized as follows: In Section 2, we introduce the surface maps of Pluto and Charon from New Horizons, our thermal physical model, as well as the retrieval models; In Section 3, we discuss our retrieval results in two scenarios, with and without the haze contribution. We also use our best-fit models to predict the rotational lightcurves of several channels on JWST's Mid-Infrared Instrument (MIRI) in Section 3; Finally, we conclude this paper in Section 4.
## 2 Methods
### Maps of Albedo and Emissivity
The surface temperatures of Pluto and Charon are greatly affected by albedo, emissivity, and thermal inertia. The distributions of all three parameters on Pluto are primarily determined by the distribution of volatile ices and non-volatile materials (Grundy et al., 2016; Moore et al., 2016; Schmitt et al., 2017). Surfaces of the Pluto-Charon system were observed during the New Horizons flyby in 2015, including high-resolution mosaics of the "near side" and \(\sim\)20\(\times\) poorer imaging of the "far side" (Buratti et al., 2017; Stern et al., 2021).
As a disk-integrated quantity, Bond albedo was widely used to measure the global energy balance of a planetary body. To calculate the local surface temperature, here we use the bolometric, hemispherical (disk-resolved) albedo. Preliminary maps of disk-resolved albedo for Pluto and Charon using panchromatic LORRI observations were presented in Buratti et al. (2017). Hofgartner et al. (2023) updated the Pluto albedo map by fitting the lunar-Lambert photometric function (e.g., Buratti and Veverka, 1983) to both LORRI and MVIC measurements from a wide range of imaging geometries and by considering different photometric functions for Pluto's extreme dark and extreme bright terrains. The bolometric albedo was approximated using New Horizons LORRI panchromatic
and MVIC panchromatic and color filter observations. The updated albedo map of Pluto is shown in Figure 1(a) and the map of Charon is shown in Figure 1(d).
In our study, we directly apply these latest albedo maps of Pluto and Charon from the New Horizons 2015 flyby to explain the 2004 Spitzer data and 2012 Herschel data. In other words, we assume the albedo distribution does not change between 2004 and 2015. For Pluto, this is only an approximation because the ice distribution on its surface might change with season (Bertrand and Forget, 2016; Bertrand et al., 2020). However, this is the only high-resolution map of Pluto's surface to date. Before the New Horizons flyby, Pluto's surface was barely resolved. Thus, Lellouch et al. (2011) chose to use Pluto's HST reflection lightcurve in 2002-2003 to constrain the albedo of different geological units in their assumed surface maps. One might achieve a more accurate albedo map of Pluto's surface brightness in 2003 based on both the New Horizons and the HST data.
Figure 1: Surface maps of Pluto and Charon derived from New Horizons observations. Part of the southern hemispheres were in winter darkness (polar night) during the flyby, and thus the albedo distributions are not well determined south of \(\sim\) -33\({}^{\circ}\). (a) Map of Pluto’s bolometric hemispherical albedo from Hofgartner et al. (2023). (b) Histogram of Pluto’s albedo distribution, where local minima of 0.36 and 0.82 are adopted as unit boundaries. (c) The distribution of different albedo units on Pluto’s surface. (d) Map of Charon’s albedo from Buratti et al. (2017). Note that, Charon rotates synchronously with Pluto and their lit hemispheres exhibit a 180\({}^{\circ}\) longitude difference.
That is beyond the scope of this study as it requires a detailed investigation using an atmosphere-surface model of Pluto (e.g., Young, 2017; Bertrand et al., 2020). As shown in Section 3, using the New Horizons albedo maps can produce a decent fit of both the 2004 Spitzer and 2012 Herschel observations in our retrieval.
Due to the observing geometry, the New Horizons maps do not cover locations south of \(\sim\) -33\({}^{\circ}\) for both Pluto and Charon. Because the sub-solar and sub-observer latitudes were 34.5\({}^{\circ}\) and 32.9\({}^{\circ}\) in 2004 (Spitzer) and about 47.0\({}^{\circ}\) in 2012 (Herschel), the missing albedos in the southern hemisphere would not strongly impact the lightcurves due to the projection effect. We tested a range of albedo values from 0 to 1 in those regions, and confirmed that our results were not changed. In our study, we utilize maps regridded to 120\(\times\)60 pixels for both Pluto and Charon, corresponding to a spatial resolution of 3\({}^{\circ}\). We also tested lightcurve simulations with higher spatial resolution, and the results were unchanged.
The emissivity distributions of Pluto and Charon are not directly constrained by New Horizons observations. For simplicity, we follow the approach in Lellouch et al. (2011) that separated Pluto's surface into three geological units based on albedo and assumed a constant emissivity for each unit. Based on the histogram of Pluto's updated albedo distribution from New Horizons, we associate three composition units with the three albedo units in Figure 1(b): dark H\({}_{2}\)O ice/tholin mix with the darkest ices of unit 3 (0\(-\)0.36), CH\({}_{4}\) ice of intermediate albedo with unit 2 (0.36\(-\)0.82), and bright N\({}_{2}\) ice with the brightest ices of unit 1 (0.82\(-\)1.0). The divided units (Figure 1c) appear to represent an approximate distribution of icy components identified by LEISA (Grundy et al., 2016; Schmitt et al., 2017; Gabasova et al., 2021; Emran et al., 2023), although multiple ice mixtures show a much more complex pattern in the infrared LEISA maps. In contrast, Charon's surface (Figure 1d) is relatively uniform, and we use only one emissivity value. The thermal inertia distributions are also less well constrained than the albedo, and we use a single value for Pluto and a separate single value for Charon.
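As a concrete illustration of this unit assignment, the short sketch below shows one way the three-unit segmentation could be applied to a gridded albedo map, assuming the map is stored as a NumPy array. The threshold values 0.36 and 0.82 come from the histogram in Figure 1(b); the function and array names are illustrative and not from the original paper.

```python
import numpy as np

# Unit boundaries taken from the local minima of Pluto's albedo histogram (Figure 1b).
UNIT_EDGES = [0.36, 0.82]

def assign_albedo_units(albedo_map):
    """Label each pixel of an (nlat, nlon) albedo array with a geological unit.

    Returns an integer array: 3 (dark H2O ice/tholin mix) for A < 0.36,
    2 (intermediate CH4 ice) for 0.36 <= A < 0.82, and 1 (bright N2 ice) otherwise.
    """
    bins = np.digitize(albedo_map, UNIT_EDGES)  # 0, 1, 2 for the three albedo ranges
    return 3 - bins

# Illustrative 60x120 (lat x lon) map at the 3-degree resolution used in the text.
albedo = np.random.uniform(0.0, 1.0, size=(60, 120))
units = assign_albedo_units(albedo)
```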
### Thermophysical Model of Surface
Similar to previous work (Spencer et al., 1989; Lellouch et al., 2000, 2011; Forget et al., 2017), we model the surface and subsurface temperature distribution considering incoming solar heating, surface thermal emission, and thermal conduction for Pluto and Charon. Sublimation and deposition of volatiles are not included. At every point on the surface, the temperature (\(T\)) of subsurface layers at a time (\(t\)) and depth (\(z\)) obeys a diffusion equation:
\[\rho c_{p}\frac{\partial T}{\partial t}=\frac{\partial}{\partial z}(k\frac{ \partial T}{\partial z}), \tag{1}\]
where \(\rho\) (kg m\({}^{-3}\)) is the density, \(c_{p}\) (J K\({}^{-1}\) kg\({}^{-1}\)) is the specific heat capacity and \(k\) (W m\({}^{-1}\) K\({}^{-1}\)) is the thermal conductivity. \(\rho c_{p}=10^{6}\) J m\({}^{-3}\) K\({}^{-1}\) is adopted (Forget et al., 2017) everywhere at any depth on Pluto. The upper (\(z=0\)) and lower (\(z=\infty\)) boundary conditions are
\[k\frac{\partial T}{\partial z}|_{z=0}+Q_{s}=E_{s}, \tag{2}\] \[\frac{\partial T}{\partial z}|_{z=\infty}=0, \tag{3}\]
respectively. At the surface, thermal conduction and the absorbed insolation (\(Q_{s}\)) balance with the infrared emission to space (\(E_{s}\)) (Hayne et al., 2017). We assumed no heat flux at the lower boundary.
To intuitively understand how deep the surface heat flux can propagate downwards to affect the subsurface temperature, we can introduce the skin depth (\(l_{s}\))
\[l_{s}=\frac{\Gamma}{\rho c_{p}}\sqrt{\frac{P}{\pi}}, \tag{4}\]
where \(P\) is the period of stellar flux forcing and \(\Gamma=\sqrt{k\rho c_{p}}\) is the thermal inertia. The depth of \(8l_{s}\) approximates the lower boundary, below which the periodic insolation changes hardly impact the substrate temperature (Spencer et al., 1989).
Pluto has two dominant periods. For the diurnal cycle, \(P_{day}\) = 6.4 Earth-days. If we assume the thermal inertia in the shallow subsurface is \(\Gamma_{day}\) = 20 MKS (J s\({}^{-\frac{1}{2}}\) m\({}^{-2}\) K\({}^{-1}\)) from Forget et al. (2017), the corresponding thermal diffusivity is \(k\) = 4\(\times 10^{-4}\) W m\({}^{-1}\) K\({}^{-1}\) and the lower boundary depth is \(8l_{s}\) = 0.067 m. For the seasonal cycle, \(P_{year}\) = 248 Earth-years. Thermal inertia of the deep subsurface \(\Gamma_{year}\) = 800 MKS (Forget et al., 2017) corresponds to \(k\) = 0.64 W m\({}^{-1}\) K\({}^{-1}\) and \(8l_{s}\) = 320 m, much deeper than the diurnal skin depth. Pluto is assumed to have homogeneous thermal inertia and conductivity; these values of \(\Gamma\) and \(k\) are used at all locations. Because our study focuses on rotational lightcurves, we only consider the shallow subsurface for the diurnal cycle in a self-rotating model, in line with Spencer et al. (1989). In Appendix A, we also test both diurnal and seasonal forcing using a two-layer, orbit-rotating model, which has a shallow subsurface with low thermal inertia overlaying a deep subsurface with high thermal inertia. The full-orbit simulations using this two-layer model show that the diurnal surface temperature variation in our self-rotating model is not greatly impacted by the seasonal cycle.
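The skin-depth estimates quoted above can be reproduced in a few lines. The sketch below is only a numerical check, assuming the values given in the text (\(\rho c_{p}=10^{6}\) J m\({}^{-3}\) K\({}^{-1}\), \(\Gamma\) = 20 and 800 MKS, periods of 6.4 Earth-days and 248 Earth-years); it is not part of the original paper.

```python
import numpy as np

RHO_CP = 1.0e6  # rho * c_p [J m^-3 K^-1], from Forget et al. (2017)

def skin_depth(gamma, period):
    """Thermal skin depth l_s = Gamma / (rho c_p) * sqrt(P / pi), in metres."""
    return gamma / RHO_CP * np.sqrt(period / np.pi)

P_day = 6.4 * 86400.0               # Pluto rotation period [s]
P_year = 248.0 * 365.25 * 86400.0   # Pluto orbital period [s]

# Diurnal: Gamma = 20 MKS -> k = Gamma^2 / (rho c_p) ~ 4e-4 W m^-1 K^-1, 8 l_s ~ 0.067 m
print(20.0**2 / RHO_CP, 8.0 * skin_depth(20.0, P_day))
# Seasonal: Gamma = 800 MKS -> k ~ 0.64 W m^-1 K^-1, 8 l_s ~ 320 m
print(800.0**2 / RHO_CP, 8.0 * skin_depth(800.0, P_year))
```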
Evolution of the temperature is solved with a central difference explicit method and can be discretized by
\[T_{j}^{n+1}=T_{j}^{n}+dt\,\frac{k_{j}}{(\rho c_{p})_{j}}\,\frac{1}{(z_{j+1}-z_{j-1})/2}\left(\frac{T_{j+1}^{n}-T_{j}^{n}}{z_{j+1}-z_{j}}-\frac{T_{j}^{n}-T_{j-1}^{n}}{z_{j}-z_{j-1}}\right), \tag{5}\]
where the subscripts of \(j\) and \(n\) represent different levels and numerical time steps, respectively. Similar to Hayne et al. (2017), the boundary conditions with depth can be discretized by
\[\begin{cases}k_{0}\frac{-3T_{0}^{n+1}+4T_{1}^{n+1}-T_{2}^{n+1}}{2(z_{1}-z_{0}) }=\epsilon\sigma T_{0}^{n+1}{}^{4}-(1-A)F_{0}^{n+1},\\ T_{N}^{n+1}=T_{N-1}^{n+1},\end{cases} \tag{6}\]
where \(A\) is the albedo (bolometric hemispherical albedo for Pluto and disk-resolved approximate Bond albedo for Charon), \(\epsilon\) is the bolometric emissivity, \(\sigma\) is the Stefan-Boltzmann constant, and \(F_{0}\) (W m\({}^{-2}\)) is the insolation that varies with time and location. For a given time and orbital phase, we first calculate the sub-solar point on Pluto using Equation (15) in Ohno & Zhang (2019) and then the value of \(F_{0}\) at any surface point based on its incident angle.
The model is divided into 15 uneven vertical layers, with the first two starting at a thickness of \(10^{-3}\) m, below which the thickness increases by a factor of 1.5 between adjacent layers. The total depth of the model is around 0.3 m, much deeper than \(8l_{s}\) (less than 0.1 m) for \(\Gamma\) ranging from 20 to 30, as suggested in Lellouch et al. (2011). The thermal conduction timescale near the surface is \(\sim\)625 s, increasing with depth due to larger layer thickness. To accurately resolve a Pluto day during rotation, we set 1800 time steps with a time step of \(\sim\)300 s. The initial temperature profile is
isothermal based on the surface's equilibrium temperature under diurnal-mean insolation. For each point on the surface, we run the model under diurnal insolation forcing until the diurnal variation of the surface temperature does not change between two subsequent rotations. Typically, five rotations are sufficient to reach convergence.
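To make Equations (5)-(6) and the grid just described concrete, the following sketch advances the subsurface temperature at a single surface point through several rotations. It is a simplified, stand-alone illustration rather than the authors' code: the layer spacing, time step, and convergence criterion follow the text, but the insolation geometry, the Newton iteration used for the nonlinear surface boundary, and all variable names are our own assumptions.

```python
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
RHO_CP = 1.0e6           # rho * c_p [J m^-3 K^-1]
P_DAY = 6.4 * 86400.0    # Pluto rotation period [s]

def layer_depths(n_layers=15, dz0=1.0e-3, growth=1.5):
    """Node depths: first two layers dz0 thick, then thickness grows by x1.5 per layer."""
    dz = [dz0, dz0] + [dz0 * growth**i for i in range(1, n_layers - 1)]
    return np.concatenate(([0.0], np.cumsum(dz)))

def run_diurnal(albedo, emissivity, gamma, lat, subsolar_lat, F_sun,
                n_steps=1800, n_rotations=5):
    """Explicit diurnal solver for one surface point; returns the final T(z) profile."""
    k = gamma**2 / RHO_CP                      # thermal conductivity [W m^-1 K^-1]
    z = layer_depths()
    dt = P_DAY / n_steps                       # ~300 s, stable for dz0 = 1 mm and k ~ 4e-4
    # isothermal start at the equilibrium temperature under diurnal-mean insolation
    h = 2.0 * np.pi * np.arange(n_steps) / n_steps
    mu_mean = np.mean(np.maximum(np.sin(lat) * np.sin(subsolar_lat) +
                                 np.cos(lat) * np.cos(subsolar_lat) * np.cos(h), 0.0))
    T = np.full(z.size, max(((1.0 - albedo) * F_sun * mu_mean /
                             (emissivity * SIGMA))**0.25, 20.0))
    for _ in range(n_rotations):
        for n in range(n_steps):
            # instantaneous absorbed insolation from the local hour angle
            mu0 = max(np.sin(lat) * np.sin(subsolar_lat) +
                      np.cos(lat) * np.cos(subsolar_lat) * np.cos(h[n]), 0.0)
            F0 = F_sun * mu0
            Tn = T.copy()
            # interior update, Eq. (5): divergence of the conductive flux
            for j in range(1, z.size - 1):
                T[j] = Tn[j] + dt * k / RHO_CP / ((z[j + 1] - z[j - 1]) / 2.0) * (
                    (Tn[j + 1] - Tn[j]) / (z[j + 1] - z[j]) -
                    (Tn[j] - Tn[j - 1]) / (z[j] - z[j - 1]))
            T[-1] = T[-2]                      # zero-flux lower boundary
            # nonlinear surface boundary, Eq. (6), solved with a few Newton steps
            for _ in range(5):
                f = (k * (-3.0 * T[0] + 4.0 * T[1] - T[2]) / (2.0 * (z[1] - z[0]))
                     - emissivity * SIGMA * T[0]**4 + (1.0 - albedo) * F0)
                dfdT = -3.0 * k / (2.0 * (z[1] - z[0])) - 4.0 * emissivity * SIGMA * T[0]**3
                T[0] -= f / dfdT
            T[0] = max(T[0], 10.0)             # guard against overshoot in early steps
    return T
```

Looping such a routine over every grid point of the albedo map, and then applying the volatile-ice rules described in the next paragraph, would yield surface-temperature maps analogous to those shown in Figure 4.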
Given the albedo maps, emissivity, and thermal inertia, we can calculate the temporal and spatial distribution of the surface temperature of Pluto and Charon with our thermophysical model. However, the model does not consider the ice sublimation-condensation processes, which might impact the surface ice temperature through the latent heat exchange. Following Lellouch et al. (2011), we include additional treatment of N\({}_{2}\) and CH\({}_{4}\) ice temperature in our model. For the N\({}_{2}\) ice, given its high albedo (Figure 1) and bolometric emissivity of 0.5 (Lellouch et al., 2011), the N\({}_{2}\) ice radiative-equilibrium temperature is low and is mainly controlled by the sublimation-condensation process. We can just fix the N\({}_{2}\) ice temperature to its vapor-ice equilibrium in our model. Based on the surface pressure of Pluto in 2004 and 2012 from stellar occultation and modeling (Meza et al., 2019) and the saturation vapor pressure of the N\({}_{2}\) ice in the beta form (Fray and Schmitt, 2009), we estimate the N\({}_{2}\) ice temperature to be 36.35 K in 2004 and 36.86 K in 2012. For the CH\({}_{4}\) ice, it was found that the sublimation cooling effect becomes important when the CH\({}_{4}\) ice temperature exceeds \(\sim\)54 K (Stansberry et al., 1996). Below 54 K, the radiative process controls the surface temperature. Thus, we set a maximum temperature of 54 K for the CH\({}_{4}\) ice, which was also adopted in Lellouch et al. (2000, 2011).
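The two volatile-ice rules just described amount to a simple post-processing step on the modeled surface-temperature map. In the sketch below, the unit labels follow Figure 1(c) and the equilibrium temperatures and the 54 K cap are the values quoted in the text; the function name and array layout are our own assumptions.

```python
import numpy as np

T_N2_EQUILIBRIUM = {2004: 36.35, 2012: 36.86}   # beta-N2 vapor-ice equilibrium [K]
T_CH4_MAX = 54.0                                 # sublimation-cooling cap for CH4 ice [K]

def apply_ice_constraints(T_surface, unit_map, year):
    """Overwrite radiative-conductive temperatures for the volatile-ice units.

    T_surface and unit_map are (nlat, nlon) arrays; unit 1 = N2 ice, unit 2 = CH4 ice,
    unit 3 = dark H2O ice/tholin mix (left unchanged).
    """
    T = T_surface.copy()
    T[unit_map == 1] = T_N2_EQUILIBRIUM[year]                    # pin N2 ice to vapor equilibrium
    T[unit_map == 2] = np.minimum(T[unit_map == 2], T_CH4_MAX)   # cap CH4 ice at 54 K
    return T
```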
### Outgoing Thermal Emission
To model the rotational lightcurves, we first define the sub-observer point (\(\alpha_{0}\), \(\delta_{0}\)), where \(\delta_{0}\) is the latitude and \(\alpha_{0}=\omega t\) is the longitude at time \(t\) with a rotation rate \(\omega=2\pi/P_{day}\). The outgoing surface emission (\(F_{s}\)) from Pluto and Charon at a given wavelength (\(\lambda\)), when viewed from the sub-observer point, can be calculated as follows:
\[F_{s,\lambda}(\alpha_{0}(t),\delta_{0})=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \int_{0}^{2\pi}\epsilon_{\lambda}(\alpha,\delta)B_{\lambda,T}(\alpha,\delta) \cos(\delta)\cos(\Theta)d\delta d\alpha, \tag{8}\]
where \(\epsilon_{\lambda}\) is the spectral thermal emissivity at wavelength \(\lambda\), and \(B\) (W m\({}^{-2}\) Sr\({}^{-1}\)\(\mu\)m\({}^{-1}\)) is the blackbody emission determined by the surface temperature at a surface location (\(\alpha\), \(\delta\)). \(\cos(\Theta)=\max[\sin(\delta_{0})\sin(\delta)+\cos(\delta_{0})\cos(\delta) \cos(\alpha-\alpha_{0}),0]\), where \(\Theta\) means the angular distance between the sub-observer point (\(\alpha_{0}\), \(\delta_{0}\)) and surface locations (\(\alpha\), \(\delta\)). We generate the rotational emission lightcurves by using a fixed sub-observer latitude (\(\delta_{0}\)) and varying the time \(t\) from 0 to \(2\pi/\omega\).
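A possible implementation of Equation (8) on a regular longitude-latitude grid is sketched below. The Planck function and the visibility factor \(\cos(\Theta)\) follow the definitions above; converting the integral into an observed flux density at Earth (e.g., mJy) additionally requires the solid-angle factor \((R/\Delta)^{2}\) and a Jansky conversion, which are left out here, and all function names are illustrative.

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_lambda(wavelength_um, T):
    """Planck spectral radiance B_lambda(T) in W m^-2 sr^-1 um^-1."""
    lam = wavelength_um * 1.0e-6
    return 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T)) * 1.0e-6

def disk_integrated_emission(T_map, eps_map, wavelength_um, sub_obs_lon, sub_obs_lat):
    """Evaluate the double integral of Eq. (8) for one sub-observer longitude."""
    nlat, nlon = T_map.shape
    lat = np.linspace(-np.pi / 2.0, np.pi / 2.0, nlat)
    lon = np.linspace(0.0, 2.0 * np.pi, nlon, endpoint=False)
    LON, LAT = np.meshgrid(lon, lat)
    # cos(Theta): projection of each surface element toward the sub-observer point
    cos_theta = np.maximum(np.sin(sub_obs_lat) * np.sin(LAT) +
                           np.cos(sub_obs_lat) * np.cos(LAT) * np.cos(LON - sub_obs_lon), 0.0)
    integrand = eps_map * planck_lambda(wavelength_um, T_map) * np.cos(LAT) * cos_theta
    return np.sum(integrand) * (np.pi / nlat) * (2.0 * np.pi / nlon)

def rotational_lightcurve(T_maps, eps_map, wavelength_um, sub_obs_lat):
    """Fixed sub-observer latitude, sub-observer longitude swept over one rotation.

    T_maps is a sequence of surface-temperature maps, one per rotational phase.
    """
    lons = np.linspace(0.0, 2.0 * np.pi, len(T_maps), endpoint=False)
    return np.array([disk_integrated_emission(T, eps_map, wavelength_um, lon, sub_obs_lat)
                     for T, lon in zip(T_maps, lons)])
```

Scaling the result by the appropriate \((R/\Delta)^{2}\) factor for each body and summing Pluto, Charon, and (at 24 \(\mu\)m) a constant haze term would give model lightcurves directly comparable to those in Figure 3.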
Previous thermal emission calculations only considered the contribution from Pluto's and Charon's surfaces (Lellouch et al., 2011) but neglected Pluto's haze flux, which might significantly affect the total outgoing emission. Pluto's haze extends several hundred kilometers above the surface, and the particles can scatter and absorb solar radiation in the optical wavelengths and emit in the mid-infrared. Due to the uncertainty in haze composition, the estimated haze infrared flux at 24 \(\mu\)m from atmospheric models differs by more than an order of magnitude, ranging from 0.1\(\times\) to a few mJy (Zhang et al., 2017; Lavvas et al., 2021; Wan et al., 2021). The latter is comparable to the surface emission of Pluto at the same wavelength (Lellouch et al., 2011). Thus, it is necessary to include Pluto's haze when analyzing the infrared spectra and thermal lightcurves of the Pluto-Charon system.
We need to consider several facts to account for the effects of Pluto's haze in the emission calculation. First, the haze might block part of the thermal emission from the surface, obscuring the surface features. However, the infrared optical depth of the haze is too small (\(\sim 10^{-3}\) at 24 \(\mu\)m, Zhang et al., 2017) to be important. Therefore, we can directly add the haze emission to the surface emission in our model.
Second, the haze emission contributes differently to the total outgoing flux at different wavelengths. The haze emission might be as important as the surface emission in the mid-infrared like 24 \(\mu\)m, but it is negligible in the far-infrared like 70 \(\mu\)m. The haze flux, calculated assuming Titan-like tholins in Zhang et al. (2017), is about 2 mJy at 70 \(\mu\)m, much smaller than the blackbody emission of Pluto's surface at this wavelength, which is over 100 mJy. As a result, in this study, we only consider the haze emission at shorter wavelengths such as 18 or 24 \(\mu\)m but not at longer wavelengths such as 70 \(\mu\)m.
Lastly, Pluto's global haze layers are not entirely spatially uniform (Cheng et al., 2017) as shown in the north-south brightness asymmetry in the LORRI images. 3D haze transport simulation on Pluto (Bertrand & Forget, 2017) also shows that more haze was seen at the North pole due to the larger incoming solar flux. However, Pluto's gases and haze are predicted to be quickly mixed over all longitudes by the zonal circulation, therefore justifying the approximation of a longitudinally uniform haze layer. Because this study focuses on the disk-averaged flux, we assume a constant haze emission when Pluto is rotating. The observed inverted temperature profile (Hinson et al., 2017) above Pluto's cold icy surface was classically understood as being primarily caused by thermal conduction and CH\({}_{4}\) heating/cooling radiative equilibrium, with some additional role due to CO, HCN, and C\({}_{2}\)H\({}_{2}\) cooling (Yelle & Lunine, 1989; Strobel et al., 1996; Strobel & Zhu, 2017). However, it has recently been shown that haze radiative heating/cooling (Zhang et al., 2017) and eddy heat transport (Wan et al., 2021) could dominate the region. If haze is the main coolant, the local surface temperature does not strongly influence the haze emission whose contribution mainly comes from 20 km above the surface where the atmospheric temperature maximizes (Hinson et al., 2017).
### Bayesian Inference Retrieval
Directly solving for the surface properties of Pluto and Charon, as well as Pluto's haze flux, is a challenging task due to the complexity of the thermophysical model and lightcurve calculations involved. We choose a Bayesian retrieval method, which has been widely used in planetary and exoplanet studies, to simultaneously constrain all parameters and assess their uncertainties. This method iteratively solves the inverse problem to find the best-fit model solution to the observed data. Various algorithms have been developed to fit observational data and determine the model parameters as well as their posterior distributions, such as Markov Chain Monte Carlo (Mackay, 2003) and Nested Sampling technique (Skilling, 2004). For an efficient and unbiased sampling of the parameter space, we employ the well-developed software package _PyMultiNest_(Buchner et al., 2014), which utilizes the Nested Sampling technique. By comparing the simulated flux with observations from Spitzer and Herschel in Lellouch et al. (2011, 2016), we are able to find the best-fit model parameters that minimize the difference between the simulated and observed lightcurves. In addition, the Nested Sampling approach can estimate the posterior distribution (i.e., uncertainty) of the retrieved parameters.
We list important input parameters of our model in Table 1, including the distance and locations of the Sun and observer to Pluto in 2004 and 2012 (Lellouch et al., 2011, 2016), the radii of Pluto
and Charon (Nimmo et al., 2017), and the thermal properties of ices. The retrieved parameters in our model include Pluto's thermal inertia (\(\Gamma_{Pluto}\)), emissivity of unit 2 (approximately CH\({}_{4}\) ice, \(\epsilon_{CH_{4}}\)), Charon's thermal inertia (\(\Gamma_{Charon}\)), as well as the haze flux at 24 \(\mu\)m (\(F_{haze}\)). We neglect the haze emission contribution at 70 \(\mu\)m as it is much smaller compared to the surface flux at that wavelength. Because the N\({}_{2}\) ice temperature is low, its contribution to the rotational lightcurves is negligible. Thus, we fix its bolometric and spectral emissivity using the values in Lellouch et al. (2011). On the other hand, Pluto's H\({}_{2}\)O ice/tholin mix unit (dark terrain) and Charon's surface generally have a high emissivity Lellouch et al. (2011), so we fix their emissivity as unity in our model. We assume the spectral emissivity of the CH\({}_{4}\) ice at 24 and 70 \(\mu\)m is the same as the bolometric emissivity. In other words, there is only one emissivity value for the CH\({}_{4}\) ice. It appears that one emissivity parameter of the CH\({}_{4}\) ice unit in our retrieval model is sufficient to explain the observed lightcurves. Even though the CH\({}_{4}\) emissivity might be wavelength-dependent (Stansberry et al., 1996a) and previous studies (Lellouch et al., 2011) tested wavelength-dependent emissivity, the current data could not constrain all three independent values.
In the Nested Sampling approach, live points are the set of samples being actively updated by the algorithm during the course of the computation. At each iteration of sampling, the point with the lowest likelihood value (i.e., the least probable point) is removed from the set of live points and replaced with a new point that has a higher likelihood value. This process continues until the desired level of precision is achieved or until the algorithm converges. Increasing the number of live points can lead to more accurate estimates of parameters and their posterior distributions, but also increases the computational cost. The number of live points should be at least 10 times and preferably 25 times the number of free parameters. We choose 1600 live points for our retrieval, although 800 also works and gives similar results.
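To show how such a retrieval could be wired together, the fragment below sets up a four-parameter nested-sampling run with PyMultiNest, using 1600 live points as stated above. The uniform prior ranges, the placeholder data arrays, and the stub forward model are all illustrative assumptions rather than the values used in the paper; in practice `forward_lightcurves` would wrap the thermophysical and emission model of Sections 2.2-2.3, and `obs_flux`/`obs_err` would hold the Spitzer and Herschel lightcurve points.

```python
import numpy as np
import pymultinest

# Retrieved parameters: [Gamma_Pluto, eps_CH4, Gamma_Charon, F_haze(24 um)]
PARAM_NAMES = ["gamma_pluto", "eps_ch4", "gamma_charon", "f_haze_24um"]
PRIOR_LO = np.array([1.0, 0.1, 1.0, 0.0])     # illustrative lower bounds
PRIOR_HI = np.array([50.0, 1.0, 60.0, 5.0])   # illustrative upper bounds

# Placeholder observations; a real run would load the Spitzer/Herschel lightcurves here.
obs_flux = np.full(30, 10.0)   # mJy
obs_err = np.full(30, 0.5)     # mJy

def forward_lightcurves(gamma_p, eps_ch4, gamma_c, f_haze):
    """Stub standing in for the thermophysical + emission forward model (returns mJy)."""
    return np.full(30, 10.0) + f_haze

def prior(cube, ndim, nparams):
    # map the unit hypercube onto uniform priors
    for i in range(ndim):
        cube[i] = PRIOR_LO[i] + (PRIOR_HI[i] - PRIOR_LO[i]) * cube[i]

def loglike(cube, ndim, nparams):
    model = forward_lightcurves(cube[0], cube[1], cube[2], cube[3])
    return -0.5 * np.sum(((model - obs_flux) / obs_err) ** 2)

pymultinest.run(loglike, prior, n_dims=4, n_live_points=1600,
                outputfiles_basename="pluto_retrieval_", resume=False, verbose=True)
```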
## 3 Results
### Two Possible Scenarios to Explain Lightcurves
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline & Spitzer (2004) & Herschel (2012) & JWST (2023) \\ \hline \hline Sub-solar distance (AU) & 30.847 & 32.19 & 34.6 \\ \hline Sub-solar latitude (\({}^{\circ}\)) & 34.5 & 47.0 & 58.3 \\ \hline Sub-observer distance (AU) & 30.95 & 32.4 & 34.4 \\ \hline Sub-observer latitude (\({}^{\circ}\)) & 32.9 & 48.5 & 57.3 \\ \hline Planetary radius (km) & \multicolumn{3}{c|}{1188 for Pluto and 606 for Charon} \\ \hline Emissivity for N\({}_{2}\) ice & 0.5 for bolometric and 70 \(\mu\)m, 0.05 for others\({}^{*}\) \\ \hline Emissivity for H\({}_{2}\)O ice/tholin mix & \multicolumn{3}{c|}{1.0 for bolometric and all wavelengths} \\ \hline Emissivity for Charon & \multicolumn{3}{c|}{1.0 for bolometric and all wavelengths} \\ \hline Fixed temperature for N\({}_{2}\) ice (K)\({}^{\dagger}\) & 36.35 & 36.86 & 36.35 \\ \hline Max temperature for CH\({}_{4}\) ice (K) & \multicolumn{3}{c|}{54} \\ \hline \end{tabular}
\end{table}
Table 1: Input parameters of the thermophysical model. \({}^{*}\)Based on the N\({}_{2}\) emissivities in Lellouch et al. (2011), we adopted 0.5 for bolometric emissivity and the emissivity at 70 \(\mu\)m, and 0.05 for other shorter wavelengths between 18 and 30 \(\mu\)m. \({}^{\dagger}\)Due to the very low temperature and small spectral emissivity, this value does not affect Pluto’s total thermal emission.
We first present the results from the "free" retrieval, where we include the haze flux as a free parameter to constrain its value based on the observed lightcurves. The resulting posterior distribution of
Figure 2: The posterior distribution of retrieved parameters for the “free retrieval” scenario with haze (red) and “surface only” scenario without haze (cyan), including 24 \(\mu\)m haze flux \(F_{haze}\) (mJy), CH\({}_{4}\) ice emissivity \(\epsilon\), thermal inertia \(\Gamma\) (MKS) for both Pluto and Charon. Vertical dashed lines in the diagonal panels denote 2.5%, 50%, and 97.5% quantiles of the distributions. Off-diagonal panels show the two-dimensional parameter covariance plots, indicating a strong negative correlation between the haze flux and Charon’s thermal inertia (lower-left panel).
retrieved parameters and the fitted rotational lightcurves are presented in Figure 2 (red) and Figure 3 (upper panel), respectively.
Using the best-fit parameters, our model is able to explain the rotational lightcurves relatively well at both 24 and 70 \(\mu\)m. Both Pluto and Charon emit significantly more at 70 \(\mu\)m than at 24 \(\mu\)m, as shown in Figure 3. The corresponding temperature exhibits significant diurnal and spatial variation. We show the snapshots of their temperature maps at the noon time in Figure 4. We also provide animations of the diurnally-varying temperature distributions of Pluto and Charon from our best-fit models, which are available on the journal's website.
The lightcurves (black in Figure 3) at 24 and 70 \(\mu\)m exhibit two peaks, with the main peak occurring near 100\({}^{\circ}\) and a smaller peak near 300\({}^{\circ}\) in longitude. The main peak is primarily determined by the distribution of the dark and warm H\({}_{2}\)O ice/tholin mix (purple) on Pluto, which is unevenly distributed along the longitude at low latitude regions (unit 3 in Figure 1). One can also see the corresponding temperature map at noon during Spitzer's observations in 2004 (Figure 4a). The CH\({}_{4}\) ice, which covers a larger surface area in the northern latitudes (unit 2 in Figure 1), emits a flux comparable to that of the H\({}_{2}\)O ice/tholin mix but shows less variation across longitude. This CH\({}_{4}\) ice unit contributes to the second but smaller emission peak near 300\({}^{\circ}\) in longitude. In contrast, the N\({}_{2}\) ice emission (green) is negligible due to its much lower temperature. As the lightcurve variation primarily arises from Pluto's surface, it provides strong constraints on parameters such as Pluto's thermal inertia and emissivity. Unlike Pluto, Charon's uniform but darker surface (blue) produces almost flat lightcurves of about 2.2 mJy at 24 \(\mu\)m and 128.5 mJy at 70 \(\mu\)m, which is smaller than Pluto's total surface emission (gray).
Figure 3: Rotational emission lightcurves of Pluto and Charon from our best-fit models (lines) and observations (dots) at Spitzer 24 \(\mu\)m and 70 \(\mu\)m (first two columns), Herschel 70 \(\mu\)m (third column). Contributions from Pluto’s different components are separately plotted for the “free retrieval” scenario with haze (upper panel) and “surface only” scenario (lower panel).
In the best-fit model, Pluto's haze, which was ignored in previous work (Lellouch et al., 2000, 2011, 2016), emits comparably to Charon at 24 \(\mu\)m. The haze emission of 1.63 mJy at 24 \(\mu\)m is similar to that in the atmospheric model calculations using Titan-like tholins in Zhang et al. (2017). However, the posterior distribution of this parameter indicates that the retrieved haze flux is \(1.63^{+0.43}_{-0.49}\) mJy, where the uncertainties correspond to the 2.5% and 97.5% quantiles of the posterior sampled by the Nested Sampling technique (Figure 2). The lower value of 1.14 mJy at 24 \(\mu\)m might correspond to the case of less absorbing particles in Zhang et al. (2017). Note that in our model the haze contribution is negligible at 70 \(\mu\)m (see Section 2.3).
The wide range of retrieved haze flux on Pluto is due to the inherent degeneracy between Pluto's haze emission and Charon's surface emission. This is because Spitzer and Herschel telescopes cannot resolve Charon from Pluto and both Pluto's haze and Charon's surface are spatially uniform. Even though we have tried to include the haze emission at 24 \(\mu\)m but not at 70 \(\mu\)m, and Charon contributes to both wavelengths, the current data is insufficient to break the degeneracy. As illustrated in the
Figure 4: Surface temperature distributions of Pluto (left) and Charon (right) in 2004 from our best-fit models in the “free retrieval” scenario with haze (upper panel) and the “surface only” scenario (lower panel). We plotted the maps at noon time when the sub-solar point at Pluto is 180\({}^{\circ}\) East Longitude. The animations of diurnal temperature evolution in 2004 and 2012 are available on the journal’s website.
corner plot that shows the joint distribution of the parameters (lower-left panel of Figure 2), Pluto's haze emission exhibits a strong negative correlation with Charon's thermal inertia. For instance, when the haze flux is approximately 1.14 mJy at 24 \(\mu\)m, the retrieved Charon's thermal inertia is about 13 MKS, yielding about 2.87 mJy. If the haze flux is as high as 2.06 mJy, the corresponding derived Charon's thermal inertia is 45 MKS, corresponding to a Charon surface flux of about 1.75 mJy. This correlation implies that the total emission from Pluto's haze and Charon's surface is roughly constant in all solutions, which is about 4 mJy at 24 \(\mu\)m (Figure 3).
Moreover, in any retrieval process, the goodness of fit is influenced not only by data quality but also by the accuracy of the forward model. Our thermophysical model has made several assumptions, including dividing Pluto's surface into three geological units and a single thermal inertia value for the entire surface. Due to these model assumptions and incomplete information regarding the input albedo distribution (and neglecting its temporal evolution), our best-fit model does not perfectly match all data points in the observations, particularly those with minimal error bars (e.g., Figures 3b and 3e). Recognizing the potential biases in the forward model's construction and the substantial uncertainty range in the retrieved haze emission flux, we should be careful when interpreting the posterior distribution of the retrieved parameters. As an extreme scenario, it would be interesting to see the best fit using a model without haze. Therefore, we conduct an additional retrieval in a "surface-only" scenario, where the haze flux is excluded, and only surface properties are retrieved. This approach resembles the one in Lellouch et al. (2011), albeit with updated albedo maps from New Horizons.
The parameters and their posterior distributions from the "surface only" retrieval are shown in Figure 2 (cyan). The fitted rotational lightcurves are shown in Figure 3 (lower panel). Interestingly, our results are consistent with that in Lellouch et al. (2011). The "surface only" scenario using the New Horizons albedo maps can also fit the rotational lightcurves relatively well without any haze contribution. As expected, the "surface only" retrieval yields low thermal inertia of Charon (5.5\(\pm\)0.5 MKS), much lower than the "free retrieval" results and previous estimates when using a different albedo map of Pluto (Lellouch et al., 2011, 2016). It is still acceptable because trans-Neptunian objects (TNOs) have even smaller values of 2.5\(\pm\)0.5 MKS (Lellouch et al., 2013). The contribution of Charon's emission flux at 24 \(\mu\)m is about 4.2 mJy in the "surface only" case, roughly equal to the total emission of both Pluto's haze and Charon's surface in the "free retrieval" case. Given that the "free retrieval" scenario prefers a non-zero haze flux and that the "surface only" scenario can also provide a decent fit, we conclude that the current data cannot constrain Pluto's haze flux. It also means that the current data cannot distinguish whether Pluto's haze is Titan-like tholins (Zhang et al., 2017) or ice particles (Lavvas et al., 2021; Wan et al., 2021).
Radio observations of the Pluto-Charon system have a high spatial resolution that allows for the separation of Charon from Pluto. For example, SMA observed the dayside brightness temperature of Charon at 1.36 mm, yielding a temperature of 43\(\pm\)13 K in 2005. Our best-fit models give Charon's surface brightness temperature of 51.1 K for the "free retrieval" scenario with haze and 54.0 K for the "surface only" scenario without haze, respectively. Both temperatures are within the uncertain range of the SMA results. ALMA observations at 0.86 mm showed a temperature of 43.9\(\pm\)0.3 K for Charon in 2015, whereas our models predict a temperature of 51.8 K for the "free retrieval" scenario with haze and 54.3 K for the "surface only" scenario without haze. These derived temperatures are greater in the scenario absent of haze due to smaller thermal inertia from retrievals, however, temperatures
from both scenarios are larger than the corresponding radio observations. Similar results are also found for other observations, such as SMA at 1.10 mm in 2010 and VLA at 9.00 mm in 2011 (see Table 10 in Bird et al., 2019), New Horizons dayside-staring at 4.17 cm in 2015 (Bird et al., 2019), and Herschel at 500 \(\mu\)m in 2012 (Lellouch et al., 2016). This discrepancy is expected because radio observations do not directly probe the surface temperature but rather a mix of contributions from the surface and subsurface, which could be colder than the surface on the dayside. Furthermore, the radio emissivity of surfaces is not unity (Lellouch et al., 2016). Therefore, current radio observations of Charon's surface cannot distinguish the "free retrieval" scenario with haze versus the "surface only" scenario.
Observations of near-infrared spectra can also be used to probe the surface ice temperature, such as using the absorption bands of H\({}_{2}\)O ice as thermometers (Grundy et al., 1999; Protopapa et al., 2021). Buie & Grundy (2000) inferred a disk-average temperature of 60\(\pm\)20 K for Charon's surface via this method. Recently, Holler et al. (2017) revisited this effort on the 1.65 \(\mu\)m absorption band of H\({}_{2}\)O ice and reported Charon's mean surface temperature as 45\(\pm\)14 K in 2015. Our modeled surface temperature for both scenarios (49.5 K with haze and 48.9 K without haze in 2012) falls within this large uncertainty range, implying that the uncertainty of the current near-infrared "thermometer" is still too large to precisely constrain Charon's surface temperature and separate the two scenarios in this study.
While we cannot well constrain the properties of Pluto's haze and Charon's surface, we can determine their total emission flux at 24 \(\mu\)m in 2004, which is about 4 mJy in both haze and haze-free scenarios. In other words, the emission from Pluto's surface can be constrained. Indeed, Pluto's surface properties appear well-determined from the shape of lightcurves. The surface temperature distributions of Pluto in the best-fit models appear very similar between the "free retrieval" scenario with haze (Figure 4a) and the "surface only" scenario without haze (Figure 4c). In both scenarios, the retrieved bolometric emissivity of Pluto's CH\({}_{4}\) ice is about 0.4, suggesting that the CH\({}_{4}\) ice surface might have small grains according to Figure 5 in Stansberry et al. (1996a). Our retrieved CH\({}_{4}\) emissivity is smaller than that from Lellouch et al. (2011, 2016), which yields the bolometric emissivity of 0.7-0.9, spectral emissivity of 1.0 at 24 \(\mu\)m and 0.9 at 70 \(\mu\)m. The derived thermal inertia of Pluto is 10-20 MKS in Lellouch et al. (2011) and 16-26 MKS in Lellouch et al. (2016), higher than our optimal value (8-10 MKS). The reason has yet to be explored but is likely related to the difference in albedo maps in the two studies.
In summary, incorporating haze flux into retrievals is consistent with all existing observations. However, the current data quality is insufficient to distinguish between the contributions of Charon and haze, and cannot directly constrain the haze emission of Pluto. In the next section, we suggest that future observations using the JWST at additional wavelengths may assist in resolving this issue.
### Predictions for JWST
Future observations with JWST can help distinguish between the two scenarios and provide valuable information on the haze and surface properties of Pluto. To achieve this, we have selected three typical channels of the JWST/MIRI: 18, 21, and 25.5 \(\mu\)m, with sensitivities (Glasse et al., 2015) of 4.87, 9.15, and 30.8 \(\mu\)Jy, respectively. These channels are chosen as we primarily focus on the thermal emission from the Pluto-Charon system, and shorter wavelengths may be contaminated by reflected light. To support the planned JWST observations of Pluto in 2022-2023, we calculate the rotational
lightcurves based on a sub-solar latitude of 58.3\({}^{\circ}\), a sub-observer latitude of 57.3\({}^{\circ}\), a distance of 34.6 AU to the Sun, and a distance of 34.4 AU to the observer of JWST. In this configuration, it is possible to resolve Charon from Pluto and break the degeneracy between Pluto's haze emission and Charon's surface emission. Two limiting cases without haze and with thick haze (assuming Titan-like tholins) are presented in Figure 5 using the best-fit parameters in the "surface only" and "free retrieval" scenarios in Section 3.1, respectively. We assume the spectral emissivity of the surface units at these three wavelengths is the same as that at 24 \(\mu\)m from the retrieval.
In the "surface only" scenario (lower panel in Figure 5), the surface emission from both Pluto and Charon increases with wavelength as it approaches the blackbody peak. At 18 \(\mu\)m, Pluto's surface emission is 0.1 mJy, which increases to 0.5 mJy at 21 \(\mu\)m and to 3.0 mJy at 25.5 \(\mu\)m. Similarly, Charon's surface emission increases from 0.25 mJy at 18 \(\mu\)m to 1.0 mJy at 21 \(\mu\)m and to around 4.5 mJy at 25.5 \(\mu\)m. Notably, Charon's surface emission is approximately twice larger than that of Pluto's surface for all three wavelengths. The contributions of different surface ices to Pluto's surface emission look dramatically different from those in 2004. This is because the sub-solar and sub-observer latitudes have moved much further north, from \(\sim\)34\({}^{\circ}\) in 2004 to \(\sim\)58\({}^{\circ}\) in 2023. Since most of the H\({}_{2}\)O ice/tholin mix is concentrated near the equator and Pluto's northern hemisphere is predominantly covered with CH\({}_{4}\) ice, the emission of Pluto's surface (gray) mainly comes from the CH\({}_{4}\) ice (orange) rather than the H\({}_{2}\)O ice/tholin mix (purple) in 2023 (see Figure 5). Therefore, the total lightcurves (black) are less variable compared to that in 2004 and 2012 (Figure 3) and peak at 300\({}^{\circ}\) in 2023 rather than 100\({}^{\circ}\) in longitude.
Figure 5: Rotational emission lightcurves of Pluto and Charon simulated for the JWST observations at 18 \(\mu\)m, 21 \(\mu\)m, and 25.5 \(\mu\)m. Results of the “free retrieval” scenario with haze (upper panel) and the “surface only” scenario without haze (lower panel) are plotted with contributions from different geologic units and haze on Pluto and the surface of Charon. The haze emission fluxes at multiple wavelengths are calculated using the atmospheric model from Zhang et al. (2017) assuming Titan-like tholins.
Taking Pluto's haze into consideration would make it more complicated to interpret the data, but it also offers an opportunity to constrain Pluto's haze emission. If the haze is made of absorbing Titan-like tholins, calculations using atmospheric model (Zhang et al., 2017) predict that it could emit much more than Pluto's surface at 18 and 21 \(\mu\)m (upper panel in Figure 5). JWST observations at these two wavelengths can be used to directly constrain the haze emission. On the other hand, if the haze emission is smaller, the surface emission and haze emission cannot be easily disentangled. As an extreme example, if the haze is made of brighter ice particles as suggested by the chemical models (Lavvas et al., 2021), the haze radiative cooling could be ten times smaller than that from the Titan-like tholins (Wan et al., 2021). For instance, if the tholin haze opacity is reduced by 20 times in the lower atmosphere as suggested by Wan et al. (2021), such haze emission in the mid-infrared is comparable to the surface emission at 18 \(\mu\)m (0.1 mJy), a few times smaller than the surface at 21 \(\mu\)m, but negligible compared with the surface at 25.5 \(\mu\)m. Thus it is best to use 18 \(\mu\)m to constrain the haze properties. Yet, the emission from icy haze could also be large within its solid-state absorption band, for instance, within the 10-15 micron range for nitrile ices (see Figure 4 in Zhang et al., 2017). Future spectra from the JWST may be capable of identifying these spectral structures. In any case, the rotational lightcurves from JWST in 2023 appear much flatter than the Spitzer observations as sub-observer latitude moves further north in absence of contribution from the H\({}_{2}\)O ice/tholin mix. It might be difficult to distinguish the CH\({}_{4}\) ice emission at high latitudes and the haze emission. Combining with previous data from Spitzer or Herschel might be important to break the degeneracy, although assumptions need to be made such as that the ice distributions and haze temperature do not change significantly with time from 2004 to 2023.
Figure 6: Variation of rotational emission lightcurves (in units of \(\mu\)Jy) simulated for JWST’s observations at 18 \(\mu\)m, 21 \(\mu\)m, and 25.5 \(\mu\)m. Results of the “free retrieval” scenario with haze (upper panel) and the “surface only” scenario without haze (lower panel) are plotted. Dashed lines indicate the 10\(\sigma\) faint-source detection limits in a 10\({}^{4}\)-second integration for corresponding channels of the JWST/MIRI.
Interestingly, although the lightcurves of both Pluto and Charon exhibit nearly flat profiles in 2023, their variations can be characterized using JWST. In contrast to the lightcurves observed in 2004 and 2012, where Pluto's variation was much greater than that of Charon (Figure 3), our predictions indicate that Pluto's lightcurve variation during the JWST era will be comparable to that of Charon (Figure 6). At wavelengths of 18, 21, and 25.5 \(\mu\)m, the peak-to-trough flux amplitudes are approximately 20, 100, and 400 \(\mu\)Jy, respectively (Figure 6), all of which exceed the JWST/MIRI instrument's detection limit. Pluto's lightcurve exhibits a distinct "U" shape, primarily influenced by the distribution of ice concentrated near the 180\({}^{\circ}\) mark in the northern hemisphere. On the other hand, Charon's lightcurve displays a maximum-flux phase shift from Pluto's, likely influenced by slightly darker materials in its northern high latitudes (Figure 1d) and the thermal phase lag (note that both Pluto and Charon are retrograde rotators with an obliquity larger than 90 degrees). Although distinguishing between best-fit models with and without haze in the lightcurve variation may be challenging (Figure 6), the shape of the lightcurve can serve as a useful constraint for determining the thermal inertia and ice albedo distributions on both bodies during the JWST era.
## 4 Summary and Discussions
In this paper, we investigate the properties of Pluto's surface and haze using thermophysical modeling and Bayesian retrieval methods. We re-analyze the rotational lightcurves of the Pluto-Charon system observed by Spitzer and Herschel, incorporating Pluto's haze emission and the latest surface albedo maps from New Horizons. Our main findings are as follows:
1. Including Pluto's haze emission is consistent with all current observations. The best-fit model from our retrieval with haze yields about 1.63 mJy haze flux at 24 \(\mu\)m. In this case, Pluto's haze emission could be significant in the mid-infrared (e.g., 18-25.5 \(\mu\)m) compared with the surface emission of Pluto or Charon.
2. However, the haze flux has a large uncertainty due to the degeneracy of 24 \(\mu\)m thermal emission between Pluto's haze and Charon's surface. Even without haze emission, the rotational lightcurves can be explained by the surface emission from Pluto and a low-thermal-inertia Charon. Therefore, the current data cannot constrain the haze flux well, ranging from 0 to 1.63 mJy, with corresponding values of Charon's diurnal thermal inertia from 5 to 23 MKS.
3. The lightcurve shape can provide useful constraints on Pluto's surface properties. Our best-fit model indicates a diurnal thermal inertia of 8-10 MKS on Pluto's surface and a low emissivity of CH\({}_{4}\) ice consistent with the small grain case.
4. JWST mid-infrared observations can separate Pluto from Charon and break the degeneracy of emission between Pluto's haze and Charon's surface. We predict that the rotational lightcurve of Pluto at 18 \(\mu\)m observed by JWST in 2022-2023 will be relatively flat, which could directly assess the existence of haze thermal emission as Pluto's surface emission is only around 0.1 mJy assuming the same spectral emissivity at 18 \(\mu\)m as at 24 \(\mu\)m.
5. The lightcurve variations of both Pluto and Charon in the JWST era range from tens of \(\mu\)Jy at 18 \(\mu\)m to hundreds of \(\mu\)Jy at 25.5 \(\mu\)m. Their variations can be characterized with JWST to constrain the thermal inertia and ice albedo distributions on both bodies.
Previous atmospheric models that explained the cold temperature of Pluto predict a wide range of haze emission at 24 \(\mu\)m. The haze flux is approximately 2.48 mJy if its particles are composed of Titan-like tholins. However, if the particles are composed of brighter ice particles, the flux could decrease by more than an order of magnitude outside the ice absorption band. The current observations cannot distinguish between the two scenarios, and thus cannot constrain the cooling rate in Pluto's atmosphere. At 18 \(\mu\)m, the haze flux is 1.5 (1.2) mJy for Titan-like tholins in 2004 (2023). Outside the ice absorption band, the flux could be smaller than 0.1 mJy for brighter ice particles. It is thus possible that JWST observations could help to distinguish between the two scenarios and provide more information about the cooling rate in Pluto's atmosphere. Specifically, the mid-IR spectra from JWST should be useful in identifying the solid-state features of ice particles, thereby aiding the differentiation between the two potential scenarios.
It is important to note the assumptions made in this study. First, we assume that Pluto's surface map has not changed significantly in the past two decades when calculating the lightcurves of Spitzer in 2004 and Herschel in 2012 (Figure 3), and JWST in 2023 (Figure 5), based on the derived map from New Horizons in 2015 (Figure 1). Although a time-invariant map in our model can explain all observations, Pluto's surface is expected to change over time. As the surface maps of Pluto and Charon are crucial inputs when modeling rotational emission lightcurves, a future study should include a seasonal change of the surface ice and albedo distribution. That would require a combined analysis of the New Horizons data revealing the limb-darkening information, multi-decadal ground-based observations of Pluto and Charon (Buratti et al., 2003, 2015), and an atmosphere-surface evolution model (Toigo et al., 2015; Forget et al., 2017; Young, 2017; Bertrand et al., 2020).
Second, emissivity and thermal inertia are also important factors in determining surface temperature besides albedo. Due to limited information, we follow the approach of Lellouch et al. (2011) and divide Pluto's surface into three component units based on the albedo map and assume the same emissivity within each unit. However, recent principal component analysis using the LEISA observations has revealed several more ice units (Schmitt et al., 2017; Gabasova et al., 2021; Emran et al., 2023). Different mixtures of ice could have varying porosity and grain size, which would affect emissivity (Laped & Stoll, 1991; Stansberry et al., 1996). Additionally, we assume the same thermal inertia for the entire globe, but it could vary with location and depth due to changes in composition. Future work with better data may relax these assumptions to retrieve the thermal inertia and emissivity of individual geological units and connect them with the ice compositions from spectroscopic measurements.
Finally, our model only accounts for the diurnal variation of surface and subsurface temperatures, as we focus on rotational lightcurves within a day. We assume that the heat flux from the deepest substrate in our model is zero, which is not entirely accurate due to the large thermal inertia in the deep layers of Pluto on a seasonal timescale. Although this assumption seems to work well for the infrared data, seasonal changes have been observed in radio observations, which probe the deep subsurface (e.g., Gurwell et al., 2011; Butler et al., 2017; Bird et al., 2019). Future work should consider both diurnal and seasonal changes, as in Lellouch et al. (2016), and combine both infrared and radio data to finally understand the short-term and long-term evolution of the surface, subsurface, and atmospheric haze on Pluto.
## 5 Acknowledgments
This work is supported by NASA Solar System Workings Grant 80NSSC19K0791. We thank the reviewer for their valuable and constructive comments. We thank Dr. Bonnie Buratti for the
discussions about albedo data from New Horizons. Retrievals are performed on lux supercomputer at UC Santa Cruz, funded by NSF MRI grant AST 1828315.
|
2305.19781 | On a class of elliptic equations with Critical Perturbations in the
hyperbolic space | We study the existence and non-existence of positive solutions for the
following class of nonlinear elliptic problems in the hyperbolic space
$$
-\Delta_{\mathbb{B}^N} u-\lambda u=a(x)u^{p-1} \, + \, \varepsilon u^{2^*-1}
\,\;\;\text{in}\;\mathbb{B}^{N}, \quad
u \in H^{1}{(\mathbb{B}^{N})},
$$
where $\mathbb{B}^N$ denotes the hyperbolic space, $2<p<2^*:=\frac{2N}{N-2}$,
if $N \geqslant 3; 2<p<+\infty$, if $N = 2,\;\lambda < \frac{(N-1)^2}{4}$, and
$0< a\in L^\infty(\mathbb{B}^N).$ We first prove the existence of a positive
radially symmetric ground-state solution for $a(x) \equiv 1.$ Next, we prove
that for $a(x) \geq 1$, there exists a ground-state solution for $\varepsilon$
small. For proof, we employ ``conformal change of metric" which allows us to
transform the original equation into a singular equation in a ball in $\mathbb
R^N$. Then by carefully analysing the energy level using blow-up arguments, we
prove the existence of a ground-state solution. Finally, the case $a(x) \leq 1$
is considered where we first show that there is no ground-state solution, and
prove the existence of a \it bound-state solution \rm (high energy solution)
for $\varepsilon$ small. We employ variational arguments in the spirit of
Bahri-Li to prove the existence of high energy-bound-state solutions in the
hyperbolic space. | Debdip Ganguly, Diksha Gupta, K. Sreenadh | 2023-05-31T12:17:06Z | http://arxiv.org/abs/2305.19781v1 | # On a class of elliptic equations with critical perturbations in the hyperbolic space
###### Abstract.
We study the existence and non-existence of positive solutions for the following class of nonlinear elliptic problems in the hyperbolic space
\[-\Delta_{\mathbb{B}^{N}}u-\lambda u=a(x)u^{p-1}\,+\,\varepsilon u^{2^{*}-1}\ \ \text{in}\ \mathbb{B}^{N},\ \ u\in H^{1}(\mathbb{B}^{N}),\]
where \(\mathbb{B}^{N}\) denotes the hyperbolic space, \(2<p<2^{*}:=\frac{2N}{N-2}\), if \(N\geqslant 3;2<p<+\infty\), if \(N=2,\ \lambda<\frac{(N-1)^{2}}{4}\), and \(0<a\in L^{\infty}(\mathbb{B}^{N}).\) We first prove the existence of a positive radially symmetric ground-state solution for \(a(x)\equiv 1.\) Next, we prove that for \(a(x)\geq 1\), there exists a ground-state solution for \(\varepsilon\) small. For proof, we employ "conformal change of metric" which allows us to transform the original equation into a singular equation in a ball in \(\mathbb{R}^{N}\). Then by carefully analysing the energy level using blow-up arguments, we prove the existence of a ground-state solution. Finally, the case \(a(x)\leq 1\) is considered where we first show that there is no ground-state solution, and prove the existence of a _bound-state solution_ (high energy solution) for \(\varepsilon\) small. We employ variational arguments in the spirit of Bahri-Li to prove the existence of high energy-bound-state solutions in the hyperbolic space.
Key words and phrases:Hyperbolic space, hyperbolic bubbles, Palais-Smale decomposition, semilinear elliptic problem 2010 Mathematics Subject Classification: Primary: 35J20, 35J60, 58E30
## 1. Introduction
In this paper, we investigate the existence of solutions for the following class of critical elliptic problems in the hyperbolic space \(\mathbb{B}^{N}\):
\[-\Delta_{\mathbb{B}^{N}}u-\lambda u=a(x)u^{p-1}+\varepsilon u^{2^{*}-1}\ \text{in}\ \mathbb{B}^{N},\ u>0\ \text{in}\ \mathbb{B}^{N},\ u\in H^{1}(\mathbb{B}^{N}),\tag{$P_{\varepsilon}$}\]
where \(2<p<2^{*}:=\frac{2N}{N-2}\) if \(N\geqslant 3\), \(2<p<+\infty\) if \(N=2\), \(\lambda<\frac{(N-1)^{2}}{4}\), \(\varepsilon\) is a real parameter, \(H^{1}\left(\mathbb{B}^{N}\right)\) denotes the Sobolev space on the disc model of the hyperbolic space \(\mathbb{B}^{N}\), and \(\Delta_{\mathbb{B}^{N}}\) denotes the Laplace-Beltrami operator on \(\mathbb{B}^{N}.\) Further, let \(\mu\) denote the hyperbolic volume measure, and let \(d(x,0)=\log\left(\frac{1+|x|}{1-|x|}\right)\) be the hyperbolic distance of \(x\) from \(0\). Moreover, we investigate the existence of solutions for appropriately chosen \(\varepsilon\) under each of the following hypotheses separately:
\[(\mathbf{A}_{1}):\ a(x)\geq 1\ \forall x\in\mathbb{B}^{N},\ \ \mu(\{x:a(x)\not\equiv 1\})>0,\ \ a\in L^{\infty}(\mathbb{B}^{N})\ \text{and}\ a(x)\to 1\ \text{as}\ d(x,0)\to\infty.\]
\[(\mathbf{A}_{2}):\ a(x)\in(0,1]\ \ \forall x\in\mathbb{B}^{N},\ \ \mu(\{x:a(x)\not\equiv 1\})>0,\ \ \inf_{x\in\mathbb{B}^{N}}a(x)>0,\ \text{and}\ a(x)\to 1\ \text{as}\ d(x,0)\to\infty.\]
\[(\mathbf{A}_{3}):\ a(x)\equiv 1\ \ \forall x\in\mathbb{B}^{N}.\]
Further, let us prescribe the following assumption on the parameter \(\lambda:\)
\[\lambda\in\begin{cases}\left(-\infty,\frac{2(p+1)}{(p+3)^{2}}\right],&N=2,\\ \left(-\infty,\frac{(N-1)^{2}}{4}\right),&N\geq 3.\end{cases} \tag{1.1}\]
Here, \(\frac{(N-1)^{2}}{4}\) is the bottom of the \(L^{2}-\) spectrum of \(-\Delta_{\mathbb{B}^{N}}.\)
We recall that the solutions of \((P_{\varepsilon})\) are the critical points of the corresponding energy functional \(E_{\varepsilon}:H^{1}(\mathbb{B}^{N})\to\mathbb{R}\) defined as
\[E_{\varepsilon}(u)=\frac{1}{2}\int_{\mathbb{B}^{N}}\left(|\nabla_{\mathbb{B}^ {N}}u|^{2}-\lambda u^{2}\right)\ \mathrm{d}V_{\mathbb{B}^{N}}-\frac{1}{p}\int_{\mathbb{B}^{N}}a(x)|u|^{p}\ \mathrm{d}V_{\mathbb{B}^{N}}-\frac{ \varepsilon}{2^{*}}\int_{\mathbb{B}^{N}}|u|^{2^{*}}\ \mathrm{d}V_{\mathbb{B}^{N}}. \tag{1.2}\]
Then \(E_{\varepsilon}\) is a well-defined \(C^{1}\) functional on \(H^{1}(\mathbb{B}^{N}).\) We use variational, refined energy estimates and blow-up arguments to prove the existence of solutions. The intriguing nature of the problem is related to the fact that the equation \((P_{\varepsilon})\) is non-compact, so standard variational methods fail. The problem studied in this article is in continuation to our study on scalar-field type equations on the hyperbolic space (see [28, 29]) where we only dealt with the purely subcritical problem, i.e., when \(\varepsilon=0,\) the unperturbed problem. In the subcritical case, the variational problem lacks compactness because of the _hyperbolic translation_ (see section 2 for more details), and so it cannot be solved by the standard minimization method. Moreover, in [11], a detailed analysis of the Palais-Smale decomposition is performed. One can easily see that if \(U\) is a solution of \((P_{\varepsilon}),\) with \(\varepsilon=0\) and \(a(x)\equiv 1,\) then
\[u:=U\circ\tau,\quad\text{for }\tau\in I(\mathbb{B}^{N}),\]
where \(I(\mathbb{B}^{N})\) is the group of isometries on the hyperbolic space, is also a solution. Hence if we define a sequence by varying \(\tau_{n}\in I(\mathbb{B}^{N}),\) then for a Palais-Smale (PS) sequence \(u_{n},\)\(u_{n}\circ\tau_{n}\) is also a Palais-Smale (PS) sequence for \((P_{\varepsilon})\) with \(\varepsilon=0\) and \(a(x)\equiv 1\). In fact, it was shown in [11, Theorem 3.3] that in the subcritical case, i.e., when \(2<p<\frac{2N}{N-2},\) non-compact PS sequences are made of finitely many sequences of the form \(U\circ\tau_{n}.\)
The problem we consider in this article also has a critical nonlinearity for \(\varepsilon>0\). The Palais-Smale decomposition established in [11, Theorem 3.3] reveals that for critical exponent problems in the hyperbolic space, loss of compactness can happen along two different profiles, one along the hyperbolic translations and the other along (local) concentration of Aubin-Talenti bubbles. This makes the problem \((P_{\varepsilon})\) particularly interesting: there is an interplay between the subcritical and the critical nonlinearity, and concentration phenomena can occur owing to the critical term. We will, however, only consider \(\varepsilon\) small enough that the critical nonlinearity can be seen as a perturbation of the subcritical problem. Before analysing the difficulties and the methodology we adopt to restore compactness, we first discuss the "state of the art" of such problems when posed in the Euclidean space.
There has been intensive research over the past few decades on \((P_{\varepsilon})\) when \(\varepsilon=0\) in the Euclidean space after the seminal papers by Berestycki-Lions [9, 10], Bahri-Berestycki [5], Bahri-Li [4], Bahri-Lions [6]. Many authors have contributed to a much deeper understanding of the problem in the framework of existence and multiplicity, we name a few, e.g.,[1, 2, 3, 15, 22, 26, 30, 32, 39, 42], and this list is far from being complete. The fundamental challenge in dealing with such problems in unbounded domains in \(\mathbb{R}^{N}\) is the lack of compactness, even in the subcritical case, thus preventing the typical variational approaches from succeeding. As a result, various authors have studied these equations by deploying
various conditions on \(a(x)\) and presented new tools and methodologies to overcome this difficulty. For example, if \(a(x)=a(|x|)\), the compactness of the embedding \(H_{r}\left(\mathbb{R}^{N}\right)\), the subspace of \(H^{1}\left(\mathbb{R}^{N}\right)\) consisting of radially symmetric functions into \(L^{p}\left(\mathbb{R}^{N}\right),\ p\in(2,2^{*})\) helps to restore the usage of standard variational arguments([9, 10]). However, if the symmetry restriction on \(a(x)\) is dropped, the problem becomes more exciting and complex. In particular, the authors in [4, 6] have established the existence of positive solutions by considering the asymptotic condition on \(a(x)\), i.e., \(a(x)\to a_{\infty}\) as \(|x|\to\infty\) and appropriate decay estimates. They carefully examined the levels of failure of PS condition, then searched for high energy solutions when ground state solution did not exist and used delicate variational and topological arguments. The subject of the multiplicity of solutions has also been investigated in [17, 19, 34, 35, 45] under different circumstances like some suitable assumption on \(|a(x)-a_{\infty}|\) or some order relation between \(a(x)\) and \(a_{\infty}\), i.e., \(a(x)\) goes to \(a_{\infty}\) from above or below or a certain periodicity assumption on \(a\). Furthermore, in [16] a more general equation \(-\Delta u+\alpha(x)u=\beta(x)|u|^{p-1}u\) has been studied in \(\mathbb{R}^{N}\) where \(\alpha\) and \(\beta\) are positive functions such that \(\lim_{|x|\to\infty}\alpha(x)=a_{\infty}>0\) and \(\lim_{|x|\to\infty}\beta(x)=b_{\infty}>0\). They have proven the existence and non-existence of ground state solutions under a variety of hypotheses like \(\alpha(x)\to a_{\infty}\) from below and \(\beta(x)\to b_{\infty}\) from above and vice-versa, \(\alpha(x)\) decays faster or slower than \(\beta(x)\).
Further, the scalar field equations in \(\mathbb{R}^{N}\) involving the critical exponent provide an even greater mathematical challenge because of the loss of compactness in two profiles, translation in \(\mathbb{R}^{N}\) and the presence of the critical exponent. The following class of Dirichlet problems in different domains in \(\mathbb{R}^{N}\) with sufficiently smooth boundary conditions has been the focus of much research over the past several years
\[-\Delta u-\eta u=|u|^{2^{*}-2}u\ \ \ \mbox{in}\ \ \ \Omega\]
where \(\eta\) is a real parameter, and \(\Omega\) is a domain in \(\mathbb{R}^{N}.\) Brezis-Nirenberg, in their commendable work [14] have shown the existence and non-existence of positive solutions in bounded domains for \(\eta>0\). They identified the first critical level below which compactness can be restored with the help of Aubin-Talenti functions, popularly known as _bubbles_. On the other hand, for \(\eta\leq 0\), the shape of the domain comes into play, and it is well known using the Pohozaev identity that solutions cease to exist for star-shaped domains. Following that, attempts were made to discover solutions either by altering the domain's shape ([25, 38, 36, 41]) or experimenting with the lower order terms ([8, 40]). Also, see [13, 23, 24] and references therein.
Thereafter, the authors in [31] studied a problem involving critical and subcritical nonlinearities (see also [37]). This motivated us to study the related problem \((P_{\varepsilon})\) in the hyperbolic space. The subcritical analysis ([28, 29, 33]) for \(\varepsilon=0\) cannot be applied directly, given the loss of compactness along two profiles, one of which is caused by the presence of the critical exponent. Moreover, the other profile, which arises from the hyperbolic translations, can be attributed to the following limiting problems
\[-\Delta_{\mathbb{B}^{N}}u-\lambda u=|u|^{p-2}u\ \mbox{in}\ \mathbb{B}^{N},\ u\in H^{1}\left(\mathbb{B}^{N}\right)\tag{$P_{\infty}$}\]
and
\[-\Delta_{\mathbb{B}^{N}}u-\lambda u=|u|^{p-2}u+\varepsilon|u|^{2^{*}-2}u\ \mbox{in}\ \mathbb{B}^{N},\ \ u\in H^{1}\left(\mathbb{B}^{N}\right).\tag{$P_{\varepsilon,\infty}$}\]
Since the PS decomposition for the problems of the type \((P_{\varepsilon,\infty})\) is yet to be discovered, we can no longer employ the standard tools and techniques.
### Methodologies and strategy
The Nehari set for a functional \(J\) defined on a function space \(X\) is defined as
\[\mathsf{N}=\{u\in X,\;J^{\prime}(u)[u]=0\}.\]
It is easy to show that this set is a manifold for a large class of functionals associated with elliptic problems such as \((P_{\varepsilon})\), \((P_{\varepsilon,\infty})\), \((P_{\infty})\). Additionally, it can also be proven that the functionals are bounded below on this Nehari manifold. Suppose \(E_{\infty}\) and \(\mathcal{N}_{\infty}\) are the functional and Nehari manifold, respectively, associated with the problem \((P_{\infty})\). Then the minimization problem (see [33])
\[m=\inf_{\mathcal{N}_{\infty}}E_{\infty}\]
has a solution, and \(m\) is achieved by some \(w\in\mathcal{N}_{\infty}\), thus solving \((P_{\infty})\). To be precise, authors in [33] established that in the subcritical case, and for \(p>2\), if \(N=2\) and \(2<p<2^{\star}\) if \(N\geq 3\), the problem \((P_{\infty})\) has a positive solution if and only if \(\lambda<\frac{(N-1)^{2}}{4}.\) These positive solutions are also shown to be unique up to hyperbolic isometries, except possibly for \(N=2\) and \(\lambda>\frac{2(p+1)}{(p+3)^{2}}.\)
The above discussion and the hypotheses \((\mathbf{A}_{1})\) and \((\mathbf{A}_{2})\) indicate that the corresponding limiting problems will play a vital role in studying \((P_{\varepsilon})\). We shall recall from [11] that the solutions to the following problem can be attributed to the loss of compactness due to the critical exponent problem
\[-\Delta V=|V|^{2^{\star}-2}V,\quad V\in D^{1,2}\left(\mathbb{R}^{N}\right).\tag{$CP_{\infty}$}\]
We know that \((CP_{\infty})\) and \((P_{\infty})\) have been thoroughly and extensively studied in [14, 33], respectively. However, to our knowledge, \((P_{\varepsilon,\infty})\) is yet to be explored. So we first establish the existence of its solutions, particularly a ground-state solution (see Section 3), using standard variational methods. For this, we define the functional \(E_{\varepsilon,\infty}:H^{1}\left(\mathbb{B}^{N}\right)\to\mathbb{R}\) corresponding to \((P_{\varepsilon,\infty})\) as
\[E_{\varepsilon,\infty}(u)=\frac{1}{2}\int_{\mathbb{B}^{N}}\left(|\nabla_{ \mathbb{B}^{N}}u|^{2}-\lambda u^{2}\right)\;\mathrm{d}V_{\mathbb{B}^{N}}- \frac{1}{p}\int_{\mathbb{B}^{N}}|u|^{p}\;\mathrm{d}V_{\mathbb{B}^{N}}-\frac{ \varepsilon}{2^{\star}}\int_{\mathbb{B}^{N}}|u|^{2^{\star}}\;\mathrm{d}V_{ \mathbb{B}^{N}},\]
and \(\mathcal{N}_{\varepsilon,\infty}\) denotes the associated Nehari manifold. We address the following minimization problem
\[m_{\varepsilon}:=\inf_{N_{\varepsilon,\infty}}E_{\varepsilon,\infty},\]
and exploit the radial symmetry of \((P_{\varepsilon,\infty})\), then make use of the compactness of the embedding, and finally establish the existence of a solution using the Ekeland variational principle. The solution thus obtained by solving such a minimization problem is referred to as a _ground-state solution_. Then we move on to search for solutions to our main problem \((P_{\varepsilon})\). Since the uniqueness and the decay estimates of the solutions of the problem at infinity \((P_{\varepsilon,\infty})\) are still unknown, we cannot use the techniques of [4, 28, 29] to obtain solutions of \((P_{\varepsilon})\). However, under the hypothesis \((\mathbf{A}_{1})\), we are able to recover compactness below a certain energy level. In restoring compactness, we would like to test scaled solutions of \((CP_{\infty})\) against \(E_{\varepsilon,\infty}\), but this is not directly feasible since there is no scaling in the hyperbolic space. Thus, to perform this blow-up analysis, we conformally transform the problem \((P_{\varepsilon,\infty})\) to \(\mathbb{R}^{N}\) (see Section 2). We then obtain a series of estimates for the required integrals in \(\mathbb{R}^{N}\), which restore compactness below the level \(m_{\varepsilon}\) (Proposition 3.4). Additionally, we point out that these estimates are valid for \(N\geq 4\)
and \(\frac{N(N-2)}{4}<\lambda<\frac{(N-1)^{2}}{4}\). Further, note that we obtained the following two estimates on \(m_{\varepsilon}\)
\[m_{\varepsilon}<\frac{1}{N}S^{N/2}\left(\frac{1}{\varepsilon}\right)^{\frac{N-2 }{2}}\text{ for }\varepsilon\text{ small}\]
where \(S\) is the best Sobolev constant that occurs in the Sobolev inequality in \(\mathbb{R}^{N}\), and
\[\lim_{\varepsilon\to 0}m_{\varepsilon}=m.\]
Also, in (3.1), we deduce that \(m_{\epsilon}\leq m\). Therefore, in a way, we can say that we retrieved the compactness (for small \(\varepsilon\)) below the level where we have managed to avoid all the anticipated _bubbles_. Finally, with the help of the solution of \(\left(P_{\varepsilon,\infty}\right)\) as determined in Theorem 1.1 and the restored compactness, we find the ground state solution of \(\left(P_{\varepsilon}\right)\) (Theorem 1.2) under the hypothesis \(\left(\mathbf{A}_{1}\right)\). After that, we look for positive solutions of \(\left(P_{\varepsilon}\right)\) assuming \(\left(\mathbf{A}_{2}\right)\). However, we prove the non-existence of ground state solution in this case (Proposition 4.1). Thus we look for high-energy solutions by assuming a decay estimate on \(a(x)\) (Theorem 1.3). We call such a solution as a _high energy-bound-state solution_ because it has an energy level above the level of the ground state and is located in an interval. The extra assumption on \(a(x)\) helps regain the compactness locally (Proposition 4.3) using the problem studied in our previous work [28], i.e., \(\left(P_{\varepsilon}\right)\) when \(\varepsilon=0.\) For this, the crucial step is to define an appropriate barycentric map ([7, 12]) to prove some auxiliary lemmas for the existence of bound state high-energy solutions. The usual barycentric map in \(\mathbb{R}^{N}\) enjoys a nice property under the action of translation. However, the highly non-linear nature of hyperbolic translation makes it difficult to achieve such a characteristic in our context. We conclude this article by proving the existence of bound state high-energy solutions by delicately applying energy estimates, barycentric maps and topological degree arguments.
### Main results
Now we shall discuss and state our main results in this article. We shall prove the existence of solutions under the assumptions \(\left(\mathbf{A}_{1}\right),\left(\mathbf{A}_{2}\right)\) and \(\left(\mathbf{A}_{3}\right).\) First, we start with the simplest case, i.e., when \(a(x)\) satisfies \(\left(\mathbf{A}_{3}\right).\)
**Theorem 1.1**.: _Let \(a(x)\) satisfies \(\left(\mathbf{A}_{3}\right),\) i.e., \(a(x)\equiv 1\) for all \(x\in\mathbb{B}^{N}.\) Then there exists \(\varepsilon_{0}>0\) such that for any \(\varepsilon\in\left(0,\varepsilon_{0}\right)\) the problem \(\left(P_{\varepsilon,\infty}\right)\) has a positive radially symmetric ground-state solution \(w_{\varepsilon}\)._
Then, assuming \(a(x)\geq 1,\) we prove the following result concerning the existence of a ground-state solution:
**Theorem 1.2**.: _Let \(a(x)\in C\left(\mathbb{B}^{N}\right)\cap L^{\infty}(\mathbb{B}^{N})\) satisfies \(\left(\mathbf{A}_{1}\right).\) Assume \(N\geq 4\) and \(\frac{N(N-2)}{4}<\lambda<\frac{(N-1)^{2}}{4}.\) Then there exists \(\varepsilon^{\prime}>0\) such that the problem \(\left(P_{\varepsilon}\right)\) has a ground-state solution for every \(\varepsilon\in\left(0,\varepsilon^{\prime}\right)\)._
An obvious question now arises: does a ground-state solution exist for the problem \(\left(P_{\varepsilon}\right)\) under the assumption \(\left(\mathbf{A}_{2}\right)\)? In fact, in Proposition 4.1, we prove the non-existence of ground-state solutions for \(\left(P_{\varepsilon}\right)\). Nevertheless, when \(a(x)\leq 1,\) a bound-state solution exists, namely a high-energy solution in the spirit of Bahri-Li. We shall borrow the ideas of Bahri-Li from their seminal paper [4] to establish the existence of a bound-state solution. We shall describe later the many nontrivial difficulties that arise in the hyperbolic space, compared to the Euclidean case, in obtaining such solutions. In particular, we prove the following theorem:
**Theorem 1.3**.: _Let \(a(x)\in C\left(\mathbb{B}^{N}\right)\) satisfies \((\mathbf{A}_{2})\,.\) In addition, assume that \(a(x)\) also satisfies_
\[a(x)\geqslant 1-\mathrm{C}\exp(-\delta\,d(x,0))\quad\forall x\in\mathbb{B}^{N}, \tag{1.3}\]
_for some positive constants \(C\) and \(\delta.\) Then there exists \(\widehat{\varepsilon}>0\) such that for any \(0<\varepsilon<\widehat{\varepsilon}\) the problem \((P_{\varepsilon})\) has at least one positive solution, that is a high energy-bound-state solution._
The paper is organized as follows: In Section 2, we introduce some of the notations, geometric definitions, and preliminaries concerning the hyperbolic space and derive a conformal equivalent problem on the Euclidean ball. Section 3 begins with the proof of Theorem 1.1 and further establishes auxiliary propositions, which include an upper estimate on \(m_{\varepsilon}\), Palais-Smale decomposition and finally completes the proof of Theorem 1.2. Finally, Section 4 is devoted to the proof of Theorem 1.3.
## 2. Notations and Functional Analytic Preliminaries
In this section, we will introduce some of the notations and definitions used in this paper and also recall some of the embeddings related to the Sobolev space on the hyperbolic space.
We will denote by \(\mathbb{B}^{N}\) the disc model of the hyperbolic space, i.e., the unit disc equipped with the Riemannian metric \(g_{\mathbb{B}^{N}}:=\sum\limits_{i=1}^{N}\left(\frac{2}{1-|x|^{2}}\right)^{2} \,\mathrm{d}x_{i}^{2}\). The Euclidean unit ball \(B(0,1):=\{x\in\mathbb{R}^{N}:|x|<1\}\) equipped with the Riemannian metric
\[\mathrm{d}s^{2}=\left(\frac{2}{1-|x|^{2}}\right)^{2}\,\mathrm{d}x^{2}\]
constitute the ball model for the hyperbolic \(N\)-space, where \(\mathrm{d}x\) is the standard Euclidean metric and \(|x|=\left(\sum_{i=1}^{N}x_{i}^{2}\right)^{1/2}\) is the standard Euclidean length. To simplify our notations, we will denote \(g_{\mathbb{B}^{N}}\) by \(g\). The corresponding volume element is given by \(\,\mathrm{d}V_{\mathbb{B}^{N}}=\left(\frac{2}{1-|x|^{2}}\right)^{N}\!\mathrm{ d}x,\) where \(\mathrm{d}x\) denotes the Lebesgue measure on \(\mathbb{R}^{N}.\)
### Hyperbolic distance on \(\mathbb{B}^{N}\)
The hyperbolic distance between two points \(x\) and \(y\) in \(\mathbb{B}^{N}\) will be denoted by \(d(x,y).\) For the hyperbolic distance between \(x\) and the origin we write
\[\rho:=\,d(x,0)=\int_{0}^{r}\frac{2}{1-s^{2}}\,\mathrm{d}s\,=\,\log\frac{1+r}{ 1-r},\]
where \(r=|x|\), which in turn implies that \(r=\tanh\frac{\rho}{2}.\) Moreover, the hyperbolic distance between \(x,y\in\mathbb{B}^{N}\) is given by
\[d(x,y)=\cosh^{-1}\left(1+\frac{2|x-y|^{2}}{(1-|x|^{2})(1-|y|^{2})}\right).\]
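As a quick consistency check (recorded here for the reader's convenience), taking \(y=0\) in this formula gives
\[\cosh d(x,0)=1+\frac{2|x|^{2}}{1-|x|^{2}}=\frac{1+|x|^{2}}{1-|x|^{2}}=\cosh\left(\log\frac{1+|x|}{1-|x|}\right),\]
in agreement with the expression for \(\rho=d(x,0)\) written above.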
It easily follows that a subset \(S\) of \(\mathbb{B}^{N}\) is a hyperbolic sphere in \(\mathbb{B}^{N}\) if and only if \(S\) is a Euclidean sphere in \(\mathbb{R}^{N}\) and contained in \(\mathbb{B}^{N},\) possibly with a different centre and a different radius, which can be computed. Geodesic balls in \(\mathbb{B}^{N}\) of radius \(r\) centred at \(y\in\mathbb{B}^{N}\) will be denoted by
\[B_{r}(y):=\{x\in\mathbb{B}^{N}:d(x,y)<r\}.\]
We also need some information on the isometries of \(\mathbb{B}^{N}\). Below we recall the definition of a particular type of isometry, namely the hyperbolic translation. For more details on the isometry group of \(\mathbb{B}^{N}\), we refer to [43].
**Hyperbolic translation.** For \(b\in\mathbb{B}^{N},\) define
\[\tau_{b}(x)=\frac{(1-|b|^{2})x+(|x|^{2}+2x.b+1)b}{|b|^{2}|x|^{2}+2x.b+1}, \tag{2.1}\]
then \(\tau_{b}\) is an isometry of \(\mathbb{B}^{N}\) with \(\tau_{b}(0)=b.\) The map \(\tau_{b}\) is called the hyperbolic translation of \(\mathbb{B}^{N}\) by \(b.\) It can also be seen that \(\tau_{-b}=\tau_{b}^{-1}.\)
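For instance, the identity \(\tau_{b}(0)=b\) can be read off directly from (2.1) (a one-line verification added here for convenience):
\[\tau_{b}(0)=\frac{(1-|b|^{2})\cdot 0+(0+0+1)\,b}{|b|^{2}\cdot 0+0+1}=b.\]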
The hyperbolic gradient \(\nabla_{\mathbb{B}^{N}}\) and the hyperbolic Laplacian \(\Delta_{\mathbb{B}^{N}}\) are given by
\[\nabla_{\mathbb{B}^{N}}=\left(\frac{1-|x|^{2}}{2}\right)^{2}\nabla,\quad\Delta _{\mathbb{B}^{N}}=\left(\frac{1-|x|^{2}}{2}\right)^{2}\Delta+(N-2)\frac{1-|x| ^{2}}{2}\,\langle x,\nabla\,\rangle.\]
**Sobolev Space:** We will denote by \(H^{1}(\mathbb{B}^{N})\) the Sobolev space on the disc model of the hyperbolic space \(\mathbb{B}^{N},\) equipped with norm \(\|u\|=\left(\int_{\mathbb{B}^{N}}|\nabla_{\mathbb{B}^{N}}u|^{2}\right)^{\frac{1}{2}},\) where \(|\nabla_{\mathbb{B}^{N}}u|\) is given by \(|\nabla_{\mathbb{B}^{N}}u|:=\langle\nabla_{\mathbb{B}^{N}}u,\nabla_{\mathbb{B}^{N}}u\rangle^{\frac{1}{2}}_{\mathbb{B}^{N}}.\)
**A sharp Poincare-Sobolev inequality** (see [33]). For \(N\geq 3\) and every \(p\in\left(1,\frac{N+2}{N-2}\right]\) there exists an optimal constant \(S_{N,p}>0\) such that
\[S_{N,p}\left(\int_{\mathbb{B}^{N}}|u|^{p+1}\ \mathrm{d}V_{\mathbb{B}^{N}}\right)^{ \frac{2}{p+1}}\leq\int_{\mathbb{B}^{N}}\left[|\nabla_{\mathbb{B}^{N}}u|^{2}- \frac{(N-1)^{2}}{4}u^{2}\right]\ \mathrm{d}V_{\mathbb{B}^{N}},\]
for every \(u\in C_{0}^{\infty}(\mathbb{B}^{N}).\) If \(N=2,\) then any \(p>1\) is allowed.
A basic fact is that the bottom of the spectrum of \(-\Delta_{\mathbb{B}^{N}}\) on \(\mathbb{B}^{N}\) is
\[\frac{(N-1)^{2}}{4}=\inf_{u\in H^{1}(\mathbb{B}^{N})\setminus\{0\}}\frac{ \int_{\mathbb{B}^{N}}|\nabla_{\mathbb{B}^{N}}u|^{2}\ \mathrm{d}V_{\mathbb{B}^{N}}}{\int_{\mathbb{B}^{N}}|u|^{2}\ \mathrm{d}V_{\mathbb{B}^{N}}}. \tag{2.2}\]
**Remark 2.1**.: A consequence of (2.2) is that if \(\lambda<\frac{(N-1)^{2}}{4},\) then
\[\|u\|_{H_{\lambda}}:=\|u\|_{\lambda}:=\left[\int_{\mathbb{B}^{N}}\left(|\nabla _{\mathbb{B}^{N}}u|^{2}-\lambda\,u^{2}\right)\ \mathrm{d}V_{\mathbb{B}^{N}}\right]^{\frac{1}{2}},\]
is a norm, equivalent to the \(H^{1}(\mathbb{B}^{N})\) norm and the corresponding inner product is given by \(\langle u,v\rangle_{H_{\lambda}}.\)
Also, throughout the article, we use \(|\cdot|_{p}\) to denote the \(L^{p}(\mathbb{B}^{N})\) norm.
**Remark 2.2**.: It is interesting to note that there exists \(c>0\) independent of small \(\varepsilon\) such that
\[\|u\|_{\lambda}\geq c\quad\forall u\in\mathcal{N}_{\varepsilon,\infty}. \tag{2.3}\]
Indeed using the Poincare-Sobolev inequality on the hyperbolic space, we have
\[0=\|u\|_{\lambda}^{2}-|u|_{p}^{p}-\varepsilon|u|_{2^{*}}^{2^{*}}\geq\|u\|_{\lambda}^{2}-c_{1}\|u\|_{\lambda}^{p}-c_{1}\varepsilon\|u\|_{\lambda}^{2^{*}},\quad\forall u\in\mathcal{N}_{\varepsilon,\infty}.\]
Dividing by \(\|u\|_{\lambda}^{2}>0\) gives \(1\leq c_{1}\|u\|_{\lambda}^{p-2}+c_{1}\varepsilon\|u\|_{\lambda}^{2^{*}-2}\); since \(p>2\) and \(2^{*}>2\), this forces \(\|u\|_{\lambda}\) to stay bounded away from zero, uniformly for \(\varepsilon\) small.
**Conformal change of metric.** We want to conformally transform the problem \((P_{\varepsilon,\infty})\) to the Euclidean space. For that, define \(P_{1,\mathbb{B}^{N}}:=-\Delta_{\mathbb{B}^{N}}+\frac{(N-2)}{4(N-1)}S_{\mathbb{B}^{N}}=-\Delta_{\mathbb{B}^{N}}-\frac{N(N-2)}{4}\), the first-order conformally invariant Laplacian operator, where \(S_{\mathbb{B}^{N}}:=-N(N-1)\) is the scalar curvature of \(\mathbb{B}^{N}\). Therefore, for a conformal change of metric \(\tilde{g}=e^{2\psi}g_{\mathbb{B}^{N}},\) we have \(P_{1,\tilde{g}}(u)=e^{-\left(\frac{N}{2}+1\right)\psi}P_{1,\mathbb{B}^{N}}\left(e^{\left(\frac{N}{2}-1\right)\psi}u\right)\) for every smooth function \(u\). Since the Poincare metric is conformal to the Euclidean metric, with \(\psi(x)=\ln\left(\frac{1-|x|^{2}}{2}\right)\), we can transform \((P_{\varepsilon,\infty})\) to the Euclidean space as follows. Let \(H_{0}^{1}\left(\mathbb{B}^{N}\right)\) be the Sobolev space on \(\mathbb{B}^{N}\) characterized by zero traces on the boundary \(\partial\mathbb{B}^{N}\), where \(\mathbb{B}^{N}\) is the open Euclidean ball with centre at the origin and unit radius. Suppose \(u\) is a solution to \((P_{\varepsilon,\infty})\). Set \(\varphi:=\left(\frac{2}{1-|x|^{2}}\right)^{\frac{N-2}{2}}\); then \(v:=\varphi u\) solves
\[-\Delta v-b_{\lambda}(x)v=c_{p}(x)v^{p-1}+\varepsilon v^{2^{\star}-1},\quad v \in H_{0}^{1}\left(\mathbb{B}^{N}\right), \tag{2.4}\]
where \(b_{\lambda}(x)=\frac{4\lambda-N(N-2)}{\left(1-|x|^{2}\right)^{2}},\)\(c_{p}(x)=\left(\frac{2}{1-|x|^{2}}\right)^{N-\left(\frac{N-2}{2}\right)p}\). Observe that \(b_{\lambda}(x)>0\) in \(\mathbb{B}^{N}\) whenever \(\lambda>\frac{N(N-2)}{4},\) and \(N-\left(\frac{N-2}{2}\right)p>0\) for \(p<2^{\star}.\)
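Note that for the critical exponent the weight \(c_{p}\) disappears; indeed, a short computation (recorded here for the reader) gives
\[N-\left(\frac{N-2}{2}\right)2^{\star}=N-\frac{N-2}{2}\cdot\frac{2N}{N-2}=0,\qquad\text{so that}\qquad c_{2^{\star}}(x)\equiv 1,\]
which is consistent with the critical term in (2.4) carrying the constant coefficient \(\varepsilon\).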
## 3. Existence of a ground-state solution
This section is devoted to the existence of solutions of \(\left(P_{\varepsilon}\right)\) when the potential \(a(x)\) satisfies \(\left(\mathbf{A}_{1}\right)\) or \(\left(\mathbf{A}_{3}\right).\) We begin with the simplest case, when \(a(x)\equiv 1.\) The proof is a straightforward adaptation of standard variational arguments ([31]) to the hyperbolic setting, restoring compactness for (hyperbolic) radial functions in the subcritical case. Let us recall a Strauss-type lemma in the hyperbolic space (see [11, Theorem 3.1]):
Let \(H_{r}^{1}(\mathbb{B}^{N})\) denotes the subspace,
\[H_{r}^{1}(\mathbb{B}^{N}):=\{u\in H^{1}(\mathbb{B}^{N}):\text{ $u$ is radial}\}.\]
**Lemma 3.1** ([11]).: _The embedding \(H_{r}^{1}(\mathbb{B}^{N})\hookrightarrow L^{p}(\mathbb{B}^{N})\) for \(2<p<2^{\star}\) is compact._
**Proof of Theorem 1.1.** First, we obtain that \(m_{\varepsilon}\leq m\)\(\forall\varepsilon>0\) as follows: let \(\tau_{\varepsilon}>0\) be such that \(\tau_{\varepsilon}w\in\mathcal{N}_{\varepsilon,\infty},\) then
\[m_{\varepsilon}\leq E_{\varepsilon,\infty}\left(\tau_{\varepsilon}w\right) \leq E_{\infty}\left(\tau_{\varepsilon}w\right)\leq E_{\infty}(w)=m, \tag{3.1}\]
where the first and second inequalities follow from the definitions of \(m_{\varepsilon},E_{\varepsilon,\infty},E_{\infty},\) and the last inequality holds because \(w\in\mathcal{N}_{\infty}\), so that the map \(t\mapsto E_{\infty}(tw)\) attains its maximum at \(t=1\); here \(w\) denotes the unique (up to hyperbolic translations) positive radial solution of \(\left(P_{\infty}\right)\).
We confine our analysis to the following sets in order to solve the minimization problem for \(m_{\varepsilon}\)
\[H_{r}^{1}\left(\mathbb{B}^{N}\right),\text{ \ }\mathcal{N}_{r}^{\varepsilon}= \mathcal{N}_{\varepsilon,\infty}\cap H_{r}^{1}\left(\mathbb{B}^{N}\right).\]
Let \(\left\{u_{n}^{\varepsilon}\right\}_{n}\) in \(\mathcal{N}_{r}^{\varepsilon}\) be a minimizing sequence, i.e.,
\[\left\|u_{n}^{\varepsilon}\right\|_{\lambda}^{2}=\left|u_{n}^{\varepsilon} \right|_{p}^{p}+\varepsilon\left|u_{n}^{\varepsilon}\right|_{2^{\star}}^{2^{ \star}}, \tag{3.2}\]
\[E_{\varepsilon,\infty}\left(u_{n}^{\varepsilon}\right)=\left(\frac{1}{2}-\frac {1}{p}\right)\left\|u_{n}^{\varepsilon}\right\|_{\lambda}^{2}+\left(\frac{1}{p }-\frac{1}{2^{\star}}\right)\varepsilon\left|u_{n}^{\varepsilon}\right|_{2^{ \star}}^{2^{\star}}=m_{\varepsilon}+o(1). \tag{3.3}\]
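Here (3.3) follows from (3.2) by eliminating \(|u_{n}^{\varepsilon}|_{p}^{p}\); for completeness, for any \(u\in\mathcal{N}_{\varepsilon,\infty}\),
\[E_{\varepsilon,\infty}(u)=\frac{1}{2}\|u\|_{\lambda}^{2}-\frac{1}{p}\left(\|u\|_{\lambda}^{2}-\varepsilon|u|_{2^{\star}}^{2^{\star}}\right)-\frac{\varepsilon}{2^{\star}}|u|_{2^{\star}}^{2^{\star}}=\left(\frac{1}{2}-\frac{1}{p}\right)\|u\|_{\lambda}^{2}+\left(\frac{1}{p}-\frac{1}{2^{\star}}\right)\varepsilon|u|_{2^{\star}}^{2^{\star}}.\]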
Using the inequalities (3.1) and (3.3), we get
\[\left\|u_{n}^{\varepsilon}\right\|_{\lambda}^{2}\leq\left(\frac{1}{2}-\frac {1}{p}\right)^{-1}m_{\varepsilon}+o(1)\leq\left(\frac{1}{2}-\frac{1}{p}\right)^ {-1}m+o(1). \tag{3.4}\]
Notice that from (2.3), (3.2), (3.4) and the Sobolev embedding theorem, we get the existence of \(\varepsilon_{0}>0\) such that, for all \(n\in\mathbb{N}\),
\[|u_{n}^{\varepsilon}|_{p}^{p}\geq\|u_{n}^{\varepsilon}\|_{\lambda}^{2}-c\, \varepsilon\,\|u_{n}^{\varepsilon}\|_{\lambda}^{2^{*}}\geq\text{ constant }>0\quad\forall\varepsilon\in\left(0,\varepsilon_{0}\right). \tag{3.5}\]
Further, Lemma 3.1 implies that \(H_{r}^{1}\left(\mathbb{B}^{N}\right)\) embeds compactly in \(L^{p}(\mathbb{B}^{N})\). Thus there exists \(w_{\varepsilon}\in H_{r}^{1}(\mathbb{B}^{N})\) such that, up to a subsequence,
\[u_{n}^{\varepsilon}\stackrel{{ n\to\infty}}{{\longrightarrow}}w_{ \varepsilon}\left\{\begin{array}{l}\text{ strongly in }L^{p}\left(\mathbb{B}^{N}\right)\text{ for }2<p<2^{*}\\ \text{ weakly in }H^{1}\left(\mathbb{B}^{N}\right)\text{ and in }L^{2^{*}}\left(\mathbb{B}^{N}\right). \end{array}\right. \tag{3.6}\]
Moreover, \(w_{\varepsilon}\not\equiv 0\) follows from (3.5). Ekeland's variational principle (see section 3 of [27]) allows choosing the minimizing sequence \(\left\{u_{n}^{\varepsilon}\right\}_{n}\) in \(\mathcal{N}_{r}^{\varepsilon}\) such that
\[E_{\varepsilon,\infty}^{\prime}\left(u_{n}^{\varepsilon}\right)\left[v \right]=\lambda_{n}G^{\prime}\left(u_{n}^{\varepsilon}\right)\left[v\right]+o (1)\|v\|\quad\forall v\in H_{r}^{1}\left(\mathbb{B}^{N}\right). \tag{3.7}\]
where, for all \(n\in\mathbb{N},\lambda_{n}\in\mathbb{R}\) is the Lagrange multiplier and \(G(u)=E_{\varepsilon,\infty}^{\prime}(u)[u]\). Thus \(G\left(u_{n}^{\varepsilon}\right)=0\) for all \(n\in\mathbb{N}\), and (3.7) implies
\[0=G\left(u_{n}^{\varepsilon}\right)=E_{\varepsilon,\infty}^{\prime}\left(u_{n }^{\varepsilon}\right)\left[u_{n}^{\varepsilon}\right]=\lambda_{n}G^{\prime} \left(u_{n}^{\varepsilon}\right)\left[u_{n}^{\varepsilon}\right]+o(1)\left\| u_{n}^{\varepsilon}\right\|_{\lambda}.\]
Hence, we get \(\lambda_{n}=o(1)\), using the fact that \(\left\|u_{n}^{\varepsilon}\right\|_{\lambda}\) is bounded and \(G^{\prime}\left(u_{n}^{\varepsilon}\right)\left[u_{n}^{\varepsilon}\right]\leq c<0\) on \(\mathcal{N}_{r}^{\varepsilon}.\) Choosing \(v=w_{\varepsilon}\) in (3.7) and using (3.6), we can deduce \(w_{\varepsilon}\in\mathcal{N}_{\varepsilon,\infty}.\)
Utilising (3.6) once more, together with the weak lower semicontinuity of the norm, we obtain
\[m_{\varepsilon}\leq E_{\varepsilon,\infty}\left(w_{\varepsilon}\right)\leq \liminf_{n\to\infty}\left[\left(\frac{1}{2}-\frac{1}{2^{*}}\right)\|u_{n}^{ \varepsilon}\|_{\lambda}^{2}-\left(\frac{1}{p}-\frac{1}{2^{*}}\right)\left|u_ {n}^{\varepsilon}\right|_{p}^{p}\right]=m_{\varepsilon}.\]
This means that \(w_{\varepsilon}\) is the minimizing function we are seeking. Consequently, by the principle of symmetric criticality, \(w_{\varepsilon}\) solves
\[-\Delta_{\mathbb{B}^{N}}u-\lambda u=|u|^{p-2}u+\varepsilon|u|^{2^{*}-2}u\quad \text{ in }\mathbb{B}^{N}.\]
Furthermore, as a result of the maximum principle, \(w_{\varepsilon}\) is strictly positive.
### Blow-up argument
In this section, we shall find a ground-state solution to our main problem \((P_{\varepsilon})\) under the assumption \(\left(\mathbf{A}_{1}\right).\) We prove the existence of a least-energy positive solution \(\tilde{u}\) of \((P_{\varepsilon})\), i.e., \(\tilde{u}\in\mathcal{N}_{\varepsilon}\) satisfying \(E_{\varepsilon}(\tilde{u})=\inf_{\mathcal{N}_{\varepsilon}}E_{\varepsilon}\), where \(\mathcal{N}_{\varepsilon}\) denotes the Nehari manifold corresponding to \((P_{\varepsilon})\). The following auxiliary results are required to prove Theorem 1.2. We begin with the blow-up argument, which controls \(m_{\varepsilon}\).
**Proposition 3.2**.: _The estimate mentioned below holds:_
\[m_{\varepsilon}\leq\frac{1}{N}S^{N/2}\left(\frac{1}{\varepsilon}\right)^{ \frac{N-2}{2}}\quad\forall\varepsilon>0 \tag{3.8}\]
_where \(S\) is the best Sobolev constant that occurs in the Sobolev inequality in \(\mathbb{R}^{N}\)._
Proof.: The energy functional corresponding to (2.4) is given by
\[J_{\varepsilon}(v) =\frac{1}{2}\int_{\mathbb{B}^{N}}\left[|\nabla v|^{2}-\left( \lambda-\frac{N(N-2)}{4}\right)\left(\frac{2}{1-|x|^{2}}\right)^{2}v^{2} \right]\mathrm{d}x \tag{3.9}\] \[-\frac{1}{p}\int_{\mathbb{B}^{N}}\left(\frac{2}{1-|x|^{2}}\right) ^{N-\left(\frac{N-2}{2}\right)p}|v|^{p}\mathrm{d}x-\frac{\varepsilon}{2^{*}} \int_{\mathbb{B}^{N}}|v|^{2^{*}}\mathrm{d}x. \tag{3.10}\]
Then for any \(u\in H^{1}\left(\mathbb{B}^{N}\right)\), if \(\tilde{u}\) is defined as \(\tilde{u}(x)=\left(\frac{2}{1-|x|^{2}}\right)^{\frac{N-2}{2}}u(x)\), then \(E_{\varepsilon,\infty}(u)=J_{\varepsilon}(\tilde{u})\). Moreover, \(\langle E_{\varepsilon,\infty}^{\prime}(u),v\rangle=\langle J_{\varepsilon}^{\prime}(\tilde{u}),\tilde{v}\rangle\), where \(\tilde{v}\) is defined in the same way as \(\tilde{u}\).
Consequently, we obtain
\[m_{\varepsilon}:=\inf_{\mathcal{N}_{\varepsilon,\infty}}E_{\varepsilon,\infty }=\inf_{\mathcal{N}_{\tilde{J}}^{\varepsilon}}J_{\varepsilon}\]
where \(\mathcal{N}_{\tilde{J}}^{\varepsilon}\) denotes the Nehari manifold associated with the functional \(J_{\varepsilon}\).
Now to prove (3.8), we exhibit a sequence \(v_{n}\) in \(\mathcal{N}_{\tilde{J}}^{\varepsilon}\) such that \(J_{\varepsilon}\left(v_{n}\right)\) converges to \(\frac{1}{N}S^{N/2}\left(\frac{1}{\varepsilon}\right)^{\frac{N-2}{2}}.\) Before moving further, observe that the value \(\frac{1}{N}S^{N/2}\left(\frac{1}{\varepsilon}\right)^{\frac{N-2}{2}}\) can be characterised as
\[\begin{split}\frac{1}{N}S^{N/2}\left(\frac{1}{\varepsilon} \right)^{\frac{N-2}{2}}=\min&\left\{\frac{1}{2}\int_{\mathbb{R} ^{N}}|\nabla u|^{2}\mathrm{d}x-\frac{\varepsilon}{2^{*}}\int_{\mathbb{R}^{N}} |u|^{2^{*}}\mathrm{d}x:u\in\mathcal{D}^{1,2}\left(\mathbb{R}^{N}\right), \right.\\ &\left.\int_{\mathbb{R}^{N}}|\nabla u|^{2}\mathrm{d}x=\varepsilon \int_{\mathbb{R}^{N}}|u|^{2^{*}}\mathrm{d}x\right\}.\end{split} \tag{3.11}\]
We define the following sequence by scaling the Aubin-Talenti bubble:
\[U_{n}(x):=n^{\frac{N-2}{2}}U(nx)=(N(N-2))^{\frac{N-2}{4}}\frac{n^{\frac{N-2}{ 2}}}{(1+|nx|^{2})^{\frac{N-2}{2}}},\quad x\in\mathbb{R}^{N}\]
where \(U\in\mathcal{D}^{1,2}\left(\mathbb{R}^{N}\right)\) is a fixed radial function that realizes the minimum in (3.11).
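For completeness, note that this scaling leaves both quantities in (3.11) unchanged: by the change of variables \(y=nx\),
\[\int_{\mathbb{R}^{N}}|\nabla U_{n}|^{2}\,\mathrm{d}x=n^{N}\int_{\mathbb{R}^{N}}|\nabla U(nx)|^{2}\,\mathrm{d}x=\int_{\mathbb{R}^{N}}|\nabla U|^{2}\,\mathrm{d}y,\qquad\int_{\mathbb{R}^{N}}U_{n}^{2^{*}}\,\mathrm{d}x=n^{N}\int_{\mathbb{R}^{N}}U(nx)^{2^{*}}\,\mathrm{d}x=\int_{\mathbb{R}^{N}}U^{2^{*}}\,\mathrm{d}y,\]
since \(2\cdot\frac{N-2}{2}+2=N\) and \(2^{*}\cdot\frac{N-2}{2}=N\); this invariance is exploited in the estimates below.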
Let \(\eta\in C_{c}^{\infty}\left(\mathrm{B}^{N}\right)\) be a cut-off function such that \(\eta(x)\equiv 1\) in a neighborhood of \(0\). Then consider \(u_{n}\in D^{1,2}(\mathbb{R}^{N})\) to be a sequence of test functions defined by \(u_{n}(x):=\eta(x)U_{n}(x)\) for \(x\in\mathrm{B}^{N}\). Further, define a sequence of functions \(v_{n}:=t_{n}u_{n},n\in\mathbb{N}\), where \(t_{n}\) is such that \(v_{n}\in\mathcal{N}_{\tilde{J}}^{\varepsilon}\), that is
\[\begin{split}&\int_{\mathrm{B}^{N}}\left[|\nabla u_{n}|^{2}- \left(\lambda-\frac{N(N-2)}{4}\right)\left(\frac{2}{1-|x|^{2}}\right)^{2}u_{n }^{2}\right]\mathrm{d}x\\ &=t_{n}^{p-2}\int_{\mathrm{B}^{N}}\left(\frac{2}{1-|x|^{2}}\right) ^{N-\left(\frac{N-2}{2}\right)p}|u_{n}|^{p}\mathrm{d}x+\varepsilon t_{n}^{2^{ *}-2}\int_{\mathrm{B}^{N}}|u_{n}|^{2^{*}}\mathrm{d}x.\end{split} \tag{3.12}\]
Now we further compute each of the above terms separately and let \(n\to\infty\).
\[\int_{\mathrm{B}^{N}}u_{n}^{2^{*}}\mathrm{d}x =\int_{B(0,\frac{1}{2})}u_{n}^{2^{*}}\mathrm{d}x+\int_{\mathrm{B }^{N}\setminus B(0,\frac{1}{2})}u_{n}^{2^{*}}\mathrm{d}x=\int_{B(0,\frac{1}{2} n)}U^{2^{*}}\ \mathrm{d}x+\int_{B(0,\frac{1}{2}n)^{c}}\eta\left(\frac{x}{n}\right)^{2^{*}}U^{2^{*}} \mathrm{d}x\] \[=\int_{\mathbb{R}^{N}}U^{2^{*}}\mathrm{d}x+O\left((1/n)^{N}\right). \tag{3.13}\]
\[\int_{\mathrm{B}^{N}}\left(\frac{1-|x|^{2}}{2}\right)^{\left(\frac {N-2}{2}\right)p-N}u_{n}^{p} \ \mathrm{d}x=\left(\int_{B(0,\frac{1}{2})}+\int_{\mathrm{B}^{N} \setminus B(0,\frac{1}{2})}\right)\left(\frac{2}{1-|x|^{2}}\right)^{N-\left( \frac{N-2}{2}\right)p}u_{n}^{p}\ \mathrm{d}x\] \[=\left(\frac{1}{n}\right)^{N-\left(\frac{N-2}{2}\right)p}\int_{B(0, \frac{1}{2}n)}\left(\frac{2}{1-|\frac{x}{n}|^{2}}\right)^{N-\left(\frac{N-2}{2 }\right)p}U^{p}\mathrm{d}x\] \[+\left(\frac{1}{n}\right)^{N-\left(\frac{N-2}{2}\right)p}\int_{B(0, \frac{1}{2}n)^{c}}\left(\frac{2}{1-|\frac{x}{n}|^{2}}\right)^{N-\left(\frac{N- 2}{2}\right)p}\eta\left(\frac{x}{n}\right)^{p}U^{p}\mathrm{d}x\]
\[=\left(\frac{1}{n}\right)^{N-\left(\frac{N-2}{2}\right)p}\int_{\mathbb{R}^{N}}U^{ p}\mathrm{d}x+O\left(\left(\frac{1}{n}\right)^{\left(\frac{N-2}{2}\right)p} \right). \tag{3.14}\]
Similarly, for the gradient term, we have the following estimate
\[\int_{\mathrm{B}^{N}}|\nabla u_{n}|^{2}\mathrm{d}x =\left(\int_{B(0,\frac{1}{2})}+\int_{\mathrm{B}^{N}\setminus B(0, \frac{1}{2})}\right)|\nabla u_{n}|^{2}\mathrm{d}x\] \[=\int_{B(0,\frac{1}{2}n)}|\nabla U|^{2}\ \mathrm{d}x+O\left(\left(1/n \right)^{N-2}\right)\] \[=\int_{\mathbb{R}^{N}}|\nabla U|^{2}\ \mathrm{d}x+O\left(\left(1/n \right)^{N-2}\right). \tag{3.15}\]
Further, for the lower-order terms, we can obtain the following as \(n\to\infty\)
\[\int_{\mathrm{B}^{N}}\left[\frac{4\lambda-N(N-2)}{\left(1-|x|^{2}\right)^{2}}\right]u_{n}^{2}\ \mathrm{d}x=\left\{\begin{array}{ll}\left(4\lambda-N(N-2)\right)\left(\frac{1}{n}\right)^{2}\left[\int_{\mathbb{R}^{N}}U^{2}\ \mathrm{d}x+o(1)\right]&\quad\text{for}\ \ N\geq 5,\\ \left(4\lambda-N(N-2)\right)\left(\frac{1}{n}\right)^{2}\log(n)\left[\omega_{N-1}+o(1)\right]&\quad\text{for}\ \ N=4.\end{array}\right.\tag{3.16}\]
Hence, taking \(n\to\infty\) in (3.12), applying the estimates (3.13) - (3.16) and using the fact that \(U\) realizes the minimum in (3.11), \(t_{n}\to 1\) follows and so
\[J_{\varepsilon}\left(v_{n}\right)-\left(\frac{1}{2}\int_{\mathbb{R}^{N}}| \nabla U|^{2}\ \mathrm{d}x-\frac{\varepsilon}{2^{*}}\int_{\mathbb{R}^{N}}U^{2^{*}} \mathrm{d}x\right)\overset{n\to\infty}{\longrightarrow}0.\]
Hence (3.8) follows since
\[\frac{1}{2}\int_{\mathbb{R}^{N}}|\nabla U|^{2}\ \mathrm{d}x-\frac{\varepsilon}{2 ^{*}}\int_{\mathbb{R}^{N}}U^{2^{*}}\mathrm{d}x=\frac{1}{N}\left(\frac{1}{ \varepsilon}\right)^{\frac{N-2}{2}}S^{N/2}.\]
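This last identity can be verified directly. Since \(U\) realizes the minimum in (3.11), it satisfies the constraint \(\int_{\mathbb{R}^{N}}|\nabla U|^{2}\,\mathrm{d}x=\varepsilon\int_{\mathbb{R}^{N}}U^{2^{*}}\,\mathrm{d}x\) together with equality in the Sobolev inequality \(\int_{\mathbb{R}^{N}}|\nabla U|^{2}\,\mathrm{d}x=S\left(\int_{\mathbb{R}^{N}}U^{2^{*}}\,\mathrm{d}x\right)^{2/2^{*}}\); combining the two gives \(\int_{\mathbb{R}^{N}}U^{2^{*}}\,\mathrm{d}x=(S/\varepsilon)^{N/2}\), and therefore
\[\frac{1}{2}\int_{\mathbb{R}^{N}}|\nabla U|^{2}\,\mathrm{d}x-\frac{\varepsilon}{2^{*}}\int_{\mathbb{R}^{N}}U^{2^{*}}\,\mathrm{d}x=\left(\frac{1}{2}-\frac{1}{2^{*}}\right)\int_{\mathbb{R}^{N}}|\nabla U|^{2}\,\mathrm{d}x=\frac{1}{N}\,\varepsilon\left(\frac{S}{\varepsilon}\right)^{N/2}=\frac{1}{N}S^{N/2}\left(\frac{1}{\varepsilon}\right)^{\frac{N-2}{2}}.\]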
In fact, it can be deduced that:
\[m_{\varepsilon}<\frac{1}{N}S^{N/2}\left(\frac{1}{\varepsilon}\right)^{\frac{N-2}{2}}\quad\text{for $\varepsilon$ small}. \tag{3.17}\]
For this, observe that by conformally transforming the sequence found in Proposition 3.2 to \(\mathbb{B}^{N}\), we can exhibit a sequence \(\{\hat{u}_{n}\}\) of radial functions in \(\mathcal{N}_{\varepsilon,\infty}\) that converges weakly to \(0\) in \(L^{2^{*}}(\mathbb{B}^{N})\) and such that \(E_{\varepsilon,\infty}\left(\hat{u}_{n}\right)\to\frac{1}{N}\left(\frac{1}{\varepsilon}\right)^{\frac{N-2}{2}}S^{N/2}\). On the other hand, we have already shown in the proof of Theorem 1.1 that, up to a subsequence, any minimising sequence of radial functions weakly converges to a nonzero minimising function of \(E_{\varepsilon,\infty}\) on \(\mathcal{N}_{\varepsilon,\infty}\) for small \(\varepsilon\). Therefore, (3.17) follows.
The following lemma provides another, more accurate estimate of \(m_{\varepsilon}\) for small \(\varepsilon\) and examines its asymptotic behaviour.
**Lemma 3.3**.: _The relation \(m_{\varepsilon}\leq m\) is true for any \(\varepsilon>0\). Moreover, the following holds_
\[\lim_{\varepsilon\to 0}m_{\varepsilon}=m.\]
Proof.: We have already established the inequality \(m_{\varepsilon}\leq m\) in (3.1). Now, for \(\varepsilon\in(0,\varepsilon_{0})\) assume \(w_{\varepsilon}\) be the minimizing function found in Theorem 1.1 and \(t_{\varepsilon}>0\) be such that \(t_{\varepsilon}w_{\varepsilon}\in\mathcal{N}_{\infty}\), i.e.,
\[t_{\varepsilon}=\left(\frac{\|w_{\varepsilon}\|_{\lambda}^{2}}{|w_{ \varepsilon}|_{p}^{p}}\right)^{\frac{1}{p-2}}. \tag{3.18}\]
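For clarity, we note where (3.18) comes from: \(t_{\varepsilon}w_{\varepsilon}\in\mathcal{N}_{\infty}\) means \(E_{\infty}^{\prime}(t_{\varepsilon}w_{\varepsilon})[t_{\varepsilon}w_{\varepsilon}]=0\), that is,
\[t_{\varepsilon}^{2}\|w_{\varepsilon}\|_{\lambda}^{2}-t_{\varepsilon}^{p}|w_{\varepsilon}|_{p}^{p}=0,\]
which yields the expression above; moreover, since \(p>2\), the fiber map \(t\mapsto E_{\infty}(tw_{\varepsilon})\) attains its maximum at \(t=t_{\varepsilon}\).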
Besides, we have the following expression for \(E_{\varepsilon,\infty}\left(w_{\varepsilon}\right)\)
\[E_{\varepsilon,\infty}\left(w_{\varepsilon}\right)=\left(\frac{1}{2}-\frac{1} {p}\right)\|w_{\varepsilon}\|_{\lambda}^{2}+\left(\frac{1}{p}-\frac{1}{2^{*}} \right)\varepsilon\left|w_{\varepsilon}\right|_{2^{*}}^{2^{*}}=m_{ \varepsilon}\leq m\]
implying \(\left\|w_{\varepsilon}\right\|_{\lambda}\) is bounded, uniformly with respect to \(\varepsilon\in(0,\varepsilon_{0})\). Moreover, \(\left|w_{\varepsilon}\right|_{p}^{p}\geq c>0\) holds using (3.5). Consequently, (3.18) suggests that \(t_{\varepsilon}\) is bounded and
\[m \leq E_{\infty}\left(t_{\varepsilon}w_{\varepsilon}\right)=E_{ \varepsilon,\infty}\left(t_{\varepsilon}w_{\varepsilon}\right)+\frac{ \varepsilon}{2^{*}}\int_{\mathbb{B}^{N}}\left(t_{\varepsilon}w_{\varepsilon} \right)^{2^{*}}\ \mathrm{d}V_{\mathbb{B}^{N}}\] \[\leq E_{\varepsilon,\infty}\left(w_{\varepsilon}\right)+\frac{ \varepsilon}{2^{*}}\int_{\mathbb{B}^{N}}\left(t_{\varepsilon}w_{\varepsilon} \right)^{2^{*}}\ \mathrm{d}V_{\mathbb{B}^{N}}\] \[=m_{\varepsilon}+o(1).\]
With this, we have completed our proof.
In the subsequent proposition, we analyse the Palais-Smale sequences at a level \(c\) (\((\mathrm{PS})_{c}\)-sequences), which will be beneficial in establishing the existence of a ground-state solution. We adapt the approach described in [31, Proposition 3.2] to our setting.
**Proposition 3.4**.: _Let \(a(x)\in C\left(\mathbb{B}^{N}\right)\cap L^{\infty}(\mathbb{B}^{N})\) satisfies \(\left(\mathbf{A}_{1}\right).\) Let \(\varepsilon>0\) and \(\left\{u_{n}\right\}_{n}\) be a \((PS)_{c}\)-sequence for \(E_{\varepsilon}\) constrained on \(\mathcal{N}_{\varepsilon}\). If \(c<m_{\varepsilon}\), then \(\left\{u_{n}\right\}_{n}\) is relatively compact._
Proof.: The proof is divided into several steps.
**Step 1:**: Let us commence by noticing that the sequence \(\left\{\|u_{n}\|_{\lambda}\right\}_{n}\) is bounded above since it is a PS sequence, and is bounded away from \(0\) as it belongs to \(\mathcal{N}_{\varepsilon}\). Then, using the same approach as in Theorem 1.1, we can show that \(\left\{u_{n}\right\}_{n}\) is a \((PS)_{c}\)-sequence for the functional \(E_{\varepsilon}\) as well, that is, \(\forall v\in H^{1}(\mathbb{B}^{N})\)
\[\int_{\mathbb{B}^{N}}\nabla_{\mathbb{B}^{N}}u_{n}\cdot\nabla_{ \mathbb{B}^{N}}v\ \mathrm{d}V_{\mathbb{B}^{N}}-\lambda\int_{\mathbb{B}^{N}}u_{n}v\ \mathrm{d}V_{\mathbb{B}^{N}}-\int_{\mathbb{B}^{N}}a(x)\left|u_{n}\right|^{p-2}u _{n}v\ \mathrm{d}V_{\mathbb{B}^{N}}\] \[-\varepsilon\int_{\mathbb{B}^{N}}\left|u_{n}\right|^{2^{*}-2}u_{ n}v\ \mathrm{d}V_{\mathbb{B}^{N}}=o(1)\|v\|_{\lambda}, \tag{3.19}\]
and \(E_{\varepsilon}(u_{n})\to c\) as \(n\to\infty\).
In the subsequent steps, \(\left\{u_{n}\right\}_{n}\) will denote the sequence and its subsequences.
The boundedness of \(\left\{u_{n}\right\}_{n}\) in \(H^{1}(\mathbb{B}^{N})\) implies the existence of a function \(\bar{u}\in H^{1}(\mathbb{B}^{N})\) such that
\[u_{n}\stackrel{{ n\to\infty}}{{\longrightarrow}}\bar{u}\quad \left\{\begin{array}{l}\text{weakly in $H^{1}(\mathbb{B}^{N})$ and in $L^{2^{*}}(\mathbb{B}^{N})$}\\ \text{strongly in $L^{p}_{\text{loc}}(\mathbb{B}^{N})$ and in $L^{2}_{\text{loc}}(\mathbb{B}^{N})$}\\ \text{a.e. in $\mathbb{B}^{N}$.}\end{array}\right. \tag{3.20}\]
Then using (3.20) and (3.19), we get \(\bar{u}\) is a weak solution of \((P_{\varepsilon})\), hence
\[\|\bar{u}\|_{\lambda}^{2}=|a^{1/p}\bar{u}|_{p}^{p}+\varepsilon|\bar{u}|_{2^{*}}^{2^{*}}. \tag{3.21}\]
**Step 2:**: We intend to prove that \(u_{n}\to\bar{u}\) in \(H^{1}(\mathbb{B}^{N})\).
We argue by contradiction: if \(u_{n}\nrightarrow\bar{u}\) in \(H^{1}(\mathbb{B}^{N})\), then, up to a subsequence, \(v_{n}:=u_{n}-\bar{u}\) satisfies \(\left\|v_{n}\right\|_{\lambda}\geq\tilde{c}>0\) for all \(n\in\mathbb{N}\). Utilising (3.20) and the Brezis-Lieb Lemma,
\[E_{\varepsilon}\left(u_{n}\right)=E_{\varepsilon}(\bar{u})+E_{\varepsilon} \left(v_{n}\right)+o(1). \tag{3.22}\]
Additionally, because \(\bar{u}\) is a solution of \(\left(P_{\varepsilon}\right),\left\{v_{n}\right\}_{n}\) also becomes a (PS)-sequence for \(E_{\varepsilon}\). We now claim the following
\[\left|v_{n}\right|_{2^{*}}^{2^{*}}\geq\tilde{c}>0. \tag{3.23}\]
If not, then \(u_{n}\to\bar{u}\) in \(L^{2^{*}}(\mathbb{B}^{N})\) and, via interpolation, in \(L^{p}(\mathbb{B}^{N})\), since \(\left\{u_{n}\right\}_{n}\) is bounded in \(L^{2}(\mathbb{B}^{N})\). Further, by \(\left\|u_{n}\right\|_{\lambda}^{2}=\left|a^{1/p}u_{n}\right|_{p}^{p}+\varepsilon\left|u_{n}\right|_{2^{*}}^{2^{*}}\) and (3.21), we have
\[\lim_{n\to\infty}\left\|u_{n}\right\|_{\lambda}^{2}=\left|a^{1/p}\bar{u}\right|_{p}^{p}+\varepsilon|\bar{u}|_{2^{*}}^{2^{*}}=\|\bar{u}\|_{\lambda}^{2}\]
implying \(u_{n}\to\bar{u}\) in \(H^{1}(\mathbb{B}^{N})\), leading to a contradiction.
**Step 3:**: Now we cover \(\mathbb{B}^{N}\) by balls of radius \(r\) in such a way that each point of \(\mathbb{B}^{N}\) is contained in at most \(N_{0}\) balls. The fact that \(v_{n}\in L^{2^{*}}\left(\mathbb{B}^{N}\right)\) allows us to define
\[d_{n}=\sup_{y\in\mathbb{B}^{N}}\left|v_{n}\right|_{L^{2^{*}}(B_{r}(y))}\quad \forall n\in\mathbb{N}.\]
Using (3.23) and the boundedness of \(\left\{u_{n}\right\}_{n}\) in \(H^{1}(\mathbb{B}^{N})\), we get
\[0<\tilde{c}\leq\left|v_{n}\right|_{2^{*}}^{2^{*}}\leq\sum_{y}\left|v_{n}\right|_{L^{2^{*}}(B_{r}(y))}^{2^{*}}\leq d_{n}^{2^{*}-2}\sum_{y}\left|v_{n}\right|_{L^{2^{*}}(B_{r}(y))}^{2}\leq c\,d_{n}^{2^{*}-2}\sum_{y}\left\|v_{n}\right\|_{H^{1}(B_{r}(y))}^{2}\leq c^{\prime}d_{n}^{2^{*}-2}, \tag{3.24}\]
where the sums run over the centres of the covering balls; here we used the definition of \(d_{n}\), the Sobolev embedding on balls of radius \(r\), the fact that each point of \(\mathbb{B}^{N}\) lies in at most \(N_{0}\) balls, and the boundedness of \(\left\{v_{n}\right\}_{n}\) in \(H^{1}(\mathbb{B}^{N})\).
Thus \(d_{n}\geq\gamma\;\;\forall n\in\mathbb{N}\), where \(\gamma>0\). Consequently, we can find a sequence \(\left\{z_{n}\right\}\subset\mathbb{B}^{N}\) such that
\[\int_{B_{r}(z_{n})}\left|v_{n}\right|^{2^{*}}\;\mathrm{d}V_{\mathbb{B}^{N}} \geq\gamma>0. \tag{3.25}\]
Further, define
\[w_{n}(x):=v_{n}\left(\tau_{z_{n}}(x)\right)\]
where \(\tau_{z_{n}}\) is the hyperbolic translation of \(\mathbb{B}^{N}\) by \(z_{n}\). Since the sequence \(\left\{w_{n}\right\}_{n}\) is bounded in \(H^{1}(\mathbb{B}^{N})\), there exists some \(\bar{w}\in H^{1}\left(\mathbb{B}^{N}\right)\) such that
\[w_{n}\stackrel{{ n\to\infty}}{{\longrightarrow}}\bar{w}\quad \left\{\begin{array}{l}\text{weakly in }H^{1}\left(\mathbb{B}^{N}\right)\text{ and in }L^{2^{*}}\left(\mathbb{B}^{N}\right)\\ \text{strongly in }L^{p}_{\text{loc}}\left(\mathbb{B}^{N}\right)\text{ and in }L^{2}_{\text{loc}}\left(\mathbb{B}^{N}\right)\\ \text{a.e. in }\mathbb{B}^{N}.\end{array}\right. \tag{3.26}\]
Setting \(L_{0}=B_{1}(0)\), one of the following two scenarios can happen:
\[\left(a\right)\int_{L_{0}}\left|w_{n}(x)\right|^{p}\;\mathrm{d}V _{\mathbb{B}^{N}}(x)\geq c>0\] \[\left(b\right)\int_{L_{0}}\left|w_{n}(x)\right|^{p}\;\mathrm{d}V _{\mathbb{B}^{N}}(x)\stackrel{{ n\to\infty}}{{\longrightarrow}}0. \tag{3.27}\]
**Step 4:**: First of all, let us assume that (3.27)\((a)\) is true. Then, since \(v_{n}\to 0\) in \(L^{p}_{\rm loc}\left(\mathbb{B}^{N}\right)\), it follows that \(\tau_{z_{n}}(0)\to\infty\) as \(n\to\infty\). Further, combining the facts that \(\left\{u_{n}\right\}_{n}\) is a \((PS)\)-sequence and \(\bar{u}\) is a solution of \((P_{\varepsilon})\), we can obtain the following
\[\int_{\mathbb{B}^{N}}\nabla_{\mathbb{B}^{N}}w_{n}\cdot\nabla_{ \mathbb{B}^{N}}\phi\ \mathrm{d}V_{\mathbb{B}^{N}}-\lambda\int_{\mathbb{B}^{N}}w_{n}\phi\ \mathrm{d}V_{\mathbb{B}^{N}}-\int_{\mathbb{B}^{N}}|w_{n}|^{p-2}\,w_{n}\phi\ \mathrm{d}V_{\mathbb{B}^{N}} \tag{3.28}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\varepsilon \int_{\mathbb{B}^{N}}|w_{n}|^{2^{*}-2}\,w_{n}\phi\ \mathrm{d}V_{\mathbb{B}^{N}}\] \[=\int_{\mathbb{B}^{N}}\left[a\left(\tau_{z_{n}}(\cdot)\right)-1 \right]|w_{n}|^{p-2}\,w_{n}\phi\ \mathrm{d}V_{\mathbb{B}^{N}}+o(1)\|\phi\|=o(1)\|\phi\|, \quad\forall\phi\in\mathcal{C}_{0}^{\infty}\left(\mathbb{B}^{N}\right).\]
As a result of (3.27)\((a)\), (3.28), (3.26), we are able to conclude that \(\bar{w}\) is a nonzero solution of \((P_{\varepsilon,\infty})\). Then \(\left\{w_{n}-\bar{w}\right\}_{n}\) is a \((PS)\)-sequence for \(E_{\varepsilon,\infty}\) and \(E_{\varepsilon,\infty}\left(w_{n}-\bar{w}\right)\geq o(1)\). Consequently, by using the Brezis-Lieb Lemma, we derive
\[c =E_{\varepsilon}\left(u_{n}\right)+o(1)=E_{\varepsilon}(\bar{u}) +E_{\varepsilon,\infty}(\bar{w})\] \[+E_{\varepsilon,\infty}\left(w_{n}-\bar{w}\right)+o(1)\geq E_{ \varepsilon,\infty}(\bar{w})+o(1)\geq m_{\varepsilon}+o(1)\]
which contradicts the assumption that \(c<m_{\varepsilon}\). Thus we conclude that (3.27)\((a)\) cannot hold.
**Step 5:**: Finally, we suppose that (3.27)\((b)\) holds and get a contradiction again. Notice that in this case, we can also assume the following
\[\tilde{d}_{n}=\sup_{y\in\mathbb{B}^{N}}|v_{n}|_{L^{p}(B_{r}(y))}=\sup_{y\in \mathbb{B}^{N}}|w_{n}|_{L^{p}(B_{r}(y))}\stackrel{{ n\to\infty}}{{\longrightarrow}}0. \tag{3.29}\]
Indeed, if this fails, we land in the case (3.27)\((a)\) by replacing \(B_{r}(y)\) with a ball \(B_{r}(\tilde{y})\) that satisfies \(|v_{n}|_{L^{p}(B_{r}(\tilde{y}))}\geq c_{1}>0\). So, it is possible to assume (3.29). Furthermore, repeating the inequalities in (3.24) with the \(L^{p}\)-norm instead of the \(L^{2^{*}}\)-norm, and \(\tilde{d}_{n}\) instead of \(d_{n}\), yields
\[|v_{n}|_{p}=|w_{n}|_{p}\stackrel{{ n\to\infty}}{{\longrightarrow}}0. \tag{3.30}\]
This implies \(\bar{w}=0\). Also, observe that (3.30) gives
\[w_{n}\stackrel{{ n\to\infty}}{{\longrightarrow}}0\quad\text{in $L^{q}_{\rm loc}(\mathbb{B}^{N})$ for $q<p$}. \tag{3.31}\]
Now let us consider the case where \(\left\{z_{n}\right\}_{n}\) is bounded, so that in the subsequent argument we may assume \(z_{n}=0\) for all \(n\in\mathbb{N}\). Further, fix \(R>0\) such that \(|a(x)-1|<\eta\ \forall x\in\mathbb{B}^{N}\backslash B_{R}(0)\), where \(\eta\) is a suitable small constant that will be chosen later.
Consider the functionals \(I_{\varepsilon}\): \(\mathcal{D}^{1,2}\left(\mathbb{B}^{N}\right)\to\mathbb{R}\) and \(J_{\epsilon,\infty}\): \(\mathcal{D}^{1,2}\left(\mathbb{R}^{N}\right)\to\mathbb{R}\) defined by
\[I_{\varepsilon}(u) =\frac{1}{2}\int_{\mathbb{B}^{N}}\left(|\nabla_{\mathbb{B}^{N}}u|^{ 2}-\lambda u^{2}\right)\ \mathrm{d}V_{\mathbb{B}^{N}}-\frac{\varepsilon}{2^{*}}\int_{\mathbb{B}^{N}}|u| ^{2^{*}}\ \mathrm{d}V_{\mathbb{B}^{N}},\] \[J_{\varepsilon,\infty}(u) =\frac{1}{2}\int_{\mathbb{R}^{N}}|\nabla u|^{2}\mathrm{d}x-\frac{ \varepsilon}{2^{*}}\int_{\mathbb{R}^{N}}|u|^{2^{*}}\mathrm{d}x.\]
Firstly, observe that the Euler Lagrange equation (say \((AP_{\infty})\)) corresponding to the functional \(I_{\varepsilon}\) is invariant under hyperbolic isometries. Thus for a solution \(U\) of \((AP_{\infty})\), if we define
\[u_{n}=U\circ\tau_{n} \tag{3.32}\]
where \(\tau_{n}\in I\left(\mathbb{B}^{N}\right)\) with \(\tau_{n}(0)\to\infty\) and \(I\left(\mathbb{B}^{N}\right)\) denotes the isometry group of \(\mathbb{B}^{N}\), then \(\left\{u_{n}\right\}_{n}\) is a PS sequence converging weakly to zero. Moreover, in the
case of critical exponent, another PS sequence can be exhibited emerging from the concentration phenomenon as follows ([11]).
Let \(V\) be a solution of the Euler-Lagrange equation corresponding to the functional \(J_{\epsilon,\infty}\). Fix \(x_{0}\in\mathbb{B}^{N}\) and \(\phi\in C_{c}^{\infty}\left(\mathbb{B}^{N}\right)\) such that \(0\leq\phi\leq 1\) and \(\phi\equiv 1\) in a neighborhood of \(x_{0}\). Define
\[v_{n}(x)=\left(\frac{1-|x|^{2}}{2}\right)^{\frac{N-2}{2}}\phi(x)\varepsilon_{n }^{\frac{2-N}{2}}V\left(\left(x-x_{0}\right)/\varepsilon_{n}\right) \tag{3.33}\]
where \(\varepsilon_{n}>0\) and \(\varepsilon_{n}\to 0\), then \(v_{n}\) is also a PS sequence.
Then using (3.19), (3.30), and (3.31), we get that \(\left\{w_{n}\right\}_{n}\) is a (PS)-sequence for \(I_{\varepsilon}\). So, recalling Theorem 3.3 of [11], there exist \(n_{1},n_{2}\in\mathbb{N}\) and functions \(u_{n}^{j}\in H^{1}(\mathbb{B}^{N}),0\leq j\leq n_{1},v_{n}^{k}\in H^{1}(\mathbb{B}^{N}),0\leq k\leq n_{2}\) and \(w\in H^{1}\left(\mathbb{B}^{N}\right)\) such that, up to a subsequence,
\[w_{n}=w+\sum_{j=1}^{n_{1}}u_{n}^{j}+\sum_{k=1}^{n_{2}}v_{n}^{k}+o(1)\]
where \(I_{\varepsilon}^{\prime}(w)=0,u_{n}^{j},v_{n}^{k}\) are \(PS\) sequences of the form (3.32) and (3.33), respectively and \(o(1)\to 0\) in \(H^{1}(\mathbb{B}^{N})\). Moreover
\[I_{\varepsilon}(w_{n})=I_{\varepsilon}(w)+\sum_{j=1}^{n_{1}}I_{\varepsilon} \left(U_{j}\right)+\sum_{k=1}^{n_{2}}J_{\varepsilon,\infty}\left(V_{k}\right) +o(1) \tag{3.34}\]
where \(U_{j},V_{k}\) are the solutions of the Euler-Lagrange equations associated with \(I_{\varepsilon}\) and \(J_{\varepsilon,\infty}\), respectively, corresponding to \(u_{n}^{j}\) and \(v_{n}^{k}\).
Moreover, (3.11) implies
\[J_{\varepsilon,\infty}\left(V_{k}\right)\geq\frac{1}{N}S^{N/2}\left(\frac{1}{ \varepsilon}\right)^{\frac{N-2}{2}} \tag{3.35}\]
where \(S\) is the best Sobolev constant that occurs in the Sobolev inequality in \(\mathbb{R}^{N}\).
Finally, by (3.22),(3.35),(3.30),(3.34) and Proposition 3.2, we have
\[E_{\varepsilon}\left(u_{n}\right) =E_{\varepsilon}(\bar{u})+E_{\varepsilon}\left(v_{n}\right)+o(1)\] \[\geq E_{\varepsilon}(\bar{u})+I_{\varepsilon}\left(w_{n}\right)- \frac{1}{p}\int_{\mathbb{B}^{N}\setminus B_{R}(0)}(a(x)-1)|v_{n}|^{p}\ \mathrm{d}V_{\mathbb{B}^{N}}+o(1)\] \[\geq E_{\varepsilon}(\bar{u})+\sum_{k=1}^{n_{2}}J_{\varepsilon, \infty}\left(V_{k}\right)-\eta\hat{c}+o(1)\] \[\geq\frac{1}{N}S^{N/2}\left(\frac{1}{\varepsilon}\right)^{\frac{ N-2}{2}}-\eta\hat{c}+o(1)\] \[\geq m_{\varepsilon}-\eta\hat{c}+o(1)\] \[>c\]
for \(n\) large and \(\eta\) sufficiently small. This contradicts our assumption \(E_{\varepsilon}\left(u_{n}\right)\to c\). To conclude, it remains to take into account the case \(d(z_{n},0)\to\infty\); in this case, it is simpler to repeat the argument implemented in the case where \(\left\{z_{n}\right\}_{n}\) is bounded.
Proof of Theorem 1.2.: The following inequality will be used to establish the existence of a solution:
\[\inf_{\mathcal{N}_{\varepsilon}}E_{\varepsilon}<m_{\varepsilon}. \tag{3.36}\]
In order to prove the above inequality, consider the minimising function \(w_{\varepsilon}\) provided by Theorem 1.1, and let \(t>0\) be such that \(tw_{\varepsilon}\in\mathcal{N}_{\varepsilon}\). Then
\[\inf_{\mathcal{N}_{\varepsilon}}E_{\varepsilon}\leq E_{\varepsilon}\left(tw_{ \varepsilon}\right)<E_{\varepsilon,\infty}\left(tw_{\varepsilon}\right)\leq E _{\varepsilon,\infty}\left(w_{\varepsilon}\right)=m_{\varepsilon}.\]
By (3.36) and Proposition 3.4, there exists a minimising function \(\tilde{u}\) for the functional \(E_{\varepsilon}\) constrained on \(\mathcal{N}_{\varepsilon}\). Further, by employing the same argument used in the proof of Theorem 1.1, one can verify that \(\tilde{u}\) has constant sign and can be chosen strictly positive.
## 4. Bound-state solutions: Existence and Non-existence
This section is devoted to the proof of Theorem 1.3 concerning the solutions of \((P_{\varepsilon})\) for \(a(x)\leq 1\). Firstly, we show the non-existence of a ground-state solution. Then we restore (local) compactness in a range of functional values and prove Theorem 1.3. In particular, we prove the following proposition, which leads to the non-existence result:
**Proposition 4.1**.: _Let \(\varepsilon\in[0,\varepsilon_{0})\). Assume \(a(x)\leq 1\), \(a(x)\not\equiv 1\), then_
\[\inf_{\mathcal{N}_{\varepsilon}}E_{\varepsilon}=m_{\varepsilon}. \tag{4.1}\]
_Furthermore, there is no solution to the minimization problem (4.1). Moreover, here \(E_{0}:=E,\,\mathcal{N}_{0}:=\mathcal{N},\;m_{0}:=m\)._
First, note that when \(\varepsilon=0\), \((P_{\varepsilon})\) becomes
\[-\Delta_{\mathbb{B}^{N}}u\,-\,\lambda u\,=\,a(x)u^{p-1}\text{ in }\mathbb{B}^{N}, \;\;u>0\text{ in }\mathbb{B}^{N},\;\;u\in H^{1}(\mathbb{B}^{N}),\] ( \[P\] )
and the corresponding energy functional is \(E:H^{1}(\mathbb{B}^{N})\to\mathbb{R}\) defined by
\[E(u)=\frac{1}{2}\int_{\mathbb{B}^{N}}\left(|\nabla_{\mathbb{B}^{N}}u|^{2}- \lambda u^{2}\right)\;\mathrm{d}V_{\mathbb{B}^{N}}-\frac{1}{p}\int_{\mathbb{B }^{N}}a(x)|u|^{p}\;\mathrm{d}V_{\mathbb{B}^{N}},\]
and \(\mathcal{N}\) denotes the corresponding Nehari Manifold.
Proof of Proposition 4.1.: Let \(u\in\mathcal{N}_{\varepsilon}\) and \(t_{u}\in\mathbb{R}\) be such that \(t_{u}u\in\mathcal{N}_{\varepsilon,\infty}\). Using the assumption that \(a(x)\leq 1\) a.e. in \(\mathbb{B}^{N}\), we have
\[m_{\varepsilon}\leq E_{\varepsilon,\infty}\left(t_{u}u\right)\leq E_{ \varepsilon}\left(t_{u}u\right)\leq E_{\varepsilon}(u),\]
which in turn implies \(\inf_{\mathcal{N}_{\varepsilon}}E_{\varepsilon}\geq m_{\varepsilon}\). Now we shall exhibit a sequence \(\left\{v_{n}\right\}_{n}\subset\mathcal{N}_{\varepsilon}\) such that \(E_{\varepsilon}\left(v_{n}\right)\to m_{\varepsilon}\) as \(n\to\infty\); this will prove the desired result.
Define \(v_{n}(x)=t_{n}w_{\varepsilon}(\tau_{n}(x))\), where \(\tau_{n}\) is the hyperbolic translation with \(\tau_{n}(0)=b_{n}\) for some \(b_{n}\in\mathbb{B}^{N}\) with \(b_{n}\to\infty\), \(w_{\varepsilon}\) is the minimizing function (solution) established in Theorem 1.1, and \(t_{n}>0\) is chosen so that \(v_{n}\in\mathcal{N}_{\varepsilon}\).
Now \(v_{n}\in\mathcal{N}_{\varepsilon}\) gives
\[\left\|w_{\varepsilon}(\tau_{n}(\cdot))\right\|_{\lambda}^{2}-t_{n}^{p-2}|a^{ \frac{1}{p}}w_{\varepsilon}(\tau_{n}(\cdot))|_{p}^{p}-\varepsilon t_{n}^{2^{*}- 2}\left|w_{\varepsilon}(\tau_{n}(\cdot))\right|_{2^{*}}^{2^{*}}=0, \tag{4.2}\]
that is
\[t_{n}^{p-2}|a^{\frac{1}{p}}w_{\varepsilon}(\tau_{n}(\cdot))|_{p}^{p}+ \varepsilon t_{n}^{2^{*}-2}\left|w_{\varepsilon}\right|_{2^{*}}^{2^{*}}=\left\| w_{\varepsilon}\right\|_{\lambda}^{2}.\]
Consequently, \(\left\{t_{n}\right\}_{n}\) is bounded and, up to a subsequence, \(t_{n}\to t\). Letting \(n\to\infty\) in (4.2) yields
\[\left\|w_{\varepsilon}\right\|_{\lambda}^{2}-t^{p-2}\left|w_{\varepsilon} \right|_{p}^{p}-\varepsilon t^{2^{\star}-2}\left|w_{\varepsilon}\right|_{2^{ \star}}^{2^{\star}}=0,\]
giving us \(tw_{\varepsilon}\in\mathcal{N}_{\varepsilon,\infty}\). But \(w_{\varepsilon}\in\mathcal{N}_{\varepsilon,\infty}\), therefore, \(t=1\). Then it is not difficult to see that \(E_{\varepsilon}\left(v_{n}\right)\to E_{\varepsilon,\infty}\left(w_{ \varepsilon}\right)=m_{\varepsilon}\) and we can conclude \(\inf_{\mathcal{N}_{\varepsilon}}E_{\varepsilon}\leq m_{\varepsilon}\).
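For the reader's convenience, here is a brief sketch of this convergence, under the additional assumption (consistent with the estimates involving \(\eta\) above, where \(|a-1|\) is small outside large balls) that \(a(x)\to 1\) as \(d(x,0)\to\infty\). Since \(\left\|\cdot\right\|_{\lambda}\) and \(\left|\cdot\right|_{2^{*}}\) are invariant under hyperbolic isometries, a change of variables gives

\[E_{\varepsilon}(v_{n})=\frac{t_{n}^{2}}{2}\left\|w_{\varepsilon}\right\|_{\lambda}^{2}-\frac{t_{n}^{p}}{p}\int_{\mathbb{B}^{N}}a\left(\tau_{n}^{-1}(y)\right)|w_{\varepsilon}(y)|^{p}\;\mathrm{d}V_{\mathbb{B}^{N}}(y)-\frac{\varepsilon t_{n}^{2^{*}}}{2^{*}}\left|w_{\varepsilon}\right|_{2^{*}}^{2^{*}},\]

and since \(t_{n}\to 1\) and \(d(\tau_{n}^{-1}(y),0)=d(y,b_{n})\to\infty\) for each fixed \(y\), we have \(a(\tau_{n}^{-1}(y))\to 1\) pointwise, so the dominated convergence theorem yields \(E_{\varepsilon}(v_{n})\to E_{\varepsilon,\infty}(w_{\varepsilon})\).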
Furthermore, we show that \(m_{\varepsilon}\) is not attained in \(\mathcal{N}_{\varepsilon}\). We argue by contradiction: let \(u\in\mathcal{N}_{\varepsilon}\) be such that \(E_{\varepsilon}(u)=m_{\varepsilon}\), and let \(t>0\) be such that \(tu\in\mathcal{N}_{\varepsilon,\infty}\); then
\[m_{\varepsilon}=E_{\varepsilon}(u)\geq E_{\varepsilon}(tu)>E_{\varepsilon, \infty}(tu)\geq m_{\varepsilon},\]
which leads to a contradiction, and hence the proof is complete.
### Bound-state solutions
Proposition 4.1 tells us that the problem \((P_{\varepsilon})\) for \(a(x)\leq 1\) does not admit a ground-state solution. This prompts us to look for high-energy solutions in the spirit of Bahri-Li. This idea has already been exploited for (purely) subcritical problems in the hyperbolic space; see, e.g., [28, 29]. It hinges on delicate interaction estimates for two hyperbolic bubbles. Let us first recall some well-known facts from these papers.
We recall the following standard compactness result in the subcritical case, a consequence of [28, Proposition 3.1], for \(a(x)\) satisfying the conditions mentioned in Theorem 1.3.
**Proposition 4.2**.: _Assume \(\left\{v_{n}\right\}_{n}\) to be a \((PS)_{c}\)-sequence of \(E\) for \(c\in(m,2m)\), then \(\left\{v_{n}\right\}_{n}\) is relatively compact and, up to a subsequence, converges to a nonzero function \(\bar{v}\in H^{1}(\mathbb{B}^{N})\) such that \(E(\bar{v})\in(m,2m)\)._
The above proposition leads us to the following compactness result corresponding to the critical perturbation problem.
**Proposition 4.3**.: _For every \(\delta\in(0,m/2)\) there corresponds \(\varepsilon_{\delta}>0\) satisfying the following property: \(\forall\varepsilon\in(0,\varepsilon_{\delta})\,,\forall c\in(m+\delta,2m-\delta)\), if \(\left\{u_{n}\right\}_{n}\) is a \((PS)_{c}-\) sequence of \(E_{\varepsilon}\) constrained on \(\mathcal{N}_{\varepsilon}\), then \(u_{n}\rightharpoonup\bar{u}\not\equiv 0\) weakly in \(H^{1}(\mathbb{B}^{N})\). Furthermore, \(\bar{u}\) is a critical point of \(E_{\varepsilon}\) on \(\mathcal{N}_{\varepsilon}\) and \(E_{\varepsilon}(\bar{u})\leq c\)._
Proof.: First of all, note that every \((\text{PS})_{c}\)-sequence for the constrained functional is also a \((\text{PS})_{c}\)-sequence for the free functional, and its weak limit is a critical point, as can be seen by following arguments similar to those in Proposition 3.4. Also, a \((\text{PS})\) sequence is bounded in \(H^{1}(\mathbb{B}^{N})\), which ensures the existence of a weak limit in \(H^{1}\left(\mathbb{B}^{N}\right)\).
Now, to prove the proposition at hand, we argue by contradiction. We can assume that there exist \(\bar{\delta}\in(0,m/2)\), a sequence \(\left\{c_{n}\right\}_{n}\) in \((m+\bar{\delta},2m-\bar{\delta})\), a sequence \(\left\{\varepsilon_{n}\right\}_{n}\) in \((0,+\infty)\), with \(\varepsilon_{n}\to 0\), and, for every \(n\in\mathbb{N}\), a sequence \(\left\{u_{k}^{n}\right\}_{k}\) in \(H^{1}(\mathbb{B}^{N})\) such that
\[E_{\varepsilon_{n}}\left(u_{k}^{n}\right)\stackrel{{ k\to\infty}}{{\longrightarrow}}c_{n},\quad E_{ \varepsilon_{n}}^{\prime}\left(u_{k}^{n}\right)\stackrel{{ k\to\infty}}{{ \longrightarrow}}0,\]
\[u_{k}^{n}\stackrel{{ k\to\infty}}{{\longrightarrow}}0\quad\text{ weakly in }H^{1}(\mathbb{B}^{N}).\]
Additionally, as \(p<2^{\star}\), we can also assume
\[u_{k}^{n}\stackrel{{ k\to\infty}}{{\longrightarrow}}0\quad\text{ in }L_{\text{loc}}^{p}\left(\mathbb{B}^{N}\right).\]
Up to a subsequence, \(c_{n}\rightarrow\bar{c}\in\left[m+\bar{\delta},2m-\bar{\delta}\right]\), and by using a diagonalization argument, we construct the sequence \(\left\{v_{n}\right\}_{n}:=\left\{u_{k_{n}}^{n}\right\}_{n}\) that verifies
\[E_{\varepsilon_{n}}\left(v_{n}\right)\overset{n\rightarrow\infty}{\longrightarrow }\bar{c},\quad E_{\varepsilon_{n}}^{\prime}\left(v_{n}\right)\overset{n \rightarrow\infty}{\longrightarrow}0,\quad v_{n}\overset{n\rightarrow\infty }{\longrightarrow}0\quad\text{ in }L_{\text{loc}}^{p}\,\left(\mathbb{B}^{N}\right). \tag{4.3}\]
Moreover, the following equality guarantees \(\left\{\left\|v_{n}\right\|_{\lambda}\right\}_{n}\) is bounded
\[E_{\varepsilon_{n}}\left(v_{n}\right)=\left(\frac{1}{2}-\frac{1}{p}\right) \left\|v_{n}\right\|_{\lambda}^{2}+\left(\frac{1}{p}-\frac{1}{2^{*}}\right) \varepsilon_{n}\left|v_{n}\right|_{2^{*}}^{2^{*}}=\bar{c}+o(1). \tag{4.4}\]
Thus we can deduce that
\[E\left(v_{n}\right)=E_{\varepsilon_{n}}\left(v_{n}\right)+\frac{ \varepsilon_{n}}{2^{*}}\int_{\mathbb{B}^{N}}\left|v_{n}\right|^{2^{*}}\mathrm{ d}x\overset{n\rightarrow\infty}{\longrightarrow}\bar{c}\] \[\left\|E^{\prime}\left(v_{n}\right)\right\|_{H^{-1}(\mathbb{B}^{ N})}\leq\left\|E_{\varepsilon_{n}}^{\prime}\left(v_{n}\right)\right\|_{H^{-1}( \mathbb{B}^{N})}+C\varepsilon_{n}\left\|v_{n}\right\|_{\lambda}^{2^{*}-1} \overset{n\rightarrow\infty}{\longrightarrow}0,\]
and the above steps imply that \(\left\{v_{n}\right\}_{n}\) is a \((PS)_{\bar{c}}\)-sequence of \(E\) with \(\bar{c}\in(m,2m)\). Then, according to Proposition 4.2, there exists \(\bar{v}\in H^{1}(\mathbb{B}^{N})\), \(\bar{v}\not\equiv 0\), such that \(v_{n}\rightarrow\bar{v}\), contradicting (4.3).
To conclude, if \(\left\{u_{n}\right\}_{n}\) is a \((\text{PS})_{c}\)-sequence for \(E_{\varepsilon}\) constrained on \(\mathcal{N}_{\varepsilon}\), and \(u_{n}\rightharpoonup\bar{u}\), then replacing \(\varepsilon_{n}\) with \(\varepsilon\) and \(\bar{c}\) with \(c\) in (4.4), we obtain \(E_{\varepsilon}(\bar{u})\leq c\).
### Energy Estimates
In this subsection, we recall and establish some energy estimates for interacting hyperbolic bubbles. In addition, these functions are also used to analyse the sublevels of the functional \(E_{\varepsilon}\). Subsequently, we construct barycenter-type maps to study a few properties of these sublevels.
We first recall the following energy estimate corresponding to the purely subcritical problem \((P)\) ([28, Lemma 4.2]):
**Proposition 4.4**.: _Let \(a(x)\) satisfy the conditions of Theorem 1.3, and let \(w\) be the unique radial solution of \(\,(P_{\infty})\). Then, there exists a large number \(R_{0}\), such that for any \(R\geq R_{0}\), and for any \(x_{1},x_{2}\) satisfying_
\[d(x_{1},0)\geq R^{\alpha},\;d(x_{2},0)\geq R^{\alpha},\;R^{\alpha^{\prime}} \leq d\left(x_{1},x_{2}\right)\leq R^{\alpha^{\prime}-\alpha}\min\left\{d(x_{ 1},0),d(x_{2},0)\right\},\]
_where \(\alpha>\alpha^{\prime}>1\), it holds_
\[J\left(tu_{1}+(1-t)u_{2}\right)<S_{2,\lambda}, \tag{4.5}\]
_where \(0\leq t\leq 1,\) and \(u_{i}=w\left(\tau_{-x_{i}}(\cdot)\right),i=1,2\), \(J,J_{\infty}:H^{1}(\mathbb{B}^{N})\rightarrow\mathbb{R}\) are defined as_
\[J(u):=\frac{\|u\|_{\lambda}^{2}}{\left(\int_{\mathbb{B}^{N}}a(x)|u(x)|^{p}\, \,\mathrm{d}V_{\mathbb{B}^{N}}(x)\right)^{\frac{2}{p}}},J_{\infty}(u):=\frac{ \|u\|_{\lambda}^{2}}{\left(\int_{\mathbb{B}^{N}}|u(x)|^{p}\,\,\mathrm{d}V_{ \mathbb{B}^{N}}(x)\right)^{\frac{2}{p}}},\]
_and the energy levels as_
\[S_{1,\lambda}:=\inf_{u\in H^{1}(\mathbb{B}^{N})\backslash\left\{0\right\}}J_{ \infty}(u),\quad S_{2,\lambda}:=2^{\frac{p-2}{p}}S_{1,\lambda}.\]
With the above proposition in mind, let us introduce some notation. Fix \(x_{1}\in\mathbb{B}^{N}\) satisfying the condition in Proposition 4.4; precisely, choose \(x_{1}\) such that \(d(x_{1},0)=2R_{0}^{\alpha}\). Also, with the notation of the above proposition, fix \(R^{\prime}=R_{0}^{\alpha^{\prime}}\).
Now set \(\Sigma=\partial B_{R^{\prime}}\left(x_{1}\right)\). Further, for any \(R>0\) as in Proposition 4.4 above, define the map \(\psi_{R}:[0,1]\times\Sigma\longrightarrow H^{1}(\mathbb{B}^{N})\) by
\[\psi_{R}[s,y](x)=(1-s)w\left(\tau_{-x_{1}}(x)\right)+sw\left(\tau_{-y}(x) \right),\]
where \(w\) is the ground-state solution of \(\left(P_{\infty}\right)\). Further, let \(t_{R,s,y},t^{\prime}_{R,s,y}>0\) be such that \(t_{R,s,y}\psi_{R}[s,y]\in\mathcal{N}_{\varepsilon}\) and \(t^{\prime}_{R,s,y}\psi_{R}[s,y]\in\mathcal{N}\).
Using Proposition 4.4, we can deduce an energy estimate adapted to our setting. In particular, we have the following lemma.
**Lemma 4.5**.: _There exists \(\bar{R}>0\) and \(\mathcal{A}\in(m,2m)\) such that for any \(R>\bar{R}\) and for any \(\varepsilon>0\)_
\[\mathcal{A}_{\varepsilon,R}=\max\left\{E_{\varepsilon}\left(t_{R,s,y}\psi_{R} [s,y]\right):s\in[0,1],y\in\Sigma\right\}<\mathcal{A}<2m.\]
Proof.: For the sake of simplicity, we omit \(s,y\) and write \(t_{R}=t_{R,s,y},t^{\prime}_{R}=t^{\prime}_{R,s,y}\) and \(\psi_{R}=\psi_{R}[s,y]\). Using \(t^{\prime}_{R}\psi_{R}\in\mathcal{N}\), we have
\[\left\|t^{\prime}_{R}\psi_{R}\right\|_{\lambda}^{2}=\left|a^{1/p}t^{\prime}_{R}\psi_{R}\right|_{p}^{p},\quad t^{\prime}_{R}=\left(\frac{\left\|\psi_{R}\right\|_{\lambda}^{2}}{\left|a^{1/p}\psi_{R}\right|_{p}^{p}}\right)^{\frac{1}{p-2}}.\]
Now for every \(\varepsilon>0\), we have
\[E_{\varepsilon}\left(t_{R}\psi_{R}\right) \leq E\left(t_{R}\psi_{R}\right)\leq E\left(t^{\prime}_{R}\psi_{R}\right)\] \[=\left(\frac{1}{2}-\frac{1}{p}\right)\left(t^{\prime}_{R}\right) ^{2}\left\|\psi_{R}\right\|_{\lambda}^{2} \tag{4.6}\] \[=\left(\frac{1}{2}-\frac{1}{p}\right)\left(\frac{\left\|\psi_{R} \right\|_{\lambda}^{2}}{\left|a^{1/p}\psi_{R}\right|_{p}^{2}}\right)^{\frac{p} {p-2}}.\]
So to prove the lemma we are left to estimate the above ratio. Note that
\[\left(\frac{\left\|\psi_{R}\right\|_{\lambda}^{2}}{\left|a^{1/p}\psi_{R}\right|_{p}^{2}}\right)^{\frac{p}{p-2}}=\left(J(\psi_{R})\right)^{\frac{p}{p-2}}.\]
Moreover, using the following strict inequality in the last steps of the proof of Proposition 4.4,
\[\max_{t\in[0,1]\setminus N(\frac{1}{2})}\frac{t^{2}+(1-t)^{2}}{\left(t^{p}+(1-t)^{p}\right)^{\frac{2}{p}}}<2^{\frac{p-2}{p}},\]
where \(N(\frac{1}{2})\) denotes a neighbourhood of \(\frac{1}{2}\), it can be easily shown that for \(R\) sufficiently large
\[\max\left\{J(\psi_{R}):s\in[0,1],y\in\Sigma\right\}<S_{2,\lambda}. \tag{4.7}\]
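To see where the constant \(2^{\frac{p-2}{p}}\) in the strict inequality above comes from, note that a direct computation at \(t=\frac{1}{2}\) gives

\[\frac{t^{2}+(1-t)^{2}}{\left(t^{p}+(1-t)^{p}\right)^{\frac{2}{p}}}\Big{|}_{t=\frac{1}{2}}=\frac{1/2}{\left(2^{1-p}\right)^{2/p}}=2^{\frac{p-2}{p}},\]

and, by comparing the \(\ell^{2}\) and \(\ell^{p}\) norms on \(\mathbb{R}^{2}\) (recall \(p>2\)), this value is precisely the maximum of the ratio over \([0,1]\), attained only at \(t=\frac{1}{2}\); removing a neighbourhood of \(\frac{1}{2}\) therefore yields the strict inequality.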
Also, from [33], it is known that \(S_{1,\lambda}\) is achieved by \(w\), which is a solution of \(\left(P_{\infty}\right)\). This in turn implies \(S_{1,\lambda}=\left(\left\|w\right\|_{\lambda}^{2}\right)^{\frac{p-2}{p}}\) and \(S_{2,\lambda}=(2\|w\|_{\lambda}^{2})^{\frac{p-2}{p}}\).
Further,
\[m=E_{\infty}(w)=\left(\frac{1}{2}-\frac{1}{p}\right)\|w\|_{\lambda}^{2}=\frac{ 1}{2}\left(\frac{1}{2}-\frac{1}{p}\right)S_{2,\lambda}^{\frac{p}{p-2}}.\]
Then by (4.6), (4.7) and using \(\left(\frac{1}{2}-\frac{1}{p}\right)S_{2,\lambda}^{\frac{p}{p-2}}=2m\), the lemma holds.
**Corollary 4.6**.: _There exist \(\bar{R},\bar{\varepsilon}>0\) such that for any \(R>\bar{R}\) and for any \(\varepsilon\in\left(0,\bar{\varepsilon}\right)\)_
\[\mathcal{A}_{\varepsilon,R}=\max\left\{E_{\varepsilon}\left(t_{R,s,y}\psi_{R} [s,y]\right):s\in[0,1],y\in\Sigma\right\}<2m_{\varepsilon}.\]
Proof.: It follows directly from Lemmas 3.3 and 4.5. Indeed, by Lemma 4.5 there exists \(\bar{R}>0\) such that \(\mathcal{A}_{\varepsilon,R}<2m\) for any \(R>\bar{R}\). Also, from (3.1) we have \(m_{\varepsilon}\leq m\). Now, if \(\mathcal{A}_{\varepsilon,R}>2m_{\varepsilon}\ \forall\varepsilon>0\), then \(\left|m-m_{\varepsilon}\right|>m-\frac{\mathcal{A}_{\varepsilon,R}}{2}\ \forall\varepsilon>0\), which contradicts Lemma 3.3.
### Barycentric map
We now introduce a barycenter-type function as follows: for \(u\in H^{1}(\mathbb{B}^{N})\backslash\{0\}\), we set
\[\mu(u)(x)=\frac{1}{\left|B_{1}(0)\right|}\int_{B_{1}(x)}\left|u(y)\right|\, \mathrm{d}V_{\mathbb{B}^{N}}(y)\quad x\in\mathbb{B}^{N},\]
and we observe that \(\mu(u)\) is bounded and continuous, allowing us to introduce the function
\[\hat{u}(x)=\left[\mu(u)(x)-\frac{1}{2}\max\mu(u)\right]^{+}\quad x\in\mathbb{B }^{N}.\]
The function defined above is continuous and has compact support. Consequently, we can set \(\beta:H^{1}\left(\mathbb{B}^{N}\right)\backslash\{0\}\to\mathbb{R}^{N}\) as
\[\beta(u)=\frac{1}{\left|\hat{u}\right|_{1}}\int_{\mathbb{B}^{N}}\frac{x}{1+ \left|x\right|}\hat{u}(x)\ \mathrm{d}V_{\mathbb{B}^{N}}.\]
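Two elementary properties of \(\beta\), both used repeatedly below, are worth recording. First, for every \(t>0\) we have \(\mu(tu)=t\,\mu(u)\) and hence \(\widehat{tu}=t\,\hat{u}\), so that

\[\beta(tu)=\frac{1}{t\left|\hat{u}\right|_{1}}\int_{\mathbb{B}^{N}}\frac{x}{1+\left|x\right|}\,t\,\hat{u}(x)\ \mathrm{d}V_{\mathbb{B}^{N}}=\beta(u),\]

i.e., \(\beta\) is invariant under multiplication by positive constants. Second, if \(u\) is radially symmetric with respect to the origin, then so is \(\hat{u}\), the integrand above is odd in \(x\), and therefore \(\beta(u)=0\).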
Further, define
\[C_{0}=\inf\{E(u):u\in\mathcal{N},\beta(u)=0\},\,C_{0,\varepsilon}=\inf\left\{ E_{\varepsilon}(u):u\in\mathcal{N}_{\varepsilon},\,\beta(u)=0\right\}.\]
**Lemma 4.7**.: _The following facts hold:_
* \(C_{0}>m\)_;_
* \(\lim_{\varepsilon\to 0}C_{0,\varepsilon}=C_{0}\)_._
Proof.: We shall first prove inequality \((a)\). Clearly, Proposition 4.1 gives \(C_{0}\geq m\). Suppose, for contradiction, that \(C_{0}=m\). Let \(\left\{u_{n}\right\}_{n}\) be a sequence in \(\mathcal{N}\) with \(\beta\left(u_{n}\right)=0\) such that \(E\left(u_{n}\right)\to m\), and let \(t_{n}>0\) be such that \(t_{n}u_{n}\in\mathcal{N}_{\infty}\) for all \(n\in\mathbb{N}\). Using the assumption that \(a(x)\leq 1\) a.e. in \(\mathbb{B}^{N}\), we deduce that
\[m\leq E_{\infty}\left(t_{n}u_{n}\right)\leq E\left(t_{n}u_{n}\right)\leq E \left(u_{n}\right)=m+o(1), \tag{4.8}\]
which in turn implies that \(\left\{t_{n}u_{n}\right\}_{n}\) is a minimizing sequence for \(E_{\infty}\) on \(\mathcal{N}_{\infty}\). Therefore there exists a sequence \(\left\{y_{n}\right\}_{n}\) in \(\mathbb{B}^{N}\) such that
\[t_{n}u_{n}(x)=w\left(\tau_{-y_{n}}(x)\right)+\phi_{n}(x),\quad\phi_{n}\to 0 \quad\text{ strongly in }H^{1}\left(\mathbb{B}^{N}\right),\]
where \(w\) denotes the solution of \(\left(P_{\infty}\right)\) and is radially symmetric with respect to the origin. Moreover, this sequence \(\left\{y_{n}\right\}_{n}\) must be bounded because, if (up to a subsequence) \(\lim_{n\to\infty}d(y_{n},0)=+\infty\), i.e., \(\left|y_{n}\right|\to 1\) as \(n\to\infty\), then
\[\lim_{n\to\infty}\left|\beta\left(t_{n}u_{n}\right)-\frac{y_{n}}{1+\left|y_{n} \right|}\right|=0, \tag{4.9}\]
which will lead to a contradiction, as \(\beta\left(t_{n}u_{n}\right)=\beta\left(u_{n}\right)=0\) for all \(n\in\mathbb{N}\). To prove (4.9), consider
\[\left|\beta\left(t_{n}u_{n}\right)-\frac{y_{n}}{1+\left|y_{n}\right|}\right|=\frac{1}{\left|\hat{w}\right|_{1}}\left|\int_{\mathbb{B}^{N}}\left(\frac{\tau_{y_{n}}(z)}{1+\left|\tau_{y_{n}}(z)\right|}-\frac{y_{n}}{1+\left|y_{n}\right|}\right)\hat{w}(z)\;\mathrm{d}V_{\mathbb{B}^{N}}(z)\right|+o(1)\] \[=\frac{1}{\left|\hat{w}\right|_{1}}\left|\int_{\mathbb{B}^{N}}\left(\frac{\tau_{y_{n}}(z)+\left|y_{n}\right|\tau_{y_{n}}(z)-y_{n}-y_{n}\left|\tau_{y_{n}}(z)\right|}{(1+\left|\tau_{y_{n}}(z)\right|)(1+\left|y_{n}\right|)}\right)\hat{w}(z)\;\mathrm{d}V_{\mathbb{B}^{N}}(z)\right|+o(1). \tag{4.10}\]
Moreover, it follows from the expression of the hyperbolic translation (2.1) that \(\tau_{y_{n}}(x)=y_{n}+o(1)\), as \(\left|y_{n}\right|\to 1\), for each fixed \(x\in\mathbb{B}^{N}\). Now using this along with the dominated convergence theorem in (4.10), (4.9) follows.
Thus, up to a subsequence, \(y_{n}\rightarrow\bar{y}\) for some \(\bar{y}\in\mathbb{B}^{N}\); indeed, \(\bar{y}=0\) because \(w\) has radial symmetry with respect to the origin. Hence \(t_{n}u_{n}\to w\) strongly in \(H^{1}\left(\mathbb{B}^{N}\right)\). Moreover, since \(a(x)\not\equiv 1\), considering (4.8) we obtain
\[m=E_{\infty}(w)<E(w)=\lim_{n\rightarrow\infty}E\left(t_{n}u_{n}\right)\leq \lim_{n\rightarrow\infty}E\left(u_{n}\right)=m,\]
hence a contradiction. This proves \((a)\).
We shall now prove \((b)\). Let \(\varepsilon>0\) be fixed and for every \(\eta>0\), let \(u_{\eta}\in\mathcal{N}\) be such that \(\beta\left(u_{\eta}\right)=0\) and \(E\left(u_{\eta}\right)\leq C_{0}+\eta\). Also, let \(s_{\eta}>0\) be such that \(s_{\eta}u_{\eta}\in\mathcal{N}_{\varepsilon}\). Then
\[C_{0,\varepsilon}\leq E_{\varepsilon}\left(s_{\eta}u_{\eta}\right)\leq E\left( s_{\eta}u_{\eta}\right)\leq E\left(u_{\eta}\right)\leq C_{0}+\eta.\]
Since \(\eta\) is arbitrary, we get
\[C_{0,\varepsilon}\leq C_{0}\quad\forall\varepsilon>0. \tag{4.11}\]
Let \(v_{\varepsilon}\in\mathcal{N}_{\varepsilon}\) be such that \(\beta\left(v_{\varepsilon}\right)=0\) and \(E_{\varepsilon}\left(v_{\varepsilon}\right)\leq C_{0,\varepsilon}+\varepsilon\), and let \(t_{\varepsilon}>0\) such that \(t_{\varepsilon}v_{\varepsilon}\in\mathcal{N}\). Thus
\[\begin{split} C_{0}&\leq E\left(t_{\varepsilon}v_{ \varepsilon}\right)=E_{\varepsilon}\left(t_{\varepsilon}v_{\varepsilon}\right) +\frac{\varepsilon}{2^{*}}\left|t_{\varepsilon}v_{\varepsilon}\right|_{2^{*}} ^{2^{*}}\\ &\leq E_{\varepsilon}\left(v_{\varepsilon}\right)+\frac{ \varepsilon}{2^{*}}\left|t_{\varepsilon}v_{\varepsilon}\right|_{2^{*}}^{2^{*}} \\ &\leq C_{0,\varepsilon}+\varepsilon+\frac{\varepsilon}{2^{*}}t_{ \varepsilon}^{2^{*}}\left|v_{\varepsilon}\right|_{2^{*}}^{2^{*}}.\end{split} \tag{4.12}\]
Furthermore, taking into account (4.11) gives
\[E_{\varepsilon}\left(v_{\varepsilon}\right)=\left(\frac{1}{2}-\frac{1}{p} \right)\left\|v_{\varepsilon}\right\|_{\lambda}^{2}+\varepsilon\left(\frac{1} {p}-\frac{1}{2^{*}}\right)\left|v_{\varepsilon}\right|_{2^{*}}^{2^{*}}\leq C_{0,\varepsilon}+\varepsilon\leq C_{0}+\varepsilon\]
yielding \(\left|v_{\varepsilon}\right|_{2^{*}}^{2^{*}}\) is bounded. Moreover, taking into account \(a(x)\leq 1\) a.e. in \(\mathbb{B}^{N}\) and \(v_{\varepsilon}\in\mathcal{N}_{\varepsilon}\), and applying Sobolev inequalities, we can deduce
\[\left\|v_{\varepsilon}\right\|_{\lambda}^{2}\left[1-S_{N,p}^{-p}\left\|v_{ \varepsilon}\right\|_{\lambda}^{p-2}-\varepsilon S_{N,2^{*}}^{-2^{*}}\left\|v_ {\varepsilon}\right\|_{\lambda}^{2^{*}-2}\right]\leq 0.\]
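Let us briefly indicate how this inequality arises, assuming (as the notation suggests) that \(S_{N,q}\) denotes the constant in the embedding \(\left|u\right|_{q}\leq S_{N,q}^{-1}\left\|u\right\|_{\lambda}\) for \(q\in\{p,2^{*}\}\): since \(v_{\varepsilon}\in\mathcal{N}_{\varepsilon}\) and \(a(x)\leq 1\) a.e.,

\[\left\|v_{\varepsilon}\right\|_{\lambda}^{2}=\left|a^{1/p}v_{\varepsilon}\right|_{p}^{p}+\varepsilon\left|v_{\varepsilon}\right|_{2^{*}}^{2^{*}}\leq\left|v_{\varepsilon}\right|_{p}^{p}+\varepsilon\left|v_{\varepsilon}\right|_{2^{*}}^{2^{*}}\leq S_{N,p}^{-p}\left\|v_{\varepsilon}\right\|_{\lambda}^{p}+\varepsilon S_{N,2^{*}}^{-2^{*}}\left\|v_{\varepsilon}\right\|_{\lambda}^{2^{*}},\]

which rearranges to the displayed inequality.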
Then there exists \(c>0\), independent of small \(\varepsilon\), such that \(\left\|v_{\varepsilon}\right\|_{\lambda}\geq c>0\); in particular, \(\left\|v_{\varepsilon}\right\|_{\lambda}\not\to 0\). Hence, performing calculations similar to those leading to (3.5), we obtain \(\left|a^{1/p}v_{\varepsilon}\right|_{p}\not\to 0\). Further, \(t_{\varepsilon}v_{\varepsilon}\in\mathcal{N}\) implies \(t_{\varepsilon}=\left(\frac{\left\|v_{\varepsilon}\right\|_{\lambda}^{2}}{\left|a^{1/p}v_{\varepsilon}\right|_{p}^{p}}\right)^{\frac{1}{p-2}}\), so \(\{t_{\varepsilon}\}\) is bounded. Thus, using (4.12), we get \(\liminf_{\varepsilon\to 0}C_{0,\varepsilon}\geq C_{0}\), which, along with (4.11), gives \((b)\).
**Corollary 4.8**.: _There exists \(\widetilde{\varepsilon}>0\) such that the inequality \(C_{0,\varepsilon}>\frac{C_{0}+m}{2}\) is true for every \(\varepsilon\in(0,\widetilde{\varepsilon})\)_
Proof.: The statement can be easily proved by appropriately using \((a)\) and \((b)\) of Lemma 4.7.
**Lemma 4.9**.: _Let \(\mathcal{A}_{\varepsilon,R}\) be as in Lemma 4.5. Then \(\widehat{R}>0\) exists such that \(C_{0,\varepsilon}\leq\mathcal{A}_{\varepsilon,R}\;\;\forall R>\widehat{R}, \forall\varepsilon>0\)._
Proof.: Using calculations similar to those performed in (4.10), we can deduce that for \(y\in\Sigma\),
\[\beta(w\left(\tau_{-y}(\cdot)\right))=\frac{y}{1+|\tau_{-x_{1}}(y)|}+o(1),\]
where \(o(1)\to 0\) as \(R^{\prime}\to\infty\). Then, following arguments similar to those in [28, Section 5] and defining the map \(K:\tilde{B}_{R^{\prime}}(x_{1})\to\mathbb{R}^{N}\) by
\[K(x):=\frac{x}{1+\tanh\frac{R^{\prime}}{2}},\]
we have, for \(R^{\prime}\) large enough and all \(y\in\Sigma=\partial B_{R^{\prime}}\left(x_{1}\right)\), that \(\beta\circ\psi_{R^{\prime}}[1,y]=K(y)\). Therefore, applying the homotopy invariance and the solution property of the topological degree, we can ensure the existence of some \(\left(s_{R},y_{R}\right)\in[0,1]\times\Sigma\) such that \(\beta(\psi_{R}[s_{R},y_{R}])=0\). Thus \(\beta\left(t_{R,s_{R},y_{R}}\psi_{R}\left[s_{R},y_{R}\right]\right)=0\). Since \(t_{R,s_{R},y_{R}}\psi_{R}\left[s_{R},y_{R}\right]\in\mathcal{N}_{\varepsilon}\), the assertion follows.
**Lemma 4.10**.: _Assume \(\tilde{\varepsilon}\) as in Corollary 4.8 and \(\varepsilon\in(0,\tilde{\varepsilon})\). There exists \(\tilde{R}>0\) such that for any \(R>\tilde{R}\)_
\[\mathcal{B}_{\varepsilon,R}:=\max\left\{E_{\varepsilon}\left(t_{R,1,y}\psi_{R }[1,y]\right):y\in\Sigma\right\}<C_{0,\varepsilon}.\]
Proof.: For simplicity, denote \(t_{R}=t_{R,1,y}\) and \(\psi_{R}=\psi_{R}[1,y]\). We argue by contradiction, so assume that there exist \(R_{n}\to\infty\) and \(y_{n}\in\Sigma\) such that \(E_{\varepsilon}\left(t_{R_{n}}\psi_{R_{n}}\right)\geq C_{0,\varepsilon}\) for every \(n\in\mathbb{N}\). Since \(t_{R_{n}}\psi_{R_{n}}\in\mathcal{N}_{\varepsilon}\), we can write
\[\begin{split}& E_{\varepsilon}\left(t_{R_{n}}\psi_{R_{n}}\right)= \left(\frac{1}{2}-\frac{1}{p}\right)\left\|t_{R_{n}}\psi_{R_{n}}\right\|_{ \lambda}^{2}+\varepsilon\left(\frac{1}{p}-\frac{1}{2^{*}}\right)\left|t_{R_{n }}\psi_{R_{n}}\right|_{2^{*}}^{2^{*}}\\ &=\left(\frac{1}{2}-\frac{1}{p}\right)t_{R_{n}}^{2}\left\|w \left(\tau_{-y_{n}}(\cdot)\right)\right\|_{\lambda}^{2}+\varepsilon\left( \frac{1}{p}-\frac{1}{2^{*}}\right)t_{R_{n}}^{2^{*}}\left|w\left(\tau_{-y_{n}} (\cdot)\right)\right|_{2^{*}}^{2^{*}}.\end{split} \tag{4.13}\]
We note that in our setting \(0<m\leq C_{0,\varepsilon}\leq E_{\varepsilon}\left(t_{R_{n}}\psi_{R_{n}}\right)\leq\mathcal{A}_{\varepsilon,R}<2m\) and that \(0<c\leq\left\|w\left(\tau_{-y_{n}}(\cdot)\right)\right\|_{\lambda}^{2}\leq C<\infty,\forall n\in\mathbb{N}\). Thus \(0<c_{1}\leq t_{R_{n}}\leq C_{1}<\infty\) follows from (4.13). So, up to a subsequence, we may assume \(t_{R_{n}}\to t>0\). Since \(R_{n}\to\infty\), the energy estimates of [28] allow us to prove \(E_{\varepsilon}\left(t_{R_{n}}\psi_{R_{n}}\right)\to E_{\varepsilon,\infty}(tw)\), and we get
\[\begin{split} C_{0,\varepsilon}&\leq E_{\varepsilon, \infty}(tw)=E_{\infty}(tw)-\frac{\varepsilon}{2^{*}}|tw|_{2^{*}}^{2^{*}}\\ &\leq E_{\infty}(w)-\frac{\varepsilon}{2^{*}}|tw|_{2^{*}}^{2^{*}} \\ &=m-\frac{\varepsilon}{2^{*}}|tw|_{2^{*}}^{2^{*}}<m\end{split}\]
which gives rise to a contradiction, taking into account Corollary 4.8 and Lemma 4.7\((a)\).
**Proof of Theorem 1.3** Firstly, we recall the quantities that have been defined and used in obtaining the previous results:
\[\mathcal{A}_{\varepsilon,R}=\max\left\{E_{\varepsilon}\left(t_{R,s,y} \psi_{R}[s,y]\right):s\in[0,1],y\in\Sigma\right\},\] \[\mathcal{B}_{\varepsilon,R}=\max\left\{E_{\varepsilon}\left(t_{R,1,y}\psi_{R}[1,y]\right):y\in\Sigma\right\},\] \[C_{0,\varepsilon}=\inf\left\{E_{\varepsilon}(u):u\in\mathcal{N} _{\varepsilon},\beta(u)=0\right\}.\]
By Corollaries 4.6 and 4.8, and Lemmas 4.5, 4.7, 4.9 and 4.10, the following inequalities
\[\begin{cases}(a)&\mathcal{B}_{\varepsilon,R}<C_{0,\varepsilon}\leq\mathcal{A}_{\varepsilon,R}\\ (b)&m<\frac{C_{0}+m}{2}<C_{0,\varepsilon}\leq\mathcal{A}_{\varepsilon,R}\leq\mathcal{A}<2m\\ (c)&\mathcal{A}_{\varepsilon,R}<2m_{\varepsilon}\end{cases} \tag{4.14}\]
hold true for all \(R>\max\{\bar{R},\widetilde{R},\widehat{R}\}\) and for all \(0<\varepsilon<\min\{\bar{\varepsilon},\widetilde{\varepsilon}\}\). Choose \(\delta\) such that \(0<\delta<\min\left\{\frac{m}{2},2m-\mathcal{A},\frac{C_{0}-m}{2}\right\}\). Further, we take \(\varepsilon_{\delta}\) according to Proposition 4.3.
Finally, to prove Theorem 1.3, we claim that \(E_{\varepsilon}\) constrained on \(\mathcal{N}_{\varepsilon}\) has a (PS)-sequence in \([C_{0,\varepsilon},\mathcal{A}_{\varepsilon,R}]\) for every \(0<\varepsilon<\widehat{\varepsilon}:=\min\left\{\varepsilon_{\delta},\bar{\varepsilon},\widetilde{\varepsilon}\right\}\). Having done this, Proposition 4.3 will guarantee the existence of a non-zero critical point \(\bar{u}\) with \(E_{\varepsilon}(\bar{u})\leq\mathcal{A}_{\varepsilon,R}\).
We argue by contradiction, so let us suppose that there is no (PS)-sequence in the interval \([C_{0,\varepsilon},\mathcal{A}_{\varepsilon,R}]\). Then, standard deformation arguments give the existence of \(\eta>0\) such that the sublevel \(E_{\varepsilon}^{C_{0,\varepsilon}-\eta}:=\left\{u\in\mathcal{N}_{\varepsilon }:E_{\varepsilon}(u)\leq C_{0,\varepsilon}-\eta\right\}\) is a deformation retract of the sublevel \(E_{\varepsilon}^{\mathcal{A}_{\varepsilon,R}}:=\left\{u\in\mathcal{N}_{ \varepsilon}:E_{\varepsilon}(u)\leq\mathcal{A}_{\varepsilon,R}\right\}\), i.e., there exists a continuous function \(\sigma:E_{\varepsilon}^{\mathcal{A}_{\varepsilon,R}}\to E_{\varepsilon}^{C_{0, \varepsilon}-\eta}\) such that
\[\sigma(u)=u\quad\text{ for any }u\in E_{\varepsilon}^{C_{0,\varepsilon}-\eta}. \tag{4.15}\]
Moreover, taking into account the inequality 4.14\((a)\), \(\eta\) can be chosen so small that
\[C_{0,\varepsilon}-\eta>\mathcal{B}_{\varepsilon,R}. \tag{4.16}\]
Let us define the map \(\mathcal{H}:[0,1]\times\Sigma\to\mathbb{R}^{N}\) by
\[\mathcal{H}(s,y)=\beta\left(\sigma\left(t_{R,s,y}\psi_{R}[s,y]\right)\right). \tag{4.17}\]
Applying (4.16), (4.17), and arguments similar to those used in Lemma 4.9, we can find a point \((\tilde{s},\tilde{y})\in[0,1]\times\Sigma\) for which
\[0=\mathcal{H}(\tilde{s},\tilde{y})=\beta\left(\sigma\left(t_{R,\tilde{s}, \tilde{y}}\psi_{R}[\tilde{s},\tilde{y}]\right)\right).\]
Then \(E_{\varepsilon}\left(\sigma\left(t_{R,\tilde{s},\tilde{y}}\psi_{R}[\tilde{s},\tilde{y}]\right)\right)\geq C_{0,\varepsilon}\), which contradicts \(\sigma\left(t_{R,s,y}\psi_{R}[s,y]\right)\in E_{\varepsilon}^{C_{0,\varepsilon}-\eta}\) for every \((s,y)\in[0,1]\times\Sigma\); so the claim must hold true.
Denote the critical point obtained above by \(\bar{u}\in E_{\varepsilon}^{\mathcal{A}_{\varepsilon,R}}\). In order to prove that \(\bar{u}\) is a constant-sign function, assume, by contradiction, that \(\bar{u}=\bar{u}^{+}-\bar{u}^{-}\) with \(\bar{u}^{\pm}\not\equiv 0\). Multiplying the equation in \((P_{\varepsilon})\) by \(\bar{u}^{\pm}\), we find that \(\bar{u}^{\pm}\in\mathcal{N}_{\varepsilon}\), so
\[E_{\varepsilon}(\bar{u})=E_{\varepsilon}\left(\bar{u}^{+}\right)+E_{\varepsilon }\left(\bar{u}^{-}\right)\geq 2m_{\varepsilon},\]
contrary to 4.14(c).
**Acknowledgments.** D. Ganguly is partially supported by the INSPIRE faculty fellowship (IFA17-MA98). D. Gupta is supported by the PMRF.
**Conflict of interest:** All authors certify that there is no actual or potential conflict of interest about this article. |
2309.13178 | Controlling the Manifold of Polariton States Through Molecular Disorder | Exciton polaritons, arising from the interaction of electronic transitions
with confined electromagnetic fields, have emerged as a powerful tool to
manipulate the properties of organic materials. However, standard experimental
and theoretical approaches overlook the significant energetic disorder present
in most materials now studied. Using the conjugated polymer P3HT as a model
platform, we systematically tune the degree of energetic disorder and observe a
corresponding redistribution of photonic character within the polariton
manifold. Based on these subtle spectral features, we develop a more
generalized approach to describe strong light-matter coupling in disordered
systems that captures the key spectroscopic observables and provides a
description of the rich manifold of states intermediate between bright and
dark. Applied to a wide range of organic systems, our method challenges
prevailing notions about ultrastrong coupling and whether it can be achieved
with broad, disordered absorbers. | Aleesha George, Trevor Geraghty, Zahra Kelsey, Soham Mukherjee, Gloria Davidova, Woojae Kim, Andrew J Musser | 2023-09-22T20:50:04Z | http://arxiv.org/abs/2309.13178v1 | Controlling the Manifold of Polariton States Through Molecular Disorder
###### Abstract
Exciton polaritons, arising from the interaction of electronic transitions with confined electromagnetic fields, have emerged as a powerful tool to manipulate the properties of organic materials. However, standard experimental and theoretical approaches overlook the significant energetic disorder present in most materials now studied. Using the conjugated polymer P3HT as a model platform, we systematically tune the degree of energetic disorder and observe a corresponding redistribution of photonic character within the polariton manifold. Based on these subtle spectral features, we develop a more generalized approach to describe strong light-matter coupling in disordered systems that captures the key spectroscopic observables and provides a description of the rich manifold of states intermediate between bright and dark. Applied to a wide range of organic systems, our method challenges prevailing notions about ultrastrong coupling and whether it can be achieved with broad, disordered absorbers.
## 1 Introduction
Exciton-polaritons are quasiparticles that result from the strong interaction between light and matter, resulting in hybridization between well-defined photonic states - for instance in optical cavities, nanoparticle arrays, or nanostructured materials - and electronic excitations [1, 2, 3, 4]. Though originally studied in highly ordered, cryogenic inorganic semiconductors, exciton-polaritons can also be readily observed at room temperature in a wide array of organic systems [5, 6]. The foundational understanding of organic exciton-polaritons is derived from studies of materials characterized by minimal electronic disorder and narrow linewidths, such as cyanine dye J-aggregates [7, 8, 9, 10, 11, 12]. However, more recently there has been a paradigm shift towards organic materials with markedly higher disorder. Broad molecular absorbers have been harnessed to explore a wide array of polaritonic phenomena, including triplet harvesting [13], charge transfer within photovoltaic devices [14, 15, 16], long-range energy transport [17, 18, 19], and condensation [20, 21]. Despite this change in focus to disordered systems, there has been little explicit consideration of this material parameter and most analyses remain based on the models from well-ordered systems.
The conventional approach to strong coupling stems from the Tavis-Cummings (TC) model, which describes the interaction between \(N\) degenerate two-level systems and a single field quantum (photon) [22]. The Hamiltonian takes the form [23]:
\[H_{TC}=\sum_{k}^{N}\tfrac{1}{2}\hbar\omega\sigma_{k}^{+}\sigma_{k}^{-}+\hbar g \big{(}\sigma_{k}^{+}a+a^{\dagger}\sigma_{k}^{-}\big{)}+\hbar va^{\dagger}a \tag{1}\] |
2301.00040 | Optimization-based Sensitivity Analysis for Unmeasured Confounding using
Partial Correlations | Causal inference necessarily relies upon untestable assumptions; hence, it is
crucial to assess the robustness of obtained results to violations of
identification assumptions. However, such sensitivity analysis is only
occasionally undertaken in practice, as many existing methods only apply to
relatively simple models and their results are often difficult to interpret. We
take a more flexible approach to sensitivity analysis and view it as a
constrained stochastic optimization problem. This work focuses on sensitivity
analysis for a linear causal effect when an unmeasured confounder and a
potential instrument are present. We show how the bias of the OLS and TSLS
estimands can be expressed in terms of partial correlations. Leveraging the
algebraic rules that relate different partial correlations, practitioners can
specify intuitive sensitivity models which bound the bias. We further show that
the heuristic "plug-in" sensitivity interval may not have any confidence
guarantees; instead, we propose a bootstrap approach to construct sensitivity
intervals which perform well in numerical simulations. We illustrate the
proposed methods with a real study on the causal effect of education on
earnings and provide user-friendly visualization tools. | Tobias Freidling, Qingyuan Zhao | 2022-12-30T20:03:08Z | http://arxiv.org/abs/2301.00040v3 | # Sensitivity Analysis with the R\({}^{2}\)-calculus
###### Abstract
Causal inference necessarily relies upon untestable assumptions; hence, it is crucial to assess the robustness of obtained results to violations of identification assumptions. However, such sensitivity analysis is only occasionally undertaken in practice, as many existing methods only apply to relatively simple models and their results are often difficult to interpret. We take a more flexible approach to sensitivity analysis and view it as a constrained stochastic optimization problem. We focus on linear models with an unmeasured confounder and a potential instrument. We show how the \(R^{2}\)-calculus - a set of algebraic rules that relates different (partial) \(R^{2}\)-values and correlations - can be applied to identify the bias of the \(k\)-class estimators and construct sensitivity models flexibly. We further show that the heuristic "plug-in" sensitivity interval may not have any confidence guarantees; instead, we propose a bootstrap approach to construct sensitivity intervals which perform well in numerical simulations. We illustrate the proposed methods with a real study on the causal effect of education on earnings and provide user-friendly visualization tools.
**Keywords:** Causal inference; Instrumental variables; \(k\)-class estimator; Linear models; Partial identification; Stochastic optimization
## 1 Introduction
In many scientific disciplines, provisional causal knowledge is predominantly generated from observational data as randomized controlled experiments are often infeasible or too costly. Because the treatment is not randomly assigned in an observational study, any causal conclusions must rely on untestable assumptions, such as absence of unmeasured confounders or validity of instrumental variables. Hence, causal inference is inherently sensitive to violations of any identification and modelling assumptions, so researchers are advised to investigate the robustness of their results.
The importance of sensitivity analysis has been emphasized in guidelines for designing and reporting observational studies (Vandenbroucke et al., 2007; PCORI Methodology Committee, 2021). For instance, the STROBE guidelines caution that "taking [observed] confounders into account is crucial in observational studies, but readers should not assume that analyses adjusted for [observed] confounders establish the 'causal part' of an association" (p. 1638). They recommend conducting sensitivity analyses, as they are "helpful to investigate the influence of choices made in the statistical analysis, or to investigate the robustness of the findings to missing data or possible biases" (p. 1647).
However, sensitivity analysis is still rarely conducted in actual studies, making it difficult for other researchers to assess the robustness of the reported empirical findings. In medicine, Thabane et al. (2013) did a spot check on the January 2012 editions of major medical journals and found that only 26.7% (36 out of 135) of the articles that included some statistical analysis also performed sensitivity analysis. In nutrition research, de Souza et al. (2016) found that, in a representative sample of 100 articles from 2013 to 2015, merely 18% of them conducted some sensitivity analysis. In political science, Cinelli and Hazlett (2020) found that only 4 out of 64 observational studies published in three leading journals in 2017 conducted a formal sensitivity analysis beyond just some model specification checks.
There are several reasons for the hesitant uptake of sensitivity analysis in practice. First, it is not straightforward to define a reasonable model for sensitivity analysis, even for the familiar setting of one treatment variable, one outcome variable, and multiple baseline covariates that has been studied since the seminal work of Cornfield et al. (1959). For example, Lin et al. (1998) assume an unmeasured confounder \(U\) independent of the measured covariates \(X\) conditional on the treatment. However, Hernan and Robins (1999) point out that this cannot be generally true as conditioning on the treatment opens a collider path between \(U\) and \(X\). For more complicated settings such as instrumental variables (IV), specifying a good sensitivity model is even more difficult and the literature on sensitivity analysis is considerably smaller. Second, many methods for sensitivity analysis were developed under simple settings where closed-form solutions are available. This results in a limited scope of applicability. Finally, it is often not easy for practitioners to understand and communicate the results of a sensitivity analysis.
In general, (non-Bayesian) sensitivity analysis can be broadly categorized into point identified and partially identified approaches. The former requires a precise specification of the confounding mechanism, so that the causal effect of interest is still identified; see for instance Rosenbaum and Rubin (1983), Imbens (2003), and VanderWeele and Arah (2011) for the usual observational study design, Scharfstein et al. (1999) for longitudinal studies with dropouts, and Altonji et al. (2005) for instrumental variables. On the other hand, the partially identified approach considers the union of many point identified sensitivity models, so the causal effect is only partially identified. Examples include the first sensitivity analysis by Cornfield et al. (1959), the approach developed by Rosenbaum (1987, 2002) based on randomization tests, the E-value proposed by Ding and VanderWeele (2016) that generalizes the Cornfield bound, the generalization of Scharfstein et al. (1999) by Vansteelandt et al. (2006), bounds on the average treatment effect under Rosenbaum's sensitivity model by Yadlowsky et al. (2022) and the marginal sensitivity model studied in Zhao et al. (2019) and Dorn and Guo (2022).
In our experience, the partially identified approach is more flexible and usually aligns better with practical needs. This is why we adopt it in this article. We limit our discussion to linear regression and linear instrumental variable models, but the methodology we develop below is quite general and can potentially be extended to other models. Compared with previous work, a crucial distinction is that we do not require the partially identified region (or, as in Rosenbaum's sensitivity
analysis, an upper bound of the randomization p-value) to have a closed form solution. Instead, we leverage a novel perspective on sensitivity analysis through the lens of constrained stochastic optimization. This is elaborated next.
### A General Framework for Sensitivity Analysis
Consider an i.i.d. sample \(\left(V_{i},U_{i}\right)_{i=1}^{n}\) from some population, but only the variables \(\left(V_{i}\right)_{i=1}^{n}\) are observed. Denote the joint probability distribution of \(\left(V_{i},U_{i}\right)\) as \(\mathbb{P}=\mathbb{P}_{V,U}\). Depending on the assumptions on the data generating process, the distribution \(\mathbb{P}\) may be restricted to be within a parametric, semi-parametric or non-parametric family. The marginal distribution of \(V\) and the distribution of \(U\) conditional on \(V\) are denoted by \(\mathbb{P}_{V}\) and \(\mathbb{P}_{U|V}\), respectively.
We are interested in estimating and conducting inference for some functional \(\beta=\beta(\mathbb{P}_{V,U})\). For example, suppose \(V=(D,Y,X)\) includes a treatment variable \(D\), an outcome \(Y\), and some covariates \(X\). We may be interested in estimating the causal effect of \(D\) on \(Y\), which would be point identified if there are no other confounders given \(\left(X,U\right)\) and \(U\) is observed. However, since \(U\) is not observed, \(\beta\) may only be partially identified if we restrict the "strength of confounding" for \(U\) in some sense.
In many cases, \(\beta\) can be expressed as a function of two types of parameters, \(\theta=\theta(\mathbb{P}_{V})\) and \(\psi=\psi(\mathbb{P}_{V,U})\). The former only depends on the marginal distribution of \(V\) and can therefore be estimated from the observed variables; the latter additionally depends on the distribution of \(U\) and thus cannot be directly estimated. Adopting a Bayesian perspective, Gustafson (2005) and Daniels and Hogan (2008) advocate the use of a separable parameterization, meaning that \(\psi=\psi(\mathbb{P}_{U|V})\) only depends on the conditional distribution \(\mathbb{P}_{U|V}\). In this set-up, no information about \(\psi\) can be learnt from the observed data, which has several advantages in deriving bounds or making Bayesian inference. However, requiring a separable parameterization could be too restrictive in our experience and we will not make this assumption below.
Since \(U\) is unobserved, the parameter \(\psi\) and thus the functional \(\beta\) cannot be identified from the observed data. A point identified sensitivity analysis assumes that \(\psi\) is given, for example by eliciting the opinion of a domain expert. In this sense, the primary analysis can be viewed as a special case of a point identified sensitivity analysis, where \(\psi\) takes the value (conventionally 0) that corresponds to the unobserved variable \(U\) being "ignorable".
To assess the robustness of the primary analysis, a partially identified sensitivity analysis assumes that \(\psi\) belongs to a set \(\Psi=\Psi(\theta)\). Comparing to point identified models, this is appealing because it is much easier for domain experts to specify a possible range of \(\psi\) than a specific value. However, under the weaker condition \(\psi\in\Psi\), the functional \(\beta\) is only partially identified; we call the corresponding set of \(\beta\)-values the partially identified region (PIR):
\[\text{PIR}(\mathbb{P}_{V}):=\big{\{}\beta(\theta(\mathbb{P}_{V}),\psi)\colon \psi\in\Psi(\theta(\mathbb{P}_{V}))\big{\}}. \tag{1}\]
The condition \(\psi\in\Psi(\theta)\) in (1) implies a constraint on the joint distribution \(\mathbb{P}_{V,U}\).
For this reason, we will refer to \(\Psi\) as the sensitivity model.
In general, the partially identified region can be quite complicated and difficult to infer. However, this can be simplified in the case where \(\beta\) is real-valued and one-dimensional by seeking to solve the following optimization problems:
\[\min/\max\beta(\theta(\mathbb{P}_{V}),\psi),\qquad\text{subject to }\psi\in\Psi( \theta(\mathbb{P}_{V})), \tag{2}\]
where the distribution \(\mathbb{P}_{V}\) is fixed. As both the objective and the feasible set in (2) depend on the unknown \(\mathbb{P}_{V}\) we can sample from, this is an instance of stochastic optimization or stochastic programming (Shapiro et al., 2009). A natural, plug-in estimator of the optimal values of this problem can be obtained by solving
\[\min/\max\beta(\hat{\theta},\psi),\qquad\text{subject to }\psi\in\Psi(\hat{ \theta}), \tag{3}\]
where \(\hat{\theta}\) is an estimator of \(\theta\) based on the observed data. This can be viewed as a generalization of the sample average approximation (SAA) estimator in stochastic optimization (Shapiro et al., 2009, chap. 5). Thus, a general recipe for partially identified sensitivity analysis is the following:
1. The functional \(\beta\) of interest is expressed in terms of the identifiable parameters \(\theta=\theta(\mathbb{P}_{V})\) and the sensitivity parameters \(\psi=\psi(\mathbb{P}_{V,U})\);
2. The set of constraints \(\psi\in\Psi(\theta)\) is specified by consulting domain experts;
3. The optimal values of the stochastic program (2) are estimated either by first obtaining a closed-form solution to (2) and then estimating that quantity, or by directly solving the plug-in problem (3), as sketched in the example after this list;
4. Suitable methods are then used to quantify the uncertainty of the estimators in the previous step.
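To make step 3 concrete, the following minimal Python sketch illustrates the plug-in (sample-average-approximation) approach in (3) for a scalar \(\beta\): the identifiable parameters are replaced by their estimates \(\hat{\theta}\), the feasible set \(\Psi(\hat{\theta})\) is discretized, and the extreme values of \(\beta(\hat{\theta},\psi)\) over the grid are reported. The particular bias expression and the numerical bounds below are hypothetical placeholders chosen purely for illustration; they are not the formulas derived later in this paper.

```python
import numpy as np

def plugin_interval(beta, psi_grid, theta_hat):
    """Plug-in estimate of the optimal values of (3): evaluate the target
    functional at every feasible sensitivity parameter and take min/max."""
    values = np.array([beta(theta_hat, psi) for psi in psi_grid])
    return values.min(), values.max()

def beta_fun(theta, psi):
    # Hypothetical bias form, for demonstration only: beta_obs - psi1*psi2*scale.
    return theta["beta_obs"] - psi[0] * psi[1] * theta["scale"]

# Made-up estimates and a made-up sensitivity model |psi1| <= 0.3, |psi2| <= 0.3.
theta_hat = {"beta_obs": 1.2, "scale": 0.8}
psi_grid = [(r1, r2)
            for r1 in np.linspace(-0.3, 0.3, 61)
            for r2 in np.linspace(-0.3, 0.3, 61)]

print(plugin_interval(beta_fun, psi_grid, theta_hat))  # approximately (1.128, 1.272)
```

In more realistic settings the grid search can be replaced by a numerical optimizer, and step 4 can be addressed, for example, by repeating the computation on bootstrap resamples.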
In this article, we will focus on sensitivity analysis for linear regression and linear instrumental variables models in which \(\theta\) and \(\psi\) are low-dimensional. Nevertheless, the general framework outlined above may also be suitable for problems involving high- or infinite-dimensional parameters; see Section 8 for more discussion.
### Interpretable Sensitivity Models using the R\({}^{2}\)-Calculus
In practice, the usefulness of the partially identified region in (1) or the optimal values of (2) depends crucially on the interpretability of the sensitivity model \(\Psi\). This is where the \(R^{2}\)-calculus can be extremely useful. In short, the \(R^{2}\)-value \(R^{2}_{Y\sim X}\), also known as coefficient of determination, measures how much variance of \(Y\) can be explained by linear combinations of \(X\). An \(R^{2}\)-value close to 1 indicates that \(X\) can explain a large degree of the variance of \(Y\); on the other hand, values close to 0 indicate that the linear dependence between \(Y\) and \(X\) is weak. The partial \(R^{2}\)-value \(R^{2}_{Y\sim X|Z}\) naturally extends this idea and measures how much variance of \(Y\) can be explained by \(X\) given \(Z\).
Due to their straightforward interpretation, \(R^{2}\)- and partial \(R^{2}\)-values are widely used to help practitioners interpret the results of sensitivity analyses. For instance,
Imbens (2003) uses them in sensitivity analysis for regression models with a discrete treatment variable and this idea is recently extended by Veitch and Zaveri (2020); Small (2007) measures the amount of violations to the instrumental variable assumptions by using \(R^{2}\)-values. Cinelli and Hazlett (2020) take this idea further and parameterize the bias of the linear regression estimator by solely using \(R^{2}\)-values. Other parameterizations that are not fully based on \(R^{2}\)-values can be found in Hosman et al. (2010) and Oster (2019).
In this article, we extend this line of work and make several novel contributions:
* We use partial correlations (or \(R\)-values) instead of \(R^{2}\)-values (which are just squared \(R\)-values) to parameterize the sensitivity model, so the direction of the confounder effect is naturally captured. In contrast, previous works either use worst-case bounds implied by \(R^{2}\)-values (Cinelli and Hazlett, 2020) or directly specify the sign of the bias in an additional sensitivity parameter (Zhang and Ding, 2022).
* We provide a list of algebraic relations between \(R\)- and \(R^{2}\)- values. We give a proof of this \(R^{2}\)-calculus from a general Hilbert space perspective which may be of independent interest.
* We give a general bias formula for the family of \(k\)-class estimators which includes the ordinary least squares estimator and the two-stage least squares estimator. This allows us to provide a unified framework of sensitivity analysis for linear regression and instrumental variables models.
* Facilitated by the \(R^{2}\)-calculus and the general bias formula, we allow users to specify very flexible constraints \(\Psi\) on the sensitivity parameters. For example, we allow constraints that compare explanatory capability (in terms of the \(R^{2}\)-value) of some unmeasured confounder \(U\) with that of a measured covariate.
* We show that the simple method of fixing the sample \(R^{2}\)-values related to the unmeasured variable \(U\) in the sensitivity analysis, may not provide confidence statements in the frequentist sense. Instead, we propose a bootstrap approach.
* We provide a suite of user-friendly plots to visualize the results of the sensitivity analysis.
### Organization of the Paper
Section 2 describes the \(R^{2}\)-calculus, a collection of algebraic rules that relate (partial) \(R^{2}\)-values and correlations. Section 3 provides a general bias formula for the \(k\)-class estimator in presence of one unmeasured confounder and discusses extensions to multiple unmeasured confounders.
Section 4 uses the \(R^{2}\)-calculus to develop multiple ways for practitioners to specify the constraints in \(\Psi(\theta)\) based on domain knowledge. Specifically, we provide comparative bounds on the sensitivity parameters that correspond to deviations from the no unmeasured confounders and the instrumental variable assumptions. Section 5 reviews some approaches to construct sensitivity intervals that contain \(\beta\) or the PIR with high probability. We show that directly specifying sample \(R^{2}\)-values as sensitivity parameters may not provide frequentist guarantees and propose an approach based on the bootstrap.
Section 6 applies our proposed sensitivity analysis method to a famous study in labour economics by Card (1993). We consider both the linear regression and instrumental variable estimators and compare the results obtained by imposing different sensitivity models. Section 7 introduces sensitivity contour plots that help to investigate how the choice of constraints affects the PIR. These plots are illustrated with the real data example. Finally, Section 8 concludes this article with a discussion of our method and an outlook on future research.
Readers who are more interested in applying the proposed method and interpreting its results may wish to skip Sections 2 to 5 initially. Proofs for some theoretical results in this article and a detailed description of the optimization algorithm can be found in the Appendix.
## 2 \(\mathsf{R^{2}}\)-calculus
We first give a summary of the \(R^{2}\)-calculus - a set of widely used algebraic rules which concern the coefficient of determination (also called \(R^{2}\)-value) and related quantities. Although these rules are often introduced together with the multivariate normal distribution (see e.g. Anderson, 1958, sec. 2.5), they are purely algebraic and rely on no distributional assumptions. In fact, this calculus not only applies to the \(R^{2}\)- and \(R\)-values in the population but also to their counterparts in the sample, which will be denoted by \(\hat{R}^{2}\) and \(\hat{R}\) below; see Appendix A.3. For brevity, we will only state the definitions and results for the population values.
Let \(Y\) be a random variable, let \(X\) and \(Z\) be two random vectors, and suppose they all have finite variances. Without loss of generality, we suppose that all random variables and vectors have mean equal to zero. Otherwise, we can replace them with their centred versions; see Appendix A.3. We use \(Y\perp\!\!\!\perp X\,|\,Z\) to denote that \(Y\) and \(X\) are independent conditional on \(Z\) as defined in Dawid (1979). Furthermore, the residual of \(Y\) after partialing/regressing out \(X\) is given by
\[Y^{\perp X}:=Y-X^{T}\operatorname{var}(X)^{-1}\operatorname{cov}(X,Y).\]
The variance of \(Y^{\perp X}\) equals that of the residual in the linear regression of \(Y\) on \(X\), which motivates the notation \(\sigma^{2}_{Y\sim X}=\operatorname{var}(Y^{\perp X})\); let \(\sigma_{Y\sim X}\) denote its square root.
**Definition 1**.: Suppose \(\sigma^{2}_{Y\sim Z}>0\). The \(R^{2}\)-value of \(Y\) on \(X\) is defined as
\[R^{2}_{Y\sim X}:=1-\frac{\sigma^{2}_{Y\sim X}}{\sigma^{2}_{Y}}.\]
The partial \(R^{2}\)-value and \(f^{2}\)-value of \(Y\) on \(X\) given \(Z\) are defined as
\[R^{2}_{Y\sim X|Z}:=\frac{R^{2}_{Y\sim X+Z}-R^{2}_{Y\sim Z}}{1-R^{2}_{Y\sim Z}} \quad\text{and}\quad f^{2}_{Y\sim X|Z}:=\frac{R^{2}_{Y\sim X|Z}}{1-R^{2}_{Y \sim X|Z}},\]
respectively. If \(X\) is one-dimensional and \(\sigma^{2}_{X\sim Z}>0\), the partial \(R\)- and \(f\)-value (Cohen, 1977) are defined as
\[R_{Y\sim X|Z}:=\operatorname{corr}(Y^{\perp Z},X^{\perp Z}),\quad\text{and} \quad f_{Y\sim X|Z}:=\frac{R_{Y\sim X|Z}}{\sqrt{1-R_{Y\sim X|Z}^{2}}}.\]
The marginal \(f^{2}\)-, \(R\)- and \(f\)-values can be further defined by using an "empty" \(Z\) in the definitions above; details are omitted.
The partial \(R^{2}\) takes values in \([0,1]\) and is a measure of how well the variables in \(X\) can be linearly combined to explain the variation in \(Y\) after already using linear combinations of \(Z\). Values close to \(1\) indicate high explanatory capability. This simple interpretation makes the \(R^{2}\)-value a popular tool to assess the goodness of fit of a linear model. The partial \(f^{2}\) is a monotone transformation of the partial \(R^{2}\) and takes values in \([0,\infty]\). The partial \(R\)-value captures not only the strength but also the direction of dependence between \(Y\) and \(X\) after partialing out \(Z\).
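As a concrete illustration of these definitions (in their sample versions; see Appendix A.3), the following short Python sketch computes a sample partial \(R\)-value by residualizing both variables on \(Z\) via least squares and correlating the residuals; the corresponding partial \(R^{2}\)-value is then obtained by squaring, consistently with Lemma 1 below. The simulated data are purely illustrative.

```python
import numpy as np

def residualize(v, Z):
    """Least-squares residual of v after regressing out Z (with an intercept)."""
    Z1 = np.column_stack([np.ones(len(v)), Z])
    coef, *_ = np.linalg.lstsq(Z1, v, rcond=None)
    return v - Z1 @ coef

def partial_r(y, x, Z):
    """Sample partial R-value of y on x given Z: correlation of the residuals."""
    ry, rx = residualize(y, Z), residualize(x, Z)
    return ry @ rx / np.sqrt((ry @ ry) * (rx @ rx))

rng = np.random.default_rng(0)
n = 2000
Z = rng.normal(size=(n, 2))
x = Z @ np.array([0.5, -0.2]) + rng.normal(size=n)
y = 1.0 * x + Z @ np.array([0.3, 0.4]) + rng.normal(size=n)

R = partial_r(y, x, Z)
print(R, R**2)  # sample partial R- and R^2-value of y on x given Z
```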
The next result justifies calling \(R^{2}_{Y\sim X|Z}\) a partial \(R^{2}\)-value and shows that the definitions of \(R^{2}\)- and \(R\)-value are consistent. It follows from the Gram-Schmidt orthogonalization.
**Lemma 1**.: _In the setting of Definition 1, \(R^{2}_{Y\sim X|Z}=R^{2}_{Y^{\perp Z}\sim X^{\perp Z}}\) holds true. Moreover, if \(X\) is one-dimensional, then \(R^{2}_{Y\sim X|Z}=(R_{Y\sim X|Z})^{2}\)._
The next Proposition collects several useful results about \(R^{2}\)-values.
**Proposition 1** (\(R^{2}\)-calculus).: _In the setting above, let \(W\) be another random vector. Assume \(\sigma^{2}_{Y\sim X+W+Z}>0\). Further, suppose \(\sigma^{2}_{X\sim W+Z}>0\) and \(\sigma^{2}_{W\sim X+Z}>0\) when \(X\) and/or \(W\) are one-dimensional. Then, the following rules hold:_
* Independence: _if_ \(Y\perp\!\!\!\perp X\)_, then_ \(R^{2}_{Y\sim X}=0\)_;_
* Independent additivity_: _if_ \(X\perp\!\!\!\perp W\)_, then_ \(R^{2}_{Y\sim X+W}=R^{2}_{Y\sim X}+R^{2}_{Y\sim W}\)_;_
* Decomposition of unexplained variance_:_ \[1-R^{2}_{Y\sim X+W}=(1-R^{2}_{Y\sim X})(1-R^{2}_{Y\sim W|X});\]
* Recursion of partial correlation_: _if_ \(X\) _and_ \(W\) _are one-dimensional, then_ \[R_{Y\sim X|W}=\frac{R_{Y\sim X}-R_{Y\sim W}R_{X\sim W}}{\sqrt{1-R^{2}_{Y\sim W }}\sqrt{1-R^{2}_{X\sim W}}};\]
* Reduction of partial correlation_: _if_ \(X\) _is one-dimensional and_ \(Y\perp\!\!\!\perp W\)_, then_ \[R_{Y\sim X|W}=\frac{R_{Y\sim X}}{\sqrt{1-R^{2}_{X\sim W}}};\]
* Three-variable identity_: if both \(X\) and \(W\) are one-dimensional, then_ \[f_{Y\sim X|W}\sqrt{1-R_{Y\sim W|X}^{2}}=f_{Y\sim X}\sqrt{1-R_{X\sim W}^{2}}-R_{Y \sim W|X}R_{X\sim W}.\]
_Remark 1_.: All of the relationships above also hold when \(Z\) is partialed out (i.e. if "\(|Z\)" is appended to the subscripts of all \(R\)-, \(R^{2}\)-, and \(f\)-values) and the independence assumptions are conditional on \(Z\). Rules [i], [ii] and [v] remain true if the (conditional) independence condition is replaced by (partial) uncorrelatedness. A more succinct sufficient condition for the positive partial variance requirements is that the covariance matrix of \((Y,X,Z,W)\) has full rank.
_Remark 2_.: Rule [vi] may appear unintuitive at first. To see how this identity may come up, consider three random variables \(Y\), \(X\) and \(W\). There are in total three marginal \(R\)-values, \(R_{Y\sim X}\), \(R_{Y\sim W}\) and \(R_{X\sim W}\), and three partial \(R\)-values, \(R_{Y\sim X|W}\), \(R_{Y\sim W|X}\) and \(R_{X\sim W|Y}\). Rule [iv] shows that the partial \(R\)-values can be determined by all the marginal values. In other words, there are only three degrees of freedom among the six \(R\)-values. This implies that there must be an equality constraint relating \(R_{Y\sim X}\), \(R_{X\sim W}\), \(R_{Y\sim X|W}\), and \(R_{Y\sim W|X}\).
_Remark 3_.: The (partial) \(R^{2}\)- and \(R\)-value can be defined in a more general Hilbert space setting. The corresponding rules of the \(R^{2}\)-calculus also hold true, yielding Proposition 1 as a corollary. See Appendix A.
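As a purely illustrative sanity check (ours, not part of the original text), rules [iii] and [iv] can be verified numerically on simulated Gaussian data; the printed pairs agree up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# a simple linear data-generating process for (X, W, Y)
X = rng.standard_normal(n)
W = 0.5 * X + rng.standard_normal(n)
Y = 1.0 * X - 0.7 * W + rng.standard_normal(n)

def res(a, b):
    """Residual of a after regressing out a single centred regressor b."""
    a, b = a - a.mean(), b - b.mean()
    return a - b * (a @ b) / (b @ b)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def r2(y, M):
    """Sample R^2 of y on the columns of M."""
    M = M - M.mean(axis=0)
    y0 = y - y.mean()
    beta, *_ = np.linalg.lstsq(M, y0, rcond=None)
    return 1.0 - np.var(y0 - M @ beta) / np.var(y0)

# Rule [iv]: recursion of partial correlation
lhs = corr(res(Y, W), res(X, W))
rhs = (corr(Y, X) - corr(Y, W) * corr(X, W)) / np.sqrt((1 - corr(Y, W) ** 2) * (1 - corr(X, W) ** 2))
print(lhs, rhs)

# Rule [iii]: 1 - R^2_{Y~X+W} = (1 - R^2_{Y~X}) (1 - R^2_{Y~W|X}), using Lemma 1 for the partial R^2
lhs = 1 - r2(Y, np.column_stack([X, W]))
rhs = (1 - r2(Y, X[:, None])) * (1 - corr(res(Y, X), res(W, X)) ** 2)
print(lhs, rhs)
```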
## 3 Bias of the \(k\)-class Estimator
Our main goal in this article is to outline a unified approach to sensitivity analysis in linear structural equation models that leverages the \(R^{2}\)-calculus. To this end, we will focus on the case with a one-dimensional treatment \(D\) and a continuous outcome \(Y\). We would like to estimate the causal effect of \(D\) on \(Y\), which will be denoted as \(\beta\). We may also observe some covariates \(X\) and a potential instrumental variable \(Z\). Let \(V=(D,Y,X,Z)\) be the observed variables.
In a sensitivity analysis, we are worried about some unmeasured variables \(U\) that confound the causal effect of \(D\) on \(Y\). This can potentially be addressed by finding an instrumental variable \(Z\) for the treatment \(D\), but this instrumental variable may itself be invalid; readers who are unfamiliar with instrumental variables are referred to Section 4.2 for its definition in the context of linear models. Below we will derive a bias formula for the usual linear regression and instrumental variable estimators, which essentially determines the objective functional \(\beta(\theta,\psi)\) in the stochastic optimization problem (2).
### A Single Unmeasured Confounder
We start with the case of a one-dimensional unmeasured confounder \(U\) and work with the so-called \(k\)-class estimators as defined below.
**Definition 2**.: Suppose \(\operatorname{var}(D^{\perp X})>\operatorname{var}(D^{\perp X,Z})>0\). The \(k\)-class estimand is given by
\[\beta_{k}:=\begin{cases}\dfrac{\operatorname{cov}(D^{\perp X},Y^{\perp X})-k \operatorname{cov}(D^{\perp X,Z},Y^{\perp X,Z})}{\operatorname{var}(D^{\perp X })-k\operatorname{var}(D^{\perp X,Z})},&\text{if }-\infty<k\leq 1,\\ \dfrac{\operatorname{cov}(D^{\perp X,Z},Y^{\perp X,Z})}{\operatorname{var}(D^{ \perp X,Z})},&\text{if }k=-\infty.\end{cases}\]
The \(k\)-class estimator is defined by replacing variance/covariance and the residuals in the equation above by their sample counterparts.
The family of \(k\)-class estimators was introduced by Theil (1958) and Nagar (1959) to interpolate the ordinary least squares (OLS) estimator and the two-stage least squares (TSLS) estimator. It provides a convenient representation for a unified analysis. To see the interpolation, the OLS estimand that adjusts for \(X\) is given by
\[\beta_{Y\sim D|X}:=\frac{\operatorname{cov}(Y^{\perp X},D^{\perp X})}{ \operatorname{var}(D^{\perp X})}.\]
The TSLS estimand (also called the Wald ratio) that uses \(Z\) as an instrumental variable and \(X\) as exogenous covariates is given by
\[\beta_{D\sim Z|X,\,Y\sim Z|X}:=\frac{\operatorname{cov}(Y^{\perp X},Z^{\perp X })}{\operatorname{cov}(D^{\perp X},Z^{\perp X})}.\]
They are special cases of the \(k\)-class estimands according to the following result.
**Proposition 2**.: _In the setting of Definition 2,_
\[\beta_{1}=\beta_{D\sim Z|X,\,Y\sim Z|X},\quad\beta_{0}=\beta_{Y\sim D|X},\quad \text{and}\quad\lim_{k\to-\infty}\beta_{k}=\beta_{-\infty}=\beta_{Y\sim D|X,Z}.\]
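To illustrate Definition 2 and Proposition 2, the following short Python sketch computes the sample \(k\)-class estimator; this is our own illustration, and the data arrays `Y`, `D`, `Z` (one-dimensional) and covariate matrix `X` are assumed to be given.

```python
import numpy as np

def res(v, M):
    """Residual of v after regressing out the columns of M (everything centred)."""
    M = np.asarray(M, float)
    if M.ndim == 1:
        M = M[:, None]
    v = np.asarray(v, float) - np.mean(v)
    M = M - M.mean(axis=0)
    beta, *_ = np.linalg.lstsq(M, v, rcond=None)
    return v - M @ beta

def k_class(Y, D, Z, X, k):
    """Sample k-class estimator from Definition 2: k = 0 is OLS adjusting for X,
    k = 1 is TSLS with instrument Z, and k = -np.inf is OLS adjusting for X and Z."""
    XZ = np.column_stack([X, Z])
    d_x, y_x = res(D, X), res(Y, X)
    d_xz, y_xz = res(D, XZ), res(Y, XZ)
    if np.isneginf(k):
        return np.dot(d_xz, y_xz) / np.dot(d_xz, d_xz)
    num = np.dot(d_x, y_x) - k * np.dot(d_xz, y_xz)
    den = np.dot(d_x, d_x) - k * np.dot(d_xz, d_xz)
    return num / den
```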
_Remark 4_.: Another important estimator contained in the \(k\)-class is the limited information maximum likelihood of Anderson and Rubin (1949), where \(k\) needs to be estimated from the data. Other examples can be found in Davidson and MacKinnon (1993, p. 649) and Kolesar et al. (2015). Also related is the anchor regression estimator recently introduced by Rothenhausler et al. (2021) that aims to gain robustness under distributional shifts.
The target functional \(\beta=\beta(\mathbb{P}_{V,U})\) we consider is the OLS estimand \(\beta_{Y\sim D|X,Z,U}\) which adjusts for \(X\), \(Z\), and the unmeasured confounder \(U\). When \(Y\) is causally determined by a linear structural equation containing \(D\), \(X\), \(Z\) and \(U\), the causal effect of \(D\) on \(Y\) is precisely given by \(\beta=\beta_{Y\sim D|X,Z,U}\); see Figure 1 for an illustration of the data-generating process. When the true structural relationship is not linear, \(\beta_{Y\sim D|X,Z,U}\) may still be interpreted as a kind of weighted average treatment effect under additional assumptions (Angrist and Pischke, 2009, p. 75).
Because \(U\) is not observed, \(\beta\) cannot be consistently estimated without further assumptions on the relationship between \(U\) and \(V\). The difference between the estimand \(\beta_{k}\) and the target \(\beta\) is quantified by the next result.
**Theorem 1**.: _Suppose \(\sigma^{2}_{D\sim X}>\sigma^{2}_{D\sim X+Z}>\sigma^{2}_{D\sim X+Z+U}>0\) and let \(k\in(-\infty,1]\) be fixed. Then,_
\[\beta_{k}-\beta=\left[\frac{f_{Y\sim Z|X,D}\,R_{D\sim Z|X}}{1-k+k\,R^{2}_{D\sim Z |X}}+R_{Y\sim U|X,Z,D}\,f_{D\sim U|X,Z}\right]\frac{\sigma_{Y\sim X+Z+D}}{ \sigma_{D\sim X+Z}}. \tag{4}\]
_For \(k=-\infty\), equation (4) holds by taking the limit \(k\to-\infty\) on the right-hand side._
Equation (4) generalizes previous bias formulas for the OLS estimator to the entire family of \(k\)-class estimators; see Remark 5 below. Interestingly, this more general formula can be easily derived by applying the OLS bias formula twice; see equation (5) below. Because the bias of any \(k\)-class estimand can be written as a function of \(R_{Y\sim U|X,Z,D}\) and \(R_{D\sim U|X,Z}\), we will refer to them as the primitive sensitivity parameters.
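For reference, the bias formula (4) can be transcribed directly; the sketch below is our own illustration, with the estimable inputs and the two primitive sensitivity parameters passed as plain numbers.

```python
import numpy as np

def k_class_bias(k, f_yz_xd, r_dz_x, r_yu_xzd, r_du_xz, sigma_y_xzd, sigma_d_xz):
    """Bias beta_k - beta from equation (4).

    f_yz_xd     : f_{Y ~ Z | X, D}      (estimable from data)
    r_dz_x      : R_{D ~ Z | X}         (estimable from data)
    r_yu_xzd    : R_{Y ~ U | X, Z, D}   (primitive sensitivity parameter)
    r_du_xz     : R_{D ~ U | X, Z}      (primitive sensitivity parameter)
    sigma_y_xzd : sigma_{Y ~ X + Z + D},  sigma_d_xz : sigma_{D ~ X + Z}
    """
    f_du_xz = r_du_xz / np.sqrt(1.0 - r_du_xz ** 2)
    if np.isneginf(k):
        first = 0.0                      # the first term vanishes in the limit k -> -inf
    else:
        first = f_yz_xd * r_dz_x / (1.0 - k + k * r_dz_x ** 2)
    return (first + r_yu_xzd * f_du_xz) * sigma_y_xzd / sigma_d_xz
```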
Corollary 1 in the appendix contains specialized bias formulas for the common estimands in Proposition 2. The next proposition states the causal identification assumption under which these estimands are unbiased.
**Proposition 3**.: _In the setting of Theorem 1, the following statements are true:_
1. _If_ \(R_{D\sim U|X,Z}=0\) _or_ \(R_{Y\sim U|X,Z,D}=0\)_, then_ \(\beta=\beta_{Y\sim D|X,Z}\)_._
2. _If_ \(R^{2}_{D\sim U+Z|X}=0\) _or_ \(R^{2}_{Y\sim U+Z|X,D}=0\)_, then_ \(\beta=\beta_{Y\sim D|X,Z}=\beta_{Y\sim D|X}\)_._
3. _If_ \(R_{Z\sim U|X}=0\) _and_ \(R_{Y\sim Z|X,D,U}=0\)_, then_ \(\beta=\beta_{D\sim Z|X,\,Y\sim Z|X}\)_._
### Proof Sketch of Theorem 1
By expanding the difference between the \(k\)-class and the target estimands and applying the \(R^{2}\)-calculus to the first term, we deduce
\[\beta_{k}-\beta_{Y\sim D|X,Z,U} =\beta_{k}-\beta_{Y\sim D|X,Z}+\beta_{Y\sim D|X,Z}-\beta_{Y\sim D |X,Z,U} \tag{5}\] \[=\frac{\left(\beta_{Y\sim D|X}-\beta_{Y\sim D|X,Z}\right)}{1-k \left(1-R^{2}_{D\sim Z|X}\right)}+\left(\beta_{Y\sim D|X,Z}-\beta_{Y\sim D|X,Z,U}\right).\]
Equation (4) can then be derived by applying the following Lemma twice; see Appendix B.1.
**Lemma 2**.: _Let \(Y,D\) and \(W\) be random variables, \(X\) be a random vector, and suppose \(\sigma^{2}_{D\sim X+W}>0\). Then_
\[\beta_{Y\sim D|X}-\beta_{Y\sim D|X,W}=R_{Y\sim W|X,D}\,f_{D\sim W|X}\,\frac{ \sigma_{Y\sim X+D}}{\sigma_{D\sim X}}.\]
_Remark 5_.: To our knowledge, Lemma 2 first appeared in Cochran (1938) and was later generalized by Cox (2007). In the context of sensitivity analysis, it has already been used by Frank (2000), Hosman et al. (2010) and Cinelli and Hazlett (2020). The bias formula in the last paper can be obtained by taking \(k\to-\infty\) in (4).
_Remark 6_.: Heuristically, the true causal effect \(\beta\) should not depend on the choice of \(k\). This can also be seen from equation (5).
### Multiple Unmeasured Confounders
The assumption that the unmeasured confounder \(U\) is one-dimensional has kept the algebra tractable thus far. In order to obtain a bias formula with multiple confounders, a generalization of Lemma 2 is required. For instance, when \(W\) is \(l\)-dimensional, we can repeatedly apply Lemma 2 to the following telescoping series:
\[\beta_{Y\sim D|X}-\beta_{Y\sim D|X,W}=\sum_{j=1}^{l}\beta_{Y\sim D |X,W_{[j-1]}}-\beta_{Y\sim D|X,W_{[j]}} \tag{6}\] \[\quad=\sum_{j=1}^{l}R_{Y\sim W_{j}|X,D,W_{[j-1]}}f_{D\sim W_{j}|X,W_{[j-1]}}\sqrt{\frac{1-R_{Y\sim W_{[j-1]}|X,D}^{2}}{1-R_{D\sim W_{[j-1]}|X}^{2 }}}\frac{\sigma_{Y\sim X+D}}{\sigma_{D\sim X}},\]
where \([j]:=\{1,\ldots,j\}\) and \([0]:=\emptyset\). By using an expansion similar to (5), we may identify the bias in linear regression and instrumental variables models with multiple unmeasured confounders; of course, more sensitivity parameters will be required. Such extensions are explored in Appendix B.2.
Alternatively, Lemma 2 provides an upper bound on \(|\beta_{Y\sim D|X}-\beta_{Y\sim D|X,W}|\) that can be immediately generalized to multi-dimensional \(W\) as stated in the next result. Heuristically, only a bound (rather than an exact formula) is available because the confounding effects of several unmeasured variables can negate each other; see Cinelli and Hazlett (2020, sec. 4.5). To our knowledge, this result was first obtained by Hosman et al. (2010); we simplify their proof substantially using the \(R^{2}\)-calculus in Appendix B.2.
**Lemma 3**.: _Let \(Y\) and \(D\) be random variables, let \(X\) and \(W\) be random vectors. Assume that \(\sigma_{D\sim X+W}^{2}>0\) and that the covariance matrix \(\operatorname{var}(W^{\perp X,D})\) is positive definite. Then,_
\[\big{|}\beta_{Y\sim D|X}-\beta_{Y\sim D|X,W}\big{|}\leq\sqrt{R_{Y\sim W|D,X}^{ 2}\,f_{D\sim W|X}^{2}\frac{\sigma_{Y\sim X+D}^{2}}{\sigma_{D\sim X}^{2}}}. \tag{7}\]
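A one-line transcription of the bound (7), again as our own illustrative code rather than the authors' implementation:

```python
import numpy as np

def omitted_variable_bound(r2_yw_dx, r2_dw_x, sigma_y_xd, sigma_d_x):
    """Upper bound (7) on |beta_{Y~D|X} - beta_{Y~D|X,W}| for a possibly multivariate W.

    r2_yw_dx : R^2_{Y ~ W | D, X} and r2_dw_x : R^2_{D ~ W | X} are sensitivity parameters;
    sigma_y_xd, sigma_d_x are the residual standard deviations sigma_{Y~X+D} and sigma_{D~X}.
    """
    f2_dw_x = r2_dw_x / (1.0 - r2_dw_x)
    return np.sqrt(r2_yw_dx * f2_dw_x) * sigma_y_xd / sigma_d_x
```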
Returning to the \(k\)-class estimator, when the unmeasured confounder \(U\) is multi-dimensional, we may still apply the expansion in equation (5). The first term on its right-hand side does not involve \(U\) and the second term is bounded by (7). This immediately implies a bound on the bias of the \(k\)-class estimand.
## 4 Interpretable and Flexible Constraints
Theorem 1 in the previous section has established the dependence of the objective \(\beta\) on two primitive sensitivity parameters: \(R_{D\sim U|X,Z}\) and \(R_{Y\sim U|X,Z,D}\). In this section, we develop different ways to specify interpretable constraints on these parameters by extending previous work, most notably the bounds Cinelli and Hazlett (2020) proposed for the linear regression setting.
The key idea is to compare the \(R^{2}\)-value of the unmeasured confounder with that of an observed covariate. To facilitate this comparison, we assume that the random vector \(X\in\mathbb{R}^{p}\) can be partitioned into
\[X=(\dot{X},\tilde{X}),\ \dot{X}\in\mathbb{R}^{\tilde{p}},\ \tilde{X}\in \mathbb{R}^{\tilde{p}}\ \text{such that}\ \dot{X}\perp\!\!\!\perp U\,|\,\tilde{X},Z. \tag{8}\]
In Figure 1, we give a causal graphical model that fulfills (8); other possibilities may be verified by the familiar d-separation (Pearl, 2009). We further denote \([\dot{p}]:=\{1,\ldots,\dot{p}\}\). For \(I\subseteq[\dot{p}]\) and \(\dot{X}\in\mathbb{R}^{\dot{p}}\), define \(\dot{X}_{I}:=(\dot{X}_{i})_{i\in I}\) and \(I^{c}:=[\dot{p}]\setminus I\). Finally, let \(\dot{X}_{-j}:=\dot{X}_{\{j\}^{c}}\) for any \(j\in[\dot{p}]\).
Table 1 summarizes the constraints on sensitivity parameters considered in this work; it may be helpful to visualize the relations parameterized by these constraints using the causal diagram in Figure 1. In principle, the constraints in Table 1 can be combined arbitrarily. In particular, one may specify several comparative bounds using different sets of covariates, although specifying too many bounds may leave the sensitivity model infeasible. Next, we show how these bounds naturally arise from the sensitivity analysis for the OLS and TSLS estimators.
### Ordinary Least Squares
The OLS estimand \(\beta_{-\infty}=\beta_{Y\sim D|X,Z}\) identifies the causal effect \(\beta=\beta_{Y\sim D|X,Z,U}\) if the causal diagram in Figure 1 does not contain \(U\to D\) or \(U\to Y\), or equivalently, if \(R_{D\sim U|X,Z}=0\) or \(R_{Y\sim U|X,Z,D}=0\). Some of the sensitivity models in Table 1 directly bound them; others bound related \(R^{2}\)-values that can be linked to the primitive parameters by the \(R^{2}\)-calculus. Such relations are elaborated below.
#### 4.1.1 Constraints on \(U\to D\)
First of all, we may directly specify a bound on the primitive sensitivity parameter
\[R_{D\sim U|X,Z}\in[B^{l}_{UD},B^{u}_{UD}]\subseteq[-1,1]. \tag{9}\]
This constraint means that the correlation between \(D\) and \(U\), after accounting for linear effects of \(X\) and \(Z\), lies within the interval \([B^{l}_{UD},B^{u}_{UD}]\).
Figure 1: Causal diagram for regression and instrumental variables. Directed edges represent causal effects and bidirected edges represent dependence due to unmeasured common causes.
Alternatively (or in addition to the previous bound), we can specify the following comparative bound that is arguably more interpretable:
\[R^{2}_{D\sim U|\tilde{X},\dot{X}_{I},Z}\leq b_{\textit{UD}}R^{2}_{D\sim\dot{X}_{ J}|\tilde{X},\dot{X}_{I},Z},\ I\subset[\dot{p}],\ J\subseteq I^{c},b_{\textit{UD}} \geq 0.\]
This inequality means that the unmeasured confounder \(U\) can explain at most \(b_{\textit{UD}}\) times as much variance of \(D\) as \(\dot{X}_{J}\) does, after accounting for the effect of \((\tilde{X},\dot{X}_{I},Z)\) on \(D\). For practical purposes, a good choice of the comparison sets is \(J=\{j\}\) and \(I=J^{c}\). We can relate \(R_{D\sim U|\tilde{X},\dot{X}_{I},Z}\) in the last bound to \(R_{D\sim U|X,Z}\) via the \(R^{2}\)-calculus. By using \(\dot{X}_{I^{c}}\perp\!\!\!\perp U\,|\,\tilde{X},\dot{X}_{I},Z\) (which follows from the assumption in (8)) and applying the reduction of partial correlation with \(Y\equiv U\), \(X\equiv D\), \(Z\equiv(\tilde{X},\dot{X}_{I},Z)\) and \(W\equiv\dot{X}_{I^{c}}\), we have
\[R^{2}_{D\sim U|X,Z}\stackrel{{[\text{v}]}}{{=}}\frac{R^{2}_{D \sim U|\tilde{X},\dot{X}_{I},Z}}{1-R^{2}_{D\sim\dot{X}_{I^{c}}|\tilde{X},\dot{ X}_{I},Z}}\leq b_{\textit{UD}}\frac{R^{2}_{D\sim\dot{X}_{J}|\tilde{X},\dot{X}_{I},Z}}{ 1-R^{2}_{D\sim\dot{X}_{I^{c}}|\tilde{X},\dot{X}_{I},Z}}. \tag{10}\]
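In an implementation, the comparative bound translates into a data-dependent cap on the primitive parameter, as in the following sketch (ours); the two partial \(R^{2}\)-values on the right-hand side of (10) are estimable from the observed data.

```python
def bound_r2_du(b_ud, r2_d_xj, r2_d_xic):
    """Upper bound on R^2_{D ~ U | X, Z} implied by inequality (10).

    b_ud     : user-chosen comparison factor
    r2_d_xj  : estimate of R^2_{D ~ Xdot_J | Xtilde, Xdot_I, Z}
    r2_d_xic : estimate of R^2_{D ~ Xdot_{I^c} | Xtilde, Xdot_I, Z}
    """
    return b_ud * r2_d_xj / (1.0 - r2_d_xic)
```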
#### 4.1.2 Constraints on \(U\to Y\)
Similarly to \(U\to D\), we may specify a direct bound:
\[R_{Y\sim U|X,Z,D}\in[B^{l}_{UY},B^{u}_{UY}]\subseteq[-1,1]. \tag{11}\]
Alternatively, we may use comparative bounds. Here we consider two types of bounds depending on whether \(D\) is regressed out:
\[R^{2}_{Y\sim U|\tilde{X},\dot{X}_{I},Z} \leq b_{UY}R^{2}_{Y\sim\dot{X}_{J}|\tilde{X},\dot{X}_{I},Z}, \tag{12}\] \[R^{2}_{Y\sim U|\tilde{X},\dot{X}_{I},Z,D} \leq b_{UY}R^{2}_{Y\sim\dot{X}_{J}|\tilde{X},\dot{X}_{I},Z,D}, \tag{13}\]
\begin{table}
\begin{tabular}{l l l l} \hline Edge & & Sensitivity model & Optimization constraint \\ \hline
1 & \(U\to D\) & 1. & \(R_{D\sim U|X,Z}\in[B^{l}_{\textit{UD}},B^{u}_{\textit{UD}}]\) & (9) \\ & 2. & \(R^{2}_{D\sim U|\tilde{X},\dot{X}_{I},Z}\leq b_{\textit{UD}}R^{2}_{D\sim\dot{X}_ {J}|\tilde{X},\dot{X}_{I},Z}\) & (10) \\
2 & \(U\to Y\) & 1. & \(R_{Y\sim U|X,Z,D}\in[B^{l}_{UY},B^{u}_{UY}]\) & (11) \\ & 2. & \(R^{2}_{Y\sim U|\tilde{X},\dot{X}_{I},Z}\leq b_{UY}R^{2}_{Y\sim\dot{X}_{J}| \tilde{X},\dot{X}_{I},Z}\) & (14), (15) \\ & 3. & \(R^{2}_{Y\sim U|\tilde{X},\dot{X}_{I},Z,D}\leq b_{UY}R^{2}_{Y\sim\dot{X}_{J}| \tilde{X},\dot{X}_{I},Z,D}\) & (13), (15), (16) \\ \hline
3 & \(U\leftrightarrow Z\) & 1. & \(R_{Z\sim U|X}\in[B^{l}_{\textit{UZ}},B^{u}_{\textit{UZ}}]\) & (19) \\ & 2. & \(R^{2}_{Z\sim U|\tilde{X},\dot{X}_{-j}}\leq b_{\textit{UZ}}R^{2}_{Z\sim\dot{X}_ {j}|\tilde{X},\dot{X}_{-j}}\) & (20) \\
4 & \(Z\to Y\) & 1. & \(R_{Y\sim Z|X,U,D}\in[B^{l}_{\textit{ZY}},B^{u}_{\textit{ZY}}]\) & (21) \\ & 2. & \(R^{2}_{Y\sim Z|X,U,D}\leq b_{\textit{ZY}}R^{2}_{Y\sim\dot{X}_{j}|\tilde{X}, \dot{X}_{-j},Z,U,D}\) & (22), (23) \\ \hline \end{tabular}
\end{table}
Table 1: Specification of constraints: When the user specifies bounds on the sensitivity parameters, the corresponding constraints in the last column are added to the stochastic optimization \((2)\). When bounds on \(U\leftrightarrow Z\) and/or \(Z\to Y\) are chosen, the TSLS-related equality constraints \((17)\) and \((18)\) also need to be included.
where \(I\subset[\dot{p}],\ J\subseteq I^{c},b_{UY}\geq 0\). When comparing the explanatory capability of the variables \(U\) and \(\dot{X}_{J}\), it is natural to regress out all other variables. However, regressing out \(D\), a potential common child of \(X\) and \(U\), may introduce dependence between \(U\) and \(Y\); this is essentially the point made by Hernan and Robins (1999) in their criticism of Lin et al. (1998). Thus, we consider both the comparative bound (12) without \(D\) and the bound (13) with \(D\). For (12), we may apply rule [v] as in (10) and obtain
\[R_{Y\sim U|X,Z}^{2}\stackrel{{[\text{v}]}}{{=}}\frac{R_{Y\sim U| \tilde{X},\dot{X}_{I},Z}^{2}}{1-R_{Y\sim\dot{X}_{I^{c}}|\tilde{X},\dot{X}_{I},Z }^{2}}\leq b_{UY}\frac{R_{Y\sim\dot{X}_{J}|\tilde{X},\dot{X}_{I},Z}^{2}}{1-R_{Y \sim\dot{X}_{I^{c}}|\tilde{X},\dot{X}_{I},Z}^{2}}. \tag{14}\]
However, we cannot regress out \(D\) in (14) because \(D\) may be a collider in the path \(\dot{X}_{I^{c}}\to D\gets U\). Instead, we can link it to \(R_{Y\sim U|X,Z,D}\) via the \(R^{2}\)-calculus:
\[R_{Y\sim U|X,Z,D}\stackrel{{[\text{iv}]}}{{=}}\frac{R_{Y\sim U|X, Z}-R_{Y\sim D|X,Z}\,R_{D\sim U|X,Z}}{\sqrt{1-R_{Y\sim D|X,Z}^{2}}\sqrt{1-R_{D \sim U|X,Z}^{2}}}. \tag{15}\]
Hence, the first type of comparative bound can be represented as the inequality constraint (14) and the equality constraint (15) in the optimization problem (2).
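The equality constraint (15) can likewise be coded directly; in the sketch below (ours), the first argument is the quantity constrained by (14), while the last two are an estimable quantity and a sensitivity parameter, respectively.

```python
import numpy as np

def r_yu_xzd(r_yu_xz, r_yd_xz, r_du_xz):
    """Equality constraint (15): R_{Y~U|X,Z,D} expressed through R_{Y~U|X,Z},
    the estimable R_{Y~D|X,Z}, and the sensitivity parameter R_{D~U|X,Z}."""
    num = r_yu_xz - r_yd_xz * r_du_xz
    den = np.sqrt((1.0 - r_yd_xz ** 2) * (1.0 - r_du_xz ** 2))
    return num / den
```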
The second type of comparative bounds partials out \(D\) and involves two additional sensitivity parameters: \(R_{Y\sim U|X,Z}\) and \(R_{Y\sim U|\tilde{X},\dot{X}_{I},Z,D}\). To link them to the primitive sensitivity parameters, we may use equation (15) and
\[R_{Y\sim U|X,Z}=\frac{1}{\sqrt{1-R_{Y\sim\dot{X}_{I^{c}}|\tilde{X},\dot{X}_{I},Z}^{2}}}\bigg{[}R_{Y\sim D|\tilde{X},\dot{X}_{I},Z}R_{D\sim U|X,Z}\sqrt{1-R_ {D\sim\dot{X}_{I^{c}}|\tilde{X},\dot{X}_{I},Z}^{2}}\]
\[+R_{Y\sim U|\tilde{X},\dot{X}_{I},Z,D}\sqrt{1-R_{Y\sim D|\tilde{X},\dot{X}_{I},Z}^{2}}\sqrt{1-R_{D\sim U|X,Z}^{2}(1-R_{D\sim\dot{X}_{I^{c}}|\tilde{X},\dot{ X}_{I},Z}^{2})}\bigg{]} \tag{16}\]
as an equality constraint. The derivation of (16) is deferred to Appendix C.1.
### Two-stage Least Squares
The method of instrumental variables (IV) is commonly used to overcome unmeasured confounding. Here we only provide a very brief introduction to it; the reader is referred to Wooldridge (2010) for a more comprehensive discussion.
A variable \(Z\) is called an instrument for \(D\) if (i) it is an independent predictor of \(D\), (ii) it is exogenous in the sense that \(Z\) is conditionally independent of the unmeasured confounder \(U\) and (iii) it has no direct effect on the outcome \(Y\) that is not mediated by \(D\). In linear models, these conditions can be expressed as
\[\text{(i) }R_{Z\sim D|X}\neq 0,\qquad\text{(ii) }R_{Z\sim U|X}=0,\qquad\text{(iii) }R_{Y\sim Z|X,U,D}=0.\]
Proposition 3(iii) suggests that under these conditions, the target \(\beta=\beta_{Y\sim D|X,Z,U}\) is identified by the TSLS estimand \(\beta_{1}=\beta_{D\sim Z|X,\,Y\sim Z|X}\).
As the last two conditions above involve the unmeasured confounder \(U\) and thus cannot be verified, a sensitivity analysis for TSLS would specify bounds on the sensitivity parameters \(R_{Z\sim U|X}\) and \(R_{Y\sim Z|X,U,D}\). To use the bias formula in Theorem 1, we need to link them to the primitive sensitivity parameters \(R_{D\sim U|X,Z}\) and \(R_{Y\sim U|X,Z,D}\). To achieve this, we apply the three-variable identity [vi] with \(Y\equiv Y\), \(X\equiv Z\), \(W\equiv U\) and \(Z\equiv(X,D)\) to obtain
\[f_{Y\sim Z|X,U,D}\sqrt{1-R_{Y\sim U|X,Z,D}^{2}}=f_{Y\sim Z|X,D}\sqrt{1-R_{Z\sim U |X,D}^{2}}-R_{Y\sim U|X,Z,D}R_{Z\sim U|X,D}, \tag{17}\]
and with \(Y\equiv U\), \(X\equiv Z\), \(W\equiv D\) and \(Z\equiv X\) to obtain
\[f_{Z\sim U|X,D}\sqrt{1-R_{D\sim U|X,Z}^{2}}=f_{Z\sim U|X}\sqrt{1-R_{D\sim Z|X} ^{2}}-R_{D\sim Z|X}R_{D\sim U|X,Z}. \tag{18}\]
These are then added to the stochastic program (2) as equality constraints.
#### 4.2.1 Constraints on \(U\leftrightarrow Z\)
The sensitivity parameter \(R_{Z\sim U|X}\) can be constrained by directly providing a range of plausible values, i.e.
\[R_{Z\sim U|X}\in[B_{UZ}^{l},B_{UZ}^{u}]\subseteq[-1,1]. \tag{19}\]
Alternatively, we allow practitioners to specify the following comparative bound
\[R_{Z\sim U|\tilde{X},\tilde{X}_{-j}}^{2}\leq b_{UZ}R_{Z\sim\tilde{X}_{j}| \tilde{X},\tilde{X}_{-j}}^{2},\ j\in[\dot{p}],\ b_{UZ}\geq 0.\]
Using the conditional independence assumption (8), this can be shown to be equivalent to (see Appendix C.2)
\[R_{Z\sim U|X}^{2}\leq b_{UZ}\,R_{Z\sim\tilde{X}_{j}|\tilde{X},\tilde{X}_{-j}} ^{2}\frac{1-R_{Z\sim\tilde{X}_{j}|\tilde{X},\tilde{X}_{-j}}^{2}}{1-b_{UZ}\,R_ {Z\sim\tilde{X}_{j}|\tilde{X},\tilde{X}_{-j}}^{4}}. \tag{20}\]
#### 4.2.2 Constraints on \(Z\to\,Y\)
We can bound the sensitivity parameter \(R_{Y\sim Z|X,U,D}\) by specifying the direct bound
\[R_{Y\sim Z|X,U,D}\in[B_{ZY}^{l},B_{ZY}^{u}]\subseteq[-1,1]. \tag{21}\]
Furthermore, we allow the following comparative bound
\[R_{Y\sim Z|X,U,D}^{2}\leq b_{ZY}R_{Y\sim\tilde{X}_{j}|\tilde{X},\tilde{X}_{-j },Z,U,D}^{2},\ j\in[\dot{p}],b_{ZY}\geq 0. \tag{22}\]
This last bound is unusual in the sense that the sets of variables that are regressed out are different in the two partial \(R^{2}\)-values. It is difficult to specify comparative bounds for the exclusion restriction as the corresponding sensitivity parameter \(R_{Y\sim Z|X,U,D}\) partials out \(U\). Therefore, we cannot directly compare \(U\) to an
observed covariate, e.g. \(\dot{X}_{j}\), and the right-hand side of the bound cannot be estimated. For this reason, we resort to the adjustment set in (22) because we can connect \(R_{Y\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z,U,D}\) to the primitive sensitivity parameters via the following equality constraint
\[f_{Y\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z,U,D}\sqrt{1-R_{Y\sim U |X,Z,D}^{2}}=\bigg{[}f_{Y\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z,D}\sqrt{1-R_{D \sim U|X,Z}^{2}}\\ +R_{Y\sim U|X,Z,D}\,R_{D\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z} \,R_{D\sim U|X,Z}\bigg{]}\Big{/}\sqrt{1-R_{D\sim U|X,Z}^{2}(1-R_{D\sim\dot{X}_{ j}|\tilde{X},\dot{X}_{-j},Z}^{2})}. \tag{23}\]
See Appendix C.2 for the derivation.
## 5 Sensitivity Intervals
So far, we have derived the objective function \(\beta=\beta(\theta,\psi)\) of the stochastic program (2) in Section 3 and a rich set of constraints \(\psi\in\Psi(\theta)\) in Section 4. As \(\theta\) only involves partial correlations and the standard deviation of regression residuals, we can plug in an empirical estimator of \(\theta\) to obtain a point estimator of the optimal value of (2). In other words, we only need to solve the optimization problem in (3) to estimate the lower and upper bounds of the partially identified region.
Complications arise when we would like to construct an interval estimator \(S\) of \(\beta\) with certain statistical guarantees. In the general setup presented in Section 1.1 and for a given \(0<\alpha<1\), we call \(S\) a \((1-\alpha)\)-sensitivity interval of \(\beta\) if
\[\mathbb{P}_{V}\big{(}\beta(\theta(\mathbb{P}_{V}),\psi)\in S\big{)}\geq 1- \alpha\quad\text{for all}\quad\mathbb{P}_{V}\text{ and }\psi\in\Psi(\theta(\mathbb{P}_{V})),\]
and \(S\) a \((1-\alpha)\)-sensitivity interval of the partially identified region if
\[\mathbb{P}_{V}\big{(}\operatorname{PIR}(\mathbb{P}_{V})\subseteq S\big{)} \geq 1-\alpha\quad\text{for all}\quad\mathbb{P}_{V}.\]
Obviously, the second notion of confidence is stronger. For a more detailed discussion on confidence statements in partially identified problems including issues with asymptotic sensitivity intervals, the reader is referred to Imbens and Manski (2004), Stoye (2009) and Molinari (2020).
Next we review several methods to construct sensitivity intervals. To quantify uncertainty in their sensitivity analysis of the OLS estimator, Cinelli and Hazlett (2020) suggest using the ordinary confidence interval that treats \(U\) as observed
\[\left[\hat{\beta}_{Y\sim D|X,Z}+\left(-\,\hat{R}_{Y\sim U|X,Z,D}\,\hat{f}_{D \sim U|X,Z}\pm\frac{q_{\alpha}}{\sqrt{n}}\sqrt{\frac{1-\hat{R}_{Y\sim U|X,Z,D }^{2}}{1-\hat{R}_{D\sim U|X,Z}^{2}}}\,\right)\frac{\hat{\sigma}_{Y\sim X+Z+D}} {\hat{\sigma}_{D\sim X+Z}}\,\right],\]
where \(q_{\alpha}\) is the \((1-\alpha/2)\)-quantile of the standard normal distribution. Here it is assumed that a domain expert can specify \(\hat{\psi}=(\hat{R}_{Y\sim U|X,Z,D},\,\hat{R}_{D\sim U|X,Z})\) even though \(U\) cannot be observed. For the partially identified problem, a seemingly reasonable idea is to minimize/maximize the confidence bounds over \(\hat{\psi}\in\Psi(\hat{\theta})\).
However, a closer look at this heuristic approach shows that it achieves no obvious confidence guarantees. This is because the sensitivity parameter \(\hat{\psi}\) depends on the data and thus its value changes when another sample is drawn. If \(\hat{\psi}\) is almost certainly contained in \(\Psi(\hat{\theta})\), i.e. \(\mathbb{P}(\hat{\psi}\in\Psi(\hat{\theta}))=1\), this heuristic interval would actually be a sensitivity interval for \(\beta\). However, this is only possible if the sensitivity model \(\Psi\) is non-informative (e.g. \(R_{D\sim U|X,Z}\in[-1,1]\)). Numerical simulations in Appendix E confirm this intuitive argument; in particular, the heuristic interval has coverage \(50\%\) in one setting and above \(98\%\) in another, where the nominal coverage is \(1-\alpha=90\%\).
To account for the uncertainty in estimating the feasible set \(\Psi(\theta)\), Tudball et al. (2022) propose to solve the optimization problem (3) with a relaxed constraint \(\psi\in\tilde{\Psi}(\hat{\theta})\), where \(\tilde{\Psi}(\hat{\theta})\) is constructed to contain \(\Psi(\theta)\) with high probability. However, several technical difficulties prevent us from directly applying their method to our problem.
A third approach to construct sensitivity interval is to use the bootstrap (Efron and Tibshirani, 1994). More specifically, we can compute a collection of estimators \(\hat{\theta}\) using resamples of the observable data, solve the plug-in optimization problem (3) with \(\hat{\theta}=\hat{\tilde{\theta}}\), and then use the bootstrap distribution to construct one-sided confidence intervals \([\beta_{\min}^{l},\infty)\) and \((-\infty,\beta_{\max}^{u}]\) with level \((1-\alpha/2)\) for the minimal and maximal values, respectively. Different procedures may be employed in the last step. For instance, percentile bootstrap takes the \(\alpha/2\) and \(1-\alpha/2\) quantile of the bootstrap distribution to construct the respective confidence interval. Other options include the basic (or reverse percentile) bootstrap, studentized bootstrap, and bias-corrected and accelerated (BCa) bootstrap; see Davison and Hinkley (1997, chap. 5) for more detail. Finally, a sensitivity interval with nominal confidence level \((1-\alpha)\) may be constructed as \([\beta_{\min}^{l},\beta_{\max}^{u}]\).
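The bootstrap procedure described above can be sketched as follows. This is a schematic illustration only; `solve_bounds` stands in for the plug-in optimization of (3) (e.g. the grid search of Appendix D) and is an assumption of ours, not the authors' code.

```python
import numpy as np

def percentile_bootstrap_interval(data, solve_bounds, alpha=0.10, n_boot=1000, seed=0):
    """Percentile-bootstrap sensitivity interval for the partially identified region.

    data         : (n, p) array of the observed variables
    solve_bounds : function mapping a resampled data set to the plug-in (min, max) of (3);
                   it should return (-np.inf, np.inf) if the estimated feasible set is empty.
    """
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    lo, hi = np.empty(n_boot), np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample rows with replacement
        lo[b], hi[b] = solve_bounds(data[idx])
    # one-sided (1 - alpha/2) confidence limits for the minimal and maximal values
    return np.quantile(lo, alpha / 2), np.quantile(hi, 1 - alpha / 2)
```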
For the sensitivity analysis problems described in this article, simulation studies in Appendix E suggest that the percentile and BCa bootstrap perform better than the basic bootstrap. In two simulation studies with nominal confidence level \(90\%\), we found that the percentile and BCa bootstrap intervals cover the partially identified region around \(90\%\) and the true parameter, which equals the lower end of the PIR under the specified sensitivity model, around \(95\%\) of the time. By contrast, the empirical coverage of basic bootstrap intervals is \(5\) to \(10\%\) below the nominal level. Although a rigorous asymptotic analysis of the different bootstrap procedures is beyond the scope of this article, we offer some heuristics on why the percentile and BCa bootstrap are expected to "work" here.
First, Shapiro (1991) provides an asymptotic theory for stochastic optimization and shows that the plug-in estimator of the optimal value of certain stochastic programs is asymptotically linear; see also Shapiro et al. (2009, chap. 5). Although our optimization problem (2) involves unknown parameters \(\theta\) in the constraints and thus does not fall in the class of problems considered by Shapiro (1991), one may hope that the theory there extends to the problem considered here.
Second, due to optimization over the sample, the plug-in estimator is always biased, even though the bias may be small asymptotically. With just a moderate
sample size, our simulations show that the bootstrap distribution of the optimal value estimators is biased and quite skewed; see Figure 7 in Appendix E. BCa bootstrap is designed to account for both phenomena and - as expected - achieves good empirical coverage. A closer examination of the correction for bias and skewness shows that these effects almost cancel. For this reason, percentile bootstrap, which corrects for neither, still performs well.
Finally, Zhao et al. (2019) provide an alternative justification for the percentile bootstrap in partially identified sensitivity analysis by using the generalized minimax inequality. However, their proof requires a fixed constraint set \(\Psi\) and thus cannot be directly applied to the problem here.
_Remark 7_.: Although the probability of the estimated constraint set \(\Psi(\hat{\theta})\) being empty should converge to zero as the sample size grows, this can occasionally occur with moderate sample sizes. Our implementation of the bootstrap procedures takes a conservative approach and sets the optimal value to \(\infty\) or \(-\infty\) depending on which end of the PIR is considered.
## 6 Data Example
We demonstrate the practicality of the proposed method using a prominent study of the economic return of schooling. The dataset was compiled by Card (1993) from the National Longitudinal Survey of Young Men (NLSYM) and contains a sample of 3010 young men at the age of 14 to 24 in 1966 who were followed up until 1981. Card uses several linear models to estimate the causal effect of education, measured by years of schooling and denoted as \(D\), on the logarithm of earnings, denoted as \(Y\). For brevity, we only consider the most parsimonious model used by Card which includes, as covariates for adjustment and denoted as \(X\), years of labour force experience and its square, and indicators for living in the southern USA, being black and living in a metropolitan area.
Card (1993) recognizes that many researchers are reluctant to interpret the established positive correlation between education and earnings as a positive causal effect due to the large number of potential unmeasured confounders. In our analysis, we will consider the possibility that an unmeasured variable \(U\), which represents the motivation of the young men, may influence both schooling and salary. To address this issue, Card suggests using an instrumental variable, namely the indicator for growing up in proximity to a 4-year college; this is denoted as \(Z\) below. Nonetheless, proximity to college may not be a valid instrumental variable. For example, growing up near a college may be correlated with a higher socioeconomic status, more career opportunities, or stronger motivation. A more detailed discussion of the identification assumptions can be found in Card (1993).
For the purpose of sensitivity analysis, we assume that being black and living in the southern USA are not directly related to motivation and treat them as \(\dot{X}\); the remaining covariates are regarded as \(\tilde{X}\) in the sensitivity analysis. We assume that this partition satisfies the conditional independence in (8). In this example, we use comparative bounds to express our beliefs about the effects of the unmeasured confounder \(U\) on \(Y\) and \(D\). We assume that motivation can explain at most 4
times as much variation in the level of education as being black (denoted as \(\dot{X}_{j}\)) does after accounting for all other observed covariates, and that motivation can explain at most 5 times as much variation in log-earnings as being black does after accounting for the other covariates and education:
\[\text{(B1)}\quad R^{2}_{D\sim U|\tilde{X},\dot{X}_{-j},Z} \leq 4\,R^{2}_{D\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z},\] \[\text{(B2)}\ R^{2}_{Y\sim U|\tilde{X},\dot{X}_{-j},Z,D} \leq 5\,R^{2}_{Y\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z,D}.\]
The bounds (B1) and (B2) address deviations from the identification assumptions of a linear regression. Likewise, we can also specify deviations from the instrumental variable assumptions. We suppose that motivation \(U\) can explain at most half as much variation in \(Z\) (college proximity) as \(\dot{X}_{j}\) (black) can after accounting for the effects of \((\tilde{X},\dot{X}_{-j})\). Furthermore, we assume that college proximity \(Z\) can explain at most 10% as much variance in log-earnings after excluding the effects of \((X,U,D)\) as being black can explain in log-earnings after excluding the effects of \((\tilde{X},\dot{X}_{-j},Z,U,D)\). These assumptions translate to
\[\text{(B3)}\ R^{2}_{Z\sim U|\tilde{X},\dot{X}_{-j}} \leq 0.5\,R^{2}_{Z\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j}},\] \[\text{(B4)}\ R^{2}_{Y\sim Z|X,U,D} \leq 0.1\,R^{2}_{Y\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z,U,D}.\]
When the bound (B1) is not part of the constraints, we additionally require
\[R_{D\sim U|X,Z}\in[-0.98,0.98]. \tag{24}\]
This ensures that \(R_{D\sim U|X,Z}\) is bounded away from \(-1\) and \(1\) and that the partially identified range has finite length.
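The data-dependent terms on the right-hand sides of (B1) and (B2) can be estimated from nested regressions. The sketch below is ours and assumes the variable names of the commonly distributed NLSYM extract (educ, lwage, exper, expersq, black, south, smsa, nearc4); they may differ from the authors' data file.

```python
import pandas as pd
import statsmodels.formula.api as smf

card = pd.read_csv("card.csv")   # hypothetical local copy of the NLSYM extract

def partial_r2(full, reduced):
    """Sample partial R^2 from two nested OLS fits (Definition 1)."""
    return (full.rsquared - reduced.rsquared) / (1.0 - reduced.rsquared)

# benchmark on the treatment side (B1): variation in schooling explained by 'black'
full = smf.ols("educ ~ exper + expersq + south + smsa + nearc4 + black", data=card).fit()
red  = smf.ols("educ ~ exper + expersq + south + smsa + nearc4", data=card).fit()
r2_D_black = partial_r2(full, red)

# benchmark on the outcome side (B2): variation in log-earnings explained by 'black', given schooling
full = smf.ols("lwage ~ educ + exper + expersq + south + smsa + nearc4 + black", data=card).fit()
red  = smf.ols("lwage ~ educ + exper + expersq + south + smsa + nearc4", data=card).fit()
r2_Y_black = partial_r2(full, red)

print(4 * r2_D_black, 5 * r2_Y_black)   # right-hand sides of (B1) and (B2)
```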
Figure 2 shows the OLS estimates that adjust/do not adjust for \(Z\), the TSLS estimate, and their corresponding 95% confidence intervals. The same plot shows the estimated partially identified regions and 95% sensitivity intervals (obtained by BCa bootstrap) for five different sensitivity models that involve different combinations of the bounds (B1) to (B4).
Both the OLS and the TSLS estimates suggest a statistically significant positive effect of education on earnings. In the first sensitivity model in Figure 2, we relax the assumption of no unmeasured confounders, which would be required if the OLS estimate is interpreted causally, and assume that the effects of \(U\) on \(D\) and \(Y\) are bounded by (B1) and (B2), respectively. The sensitivity interval remains positive in this case. In other cases, the estimated partially identified regions and the sensitivity intervals become very wide whenever (B1) is not part of the constraints. This is because the other constraints, except the loose bound in (24), do not bound \(|R_{D\sim U|X,Z}|\) away from 1, so the association between \(D\) and \(Y\) may be entirely driven by the unmeasured confounder \(U\). In fact, the PIR would have an infinite length if (24) was not imposed. Therefore, just specifying deviations from the IV-assumptions, as in (B3) and (B4), is not sufficient to ensure that the PIR is finite in this dataset. Moreover, comparing the first and last sensitivity model in Figure 2, we notice that imposing the IV-related bounds (B3) and (B4) on top of (B1) and
(B2) does not shorten the estimated PIR and sensitivity intervals. These findings suggest that the results of Card (1993) are more robust towards deviations from the OLS than from the IV assumptions.
## 7 Sensitivity Contour Plots
This section presents graphical tools to further aid the interpretation of sensitivity analysis. The main idea is to plot the estimated lower or upper bound of the PIR against the sensitivity parameters or the parameters in the comparative bounds. Contour lines in this plot allow practitioners to identify the magnitude of unmeasured confounding (or violations of the instrumental variables assumptions) needed to alter the conclusion of the study qualitatively. This idea dates back at least to Imbens (2003); our method below extends the proposal in Cinelli and Hazlett (2020). The contour plots will be illustrated using the real data example in the previous section.
### \(b\)-contour Plot
For comparative bounds, the \(b\)-factor (such as \(b_{UD}\) in (10)) controls how tightly the corresponding sensitivity parameter is constrained. Hence, it is important to gain a practical understanding of \(b\). The \(b\)-sensitivity contour plot shows the estimated lower/upper end of the PIR on a grid of \(b\)-factors.
Figure 2: Three estimation strategies and five sensitivity models for the causal effect \(\beta\): Point estimates/estimates of the PIR (blue); 95% confidence/sensitivity intervals (black).
In Figure 3, we consider the sensitivity model with the bounds (B1) and (B2) and investigate our choice \((b_{UD},b_{UY})=(4,5)\) above. The plot shows that the estimated lower end of the PIR is still positive even for more conservative values such as \((b_{UD},b_{UY})=(6,10)\) or \((b_{UD},b_{UY})=(10,5)\). Thus, a substantial deviation from the OLS-related assumptions is needed to alter the sign of the estimate.
Figure 4 considers the sensitivity model using the constraints (B1) to (B4) with changing \((b_{UD},b_{ZY})\). This plot confirms our observation in Section 6 that imposing the IV-related bounds (B3) and (B4) does not change the estimated lower bound substantially when (B1) and (B2) are already present. In the terminology of constrained optimization, this means that the "shadow prices" for (B3) and (B4) are small.
### \(R\)-contour Plot
We may also directly plot the estimated lower/upper end of the PIR against the sensitivity parameters \(R_{D\sim U|X,Z}\) and \(R_{Y\sim U|X,Z,D}\). This idea has been adopted in several previous articles already (Imbens, 2003; Blackwell, 2014; Veitch and Zaveri, 2020).
For such \(R\)-contour plots, the key challenge is to benchmark or calibrate the \(R\)-values. This was often done informally. For example, one may use sensitivity contours parameterized by \(R_{D\sim U|X,Z}\) and \(R_{Y\sim U|X,Z,D}\) and add
\[\left(\sqrt{b}\,\hat{R}_{D\sim X_{j}|X_{-j},Z},\,\sqrt{b}\,\hat{R}_{Y\sim X_{j }|X_{-j},Z,D}\right), \tag{25}\]
for certain choices of \(b>0\) and \(j\in[p]\), to the plot, following Imbens' proposal. Thus, we may attempt to provide context for plausible values of the sensitivity parameters. However, this method of benchmarking is not entirely honest because different sets of covariates are conditioned on. Moreover, regressing out a potential collider \(D\) may leave \(\hat{R}_{Y\sim X_{j}|X_{-j},Z,D}^{2}\) difficult to interpret.
Here, we build upon Cinelli and Hazlett (2020)'s contour plots and derive rigorous benchmarking points using the comparative bounds of Section 4.1. To this end, we introduce the abbreviations \(\hat{R}_{D}:=\hat{R}_{D\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z},\hat{R}_{Y}:= \hat{R}_{Y\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z,D}\), where \(j\in[\dot{p}]\); the corresponding \(\hat{R}^{2}\)- and \(\hat{f}\)-values are defined accordingly.
First, we consider the sensitivity parameter \(R_{D\sim U|X,Z}\). Due to (10), if \(U\) explains \(b\) times as much variance of \(D\) as \(\dot{X}_{j}\) and the respective correlations of \(U\) and \(\dot{X}_{j}\) with \(D\) have the same sign, then
\[R_{D\sim U|X,Z}=\sqrt{b}\,\frac{R_{D\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z}} {\sqrt{1-R_{D\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z}^{2}}}=\sqrt{b}\,f_{D \sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z}. \tag{26}\]
For the second sensitivity parameter \(R_{Y\sim U|X,Z,D}\), we can use two different comparative bounds: one which does not condition on \(D\) (14) and one which does (13). In the former case, we assume that \(U\) explains \(b\) times as much variance in \(Y\) as \(\dot{X}_{j}\) does - conditional on \((\tilde{X},\dot{X}_{-j},Z)\) - and that the respective correlations have the same sign. According to (14), this implies
\[R_{Y\sim U|X,Z}=\sqrt{b}\,f_{Y\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z}.\]
To get a comparison value for the primitive sensitivity parameter \(R_{Y\sim U|X,Z,D}\), we plug this equation as well as (26) into (15) and obtain
\[R_{Y\sim U|X,Z,D}=\frac{1}{\sqrt{1-(1+b)R_{D\sim\dot{X}_{j}|\tilde{X},\dot{X}_ {-j},Z}^{2}}}\,\sqrt{b}\,f_{Y\sim\dot{X}_{j}|\tilde{X},\dot{X}_{-j},Z,D}.\]
Hence, in order to compare \(U\) to the explanatory capability of \(\dot{X}_{j}\) unconditional on \(D\), we can add the estimated point
\[\left(\sqrt{b}\hat{f}_{D},\,\frac{1}{\sqrt{1-(1+b)\hat{R}_{D}^{2}}}\,\sqrt{b} \,\hat{f}_{Y}\right) \tag{27}\]
to the sensitivity contour plot. The details of the derivation are given in Section F.1. In case we want to draw a comparison between the influence of \(U\) and \(\dot{X}_{j}\) on \(Y\) conditional on \(D\), we may add the estimated point
\[\left(\sqrt{b}\hat{f}_{D},\,\frac{\sqrt{1-(1+b)\hat{R}_{D}^{2}+b\hat{R}_{D}^{ 4}}+\hat{R}_{D}^{2}}{\sqrt{1-(1+b)\hat{R}_{D}^{2}}}\,\sqrt{b}\,\hat{f}_{Y}\right) \tag{28}\]
instead. The derivation is deferred to Section F.2. The corresponding comparison point for \(R^{2}\)- instead of \(R\)-values was suggested by Cinelli and Hazlett (2020). Moreover, we remark that the two proposed comparison points (27) and (28) are identical if \(b=1\).
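Our comparison points (27) and (28) are simple functions of the estimated benchmark quantities; the following sketch (our illustration) computes both and reproduces their equality at \(b=1\).

```python
import numpy as np

def comparison_points(b, r_d, r_y):
    """Comparison points (27) (unconditional on D) and (28) (conditional on D).

    b   : comparison factor
    r_d : estimate of R_D = R_{D ~ Xdot_j | Xtilde, Xdot_{-j}, Z}
    r_y : estimate of R_Y = R_{Y ~ Xdot_j | Xtilde, Xdot_{-j}, Z, D}
    """
    f_d = r_d / np.sqrt(1.0 - r_d ** 2)
    f_y = r_y / np.sqrt(1.0 - r_y ** 2)
    x = np.sqrt(b) * f_d
    common = np.sqrt(1.0 - (1.0 + b) * r_d ** 2)
    y_uncond = np.sqrt(b) * f_y / common
    y_cond = (np.sqrt(1.0 - (1.0 + b) * r_d ** 2 + b * r_d ** 4) + r_d ** 2) / common * np.sqrt(b) * f_y
    return (x, y_uncond), (x, y_cond)
```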
To illustrate the proposal, Figure 5 shows the \(R\)-contour plot for the estimated lower end of the PIR and adds benchmarks corresponding to being black and living
in the southern USA. We observe that, even if the unmeasured confounder was five times as strong as being black in terms of its capability of explaining the variation of \(D\) and \(Y\), the estimator would still be positive. This holds true regardless of whether the comparison for the effect of \(U\) on \(Y\) was made conditional or unconditional on \(D\). Moreover, Figure 5 contrasts our comparison points with the non-rigorous benchmarks (25); in our experience, the difference between the methods usually does not change the qualitative conclusion of the sensitivity analysis.
Finally, we illustrate the utility of the \(R\)-contour plot as a way to visualize the feasible set \(\Psi\). Sensitivity analysis with multiple bounds often entails a non-intuitive, complex set of constraints. Consider the following sensitivity model
\[R_{Z\sim U|X},\,R_{Y\sim Z|X,U,D}\in[-r,r],\quad r\in\{0.01,0.02,0.03,0.04\},\] \[R_{Y\sim U|\tilde{X},\dot{X}_{-j},Z,D}^{2}\leq 5\,R_{Y\sim\dot{X}_{ j}|\tilde{X},\dot{X}_{-j},Z,D}^{2},\qquad R_{D\sim U|X,Z}\in[-0.99,0.99],\]
where \(r\) parameterizes the degree of deviation from the instrumental variables assumptions; the covariate \(\dot{X}_{j}\) is the indicator for black.
Figure 6 shows the estimated feasible set \(\Psi(\hat{\theta})\) for different values of \(r\). For \(r=0.01\), the feasible set is small and concentrated around the lines that correspond to \(R_{D\sim U|X,Z}=R_{Y\sim U|X,Z,D}=0\) (the instrumental variable is valid). As \(r\) increases, the feasible set becomes larger as expected. The curved shape of the region of feasible values is a result of the comparative bound on \(U\to Y\) and the associated constraints (15) and (16). Moreover, we observe that \(\beta\) assumes its most extreme values as \(R_{D\sim U|X,Z}\) approaches \(1\). This highlights the importance of bounding \(R_{D\sim U|X,Z}\) away from \(-1\) and \(1\) to ensure that the PIR has finite length.
Figure 5: \(R\)-sensitivity contours for the lower end of the estimated PIR: Comparison points unconditional on \(D\) (black dots), comparison points conditional on \(D\) (blue stars) and non-rigorous comparison points (green triangles).
## 8 Discussion and Outlook
Thus far, we have sidestepped the issue of numerically computing the solution to the constrained stochastic optimization problem (3). In fact, standard algorithms fail to reliably solve the problem due to the complexity of the constraints. Therefore, we develop a grid search algorithm which leverages the structure of the objective and the equality constraints. The details can be found in Appendix D.
Two insights underlie the methodological development in this article. First, sensitivity analysis (or more generally, any one-dimensional partially identified problem) may be viewed as a constrained stochastic program and we can leverage methods developed in stochastic optimization. Second, the \(R^{2}\)-calculus provides a parameterization of the bias of any \(k\)-class estimator and a systematic approach to specify interpretable sensitivity models.
Figure 6: \(R\)-sensitivity contours for the lower end of the estimated PIR: The red lines correspond to the values of \(R_{D\sim U|X,Z}\) and \(R_{Y\sim U|X,Z,D}\) that conform with the IV-assumptions.

Partial identification has attracted considerable attention in econometrics and causal inference since Manski (1990) and Balke and Pearl (1997); see Manski (2003); Imbens and Manski (2004); Vansteelandt et al. (2006); Chernozhukov et al. (2007); Aronow and Lee (2013); Richardson et al. (2014); Miratrix et al. (2018); Molinari (2020). Existing methods typically assume a closed-form solution to the stochastic program (2) (the lower/upper end of the PIR) and that the plug-in estimator is asymptotically normal. As such results are only known for relatively simple models, these methods only have limited utility in practice. The constrained optimization perspective of partial identification is only beginning to get embraced in the literature (Kaido et al., 2019; Hu et al., 2021; Padh et al., 2022).
Our article further shows the need for a more complete, asymptotic theory of the optimal value of a general stochastic program. This may allow one to extend the methodology developed here to sensitivity models with high- or infinite-dimensional parameters. In particular, a theory for the bootstrap distribution of the optimal value estimator is required to clarify when and which bootstrap procedures provide asymptotically correct sensitivity intervals.
The \(R^{2}\)-values, \(R\)-values and generalizations thereof are popular for the calibration of sensitivity analysis. They have been recently used in the sensitivity analysis for linear models with multiple treatments (Zheng et al., 2021), mediation analysis (Zhang and Ding, 2022), missing values (Colnet et al., 2022) and models with factor-structured outcomes (Zheng et al., 2022). In these works, certain algebraic relationships about \(R^{2}\)-values and benchmarking techniques such as contour plots and robustness values are frequently used. Thus, the \(R^{2}\)-calculus summarized in this article may also benefit the calibration of other sensitivity models. Our proof of the \(R^{2}\)-calculus in general Hilbert spaces suggests that it may be useful in nonlinear models, too. See Chernozhukov et al. (2022) for related work in partially linear and semiparametric models using the Riesz-Frechet representation of certain causal parameters.
The rules of the \(R^{2}\)-calculus are purely algebraic and can therefore be applied in any linear structural equation model - with or without unmeasured variables. This raises the question: can sensitivity analysis be automated for reasonable sensitivity models defined by direct and comparative bounds? Such an algorithm would be immensely useful in applied research, but given the substantial amount of algebra needed for the relatively simple models considered here, the required work would be extremely challenging.
## Acknowledgments
Tobias Freidling is supported by a PhD studentship from GlaxoSmithKline Research & Development. Qingyuan Zhao is partly supported by the EPSRC grant EP/V049968/1.
|
2303.18076 | Machine learning applications for the study of AGN physical properties
using photometric observations | We investigate the physical nature of active galactic nuclei (AGNs) using
machine learning (ML) tools. We show that the redshift, $z$, bolometric
luminosity, $L_{\rm Bol}$, central mass of the supermassive black hole (SMBH),
$M_{\rm BH}$, Eddington ratio, $\lambda_{\rm Edd}$, and AGN class (obscured or
unobscured) can be reconstructed through multi-wavelength photometric
observations only. We trained a random forest regressor (RFR) ML-model on 7616
spectroscopically observed AGNs from the SPIDERS-AGN survey, which had
previously been cross-matched with soft X-ray observations (from ROSAT or XMM),
WISE mid-infrared photometry, and optical photometry from SDSS \textit{ugriz}
filters. We built a catalog of 21050 AGNs that were subsequently reconstructed
with the trained RFR; for 9687 sources, we found archival redshift
measurements. All AGNs were classified as either type 1 or type 2 using a
random forest classifier (RFC) algorithm on a subset of known sources. All
known photometric measurement uncertainties were incorporated via a
simulation-based approach. We present the reconstructed catalog of 21050 AGNs
with redshifts ranging from $ 0 < z < 2.5$. We determined $z$ estimations for
11363 new sources, with both accuracy and outlier rates within 2%. The
distinction between type 1 or type 2 AGNs could be identified with respective
efficiencies of 94% and 89%. The estimated obscuration level, a proxy for AGN
classification, of all sources is given in the dataset. The $L_{\rm Bol}$,
$M_{\rm BH}$, and $\lambda_{\rm Edd}$ values are given for 21050 new sources
with their estimated error. These results have been made publicly available.
The release of this catalog will advance AGN studies by presenting key
parameters of the accretion history of 6 dex in luminosity over a wide range of
$z$. | Sarah Mechbal, Markus Ackermann, Marek Kowalski | 2023-03-31T14:10:33Z | http://arxiv.org/abs/2303.18076v2 | Machine learning applications for the study of AGN physical properties using photometric observations
###### Abstract
Context:We investigate the physical nature of Active Galactic Nuclei (AGN) using machine learning (ML) tools.
Aims:We show that the redshift \(z\), the bolometric luminosity \(L_{\rm Bol}\), the central mass of the supermassive black hole (SMBH) \(M_{\rm BH}\), the Eddington ratio \(\lambda_{\rm Edd}\) as well as the AGN class (obscured or unobscured) can be reconstructed through multi-wavelength photometric observations only.
Methods:A Support Vector Regression (SVR) ML-model is trained on 7616 spectroscopically observed AGN from the SPIDERS-AGN survey, previously cross-matched with soft X-ray observations (from ROSAT or XMM), WISE mid-infrared photometry, and optical photometry from SDSS \(ugriz\) filters. We build a catalogue of 21 364 AGN to be reconstructed with the trained SVR: for 9944 sources, we found archival redshift measurements. All AGN are classified as either Type 1 or Type 2 using a Random Forest (RF) algorithm on a subset of known sources. All known photometric measurement uncertainties are incorporated using a simulation-based approach.
Results:We present the reconstructed catalogue of 21 364 AGN with redshifts ranging from \(0<z<2.5\). \(z\) estimations are made for 11 420 new sources, with an outlier rate within 10%. Type 1/2 AGN can be identified with respective efficiencies of 88% and 93%: the estimated classification of all sources is given in the dataset. \(L_{\rm Bol}\), \(M_{\rm BH}\), and \(L_{\rm Edd}\) values are given for 16 907 new sources with their estimated error. These results have been made publicly available.
Conclusions:The release of this catalogue will advance AGN studies by presenting key parameters of the accretion history of 6 dex in luminosity over a wide range of \(z\). Similar applications of ML techniques using photometric data only will be essential in the future, with large datasets from eROSITA, JSWT and the VRO poised to be released in the next decade.
## 1 Introduction
Active Galactic Nuclei (AGN) are known to be the most luminous sources in the Universe. These systems consist of a central supermassive black hole (SMBH), around which an accretion disk is formed. Although much is yet to be learned about the feedback mechanisms linking SMBH growth and the evolution of their host galaxies, we already know that their mass \(M_{\rm BH}\) scales with a number of galaxy properties, like the stellar velocity dispersion \(\sigma\), bulge mass and luminosity (see review by Kormendy & Ho (2013)). Furthermore, the extreme energetics of these objects also makes them a favoured source of cosmic ray acceleration (Murase & Stecker 2022; Abbasi et al. 2022), as underlined by the recent discovery of neutrinos originating from the AGN NGC 1068 with IceCube (ICECUBE COLLABORATION et al. 2022).
Collecting physical parameters -- such as their redshifts \(z\), black hole mass \(M_{\rm BH}\), Eddington luminosity and ratio \(L_{\rm Edd}\) and \(\lambda_{\rm Edd}\) -- from a complete and unbiased sample of AGN becomes a necessary challenge in order to study their accretion history. However, spectroscopic techniques are almost always needed to measure these variables, and although the number of spectroscopically observed AGN has undoubtedly grown in the last decade (Comparat et al. 2020), the discrepancy between photometrically identified AGN and those followed up with spectroscopic surveys remains large. Fortunately, AGN have been well covered by multi-wavelength surveys: X-ray telescopes have observed the extragalactic sky, revealing a pattern of stars, black holes in binary systems and AGN, while IR telescopes have made it possible to distinguish the latter from the large stellar population.
Enlarging the sample and sky coverage of AGN observations with reliably estimated physical parameters is particularly important for multimessenger astronomy, where signals from individual sources are often weak. This limitation can be overcome by searching for correlations between a messenger, e.g., neutrinos or cosmic rays, and a population of AGN instead. However, the power of correlation searches increases with the sky coverage of the counterpart observations and, additionally, requires a model for the expected production of the messenger in question for each object included in the correlation study. Such models usually depend on physical parameters of the AGN.
Machine learning (ML) techniques have been applied in recent years to characterize AGN sources, mainly for classification tasks and redshift determination (Sadeh et al. 2016; Fotopoulou & Paltani 2018; Stevens et al. 2021). In this paper, we report
on a novel attempt to employ ML regression tasks to reconstruct fundamental parameters of AGN. We train a ML algorithm to estimate \(z\), the soft X-ray (SXR) and bolometric luminosities \(L_{\rm x}\) and \(L_{\rm Bol}\), as well as \(M_{\rm BH}\), \(L_{\rm Edd}\) and \(\lambda_{\rm Edd}\) on 21 364 AGN, all observed in the IR, optical and X-ray bands photometrically but not spectroscopically. To train the model, we use the recent SPIDERS-AGN spectroscopic survey (Clerc et al., 2016; Dwelly et al., 2017), which has compiled and released a sample of \(\sim\) 7600 Type 1 AGN (Coffey et al., 2019). We also train a ML classifier to identify Type 2 (obscured) from Type 1 (unobscured) AGN.
The structure of the paper is as follows: we detail the catalogues used to expand and build both the training and to-be-reconstructed AGN data samples in Sect. 2, and describe the procedures to select AGN from stellar, galactic and blazar populations. We detail in Sect. 3 how the errors on the input parameters are incorporated in order to generate pseudo-sets for the classification, training and reconstruction of AGN, and underline the advantages of the simulation-based method, before classifying the unlabeled sources with ML tools in Sect. 4. Sect. 5 delves into the details of the main ML regression task built to parametrize the core of AGN: a comparison of several models and the final results on the spectroscopic parameter predictions are discussed. We present in Sect. 6 the largest to date catalogue of AGN physical properties, stemming from the ML reconstruction of 21 364 sources, including 11 420 new \(z\) measurements, and \(L_{\rm Bol}\), \(M_{\rm BH}\), \(\lambda_{\rm Edd}\) values for all. The limits of the Type 2 AGN reconstruction are discussed. We turn in Sect. 7 to the future studies that the release of this dataset makes possible, and discuss the role ML tools will play with the advent of future missions. The catalogue columns are described in Appendix A.
## 2 Data
The goal of our study is to compile as wide as possible a catalogue -- both in number of entries as well as astronomical features -- of non-blazar AGN sources in order to find photometric parameters (X-ray, optical and IR magnitudes, etc) that are highly correlated with features relating to the accretion activity, traditionally only accessible with spectroscopic follow-up surveys.
In this paper, we shall call a feature any parameter or column of the catalogue: for instance, the W1 or u-band magnitude. Concurrently, we will refer to a catalogue entry as an AGN point source, or row of the catalogue. For matters pertaining to machine learning methods, a few terms also need to be clearly defined:
* by _training_ and _testing_ sample, we mean the dataset of 7616 sources built from the SPIDERS catalogue (Coffey et al., 2019), used to train the ML model tasked with learning correlations between photometric and spectroscopic parameters. The performance of the ML model is assessed by comparing the true and predicted values of the target parameters (this sample is also referred to in the ML literature as the _validation_ set),
* we call the _reconstructed_ or _full_ dataset the AGN catalogue we have built for which we make estimations on the target parameters using the previously trained ML model.
Our aim is then to find a compromise between the addition of new features and the completeness of the final catalogue: since any new parameter added to the training sample must also be available in the reconstructed dataset -- and null features are not admitted in machine learning regression tasks -- entries with null columns would then have to be sacrificed (for a more detailed treatment of null entries see Appendix C). We detail in the following section the multiwavelength observations used to construct our input parameters. Table 1 summarizes all features and catalogues of origin used. The analysis sequence to create the full reconstructed catalogue of AGN is described in Fig. 1.
### The SPIDERS-AGN catalogue
SPIDERS (SPectroscopic IDentification of ERosita Sources) is a completed SDSS-IV (Blanton, 2017) survey covering 5128.9 deg\({}^{2}\) of the SDSS footprint. The AGN sources were originally pre-selected based on the 1RXS and XMMSL1 (Saxton et al., 2008) catalogues, and later updated once 2RXS (Boller et al., 2016) and XMMSL2\({}^{2}\) were released. The details of the mission targeting and its summary are documented in Dwelly et al. (2017) and Comparat et al. (2020), respectively. The spectroscopic data were made available in the 16th SDSS data release (DR16) (Ahumada, 2020) as a catalogue of Type 1 AGN containing X-ray fluxes, optical spectral and photometric measurements, black hole mass estimates and other derived quantities\({}^{3}\). We refer the reader to Coffey et al. (2019) for a detailed description of the dataset and to Wolf et al. (2020) for a principal component analysis (PCA) of Type 1 AGN properties.
Footnote 2: [https://www.cosmos.esa.int/web/xmm-newton/xmmsl2-ug](https://www.cosmos.esa.int/web/xmm-newton/xmmsl2-ug)
Footnote 3: [https://data.sdss.org/datamodel/files/SPIDERS_ANALYSIS/spiders_quasar_bhmass.html](https://data.sdss.org/datamodel/files/SPIDERS_ANALYSIS/spiders_quasar_bhmass.html)
The survey probed the brightest X-ray sources in the sky, at the higher end of the luminosity distribution with \(41<\log_{10}(L_{\rm X}/{\rm erg\,s^{-1}})<46\) for a mean redshift \(\overline{z}=0.47\). The bolometric luminosity \(L_{\rm Bol}\) was derived from the monochromatic luminosities \(L_{\rm 3000\AA}\) and \(L_{\rm 5100\AA}\) using bolometric corrections. Fitting the H\(\beta\) and MgII emission lines, the SPIDERS-AGN study derived \(M_{\rm BH}\), \(L_{\rm Edd}=1.26\times 10^{38}\,(M_{\rm BH}/M_{\odot})\) erg s\({}^{-1}\) and \(\lambda_{\rm Edd}=L_{\rm Bol}/L_{\rm Edd}\), with \(M_{\odot}\) being the solar mass. For 2337 sources, both the H\(\beta\) and MgII lines were observed, and two estimates of \(M_{\rm BH}\) and derived quantities were provided: in such cases, we select the values with the smallest associated error \(\sigma_{M_{\rm BH}}\), which has a typical value of \(\sim\) 0.02 dex. Table 2 presents a list of the key properties found in the SPIDERS-AGN catalogue, with their respective range and median estimate error. Out of 7670 AGN, 7616 have complete spectroscopic information; these form the basis of our training sample, which we expand with several other astronomical catalogues.
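As an illustration of the derived quantities above, the conversion from \(M_{\rm BH}\) and \(L_{\rm Bol}\) to \(L_{\rm Edd}\) and \(\lambda_{\rm Edd}\) can be written in a few lines. This is a minimal sketch working in log space; the array and function names are chosen for illustration only.

```python
import numpy as np

def eddington_quantities(log_m_bh, log_l_bol):
    """Eddington luminosity and ratio in log space.

    log_m_bh  : log10(M_BH / M_sun)
    log_l_bol : log10(L_Bol / (erg/s))
    """
    log_l_edd = np.log10(1.26e38) + log_m_bh   # L_Edd = 1.26e38 (M_BH/M_sun) erg/s
    log_lambda_edd = log_l_bol - log_l_edd     # lambda_Edd = L_Bol / L_Edd
    return log_l_edd, log_lambda_edd
```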
### X-ray data
X-ray observations are among the most effective ways to identify AGN: the emission is believed to originate above the accretion disk, where photons from the disk scatter off the hot coronal gas and are re-emitted as X-rays via inverse Compton scattering. Although binary systems such as accreting neutron stars and stellar-mass black holes are also X-ray emitters, AGN are in general an order of magnitude more luminous (\(L_{\rm X}>10^{42}\) erg s\({}^{-1}\)) (Hickox & Alexander, 2018).
The _ROSAT_ telescope (Trumper, 1982) performed the first all-sky survey (RASS) between 1990 and 1991 in the 0.1-2.4 keV band: two catalogues, one for faint and another for bright sources, were released at the time (Voges et al., 2000). The data were reprocessed decades later, leading to a second data release, the 2RXS catalogue, comprising \(\sim\)135 000 sources (Boller et al., 2016). XMMSL2 is the second catalogue of X-ray sources found in slew data taken by the _XMM-Newton_ European Photon Imaging Camera pn (EPIC-pn) in 3 bands: 0.2-12 keV (B8), 0.2-2 keV (B7), and 2-12 keV (B6). The B8 band is the most complete and the one of interest here. The starting point of our reconstructed catalogue is the work of Salvato et al. (2018): 106 573 X-ray sources from 2RXS and 17 665 sources from
XMMSL2 (with \(|b|>15^{\circ}\)) were cross-matched with their AllWISE (Wright et al. 2010) and _Gaia_ (Gaia Collaboration et al. 2018) counterparts using a newly developed Bayesian algorithm to overcome the large positional uncertainties of the X-ray observations. Two catalogues were released: 2RXS-AllWISE and XMMSL2-AllWISE\({}^{4}\).
Footnote 4: [https://www.mpe.mpg.de/XraySurveys/2RXS_XMMSL2](https://www.mpe.mpg.de/XraySurveys/2RXS_XMMSL2)
In order to combine the two X-ray datasets, several steps must be taken to match the different response functions and detection ranges of the instruments. We first convert the ROSAT fluxes from the original 0.1-2.4 keV band into the classical soft X-ray band 0.5-2 keV (Dwelly et al. 2017):
\[\frac{F[E^{\prime}_{\rm max}:E^{\prime}_{\rm min}]}{F[E_{\rm max}:E_{\rm min}]} =\frac{E^{\prime 2-\Gamma}_{\rm max}-E^{\prime 2-\Gamma}_{\rm min}}{E^{2- \Gamma}_{\rm max}-E^{2-\Gamma}_{\rm min}}, \tag{1}\]
where \(\Gamma\) = 1.7 for 2RXS sources.
In Dwelly et al. (2017), \(\Gamma\) = 2.4 was chosen for XMMSL2 fluxes: however, the 2RXS/XMMSL2 datasets were kept separate. About a thousand sources from the SPIDERS AGN catalogue have been observed with both instruments: we use these as
Figure 1: Flowchart of the analysis. The starting point is the 2RXS/XMMSL2-AllWISE catalogue released in Salvato et al. (2018), leading to the catalogue of 21 364 reconstructed AGN sources presented in this work.
a control group to match the XMM fluxes to the converted RXS fluxes, by varying the power-law index \(\Gamma\) of Eq. 1 until the peaks of the two X-ray flux distributions coincide, as shown in Fig. 2. Choosing \(\Gamma_{\rm XMM}=1.25\), 94% of the sources present in both datasets have a flux ratio \(\log F_{\rm XMM,0.5-2\,keV}/\log F_{\rm RXS,0.5-2\,keV}\) within 5% of unity. In the converted SXR band, the distribution of X-ray fluxes lies between \(10^{-14}<F_{0.5-2\,{\rm keV}}<10^{-9}\) erg cm\({}^{-2}\)s\({}^{-1}\). From the X-ray catalogues, we only keep the X-ray fluxes and corresponding errors in the converted 0.5-2 keV band as input to the ML model. Although the hardness ratio and/or column density would have provided valuable information about the class of AGN, both being known proxies for the obscuration level of accretion disks, this parameter was neither complete nor very accurate in the case of the ROSAT observations, and had to be dropped.
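For reference, Eq. 1 can be applied as in the following sketch; the flux arrays `f_rxs` and `f_xmm` are placeholders for the catalogue values, and the photon indices are the ones quoted above.

```python
import numpy as np

def convert_band_flux(flux, e_min, e_max, new_e_min, new_e_max, gamma):
    """Rescale a power-law X-ray flux from [e_min, e_max] keV to [new_e_min, new_e_max] keV (Eq. 1)."""
    ratio = (new_e_max ** (2 - gamma) - new_e_min ** (2 - gamma)) / \
            (e_max ** (2 - gamma) - e_min ** (2 - gamma))
    return flux * ratio

# 2RXS: 0.1-2.4 keV -> 0.5-2 keV with Gamma = 1.7
f_rxs_soft = convert_band_flux(f_rxs, 0.1, 2.4, 0.5, 2.0, gamma=1.7)
# XMMSL2 (B8): 0.2-12 keV -> 0.5-2 keV with Gamma_XMM = 1.25, matched on the common sources
f_xmm_soft = convert_band_flux(f_xmm, 0.2, 12.0, 0.5, 2.0, gamma=1.25)
```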
### Infrared observation
AGN are bright in the mid-infrared (MIR, 3-30 \(\mu\)m). The dusty torus is responsible for this thermal emission, as it absorbs shorter-wavelength photons from the accretion disk and re-emits them in the MIR. Although star-forming galaxies are also bright in this band, their SED is cooler and can be distinguished from those of AGN (Padovani et al., 2017). The Wide-field Infrared Survey Explorer (WISE) (Wright et al., 2010) is a satellite launched in 2009. The mission was later extended under a new programme, NEOWISE (Mainzer et al., 2011). The combination of WISE and NEOWISE data was made available to the public with the release of the AllWISE catalogue (Cutri et al., 2021). The WISE survey scanned the sky at 3.4, 4.6, 12 and 22 \(\mu\)m (the bands designated as W1, W2, W3, and W4, respectively), at a depth at which the majority of the resolved 2RXS and XMMSL2 populations are expected to be detected (Salvato et al., 2018). In addition to the 4 MIR magnitudes and their associated errors, we explicitly record the colors W1-W2, W2-W3 and W3-W4. These values are readily available from Salvato et al. (2018), as previously mentioned.
### Optical data
#### 2.4.1 Photometry
SDSS. The Sloan Digital Sky Survey (SDSS) has over the course of its runs observed more than 700 000 quasars in the optical band, most of them in broadband photometry in the instrument-specific _ugriz_ (3543-9134 Å) filters (Lyke et al., 2020). To pair the AGN sources of our training and unknown sets with their SDSS observations, the astroquery software tool (Ginsburg et al., 2019) is used: we cross-match the best AllWISE counterpart of the X-ray sources in our training and reconstructed samples with an optical counterpart from the DR17 photometric catalogue, setting a maximum radius of 5 arcsec. For the matched sources, we add as features the SDSS PSF magnitudes psfMag and their associated errors psfMagErr for the 5 _ugriz_ bands. These values are the most appropriate when studying the photometry of distant quasars. While, logically, all 7616 SPIDERS sources have SDSS photometry counterparts, 47 739 (5985) 2RXS (XMMSL2) sources have
Table 1: Catalogues and their references used to build the multiwavelength inputs to the machine learning algorithm.

| Observation type | Instrument | Spectral band | Input provided | \(N_{\rm AGN}\) | Reference |
| --- | --- | --- | --- | --- | --- |
| Soft X-ray | _ROSAT_ | 0.1-2.4 keV | X-ray flux and err. | 19 896 | Boller et al. (2016) |
|  | _XMM-Newton_ | 0.2-12 keV | X-ray flux and err. | 1468 | Saxton et al. (2008) |
| Mid-infrared photometry | WISE | 3.4-22 \(\mu\)m | W1, W2, W3, W4 | 21 364 | Cutri et al. (2021) |
| Optical photometry | SDSS I-IV | 3543-9134 Å | _ugriz_ mag. and err. | 21 364 | Blanton (2017) |
|  | _Gaia_ | 330-1050 nm | Flux and err. |  | Arenou et al. (2017) |
| Optical spectroscopy | SDSS I-III | 380-920 nm | Classification | 9944 | Dwelly et al. (2017) |
|  | see Ref. |  | Redshift |  | Véron-Cetty & Véron (2010) |
Table 2: Target variables, their domain range and error estimate from the SPIDERS catalogue (Coffey et al., 2019). The variables are presented in the order of their sequential prediction by the ML algorithm, as executed by the chain regressor.

| Target parameter | Notes | Training range | Error on the estimate |
| --- | --- | --- | --- |
| \(z\) | Redshift | [0.008, 2.5] | - |
| \(\log L_{\rm X}\) | X-ray luminosity | [40.8, 45.9] | \(\sim\) 0.05 dex |
| \(\log L_{\rm Bol}\) | Bolometric luminosity | [42.9, 47.6] | \(\sim\) 0.02 dex |
| \(\log M_{\rm BH}/M_{\odot}\) | Black hole mass | [6.2, 10.4] | \(\sim\) 0.02 dex |
| \(\log \lambda_{\rm Edd}\) | Eddington ratio | [-3.2, 0.54] | \(\sim\) 0.01 dex |
Figure 2: _Top_: 2RXS and XMMSL2 fluxes for the \(\sim\) 1000 SPIDERS-AGN sources observed with both instruments. The X-ray fluxes were converted to the soft X-ray band 0.5-2.0 keV using \(\Gamma\)=1.7 for 2RXS and \(\Gamma\)=1.25 for XMMSL2, the latter value chosen to match the peaks of the two distributions. _Bottom_: Ratio of the converted flux logs as a function of the 2RXS fluxes for the same sources shown in the above panel. The dashed lines represent the \(\pm\) 5% level on the ratio, within which 94% of the converted fluxes lie.
been observed photometrically, which we add as a requirement (see Fig. 1).
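The positional cross-match described above can be reproduced generically with astropy; the sketch below is not the exact astroquery call used in this work, and the coordinate arrays (`ra_wise`, `dec_wise`, `ra_sdss`, `dec_sdss`) are placeholders for the AllWISE counterparts and a previously downloaded SDSS photometric table.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

wise = SkyCoord(ra=ra_wise * u.deg, dec=dec_wise * u.deg)
sdss = SkyCoord(ra=ra_sdss * u.deg, dec=dec_sdss * u.deg)

# Nearest SDSS neighbour for every AllWISE position, kept if closer than 5 arcsec
idx, sep2d, _ = wise.match_to_catalog_sky(sdss)
matched = sep2d < 5 * u.arcsec
print(f"{matched.sum()} of {len(wise)} sources have an SDSS counterpart within 5 arcsec")
```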
Gaia. Salvato et al. (2018) also cross-matched the 2RXS/XMMSL2 sources with the first release catalogue of the _Gaia_ mission (Arenou et al., 2017). The astrometric instrument performs broadband photometry in Gaia's white-light G-band (330-1050 nm): we keep the mean flux and mean flux error for all sources, expressed in photo-electrons s\({}^{-1}\).
#### 2.4.2 Spectroscopy and redshift
Prior to the start of the SPIDERS mission, X-ray+AllWISE AGN targets were cross-matched with the already observed SDSS I-II-III runs (Dwelly et al., 2017): \(\sim\)12 000 ROSAT and \(\sim\)1500 XMM-Newton sources were found to have already been observed spectroscopically. After a visual inspection of the optical spectra, two Value Added Catalogues (VAC) were released\({}^{5}\). These include redshift measurements as well as main and sub-classifications of the objects. Matching sources from the reconstructed catalogue by their X-ray name, we find 21 287 (2540) 2RXS (XMMSL2) sources previously observed spectroscopically: of these, 11 242 (1250) contain redshift information (but no black hole mass/Eddington ratio). As we shall see in Sect. 5.3, this latter variable greatly improves the ML predictions. Furthermore, we will make use of the sub-sample of sources for which we have an AGN classification to correlate multiwavelength observations with the AGN obscuration level (see Sect. 4). Following the classification scheme of Comparat et al. (2020), we mark AGN as either Type 1 or Type 2.
Footnote 5: [https://data.sdss.org/datamodel/files/SPIDERS_ANALYSIS](https://data.sdss.org/datamodel/files/SPIDERS_ANALYSIS)
Additional spectroscopic classifications and redshift measurements are found in the VERONCAT catalogue (Véron-Cetty & Véron, 2010), a collection of some \(\sim 150\,000\) quasars from multiple surveys. The X-ray positions of the sources are cross-matched with the optical or radio positions given by VERONCAT, using a maximum matching radius of 60 arcsec, a value taken from a past study (Abbasi et al., 2022). This provides additional AGN classification and spectral class features. We collect some \(\sim 9000\) redshifts from VERONCAT, on top of the ones already found in previous SDSS surveys. To verify the accuracy of the cross-matching, we compare the redshift entries of the 6163 SPIDERS sources that are already present in VERONCAT, \(\Delta_{z}=|z_{\rm SPIDERS}-z_{\rm VERONCAT}|\). We find that 98% of the sources have \(\Delta_{z}<0.01\), confirming the adequacy of the cross-matching radius used.
### AGN selection
The multiwavelength data collected in the previous sections can now be used to separate AGN from a larger sample comprising blazars, galaxies and stars, using X-ray and IR color observations.
Following the source characterization methods already established in Salvato et al. (2018) and references therein, we proceed to a first selection of AGN in the X-ray/MIR plane (see top panel of Fig. 3). An empirical relationship has been found which separates AGN from stars and galaxies:
\[W1\geq 1.625\times{\rm log}F_{0.5-2{\rm keV}}-8.8 \tag{2}\]
We can confirm the validity of such a selection by overlaying the confirmed AGN in the SPIDERS sample, which all clearly lie above the cut-off line.
We then use the AllWISE W1-W2, W2-W3, and W3-W4 colors to isolate AGN from blazars, starburst and normal galaxies, as was developed in Assef et al. (2013): in the W1-W2 vs W2-W3 diagram (middle panel of Fig. 3), stars and elliptical galaxies have colors near zero, located in the lower left quadrant, spiral galaxies are red in W2-W3 but not in W1-W2, while Ultra-Luminous InfraRed Galaxies (ULIRGs) are red in both colors, lying in the upper right quadrant of the diagram (Wright et al., 2010). We select the sources for which \(1.5<W2-W3<4.5\) and \(0.2<W1-W2<1.75\) (black square in the middle panel of Fig. 3). Once again, we use the SPIDERS AGN sample to justify the selection criteria made in the W1-W2 vs W2-W3 color-color space. Similarly, a visual cut is made in the W1-W2 vs W3-W4 plane following the SPIDERS-AGN locus (black line in the bottom panel of Fig. 3) (Abbasi et al., 2022).
After these selections, 60 952 AGN are identified, from an original sample of 123 436 X-ray-AllWISE sources. The spatial distribution of the final catalogue is shown in Fig. 4: the sources follow the SDSS footprint, as the requirement to have been observed photometrically by SDSS marks the most stringent cut on the data.
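A minimal sketch of the X-ray/MIR and color-color selections is given below; `w1`, `w2`, `w3` and `logfx` stand for the AllWISE magnitudes and the log of the converted soft X-ray flux (placeholder arrays), and the visual W3-W4 cut is omitted since it is not defined analytically in the text.

```python
import numpy as np

# Eq. 2: AGN lie above the empirical line separating them from stars and galaxies
xray_mir_cut = w1 >= 1.625 * logfx - 8.8

# WISE color box isolating AGN from blazars, starburst and normal galaxies
color_cut = (w2 - w3 > 1.5) & (w2 - w3 < 4.5) & (w1 - w2 > 0.2) & (w1 - w2 < 1.75)

agn_mask = xray_mir_cut & color_cut
print(f"{agn_mask.sum()} AGN candidates selected")
```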
## 3 Measurement uncertainties and pseudo-sets
Each photometric observation used as an input, presented in Table 1, comes with a measurement uncertainty of non-constant variance (also called "heteroscedastic" error). Properly taking these into account is an active area of study in astrostatistics (Feigelson et al., 2021). Considering the stochastic nature of both the input features and the ML models, we adopt the approach outlined in Shy et al. (2022): all measurement errors \(\sigma_{\rm err}\) are assumed to be Gaussian, such that any photometric input for a single source is represented as a normal distribution, centered on the given value \(\mu_{\rm value}\) and extending to \(\pm 3\sigma_{\rm err}\). Fig. 5 shows such an example for the W1 magnitude of a single training source, with the measurement given by \(\mu_{\rm value}\) (black dashed line) and \(\mu_{\rm value}\pm 3\sigma_{\rm err}\) (blue dotted lines).
Drawing randomly from each independent "smeared" input distributions, we can thus create \(N\) pseudo-sets for each AGN source, where the photometric inputs differ within \(\pm\sigma_{\rm err}\) between each realization. We create \(N\)=200 pseudo-sets of both the training (for the regression) and the reconstructed datasets. Sect. 4 and 5 will develop how this simulation-based treatment of measurement errors helps characterize both the performance and reconstruction of unlabeled/unknown data in the context of ML classification and regression tasks.
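A minimal sketch of the pseudo-set generation, assuming the features and their errors are stored as (n_sources, n_features) arrays (placeholder names):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 200  # number of pseudo-sets

def make_pseudo_sets(values, errors, n_sets=N):
    """Draw n_sets realizations of the feature matrix, smearing every measurement
    with its own Gaussian (heteroscedastic) uncertainty."""
    return rng.normal(loc=values, scale=errors, size=(n_sets, *values.shape))

pseudo_train = make_pseudo_sets(train_values, train_errors)  # shape (200, n_sources, n_features)
pseudo_reco = make_pseudo_sets(reco_values, reco_errors)
```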
## 4 Machine learning classifier for Type 2 AGN identification
Once AGN have been identified, a further step is needed: since our training sample exclusively contains Type 1 AGN (broad emission line, unobscured), we must distinguish Type 1 from Type 2 (narrow emission line, obscured) in our reconstructed sample, to study any potential biases in the spectroscopic predictions.
Obscured AGN, also called Type 2, are systems where the emission from the accretion disk gets absorbed and scattered by dust or gas surrounding it, masking some of the characteristic
signatures of the AGN along the observer's line of sight. The impact of such suppression is wavelength-dependent, and a reliable and complete identification method for this population remains both challenging and important (for a review on obscured AGN see Hickox & Alexander (2018)). The distinction between Type 1 and Type 2 AGN can be made across multiple photometric and spectroscopic observation types. The most classical way to identify the AGN class is through UV-NIR spectroscopy: Type 1 AGN have broad emission lines with velocity dispersions \(>\)1000 km s\({}^{-1}\), while Type 2 AGN have narrow emission lines only, with velocity dispersions \(<\) 1000 km s\({}^{-1}\). However, since the purpose of this study is to characterize AGN that have not been spectroscopically observed, we must circumvent the absence of such information.
Our sources were originally selected based on their soft X-ray flux (Salvato et al. 2011): this already skews the sample towards a majority of Type 1 sources, as soft X-rays get absorbed by the high hydrogen column density \(N_{\rm H}\) around the accretion disk (Hasinger 2008), while harder X-rays are less suppressed in obscured AGN. Obscured AGN are also known to have stronger emission in the MIR than in other bands, as the larger dust column reprocesses radiation from other bands. We thus expect weak UV/optical/NIR emission compared to that of the
Figure 4: Spatial distribution of sources in an equatorial Mollweide projection for the selected AGN sample (in blue) and the SPIDERS AGN sample (yellow). The requirement for all sources to have been observed by SDSS constrains their distribution to the Northern Sky footprint. The galactic plane is shown as a gray line.
Figure 5: Distribution of the W1 input smeared by the measurement uncertainty for a single source. Each point is drawn from a normal distribution centered at the given catalogue input feature \(\mu_{\rm value}\) (black dashed line) and extending to \(\pm 3\sigma_{\rm err}\) (blue dotted lines), given the photometric measurement error.
Figure 3: _Top_: Distribution of sources in the W1 band vs soft X-ray flux parameter space for the ALLWISE counterparts to 2RXS (pink) and XMMSL2 (blue). The confirmed SPIDERS AGN are represented in yellow. The cut defined in Eq. 2 is also shown. W1–W2 magnitude plotted against the W2-W3 (_middle_) and W3-W4 (_bottom_). The black lines show the cuts applied based on the SPIDERS AGN position.
MIR. As described in Sect. 2.4.2, we already know the AGN class for the sub-sample of sources which have a classification CLASS_BEST, such that the correlations between photometric observations and object type can be studied. One could try to identify a single feature that best separates obscured from unobscured AGN, for example the ratio of W2/W1 infrared emission. We develop this "classical" method in Appendix B, following the method developed in Abbasi et al. (2022). However, a more judicious use of the multiwavelength information collected is to train a machine learning classification model. Taking the 13 415 sources for which the AGN type is known as a training sample, we add a new feature called "obscuration": its value is 0 for Type 1 AGN and 1 for Type 2 AGN. We seek to determine whether the remaining 15 533 sources are obscured or not.
### Imbalanced classification
Of the 13 415 labeled sources, 12 629 are unobscured -- including the SPIDERS sources, which are all Type 1 -- while 786 are obscured AGN, a ratio of 16:1. This classification task is thus an imbalanced one, a frequent situation where a classifier must learn to identify a minority case while being trained on a dataset over-represented by a majority case, leading to bias in the reconstructed sample. While many strategies exist to mitigate this issue, such as oversampling by synthetically creating minority cases (Chawla et al., 2002), a simpler one is to undersample the majority class by randomly selecting a sub-sample of Type 1 AGN: this way the \(n_{1}\)/\(n_{2}\) ratio is changed and brought closer to parity.
To test the approach, we train a random forest (RF) for different \(n_{1}\)/\(n_{2}\) ratios, with 18 features as inputs, namely the smeared photometric measurements (see Sect. 3). We use 50% of the available dataset for training and 50% for validation, comparing the predicted to the true values. We call a true positive (TP) a Type 2 AGN classified as Type 2, a false positive (FP) a Type 1 classified as Type 2, a true negative (TN) a Type 1 classified as Type 1, and a false negative (FN) a Type 2 classified as Type 1. The precision of the classification is then defined as
\[P=\frac{TP}{TP+FP}, \tag{3}\]
that is, the ability of the classifier not to label as positive a sample that is negative. The recall, also called sensitivity or true positive rate (TPR), is calculated with:
\[R=\frac{TP}{TP+FN}, \tag{4}\]
that is, the ability of the classifier to find all the positive samples. For completeness, we provide the definition of the false positive rate (FPR), used in the ROC (Receiver Operating Characteristic) curve:
\[FPR=\frac{FP}{TN+FP}, \tag{5}\]
such that it is a measure of finding all the negative samples. For classification tasks on imbalanced datasets, the precision-recall curve (PRC) is more informative than the more widely used ROC curve, as detailed in Saito & Rehmsmeier (2015).
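The undersampling experiment can be sketched with scikit-learn as follows; the feature matrix `X` and label vector `y` (0 for Type 1, 1 for Type 2) are placeholders, and the hyperparameters are illustrative rather than the exact ones used in this work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_curve, confusion_matrix

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)

# Undersample the majority class (Type 1) to reach a 1:1 ratio
rng = np.random.default_rng(0)
idx_t1, idx_t2 = np.flatnonzero(y_train == 0), np.flatnonzero(y_train == 1)
keep = np.concatenate([rng.choice(idx_t1, size=len(idx_t2), replace=False), idx_t2])

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train[keep], y_train[keep])

# Precision-recall curve and confusion matrix on the validation half
scores = clf.predict_proba(X_val)[:, 1]
precision, recall, thresholds = precision_recall_curve(y_val, scores)
print(confusion_matrix(y_val, clf.predict(X_val)))
```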
Fig. 6 shows the PRC for several models with varying class ratios in the training, tasked with classifying AGN sources as Type 1 or Type 2, on a single pseudo-set. The confusion matrices for the 1:1 and 16:1 training ratios are also presented in Fig. 6. We reach the following conclusions:
1. At its optimized best performance, the ML-model is more apt at separating Type 2 from Type 1 sources than a single-feature selection would be (solid pink line vs purple dashed line in Fig. 6), with a selection efficiency of 91% for the ML-model, compared to 79% for the single-feature selection.
2. The balance of minority/majority cases impacts the performance of the classifier substantially: at its worst (for a 16:1 ratio), the single-feature selection scores better than the ML-model (purple solid line versus dashed line) in selecting Type 2 AGN.
### Classification accuracy on the labeled set
After the study on a single pseudo-set, we settle on training a random forest classifier with a 1:1 training ratio. To propagate the measurement uncertainties in the input features and the random fluctuations inherent to a ML classifier, we make use of the \(N\)=200 pseudo-sets previously generated (see Sect. 3) to label the unknown AGN. We follow the steps outlined in Shy et al. (2022): for each simulation set, a classifier is fit to a realization of the labeled data and then used to reconstruct a realization of the unlabeled data. In this way, all sources are reconstructed \(N\) times, whether they belong to the unlabeled or the validation datasets. For each performance metric defined in Sect. 4.1 we thus obtain a posterior predictive distribution comprising the results of each set's classification, such as the distribution of the precision of the Type 2 classification shown in Fig. 7.
Figure 6: _Top_: Precision recall curve for different training ratios and single feature selection: a perfect classifier would lie on the top right corner of the PRC. _Bottom_: Confusion matrix for training ratios 1:1 (_left_) and 16:1 (_right_). The undersampled (1:1) classifier is much more apt to identify Type 2 AGN, while still performing well in the identification of Type 1 AGN (91% and 87% efficiency respectively). The naturally imbalanced set, while accurately selecting Type 1 AGN 99% of the time, performs poorly in finding the rarer Type 2 AGN.
The variation across multiple fits reflects the propagated uncertainty through all steps of the procedure. The blue dotted line indicates the precision obtained by running the classifier a single time without taking measurement errors into account. For all performance metrics, ignoring uncertainties leads to an overestimation of the predictive power of the classifier (Shy et al., 2022). The final scores with uncertainties for the AGN classifier can be found in Table 3.
### Softening a hard classifier
In addition to giving a more accurate view of the classifier's performance thanks to the validation set, this simulation-based method introduces nuance into the reconstruction of the unlabeled obscuration level. With each source now being reconstructed \(N\)=200 times, the reconstructed obscuration becomes the relative probability of a source belonging to each class. That is, while a single classifier would give a "hard" 0 (Type 1) or 1 (Type 2) prediction on unlabeled data, taking the average value of the \(N\) reconstructions leads to a "softening" of the "hard" classifier: each source now has an obscuration level \(\mu_{\rm obscuration}\) between 0 and 1, which is the arithmetic mean of the \(N\) classification results, with its associated standard deviation \(\sigma_{\rm obscuration}\). Fig. 8 shows how the unlabeled data now lie on a continuous spectrum between 0 and 1.
This softening of the classification also lets us establish a custom decision threshold \(t\) on \(\mu_{\rm obscuration}\) and \(\sigma_{\rm obscuration}\) for an AGN source to be considered as either Type 1 or Type 2. This threshold can be set to be more or less stringent. Choosing to enhance the purity of the classification, we set \(t\)=0.8, such that an AGN source is considered of Type 2 if \(\mu_{\rm obscuration}>0.8\) and of Type 1 if \(\mu_{\rm obscuration}<0.2\).
Doing so, we find that 7852 sources are marked as Type 1, 5228 as Type 2, and 2448 as "ambiguous". This corresponds to an \(n_{1}\)/\(n_{2}\) ratio of \(\sim\) 1.5:1, markedly smaller than the 16:1 ratio of the labeled dataset. The reason for such a stark discrepancy can be found in Fig. 9, which shows how the labeled dataset ("known as 1", "known as 2") is biased towards optically brighter sources (\(u\)-band mag \(<\) 24). This follows from the target requirements established by the various SDSS surveys prior to the spectroscopic observations of the AGN targets (Alam et al., 2015). The AGN classified as Type 2 (blue histogram) constitute the fainter end of our catalogue, too faint to have been spectroscopically followed up. Because the RF classifier infers that fainter sources are more likely to be of Type 2, the reconstructed unlabeled catalogue naturally results in a more balanced AGN ratio.
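The softening and thresholding step amounts to averaging the N hard predictions; a minimal sketch, assuming `type2_votes` is an (N, n_unlabeled) array of 0/1 classifications (placeholder name):

```python
import numpy as np

mu_obscuration = type2_votes.mean(axis=0)      # average of the N=200 hard classifications
sigma_obscuration = type2_votes.std(axis=0)

t = 0.8  # purity-oriented decision threshold
agn_class = np.full(mu_obscuration.shape, "ambiguous", dtype=object)
agn_class[mu_obscuration > t] = "Type 2"
agn_class[mu_obscuration < 1 - t] = "Type 1"
```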
Figure 8: Histogram of the averaged reconstructed obscuration values for all unlabeled data. While the majority of sources have an obscuration value equal to 0 or 1, a non-negligible number of them lie in the region between the two hard values: a hard classifier, softened.
Table 3: Precision and recall scores and uncertainties for Type 1 and Type 2 prediction using \(N\)=200 fits to pseudo-sets with measurement uncertainties.

| AGN Type | Precision | Recall |
| --- | --- | --- |
| Type 1 | 0.918 \(\pm\) 0.007 | 0.851 \(\pm\) 0.007 |
| Type 2 | 0.861 \(\pm\) 0.006 | 0.923 \(\pm\) 0.007 |
Figure 7: Posterior distribution for the precision of Type 2 prediction from fitting the labeled dataset \(N\)=200 times. The vertical blue dotted line indicates the value obtained from a single RF reconstruction without inclusion of measurement errors, in which case the performance of the classifier is overestimated.
Figure 9: SDSS \(u\)-band magnitude for all labeled and reconstructed datasets. The unlabeled sources classified as Type 2 AGN are fainter than the labeled Type 2 sources, which met the photometric limit criteria for SDSS spectroscopic observations. In general, obscured AGN have fainter optical spectra than unobscured ones.
## 5 Machine-learning for AGN properties estimations
The following section dives into the details of selecting a suitable machine learning model to predict the parameters of Table 2, using as inputs the features presented in Table 1. Since redshift measurements are available for almost half of the 21 364 AGN sources, but not for the other half, we train and test two separate models, which we call \(ML_{\rm w/z}\), where \(z\) is added as an input, and \(ML_{\rm wo/z}\), where \(z\) is one of the outputs of the regressor. Just as was done in Sect. 4 for ML classification, we describe how measurement errors are taken into account in the ML regression using the generated pseudo-sets, giving in the process a more complete picture of the performance and quality of the reconstruction.
For the training, we transform our target parameters: \(z\), \(L_{\rm X}\), \(L_{\rm Bol}\), \(M_{\rm BH}\), and \(\lambda_{\rm Edd}\) are expressed in log scale to describe representatively the span of values across several decades.
### Exploratory data analysis
Before choosing and training a machine-learning algorithm on the SPIDERS sample, we explore the relationships between the input variables and the target parameters. Fig. 10 shows the sorted Pearson correlation coefficients between all inputs and one of the outputs, the black hole mass of the AGN. The infrared and optical color magnitudes show the highest level of correlation. This is even more visually evident when one looks, once more, at the IR color-color plot in Fig. 11. The scatter plot presents the W1-W2 vs W2-W3 AllWISE colors for AGN with \(\lambda_{\rm Edd}<0.1\) and \(\lambda_{\rm Edd}>0.1\), the median value of \(\lambda_{\rm Edd}\) in the training sample. We observe that strong accretion disks (higher \(\lambda_{\rm Edd}\)) are redder in both W1-W2 and W2-W3 than those with lower \(\lambda_{\rm Edd}\). These clear connections between infrared photometry and spectroscopic observables, already accessible with a naive and straightforward data analysis, are encouraging indications that our goal -- the estimation of AGN physical properties -- is suited to a machine-learning task.
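The exploratory correlations of Fig. 10 can be reproduced with pandas; the DataFrame `train_df` and the column name `log_MBH` are placeholders for the training sample and its black hole mass column.

```python
import pandas as pd

# Pearson correlation of every input feature with the black hole mass
corr = train_df.corr(method="pearson")["log_MBH"].drop("log_MBH")
print(corr.sort_values(ascending=False))
```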
### Machine-learning model parameters
Ours is essentially a multi-dimensional regression task, with 18 multi-wavelength inputs, listed in Table 1, and 5 or 6 target parameters, depending on whether \(z\) is known for a source (see Table 2). Many ML implementations are readily available for such a supervised learning task, notably through the scikit-learn python library (Pedregosa et al. 2011). We use a single-output, multi-step chain regression, so that the ML model can learn the correlations between target parameters. In the first pass of the chain regressor, the initial 18 inputs are used to predict the first output, the redshift \(z\). In the next pass, the model takes 18+1 inputs, the extra one being the predicted \(z\), and outputs the next parameter, \(L_{\rm X}\), and so on.
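scikit-learn ships a chained multi-output regressor that reproduces this sequential scheme. The sketch below uses placeholder arrays, only scales the inputs for brevity, and uses as base estimator the SVR that is eventually retained (Sects. 5.2.1-5.2.2, Table 4); it is an illustration, not the exact training pipeline.

```python
from sklearn.svm import SVR
from sklearn.multioutput import RegressorChain
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline

# X: (n_sources, 18) photometric inputs; Y: targets ordered as in Table 2
# (z, log L_X, log L_Bol, log M_BH, log lambda_Edd) -- placeholder arrays.
base = make_pipeline(MinMaxScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.001))
chain = RegressorChain(base, order=[0, 1, 2, 3, 4])  # each SVR also sees the previous predictions
chain.fit(X, Y)
Y_pred = chain.predict(X)
```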
#### 5.2.1 Selection of ML-model
We detail in this section our non-exhaustive search for the most suitable estimator. All supervised learning ML models essentially learn a mapping from inputs to outputs given examples of such a map. There exists, however, a plethora of model types one could choose from. For instance, linear models expect the output to be a linear combination of the features: certain regressors simplify the model by introducing penalty terms that shrink the coefficients of the input parameters (in the case of ridge regression) or set some of them to zero (for Lasso regression) if they are found to contribute less to the learning. This procedure, called regularization, aims to reduce the overall error on the validation dataset. On the other hand, Support Vector Regression (SVR), an application of the kernel-based Support Vector Machines (Cortes & Vapnik 1995), allows one to tune the tolerance \(\epsilon\) to such errors, while introducing non-linearity through kernel-based hyperplane fits to the data. Non-linearity is also a feature of neural networks, e.g., in a multi-layer perceptron (MLP), via an activation function connecting the neural layers. We compare these different ML models in the next section.
#### 5.2.2 Model evaluation
To determine the best model, performance metrics are defined based on the validation dataset, where \(y_{\rm true}\) and \(y_{\rm reco}\) are known.
Figure 11: W1-W2 magnitudes as a function of W2-W3 magnitudes for sources in the training sample with low (red dots) and high (blue dots) \(\lambda_{\rm Edd}\). The low and high samples are separated by the median value of the \(\lambda_{\rm Edd}\) distribution, 0.1.
Figure 10: Bar chart showing the Pearson correlation score of input variables and the black hole mass, from the training sample data. The redshift, luminosity, and bolometric luminosity correlations are included in this chart, since \(z\) (and thus \(L_{\rm X}\)) are known for almost half of the sources, and the outputs will be predicted before the black hole mass, underlining the logic behind the chain regression.
To reduce the variance of these metrics, we use the popular \(K\)-fold cross-validation method (Stone 1974). It offers an alternative to a standard split of the training sample into train/test datasets, by dividing the dataset into \(k\) randomly shuffled groups and using every single one of them as a test set exactly once, while training on the \(k-1\) remaining groups. This ensures an unbiased evaluation of the model, as each sample (in our case, each AGN source) is used as a validation point once, and as part of the training set \(k-1\) times: the following metrics are thus calculated for a sample size \(N=7616\). As is common for regression problems, we use the \(R^{2}\) score, also called coefficient of determination, for each parameter to assess the performance of each model. This coefficient is calculated as:
\[R^{2}=1-\frac{\sum_{i}(y_{i_{\text{true}}}-y_{i_{\text{true}}})^{2}}{\sum_{i}(y _{i_{\text{true}}}-\overline{y_{\text{true}}})^{2}}, \tag{6}\]
with the numerator being the residual sum of squares, and the denominator the total sum of squares. For a perfect regressor, we have \(R^{2}=1\). Fig. 12 presents the target-by-target comparison between the two linear models (Ridge and Lasso regression), a Support Vector Regression model, and a multi-layer perceptron (MLP) deep neural network model\({}^{8}\). We stress that for this test, none of the model parameters have been tuned, except for the SVR. Although the SVR slightly underperforms in predicting the first target parameter, namely the soft X-ray luminosity \(L_{\rm X}\), as indicated by its lower \(R^{2}\) compared to the other models, it is markedly better at predicting \(M_{\rm BH}\), \(L_{\rm Edd}\) and \(\lambda_{\rm Edd}\). This trend is also confirmed in the more difficult case of unknown \(z\), represented by open circles in Fig. 12. From a pragmatic standpoint, the runtime speed and low number of tuning parameters of the SVR regressor were clear advantages compared to the vast phase space of MLP neural networks.
Footnote 8: As default parameters, the MLP has one hidden layer with 100 neurons, and uses the “relu” activation function and the “adam” optimizer. More details can be found at [https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html)
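A sketch of the model comparison with 10-fold cross-validation is given below; the default estimators mirror the untuned models discussed above, and `X`, `y_mbh` are placeholder arrays (a single target, e.g. \(\log M_{\rm BH}\), rather than the full chain, for simplicity).

```python
from sklearn.linear_model import Ridge, Lasso
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline

cv = KFold(n_splits=10, shuffle=True, random_state=0)
models = {"Ridge": Ridge(), "Lasso": Lasso(), "SVR": SVR(kernel="rbf"),
          "MLP": MLPRegressor(max_iter=1000)}

for name, model in models.items():
    pipe = make_pipeline(MinMaxScaler(), model)
    r2 = cross_val_score(pipe, X, y_mbh, cv=cv, scoring="r2")
    print(f"{name}: R2 = {r2.mean():.2f} +/- {r2.std():.2f}")
```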
Once the performance of the SVR model has been assessed, we perform a grid search over its hyperparameters \(C\) and \(\epsilon\), the regularization strength and error margin, respectively, to find their optimal values. We repeat this procedure for the no-redshift case, \(ML_{\rm wo/z}\). This step is important as the final result can vary considerably between default and optimized parameters. We summarize the final parameters of the machine learning model to be trained in Table 4.
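The hyperparameter optimization can be sketched with a scikit-learn grid search; the grid values below are illustrative, not the ones actually explored, and `X_scaled`, `y_mbh` are placeholder arrays.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

param_grid = {"C": [0.1, 1, 10, 100], "epsilon": [1e-3, 1e-2, 1e-1]}  # illustrative grid
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=10, scoring="r2")
search.fit(X_scaled, y_mbh)
print(search.best_params_, search.best_score_)
```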
#### 5.2.3 Regression metrics on \(N\) pseudo-sets
Just as was done for the classification task, we use the \(N\)=200 pseudo-sets to propagate both the uncertainties of the photometric measurements in the training and reconstructed datasets, and the fluctuations of the regressor's reconstruction. Here again, we adopt an iterative method: for each \(i\)-th training sample \(T_{i}\), the SVR model is fitted and the values of \(T_{i}\) are predicted (for performance evaluation studies). This \(i\)-th fitted SVR is then used to reconstruct the target parameters in \(C_{i}\), the \(i\)-th pseudo-set of the unreconstructed catalogue. The process is repeated \(N\) times.
The top panel of Fig. 13 displays the true and predicted distributions of the bolometric luminosity for a single source in the training sample. \(\mu_{\rm true}\) and \(\sigma_{\rm true}\) are given by the SPIDERS-AGN catalogue for all target parameters and all sources, while \(\mu_{\rm pred}\) and \(\sigma_{\rm pred}\) are obtained by fitting a gaussian function to the \(N\)=200 reconstructed values of each source. From these, we construct the "pseudo-pull" distribution \(\mu_{\rm true}-\mu_{\rm pred}\). We note how, for this single source, the \(ML_{\rm w/z}\) (purple) and \(ML_{\rm wo/z}\) (yellow) reconstructed values of \(L_{\rm Bol}\) are multiple \(\sigma_{\rm pred}\) apart. This is also clear from the bottom panel of Fig. 13, which shows the distribution of all pulls for the reconstructed \(L_{\rm Bol}\). Here, the difference in the quality of reconstruction between \(ML_{\rm w/z}\) and \(ML_{\rm wo/z}\) is apparent in the greater smearing of the pull distribution.
The precision of the reconstruction is also quantified by the normalized median absolute deviation \(\sigma_{\rm NMAD}=1.48\times{\rm median}\left(\left|\Delta\mu-{\rm median}(\Delta\mu)\right|/\mu_{\rm true}\right)\), expressed in %, with \(\Delta\mu=\mu_{\rm true}-\mu_{\rm pred}\). It is a robust measure of the deviation, one that is insensitive to outliers.
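A direct transcription of this estimator, assuming `mu_true` and `mu_pred` are arrays over the validation sources:

```python
import numpy as np

def sigma_nmad(mu_true, mu_pred):
    """Normalized median absolute deviation of the reconstruction, in per cent."""
    delta = mu_true - mu_pred
    return 100.0 * 1.48 * np.median(np.abs(delta - np.median(delta)) / mu_true)
```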
Finally, the last metric we establish is the contamination level of the reconstruction: for each bin in \(\mu_{\rm true}\), we look at the \(\mu_{\rm pred}\) distribution and fit a gaussian PDF (see Fig. 14). We then define the contamination as the overlapping area between the reconstructed PDFs of the lowest and highest true intervals, represented by the hatched area in Fig. 14. In the case of the \(\lambda_{\rm Edd}\) parameter, it is the
Table 4: Properties of the final SVR ML-model chosen to be trained on.

| Model feature | Type | Properties | Notes |
| --- | --- | --- | --- |
| Model type | SVR chain regressor | Kernel function=“rbf”, \(C\)=1, \(\epsilon\)=0.001 | Training a chain regressor connects the non-independent target parameters to one another |
| Data scaling | Max-min normalization | Inputs and outputs are scaled | Aids the model to learn the problem |
| Validation method | K-fold cross-validation | \(k\)=10, with data shuffle | Ensures the model gets trained on every single data point |
Figure 12: Comparison of \(R^{2}\) for the ML-models tested on all target parameters. Full circles represent \(ML_{\rm w/z}\) and open circles \(ML_{\rm wo/z}\), the learning done for sources with unknown redshift. The SVR algorithm performs best on the crucial variables (\(M_{\rm BH}\) and \(\lambda_{\rm Edd}\)).
measure of how often our ML-model mistakes a low accretion-rate AGN for a high accretion-rate AGN, and vice versa. This is a useful measure in the event that the regressor lacks the precision to carry out single-source studies but provides enough information to look at the features of a larger population, as it does for \(\lambda_{\rm Edd}\): even though the means of the binned reconstructed values do not correspond to the centers of the true bins, the scaling relation between lower and higher \(\lambda_{\rm Edd}\) is preserved.
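The contamination between two fitted gaussian PDFs can be computed numerically as in the minimal sketch below, which uses normalized gaussians; since the highest-bin PDF then integrates to one, the overlapping area is already the quoted fraction.

```python
import numpy as np
from scipy.stats import norm

def contamination(mu_lo, sig_lo, mu_hi, sig_hi, n_grid=10_000):
    """Overlap between the reconstructed PDFs of the lowest and highest true bins."""
    lo = min(mu_lo - 5 * sig_lo, mu_hi - 5 * sig_hi)
    hi = max(mu_lo + 5 * sig_lo, mu_hi + 5 * sig_hi)
    x = np.linspace(lo, hi, n_grid)
    dx = x[1] - x[0]
    return np.sum(np.minimum(norm.pdf(x, mu_lo, sig_lo), norm.pdf(x, mu_hi, sig_hi))) * dx
```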
All performance metrics described in this section are presented in Table 5 for the \(ML_{\rm w/z}\) and \(ML_{\rm wo/z}\) cases. In the following section, we discuss in greater detail the ability of our ML-model to predict the physical parameters of AGN cores.
### Prediction performance
Figs. 15 and 16 summarize the performance of the SVR for the cases with (\(ML_{\rm w/z}\)) and without redshift (\(ML_{\rm wo/z}\)), respectively. Each bin of the response matrix is normalized to the true bins (by column). The matrix elements represent the probability for an AGN with target parameter \(P_{\rm true}\) to be reconstructed with a value \(P_{\rm pred}\). The error on the reconstructed value \(\sigma_{\rm pred}\) of each source is used as a weight in the histogram.
#### 5.3.1 Prediction of \(z\)
Determining redshifts through spectroscopic or photometric means has always been a primary goal of large AGN surveys. The overall performance of the SVR in predicting the redshift up to \(z \sim 2.5\) can be seen in the top left matrix of Fig. 16. To better evaluate the accuracy of \(\mid\Delta z\mid/(1+z_{\rm true})\) (\(\Delta z=z_{\rm pred}-z_{\rm true}\)), we modify our formula for \(\sigma_{\rm NMAD}\) slightly to match the estimator found in the literature (Brammer et al. 2008; Luo et al. 2010), such that \(\sigma_{\rm NMAD}=1.48\times{\rm median}\left(\left|\Delta z-{\rm median}(\Delta z)\right|/(1+z_{\rm true})\right)\). An outlier is defined as having \(\mid\Delta z\mid/(1+z_{\rm true})>0.15\) (see Fig. 17). For redshifts reconstructed with the best-parameter SVR, we find an outlier rate of 3.48% and \(\sigma_{\rm NMAD}\)=0.071 (accuracy of 7.1%), performing just as well as the estimation of photometric redshifts using AGN SED templates (Luo et al. 2010).
The regressor's reconstruction worsens as we move to higher \(z\): this is simply because the SPIDERS dataset provides few samples for the supervised learning to train on, as the distribution of \(z\) decreases sharply (see Fig. 19). The ML reconstruction of redshifts proves to be a very reliable estimator for a crucial parameter of AGN studies, and one that could be adapted to different depths given an appropriate training dataset.
#### 5.3.2 \(L_{\rm X}\) and \(L_{\rm Bol}\)
Naturally, once the model has an estimate of \(z\), it can easily find the appropriate regression for \(L_{\rm X}\), given that the X-ray flux is one of the model's inputs. As hoped for, the X-ray luminosity is predicted with high accuracy and precision: when \(z\) is known, the error on the estimate is \(\sim\) 0.05 in log(\(L_{\rm X}\)/erg s\({}^{-1}\)), rising to \(\sim\) 0.33 when \(z\) is reconstructed. The bolometric luminosity \(L_{\rm Bol}\), being the convolution of multiple wavelength observations, presents the first moderate challenge for the model to predict: it nevertheless gives reliable reconstructed values, with \(\sigma_{\rm err}\) = 0.12 (0.31) for \(ML_{\rm w/z}\) (\(ML_{\rm wo/z}\)). In general the performance worsens slightly as we move into the tails of the bin edges, where the data sample to train on becomes scarce: the reconstructed parameters
Figure 14: Distributions of \(\lambda_{\rm Edd_{\rm pred}}\) for various bins in \(\lambda_{\rm Edd_{\rm true}}\), in log space. Gaussian PDFs are fitted and shown over their respective histograms. The hatched area corresponds to the contamination between the lowest and highest range PDFs: the values quoted in Table 5 were calculated by dividing the overlapping (hatched) area by the integral of the highest range PDF.
Figure 13: _Top_: True (red), and predicted distributions reconstructed \(N\) times with \(ML_{\rm w/z}\) (purple) and \(ML_{\rm wo/z}\) (yellow) of the bolometric luminosity for a training source. The true value is represented by a normal distribution by taking into account the measurement error \(\sigma_{\rm true}\) and assuming it to be gaussian. _Bottom_: Mean pull distribution for the \(L_{\rm Bol}\) for all training sources, taking all \(\mu_{\rm true}-\mu_{\rm pred}\) values for \(ML_{\rm w/z}\) (red) and \(ML_{\rm wo/z}\) (purple).
tend to be overestimated for low values of the true parameter, and underestimated for high values.
#### 5.3.3 Prediction of \(M_{\rm BH}\) and \(\lambda_{\rm Edd}\)
When it comes to estimates of \(M_{\rm BH}\), the knowledge of the source's \(z\) is the most determining factor for the performance, as a visual comparison of Figs. 15 and 16 reveals. The \(R^{2}\) score is 0.65 for \(ML_{\rm w/z}\) and 0.49 for \(ML_{\rm wo/z}\), and the width of the pull \(\sigma_{\rm pull}\) goes from 0.54 to 0.66 in units of \(\log(M_{\odot})\). The metric which shows the greatest discrepancy is the contamination level: \(\sim 2\%\) for \(ML_{\rm w/z}\) and \(\sim 8\%\) for \(ML_{\rm wo/z}\).
The Eddington ratio is the hardest parameter to predict, since the errors from the previous predictions get propagated and compounded. It is also the characteristic for which the prior knowledge of \(z\) has the least impact, although the reconstruction of both \(L_{\rm Bol}\) and \(M_{\rm BH}\) is markedly better in the \(ML_{\rm w/z}\) case. To understand why that is, we can study the correlations between the predicted parameters in the form of the mean pull values \(\mu_{\rm pull}\). Fig. 18 presents the correlations of the pulls between all parameters, for \(ML_{\rm w/z}\) (red) and \(ML_{\rm wo/z}\) (blue). In the case where the redshift \(z\) is the first predicted parameter of the chain regression (\(ML_{\rm wo/z}\)), all subsequent parameters remain more or less strongly correlated to one another, as the Pearson correlation scores attest. That is, if a previous parameter is poorly reconstructed, the subsequent one will be as well. On the other hand, no such correlations between the pulls are found in the case of \(ML_{\rm w/z}\), where \(L_{\rm X}\) is the first estimated parameter.
Considering that the reconstructed quantity \(\lambda_{\rm Edd}\) depends on \(\sim L_{\rm Bol}/M_{\rm BH}\), propagating the errors of the ratio gives
Figure 15: Normalized performance matrices for ML-estimator with known redshift as an input (\(ML_{w/z}\)). The true and reconstructed parameters are plotted on the x and y axis, respectively. The error on the reconstruction is used as a weight to the histogram.
Figure 16: Normalized performance matrices for the ML-estimator without a known redshift as an input (\(ML_{\rm wo/z}\)). The matrix for the reconstructed \(z\) is added to the variables already presented in Fig. 15.
\(\sigma_{\lambda_{\rm Edd}}\sim\sqrt{\sigma_{M_{\rm BH}}^{2}+\sigma_{L_{\rm Bol}}^{2}-2\rho_{M_{\rm BH}-L_{\rm Bol}}\,\sigma_{M_{\rm BH}}\sigma_{L_{\rm Bol}}}\). Hence, \(\sigma_{\lambda_{\rm Edd}}\) decreases with increasing \(\rho_{M_{\rm BH}-L_{\rm Bol}}\), which is higher for \(ML_{\rm wo/z}\) (0.42 vs 0.12). That is, the covariance term decreases the uncertainty, which explains why the compounded parameter \(\lambda_{\rm Edd}\) does not appear to be better reconstructed with \(ML_{\rm w/z}\) than with \(ML_{\rm wo/z}\).
## 6 Reconstruction of the full catalogue
In the following section, we take a closer look at the estimated AGN physical parameters for the \(\sim\)22 000 sources without spectroscopic information. Just as for the training sample, all AGN were reconstructed \(N\)=200 times with the method outlined in Sect. 3. The mean \(\mu_{\rm reco}\) and standard deviation \(\sigma_{\rm reco}\) from a gaussian fit to the posterior probability distribution are recorded for each source\({}^{9}\). \(\sigma_{\rm reco}\) encodes the regressor's ability to reconstruct AGN that lie further away from the input range, revealing differences between population types. The distribution of \(z\) for the 9944 sources in the full catalogue is overlaid on top of that of the training sample in Fig. 19: the median is \(\overline{z}=0.52\), and the range of \(z\) follows a similar trend, with slightly more AGN sources in the \(z>1\) range.
Footnote 9: In the training set, these were called \(\mu_{\rm pred}\) and \(\sigma_{\rm pred}\), see top panel of Fig. 13
### Reconstruction quality for Type 1 and Type 2 AGN
Although the ML-model was trained on a sample of Type 1 AGN only, we have reconstructed the 5362 sources in our catalogue identified as Type 2 AGN, using the classifier and criteria presented in Sect. 4. Fig. 20 presents the distributions of the reconstruction uncertainty \(\sigma_{\rm reco}\) on the \(M_{\rm BH}\) parameter for Type 1 and 2 AGN, with and without known \(z\). Sources reconstructed with \(ML_{\rm w/z}\) have smaller uncertainties, following the results from the training sample (see Sect. 5), and the SVR has a harder time estimating parameters for Type 2 AGN. The shape of the distribution indicates that the regressor is able to reconstruct \(M_{\rm BH}\) for Type 2 AGN with known \(z\) (purple and blue distributions), but is unable to do so for Type 2 AGN with unknown \(z\), as exemplified by the flat green curve, which is characteristic of reconstructed noise. For these sources, only the redshift \(z\) is reconstructed.
Considering what was already shown in Fig. 9, we know that many AGN classified as Type 2 fall into the faint end of the optical magnitude distribution (e.g., SDSS \(u\)-band mag \(>\) 22). Fig. 21 shows the uncertainty \(\sigma_{\rm reco}\) on \(z\) as a function of the \(u\)-band magnitude: the fainter the source, the further it lies outside the range of \(u\) magnitudes the ML model was trained on, and the greater the uncertainty on the reconstructed parameter. This is yet another piece of information gained by propagating the input uncertainties and reconstructing each source iteratively: the difficulty the regressor encounters when estimating points outside of its known range is translated into the spread of the posterior distribution for all output parameters.
#### 6.1.1 Control sample: reconstructing z for Type 2 AGN with known z
For the full catalogue, the only control parameter available to verify the accuracy of the regressor's reconstruction for Type 2 AGN is the redshift, which determines the successful reconstruction of the subsequent parameters down the chain. This also helps to determine the range of reasonable extrapolation as a function of the input parameters. We use the 1416 Type 2 AGN that have redshift information, and estimate \(z_{\rm reco}\) using \(ML_{\rm wo/z}\). We repeat the analysis of Sect. 5.3.1 by calculating the \(|\Delta z|\ /(1+z_{\rm true})\) distribution. We obtain an outlier rate of 7.3% and an accuracy of 8.2%, both under the 10% limit. Fig. 22 shows the accuracy for sources above a defined threshold in \(u\)-band magnitude (purple, top axis) and in \(\sigma_{\rm reco}\) (pink, bottom axis), respectively. The quality of the \(z\) reconstruction is partly driven by the optical faintness of the sample (and how far from the training range it lies), and in order to select a "purer", well-reconstructed sample, a selection based on \(\sigma_{\rm reco}\) is equivalent to one in optical brightness.
As an additional check, we also reconstruct the subsequent parameters \(L_{\rm Bol}\), \(M_{\rm BH}\) and \(\lambda_{\rm Edd}\), to further verify the soundness of extrapolating the ML regressor from Type 1 to Type 2 AGN. The blue distribution in Fig. 20 shows the posterior distribution of the black hole mass parameter for Type 2 AGN with \(z\) information, reconstructed with \(ML_{\rm wo/z}\). The blue curve follows the expected PDF for a non-noisy reconstruction, a confirmation of the merit of the ML reconstruction for this sub-sample. This, and the good reconstruction of \(z\) for Type 2 AGN (with known \(z\)), can be taken as an assurance that, within a certain range, the regressor is capable of estimating certain parameters for obscured sources.
#### 6.1.2 Removal of outliers
As a last step, we remove sources from the final sample for which the reconstructed values lie too far beyond the phase space of the ML training. We require that \(z_{\rm reco}<4\) and \(-5<\log\lambda_{\rm Edd,reco}<2\); 149 sources are removed by this selection. As already mentioned in Sect. 6.1, the 4457 Type 2 AGN without redshift \(z\) are given N/A values for \(L_{\rm X}\), \(L_{\rm Bol}\), \(M_{\rm BH}\) and \(\lambda_{\rm Edd}\), since the ML regressor is incapable of estimating these parameters. Their reconstructed redshift and associated errors are however added to the catalogue.
### Type 1 AGN and population studies
We now focus on the AGN classified as Type 1, as that population follows the training dataset more closely. Fig. 23 presents
Figure 17: Distribution of the predicted redshift \(z_{\rm pred}\) accuracy derived with the true spectroscopically measured redshift \(z_{\rm true}\). The dashed red lines represent the limit beyond which a prediction is counted as an outlier.
the bolometric luminosity versus BH mass (top panel) for the Type 1 AGN presented in this work and the SPIDERS AGN: the reconstructed sample is well bounded by the Eddington limit. The bottom panel of Fig. 23 shows the Eddington ratio as a function of redshift for the same subsamples. As the response matrices in Figs. 15 and 16 have shown, the ML-model is not very apt at reconstructing extreme cases, in the lower and upper tails of the target parameter distribution. That is, it will overestimate low values and underestimate higher ones: this is indeed visible in the bottom panel of Fig. 23, where the reconstructed samples occupy a smaller region of the log \(\lambda_{\rm Edd}\) space than the SPIDERS AGN do.
The top panel of Fig. 24 presents the AGN number source density over a wide range of redshifts for several bins in bolometric
Figure 18: Correlations between the pull values of all parameters for \(ML_{\rm w/z}\) (red) and \(ML_{\rm wo/z}\) (blue). The quality of reconstruction, represented by the mean pull value, is more correlated between the variables in \(ML_{\rm wo/z}\) than it is for \(ML_{\rm w/z}\). The vertical dashed lines in the histograms indicate the 0.16 and 0.84 quantiles of the distributions, and the numbers show the respective medians and 0.16 and 0.84 quantiles.
\begin{table}
\begin{tabular}{|c c c c c c c|} \hline Predicted parameter & Unit & \(z\) input & \(R^{2}\) & \(\mu_{\rm pull}\pm\sigma_{\rm pull}\) & \(\sigma_{\rm NMAD}\) & Contamination \\ \hline \(z\) & log & \(ML_{wo/z}\) & 0.86 & 0.003 \(\pm\) 0.150 & 7.07\% & 0.00\% \\ \hline \(L_{\rm X}\) & erg s\({}^{-1}\) (log) & \(ML_{w/z}\) & 0.97 & -0.012 \(\pm\) 0.049 & 5.81\% & 0.0\% \\ & & \(ML_{wo/z}\) & 0.83 & 0.080 \(\pm\) 0.334 & 22.5\% & 0.43\% \\ \hline \(L_{\rm Bol}\) & erg s\({}^{-1}\) (log) & \(ML_{w/z}\) & 0.96 & -0.000 \(\pm\) 0.117 & 7.71\% & 0.0\% \\ & & \(ML_{wo/z}\) & 0.83 & 0.001 \(\pm\) 0.318 & 21.5\% & 1.94\% \\ \hline \(M_{\rm BH}\) & log(\(M_{\odot}\)) & \(ML_{w/z}\) & 0.65 & -0.009 \(\pm\) 0.546 & 37.3\% & 1.49\% \\ & & \(ML_{wo/z}\) & 0.49 & -0.001 \(\pm\) 0.661 & 45.0\% & 7.53\% \\ \hline \(\lambda_{\rm Edd}\) & log & \(ML_{w/z}\) & 0.40 & 0.008 \(\pm\) 0.524 & 35.8\% & 13.4\% \\ & & \(ML_{wo/z}\) & 0.37 & 0.008 \(\pm\) 0.532 & 36.7\% & 5.96\% \\ \hline \end{tabular}
\end{table}
Table 5: Results of the best-tuned SVR estimators, \(ML_{w/z}\) and \(ML_{wo/z}\), for the performance metrics presented in Sect. 5.2.2 for all target parameters.
metric luminosity. Reconstructed Type 1 AGN are shown as full circles, and SPIDERS AGN are represented as open circles for the same luminosity bins. The same trends are observed in the spectroscopically observed and reconstructed samples: the number density of lower-luminosity AGN peaks later in cosmic time than that of more luminous ones. This effect is known as AGN downsizing (see the review by Brandt & Alexander 2015). The bottom panel of Fig. 24 shows the black hole masses of these sources, using the same binning in \(L_{\rm Bol}\). Although the tails of the distributions are not well represented in the reconstructed sample when matched to their SPIDERS AGN counterpart, importantly, not only does the scaling trend of increasing \(M_{\rm BH}\) with \(L_{\rm Bol}\) remain, but the peaks of the distributions are also coincident between the spectroscopically observed and ML-reconstructed Type 1 AGN samples.
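As a rough guide to how the number densities in the top panel of Fig. 24 can be reproduced, the following sketch uses the cosmology quoted in the figure caption. The redshift bin edges and the surveyed sky fraction are placeholders that must be taken from the actual sample selection.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # flat LambdaCDM assumed in Fig. 24

def comoving_number_density(z_sources, z_lo, z_hi, sky_fraction):
    """Sources per comoving Mpc^3 in a redshift shell covering a given sky fraction."""
    n = np.sum((z_sources >= z_lo) & (z_sources < z_hi))
    shell_volume = (cosmo.comoving_volume(z_hi) - cosmo.comoving_volume(z_lo)) * sky_fraction
    return n / shell_volume.value
```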
## 7 Summary and conclusions
We release the first photometry based estimates of bolometric luminosity \(L_{\rm Bol}\), black hole mass \(M_{\rm BH}\), and Eddington ratio \(\lambda_{\rm Edd}\) for 16 907 sources ranging over 6 dex in luminosity and up to \(z\)=2.5 in redshift. For 11 420 of these sources the redshift was previously determined spectroscopically and is used in the estimation of the remaining parameters, as well as for the verification of the redshift estimate. For 11 404 sources without a previously known redshift, the reconstructed \(z\) is provided. An uncertainty is given for all estimated parameters, thanks to a simulation based technique which incorporates measurement errors in the fit and reconstruction of the ML regressor. In addition, we have demonstrated how ML classification tools can help identify obscured AGN, a crucial challenge in the field (Hickox & Alexander 2018).
While the addition of \(\sim\)15 000 Type 1 AGN sources from this catalogue might not dramatically improve our knowledge of the luminosity function, considering that the 8000 SPIDERS AGN sources were measured with greater accuracy in the same phase space, the release of this new dataset is of particular use for multimessenger astronomy studies, where one needs to know these physical parameters for a large sample of sources while maximizing the sky coverage. AGN have been favoured to be strong
Figure 23: (_Top_) Scatter plot of \(L_{\rm Bol}\) as a function of \(M_{\rm BH}\) for the SPI-DERS AGN (blue dots), and reconstructed AGN classified as Type 1. AGN with known \(z\) measurement are shown in red dots, and those with reconstructed \(z\) are represented in green. (_Bottom_) Same three samples for the \(\lambda_{\rm Edd}\) vs \(z\) distribution.
Figure 24: (_Top_) AGN downsizing: comoving number density vs. redshift for Type 1 AGN from this work’s catalogue (full circles) and the SPI-DERS AGN catalogue (open circles) for different bins of \(L_{\rm Bol}\) in units of log(erg s\({}^{-1}\)). (_Bottom_) Distribution of \(M_{\rm BH}\) for the same bins of bolometric luminosities, for the reconstructed AGN (colored bars) and SPI-DERS AGN (colored steps). A flat \(\Lambda\)CDM cosmology with \(H_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm M}\) = 0.3, and \(\Omega_{\Lambda}\) = 0.7 is assumed to calculate the comoving number density.
cosmic ray emitters (Murase, 2022; Murase and Stecker, 2022), with the recent discovery showing the nearby obscured AGN NGC 1068 to be a steady source of neutrinos (IceCube Collaboration et al., 2022). Searches for a cumulative signal from different AGN populations, such as Abbasi et al. (2022), can help characterize which sources contribute most to the flux of neutrinos, based on their accretion parameters. By strategically targeting a subset of sources observed spectroscopically, we would be able to train similar ML algorithms and reconstruct a larger sample of photometrically measured AGN. The method can easily be expanded to other cosmic demographics (higher \(z\), for instance), provided a corresponding dataset is available to train an ML algorithm. In this work, we were limited by demanding that sources had been observed with SDSS photometry: this constrained the coverage to a quarter of the full sky. A natural next step would be to expand the optical sky coverage by cross-matching sources with the Pan-STARRS 3\(\pi\) survey (Flewelling et al. 2020), and recover most AGN identified by infrared and X-ray telescopes. Finally, eROSITA has been scanning the full sky with unprecedented sensitivity in the soft (0.2-2.3 keV) and hard (2.3-8 keV) bands (Merloni et al., 2012; Predehl et al., 2021). Incorporating this dataset will also offer new understanding of obscured AGN, since obscuring dust and gas are largely transparent to harder X-ray photons.
###### Acknowledgements.
The author wishes to thank Itrach Sadeh, Anna Franckowiak, Sjoert Van Velzen, Jannis Necker, Simone Garrapa and Jonas Sinapius for their fruitful comments and support. Timo Karg is to be thanked for helping with various technicalities.
|
2309.16000 | Universal determination of comagnetometer response to spin couplings | We propose and demonstrate a general method to calibrate the
frequency-dependent response of self-compensating noble-gas-alkali-metal
comagnetometers to arbitrary spin perturbations. This includes magnetic and
nonmagnetic perturbations like rotations and exotic spin interactions. The
method is based on a fit of the magnetic field response to an analytical model.
The frequency-dependent response of the comagnetometer to arbitrary spin
perturbations can be inferred using the fit parameters. We demonstrate the
effectiveness of this method by comparing the inferred rotation response to an
experimental measurement of the rotation response. Our results show that
experiments relying on zero-frequency calibration of the comagnetometer
response can over- or under-estimate the comagnetometer sensitivity by orders
of magnitude over a wide frequency range. Moreover, this discrepancy
accumulates over time as operational parameters tend to drift during
comagnetometer operation. The demonstrated calibration protocol enables
accurate prediction and control of comagnetometer sensitivity to, for example,
ultralight bosonic dark-matter fields coupling to electron or nuclear spins as
well as accurate monitoring and control of the relevant system parameters. | Mikhail Padniuk, Emmanuel Klinger, Grzegorz Lukasiewicz, Daniel Gavilan-Martin, Tianhao Liu, Szymon Pustelny, Derek F. Jackson Kimball, Dmitry Budker, Arne Wickenbrock | 2023-09-27T20:31:47Z | http://arxiv.org/abs/2309.16000v2 | # Universal determination of comagnetometer response to spin couplings
###### Abstract
We propose and demonstrate a general method to calibrate the frequency-dependent response of self-compensating noble-gas-alkali-metal comagnetometers to arbitrary spin perturbations. This includes magnetic and non-magnetic perturbations like rotations and exotic spin interactions. The method is based on a fit of the magnetic field response to an analytical model. The frequency-dependent response of the comagnetometer to arbitrary spin perturbations can be inferred using the fit parameters. We demonstrate the effectiveness of this method by comparing the inferred rotation response to an experimental measurement of the rotation response. Our results show that experiments relying on zero-frequency calibration of the comagnetometer response can over- or underestimate the comagnetometer sensitivity by orders of magnitude over a wide frequency range. Moreover, this discrepancy accumulates over time as operational parameters tend to drift during comagnetometer operation. The demonstrated calibration protocol enables accurate prediction and control of comagnetometer sensitivity to, for example, ultralight bosonic dark-matter fields coupling to electron or nuclear spins, as well as accurate monitoring and control of the relevant system parameters.
Footnote †: Marian Smoluchowski Institute of Physics, Jagiellonian University in Krakow, Lojasiewicza 11, 30-348, Krakow, Poland
## I Introduction
Over two decades of development, self-compensating noble-gas-alkali-metal comagnetometers have been used for fundamental physics tests [1; 2; 3; 4] and precise rotation measurements with potential applications for navigation in challenging conditions [4; 5; 6; 7; 8]. Recently, these systems have gained attention as promising tools to realize long-lived quantum memories [9; 10; 11]. In this case, the potential arises from the long coherence times of the noble gas spins and efficient access to the noble gas via the alkali species.
In a comagnetometer operating under the right working conditions, called the self-compensating regime, the nuclear magnetization adiabatically cancels transverse 1 external magnetic fields experienced by the alkali-metal spins. These conditions are achieved at the so-called compensation point, at which the externally applied longitudinal magnetic field approximately cancels the total field experienced by the electron spins of the alkali-metal atoms [the precise definition of the compensation point is provided in Eq. (2)]. It is noteworthy that the total field includes not only the external leading magnetic field, but also the effective field arising from rotations, exotic fields coupling to spins, and collisional interaction of alkali and noble-gas atoms [12].
Footnote 1: Conventionally, the field oriented along the pump beam propagation direction is called the longitudinal field and fields orthogonal to the pump beam propagation direction are called transverse fields.
In turn, at the compensation point, even though the alkali-metal spins constitute a highly sensitive zero-field magnetometer, the comagnetometer signal is insensitive to drifts and fluctuations of external transverse magnetic fields at frequencies below the noble-gas Larmor frequency.
As highly-sensitive atomic magnetometers are often limited by the stability of the magnetic environment, compensation of the slowly drifting fields makes the comagnetometers an attractive choice for applications that require continuous measurement for long time periods (e.g., hours or even days). Examples of such measurements are dark-matter searches [2; 13] and measurements of electric dipole moments (EDMs) of fundamental particles [14], as well as, as mentioned above, navigation applications [15; 16]. To perform such applications, however, the frequency response of the comagnetometers to the non-magnetic perturbations (e.g., rotations) has to be accurately known, which requires a reliable method of calibration.
In the search for exotic spin couplings, presently, comagnetometer-based experiments provide some of the most stringent limits on Lorentz invariance violation [17], spin-dependent gravitational interactions [18] and spin-dependent fifth forces [19]. Many proposed extensions of the Standard Model (SM) predict the existence of new ultralight bosons [20; 21; 22; 23], which could explain dark matter. Such ultralight bosonic dark matter could interact
with SM particles over a variety of portals [13; 24; 25], leading to oscillations of fundamental constants and nuclear and electronic EDMs, as well as torques on spins. The mass of these ultralight bosons could be anywhere between \(10^{-22}\) and \(10\,\mathrm{eV}\), which results in a large boson-mass/coupling-strength parameter space to be explored. To date, several searches of such interactions have been published and more are on the way [13; 26; 27; 28].
For gradient-coupled axion-like particle dark matter [13], self-compensating comagnetometers place the most stringent limits in the frequency range from \(0.01\) to \(10\,\mathrm{Hz}\), corresponding to a mass range between \(4\times 10^{-17}\) and \(4\times 10^{-14}\,\mathrm{eV}\)[29; 30] (overall these are the most stringent limits in any mass range). Other experiments are looking even beyond this mass range [31; 32; 33]. In order to characterize a signal due to exotic interactions or place meaningful bounds, understanding the frequency response of the system to exotic-physics-related fields is of utmost importance.
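The mass-frequency correspondence quoted above is just the Compton relation \(f=mc^{2}/h\); a two-line check (the only constant used is 1 eV/h expressed in Hz):

```python
EV_TO_HZ = 2.417989e14  # 1 eV / h in Hz

for mass_ev in (4e-17, 4e-14):
    print(f"m = {mass_ev:.0e} eV  ->  f = {mass_ev * EV_TO_HZ:.2e} Hz")
# prints ~9.7e-03 Hz and ~9.7e+00 Hz, i.e. the 0.01-10 Hz band quoted above
```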
Self-compensating comagnetometers also form the core of the upgraded advanced version of the Global Network of Optical Magnetometers for Exotic physics searches (GNOME) [34; 35; 28; 36]. GNOME is an international network of spin-state sensors [37] (e.g., atomic magnetometers [38; 39]), currently with 14 stations, looking for spatio-temporally correlated signatures of ultralight bosonic dark matter. The sought-after signals could be generated by compact composite exotic physics objects such as axion-like field domain walls [36; 40; 41; 42], axion stars [43], gravitationally bound axion halos [44; 45; 46], but also bursts of ultralight bosons emitted by cataclysmic astrophysical events (such as binary black hole mergers [47]) and stochastic fluctuations of the ultralight fields [33; 48]. The upgrade in the network is driven by the improved sensitivity of self-compensating comagnetometers to nuclear spin couplings, which is three (proton) and six (neutron) orders of magnitude better than that of previously used alkali-vapor-only magnetometers. Here the frequency response is crucial to understand how time-dependent signals measured with the comagnetometer differ from the predicted transient exotic field signals.
Previously, we studied the frequency response of self-compensating comagnetometers numerically [49]. However, a direct measurement of the frequency response to exotic fields is so far impossible, since the fields have not been observed.
In this work, we propose and demonstrate a calibration method to infer the frequency response of a self-compensating comagnetometer to general spin perturbations. The proposed calibration method involves an easy-to-implement protocol to measure the magnetic field frequency response, fitting it with an analytical model, and subsequently using the fit parameters to deduce the response to non-magnetic couplings. To validate the method, we built a self-compensating comagnetometer on a rotation stage and directly measured the frequency response to rotations. The experimentally measured results show excellent agreement with the model. Additionally, we discuss how the frequency response changes as a function of the applied leading magnetic field. We experimentally show how errors in the assumed field dramatically affect the interpretation of measurement results. In general, the method enables comagnetometer-based searches for new physics and accurate rotation sensing over a broad frequency range.
The paper is structured as follows: first we briefly lay out the theoretical framework, relate the magnetic frequency response to the frequency response of other perturbations and explain how to measure the magnetic frequency response (Sec. II). Then we describe the experimental setup and procedure (Sec. III) before presenting and discussing the obtained results (Sec. IV). Finally, we conclude in Sec. V.
## II Theory of comagnetometer frequency response
The dynamics of two overlapping spin ensembles have been actively studied since 2002 [50]. Although theoretical and experimental considerations regarding the frequency response of the self-compensating comagnetometer system to a magnetic field have been published in multiple references [51; 52; 53], the theoretical considerations of the frequency response to exotic spin couplings have been published in Ref. [54; 55].
Here, we review the main results from Ref. [49] and its supplemental material and utilize them to construct the frequency response to rotations and exotic fields based on the magnetic field frequency response.
The coupled evolution of the alkali-metal polarization \(\mathbf{P^{e}}\) and noble-gas polarization \(\mathbf{P^{n}}\) can be described with a coupled system of two Bloch equations (also known as Bloch-Hasegawa equations) [56; 57]
\[\begin{split}\frac{d\mathbf{P^{e}}}{dt}=&\frac{1}{q} \gamma_{e}(\mathbf{B}+\alpha_{e}\mathbf{b}+\lambda M^{n}\mathbf{P^{n}}) \times\mathbf{P^{e}}+\frac{1}{q}[R_{\mathrm{se}}^{en}(\mathbf{P^{n}}-\mathbf{ P^{e}})+(\mathbf{P^{e}_{0}}-\mathbf{P^{e}})R^{e}],\\ \frac{d\mathbf{P^{n}}}{dt}=&\gamma_{n}(\mathbf{B}+ \alpha_{n}\mathbf{b}+\lambda M^{e}\mathbf{P^{e}})\times\mathbf{P^{n}}+R_{ \mathrm{se}}^{ne}(\mathbf{P^{e}}-\mathbf{P^{n}})-R^{n}\mathbf{P^{n}}.\end{split} \tag{1}\]
Here, \(\gamma_{e}\) and \(\gamma_{n}\) are the gyromagnetic ratios of the free electron spin and the noble-gas nuclear spin, respectively. The slowing-down factor of the alkali spins \(q\), reduces the electronic gyromagnetic ratio \(\gamma_{e}\) due to the hyperfine
interaction and redistribution of atoms over the hyperfine levels due to spin-exchange collisions [58]. The collisional coupling constant \(\lambda=2\kappa_{0}\mu_{0}/3\) is characteristic of a given mixture of noble gas and alkali-metal vapor and is defined with vacuum permeability \(\mu_{0}\) and spin-exchange enhancement factor \(\kappa_{0}\). The latter results from the overlap of the alkali electron wave function and the nucleus of the noble gas [59]. \(M^{e}\) represents the electron magnetization, while \(M^{n}\) represents the nuclear magnetization of the fully polarized spin species. The rates of polarization transfer from the electronic to the nuclear species (and vice versa) by spin-exchange collisions are denoted by \(R_{se}^{en}\) (\(R_{se}^{ne}\)). \(R^{e}\) represents the relaxation rate of the alkali-metal polarization due to all relaxation processes, including those related to the pump light. The equilibrium electronic polarization, \(\mathbf{P_{0}^{e}}\), results from optical pumping by the pump light. \(R^{n}\) is the relaxation rate of the nuclear polarization of the noble gas. Generic (i.e., magnetic or nonmagnetic) external perturbations are introduced by \(\mathbf{B}\) and \(\mathbf{b}\). The constant external magnetic field \(\mathbf{B}\) sets the operation conditions (i.e., the self-compensation mode). The vector \(\mathbf{b}\) represents possible perturbations with coupling constants \(\alpha_{e}\) and \(\alpha_{n}\), where the subscripts denote coupling to electron and nuclear polarization, respectively. The coupling constants \(\alpha_{e}\) and \(\alpha_{n}\) differ for magnetic fields, rotations, and exotic couplings and are given in Table 1.
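For orientation, a minimal numerical sketch of Eq. (1) is given below. It integrates the two coupled Bloch equations with scipy; every numerical value is a placeholder chosen only to make the script run, not a parameter of the experiment described later.

```python
import numpy as np
from scipy.integrate import solve_ivp

def bloch_rhs(t, y, p):
    """Right-hand side of Eq. (1); p is a dictionary of model parameters."""
    Pe, Pn = y[:3], y[3:]
    Be = p["B"] + p["alpha_e"] * p["b"](t) + p["lam"] * p["Mn"] * Pn   # field seen by the alkali spins
    Bn = p["B"] + p["alpha_n"] * p["b"](t) + p["lam"] * p["Me"] * Pe   # field seen by the noble-gas spins
    dPe = (p["gam_e"] * np.cross(Be, Pe)
           + p["Rse_en"] * (Pn - Pe) + p["Re"] * (p["Pe0"] - Pe)) / p["q"]
    dPn = (p["gam_n"] * np.cross(Bn, Pn)
           + p["Rse_ne"] * (Pe - Pn) - p["Rn"] * Pn)
    return np.concatenate([dPe, dPn])

# Placeholder parameters (SI units); b(t) is an 80 pT step along y switched off at t = 0.1 s.
params = dict(q=5.0, gam_e=1.761e11, gam_n=2.038e8, lam=1.0e-9, Me=1e-7, Mn=1e-7,
              Re=200.0, Rn=0.05, Rse_en=10.0, Rse_ne=1e-4,
              alpha_e=1.0, alpha_n=1.0,
              B=np.array([0.0, 0.0, -120e-9]),
              b=lambda t: np.array([0.0, 80e-12, 0.0]) * (t < 0.1),
              Pe0=np.array([0.0, 0.0, 0.5]))
y0 = np.concatenate([params["Pe0"], [0.0, 0.0, 0.3]])
sol = solve_ivp(bloch_rhs, (0.0, 0.2), y0, args=(params,), max_step=1e-5, rtol=1e-8)
# sol.y[0] and sol.y[1] are the transverse alkali-polarization components that the probe reads out.
```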
We are interested in the frequency response of the system to generic spin perturbations and an experimental way to test this. For the first task, we derive an analytical expression for the frequency response, assuming the system operates near the self-compensating point. The chosen value of \(\mathbf{B}\) is determined by the equilibrium electronic spin polarization along the pump-beam axis, i.e., in the longitudinal direction. Since the measured signal is determined by the transverse polarization, we restrict our analysis to the response of the system to transverse fields.
The magnetometer is tuned to the self-compensating regime by setting the constant external magnetic field \(\mathbf{B}\) to the compensation point \(\mathbf{B_{c}}\)[57]
\[\mathbf{B_{c}}=-\lambda(M^{n}\mathbf{P_{0}^{n}}+M^{e}\mathbf{P_{0}^{e}})\,, \tag{2}\]
where \(\mathbf{P_{0}^{n}}=R_{se}^{ne}\mathbf{P_{0}^{e}}/(R^{n}+R_{se}^{ne})\) is the equilibrium nuclear polarization. We are interested in the operation of the system around the compensation point and introduce the field difference (detuning) \(\mathbf{\Delta_{B}}\) relative to the compensation point
\[\mathbf{\Delta_{B}}=\mathbf{B_{c}}-\mathbf{B}\,. \tag{3}\]
Hereafter, we replace the vector quantity \(\mathbf{\Delta_{B}}\) by its z-component \(\Delta_{B_{z}}\), assuming that the other components of the field are zero. Furthermore, we separate \(\mathbf{P_{0}^{n}}\) and \(\mathbf{P_{0}^{e}}\) into longitudinal (\(P_{\parallel}^{n}\), \(P_{\parallel}^{e}\)) and transverse (\(P_{\perp}^{n}\), \(P_{\perp}^{e}\)) components. Close to the compensation point, the effective magnetic field experienced by each spin species from the polarization of the other species and the applied compensating fields are much larger (\(\sim\) nT) than the considered external spin perturbations (\(\sim\) fT-pT). We utilize the small-angle approximation, assuming the longitudinal polarizations to be constant and equal to their equilibrium values. This allows us to linearize the coupled Bloch equations by separating the constant longitudinal part and the time-varying transverse part. This approximation, along with Eq. (2), results in the following form of the Bloch equations
\[\begin{split}\frac{dP_{\perp}^{e}}{dt}&=-i\frac{ \gamma_{e}}{q}\bigg{[}\big{(}\alpha_{e}b_{\perp}+\lambda M^{n}P_{\perp}^{n} \big{)}P_{\parallel}^{e}-\big{(}\Delta_{B_{z}}+\lambda M^{e}P_{\parallel}^{e }\big{)}P_{\perp}^{e}\bigg{]}-\frac{R_{e}}{q}P_{\perp}^{e}\\ \frac{dP_{\perp}^{n}}{dt}&=-i\gamma_{n}\bigg{[} \big{(}\alpha_{n}b_{\perp}+\lambda M^{e}P_{\perp}^{e}\big{)}P_{\parallel}^{n }-\big{(}\Delta_{B_{z}}+\lambda M^{n}P_{\parallel}^{n}\big{)}P_{\perp}^{n} \bigg{]}-R_{n}P_{\perp}^{n}\end{split}, \tag{4}\]
where all vector quantities are separated into the real longitudinal (\(\parallel\)) and complex transverse (\(\perp\)) parts in a sim
\begin{table}
\begin{tabular}{l c c c} Coupling type & \(\mathbf{b}\) & \(\alpha_{e}\) & \(\alpha_{n}\) \\ \hline Magnetic field & \(\mathbf{B_{ext}}\) & 1 & 1 \\ Rotation & \(\frac{\mathbf{\Omega}}{\gamma_{n}}\) & \(q\frac{\gamma_{n}}{\gamma_{e}}\) & 1 \\ Exotic neutron coupling & \(\frac{h\sigma_{\text{neut}}^{n}\chi_{\text{neut}}}{\gamma_{n}}\mathbf{\Xi_{\text{neut}}}\) & \((q-1)\frac{\gamma_{n}}{\gamma_{e}}\frac{\sigma_{\text{neut}}^{e}}{\sigma_{\text{neut}}^{n}}\) & 1 \\ Exotic proton coupling & \(\frac{h\sigma_{\text{p}}^{n}\chi_{p}}{\gamma_{n}}\mathbf{\Xi_{\text{p}}}\) & \((q-1)\frac{\gamma_{n}}{\gamma_{e}}\frac{\sigma_{\text{p}}^{e}}{\sigma_{\text{p}}^{n}}\) & 1 \\ Exotic electron coupling & \(\frac{\chi_{e}}{\gamma_{n}}\mathbf{\Xi_{e}}\) & \(q\frac{\gamma_{n}}{\gamma_{e}}\) & 0 \\ \end{tabular}
\end{table}
Table 1: Coupling constants \(\alpha_{e}\) and \(\alpha_{n}\) used in Eq. (1) to parameterize spin couplings of different origin. The notation used in the table is the following: \(\mathbf{B_{ext}}\) corresponds to an additional external magnetic field transverse to external magnetic field \(\mathbf{B}\), \(\mathbf{\Omega}\) represents the angular velocity vector of a mechanical rotation of the setup, \(\mathbf{\Xi_{\text{i}}}\) denotes an exotic perturbation and \(\chi_{i}\) characterizes the coupling strength with the subscript \(i\) being “\(neu\)” for neutrons, \(p\) for protons, and \(e\) for electrons. Exotic couplings to nucleons also affect the electronic polarisation via the hyperfine interaction. This is considered with constants \(\sigma_{\text{neut}}^{e}\) and \(\sigma_{p}^{e}\), for exotic neutron and proton couplings, respectively. \(\sigma_{\text{neut}}^{n}(\sigma_{p}^{n})\) is the neutron (proton) fraction of the noble-gas nucleon spin, and \(\sigma_{\text{neut}}^{e}(\sigma_{p}^{e})\) is the neutron (proton) fraction of the nuclear spin of the alkali metal [60].
ilar manner to polarisation 2. The total relaxation rates \(R_{e}\) and \(R_{n}\) take into account all relaxation processes, including polarization transfer due to spin exchange.
Footnote 2: For any vector \(\mathbf{x}\) in the Cartesian coordinate system, \(x_{\parallel}=x_{z}\) and \(x_{\perp}=x_{x}+ix_{y}\), where we assume that the polarization axis is co-linear with \(\mathbf{z}\).
In this work, we are interested in the frequency response of the comagnetometer to a generic spin perturbation. This can be obtained by solving the Bloch equations (4) with oscillating perturbation of the amplitude \(b_{0}\) at frequency \(\omega\)
\[b=ib_{0}\sin(\omega t)\,. \tag{5}\]
From the analysis of Eq. (4), one obtains that for such a perturbation the transverse polarization oscillates at the same frequency and hence can be written as
\[P_{\perp}^{e}(t)=\frac{1}{2}\bigg{(}P_{\perp}^{e+}(\omega)e^{i\omega t}-P_{ \perp}^{e-}(\omega)e^{-i\omega t}\bigg{)}\,, \tag{6}\]
where the amplitudes of the transverse electronic polarizations \(P_{\perp}^{e\pm}\) are given by
\[P_{\perp}^{e\pm}(\omega)=-\frac{\gamma_{e}P_{\parallel}^{e}b_{0}}{q}\frac{\omega_{n}(\alpha_{e}-\alpha_{n})+(\pm\omega+\gamma_{n}\Delta_{B_{z}}-iR_{n})\alpha_{e}}{(\pm\omega+\omega_{e}+\gamma_{e}\Delta_{B_{z}}/q-iR_{e}/q)(\pm\omega+\omega_{n}+\gamma_{n}\Delta_{B_{z}}-iR_{n})-\omega_{e}\omega_{n}}\,. \tag{7}\]
In Eq. (7), \(\omega_{e}=\gamma_{e}\lambda M^{e}P_{\parallel}^{e}/q\) and \(\omega_{n}=\gamma_{n}\lambda M^{n}P_{\parallel}^{n}\) are the Larmor frequencies of the electron and nuclear polarizations at the compensation point. Dividing the transverse electronic polarization amplitudes by the applied perturbation amplitude results in the frequency response:
\[\mathcal{F}(\omega)=\frac{P_{\perp}^{e}(\omega)}{b_{0}}\,. \tag{8}\]
This gives the frequency response to the perturbation at a single frequency \(\omega\). A procedure for complete determination of \(\mathcal{F}(\omega)\) at all frequencies via measurement and fitting of a comagnetometer's response to controlled transverse magnetic field perturbations is the key result of this work.
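A direct transcription of Eq. (7) into code is sometimes convenient, for example for cross-checking fits. In the sketch below the prefactor \(\gamma_{e}P_{\parallel}^{e}/q\) is lumped into a single amplitude \(a\), as in the fit model introduced below, and the default parameter values are illustrative only.

```python
import numpy as np

def response_pm(omega, sign, a, omega_e, omega_n, Re, Rn, dBz,
                alpha_e=1.0, alpha_n=1.0, gam_e=1.761e11, gam_n=2.038e8, q=5.0):
    """Eq. (7) divided by b_0, with the prefactor gamma_e * P_par^e / q absorbed into a."""
    w = sign * omega
    num = omega_n * (alpha_e - alpha_n) + (w + gam_n * dBz - 1j * Rn) * alpha_e
    den = ((w + omega_e + gam_e * dBz / q - 1j * Re / q)
           * (w + omega_n + gam_n * dBz - 1j * Rn) - omega_e * omega_n)
    return -a * num / den
```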
### Measurement of the frequency response with a pulse
The response of the self-compensating comagnetometer is linear in small external perturbations with respect to the compensating field; therefore, to obtain the spectrum of the response \(P_{x}^{e}(\omega)\), the frequency response of the system \(\mathcal{F}(\omega)\) is multiplied by the specific perturbation spectrum \(b(\omega)\):
\[P_{x}^{e}(\omega)=\sqrt{2\pi}\mathcal{F}(\omega)b(\omega)\,. \tag{9}\]
Therefore, the frequency response allows a quantitative calibration of the system and its parameters and can be measured with a known perturbation.
One way to measure the frequency response is to excite all possible frequencies by applying a step change of the considered perturbation (magnetic field or rotation). A step change in the time domain is described by the Heaviside theta function \(\Theta(t)\) and step amplitude \(b_{0}\)
\[b_{test}(t)=b_{0}\Theta(-t)\,, \tag{10}\]
which has the following spectrum
\[b_{test}(\omega)=\mathfrak{F}[b_{test}(t)]=\frac{1}{\sqrt{2\pi}}\frac{b_{0}}{i \omega}+b_{0}\sqrt{\frac{\pi}{2}}\delta(\omega). \tag{11}\]
Applying this spectrally wide perturbation allows us to determine the complete frequency response in a single measurement. The spectral amplitude of the Heaviside theta function changes with frequency, which has to be taken into account to get to the frequency response. We do this by performing a numerical time derivative of the response data. In Fourier space, this is equivalent to a multiplication with \(\omega\) and appears simpler. Measuring the response to a perturbation, performing a numerical time derivative of the data and applying a Fourier transform to the result yields the frequency response of the comagnetometer projected onto the measurement axis:
\[\mathfrak{F}\bigg{[}\frac{dP_{x}^{e}(t)}{dt}\bigg{]}=i\omega P_{x}^{e}(\omega) =b_{0}\mathcal{F}(\omega)\,, \tag{12}\]
where we took into account Eqs. (9), (10) and (11). Hence, the frequency response of the system can be determined by calculating the Fourier transform of the (numerical) time derivative of the data obtained from the system response to a step change in the amplitude of the considered perturbation
\[\mathcal{F}(\omega)=\frac{1}{b_{0}}\mathfrak{F}\bigg{[}\frac{dP_{x}^{e}(t)}{ dt}\bigg{]}\,. \tag{13}\]
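In practice Eq. (13) amounts to a numerical derivative followed by a discrete Fourier transform. A minimal sketch is given below; the overall normalization depends on the chosen FT convention, and the lock-in transfer function still has to be divided out, as described in Sec. IV.

```python
import numpy as np

def frequency_response_from_step(t, P_x, b0):
    """Eq. (13): F(omega) ~ FT[dP_x/dt] / b0 for a step of amplitude b0.

    t   : uniformly sampled time axis (s)
    P_x : measured comagnetometer signal (e.g. lock-in output in V)
    b0  : amplitude of the step perturbation (effective T)
    """
    dt = t[1] - t[0]
    dP = np.gradient(P_x, dt)                 # numerical time derivative
    F = np.fft.rfft(dP) * dt / b0             # discrete approximation of the Fourier integral
    freq = np.fft.rfftfreq(len(P_x), dt)      # frequencies in Hz
    return freq, F
```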
### From magnetic-field to the total response of the comagnetometer
Equation (7) shows that the response to magnetic and non-magnetic perturbations is described with the same
set of parameters. When these parameters are known, the system response to perturbations of arbitrary nature and directions can be constructed. Furthermore, the parameters are accessible by measuring any specific (e.g., magnetic) frequency response and fitting it with the appropriate model. This way, the operating regime and the generic frequency response can be determined solely by measuring the magnetic frequency response.
The magnetic field response [Eq. (13)] is fitted with a model obtained from Eqs. (6) and (7). Identifying the real and imaginary parts of the response with the in- and out-of-phase components allows one to define a complex signal,
\[\mathcal{F}^{m}=(\mathcal{F}^{m}_{in}+i\mathcal{F}^{m}_{out})e^{i\theta}=i( \mathcal{F}^{m}_{+}-\mathcal{F}^{m*}_{-})e^{i\theta}\,, \tag{14}\]
where the star operator denotes the complex conjugate and \(\theta\) is a fitting parameter taking into account possible phase shifts. The fitting model obtained from Eq. (7) for the response to magnetic fields gives the following relation for the components:
\[\mathcal{F}^{m}_{\pm}(\omega)=-a\frac{\pm\omega+\gamma_{n}\Delta_{B_{z}}-i|R_{n}|}{(\pm\omega+\omega_{e}+\Delta_{B_{z}}\gamma_{e}/q-i|R_{e}|)(\pm\omega+\omega_{n}+\gamma_{n}\Delta_{B_{z}}-i|R_{n}|)-\omega_{e}\omega_{n}}\,, \tag{15}\]
where \(a\), \(\omega_{e}\), \(\omega_{n}\), \(R_{e}\), \(R_{n}\), \(\Delta_{B_{z}}\) and the phase \(\theta\) are fitting parameters, while the remaining quantities are predefined. The predefined parameters are: \(\gamma_{n}\) - the gyromagnetic ratio of the noble-gas nuclear spin, \(\gamma_{e}\) - the electron gyromagnetic ratio, and \(q\) - the slowing-down factor, approximated by a constant estimated prior to the calibration. The parameters used for fitting the magnetic response can then be used to construct the response to any other perturbation along both transverse axes according to the following model
\[\mathcal{F}^{r}_{pr}(\omega) =(\mathcal{F}^{r}_{+}-\mathcal{F}^{r*}_{-})e^{i(\theta+\pi/2)}, \tag{16}\] \[\mathcal{F}^{r}_{sec}(\omega) =(\mathcal{F}^{r}_{+}+\mathcal{F}^{r*}_{-})e^{i(\theta+\pi)}, \tag{17}\]
where \(\mathcal{F}^{r}_{pr}(\omega)\) is the response to fields applied along the primary sensitivity axis, i.e., parallel to the propagation direction of the probe beam, and \(\mathcal{F}^{r}_{sec}\) is the response to fields along the secondary sensitivity axis, i.e., orthogonal to pump and probe beam. \(\mathcal{F}^{r}_{\pm}\) has the following form obtained from Eq. (7)
\[\mathcal{F}^{r}_{\pm}=-a\frac{\omega_{n}(\alpha_{e}-\alpha_{n})+(\pm\omega+\gamma_{n}\Delta_{B_{z}}-i|R_{n}|)\alpha_{e}}{(\pm\omega+\omega_{e}+\Delta_{B_{z}}\gamma_{e}/q-i|R_{e}|)(\pm\omega+\omega_{n}+\gamma_{n}\Delta_{B_{z}}-i|R_{n}|)-\omega_{e}\omega_{n}}\,, \tag{18}\]
where the coupling-strength parameters \(\alpha_{e}\) and \(\alpha_{n}\) are set according to Table 1.
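To make the fitting step concrete, a self-contained sketch of Eqs. (14)-(18) is given below. Parameter names follow the text, `omega` and `F_meas` stand for the measured angular frequencies and complex magnetic response, and the starting values of the fit are left to the user; the constants are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

GAM_E, GAM_N, Q = 1.761e11, 2.038e8, 5.0        # predefined constants; q is an assumed value

def F_pm(w, a, we, wn, Re, Rn, dBz, alpha_e, alpha_n):
    """Common building block of Eqs. (15) and (18)."""
    num = wn * (alpha_e - alpha_n) + (w + GAM_N * dBz - 1j * abs(Rn)) * alpha_e
    den = ((w + we + GAM_E * dBz / Q - 1j * abs(Re))
           * (w + wn + GAM_N * dBz - 1j * abs(Rn)) - we * wn)
    return -a * num / den

def magnetic_model(omega, a, we, wn, Re, Rn, dBz, theta):
    """Eqs. (14)-(15): complex magnetic response (alpha_e = alpha_n = 1)."""
    Fp = F_pm(+omega, a, we, wn, Re, Rn, dBz, 1.0, 1.0)
    Fm = F_pm(-omega, a, we, wn, Re, Rn, dBz, 1.0, 1.0)
    return 1j * (Fp - np.conj(Fm)) * np.exp(1j * theta)

def residuals(p, omega, data):
    diff = magnetic_model(omega, *p) - data
    return np.concatenate([diff.real, diff.imag])

def rotation_model_primary(omega, a, we, wn, Re, Rn, dBz, theta):
    """Eqs. (16)+(18) for the primary axis, with alpha_e = q*gamma_n/gamma_e and alpha_n = 1."""
    ae = Q * GAM_N / GAM_E
    Fp = F_pm(+omega, a, we, wn, Re, Rn, dBz, ae, 1.0)
    Fm = F_pm(-omega, a, we, wn, Re, Rn, dBz, ae, 1.0)
    return (Fp - np.conj(Fm)) * np.exp(1j * (theta + np.pi / 2))

# fit = least_squares(residuals, p0, args=(omega, F_meas))   # p0 = [a, we, wn, Re, Rn, dBz, theta]
# F_rot = rotation_model_primary(omega, *fit.x)              # inferred rotation response
```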
## III Experimental setup
The experimental setup is shown in Fig. 1 and is based on the setup reported in Ref. [61]. Briefly, a 20 mm diameter spherical vapor cell filled with 3 amg of \({}^{3}\)He and 50 torr of N\({}_{2}\) is loaded with a drop of an alkali-metal mixture with 1% \({}^{87}\)Rb and 99% natural-abundance K (molar fractions). The cell is placed in an oven and heated to 185\({}^{\circ}\)C with an AC resistive heater. The oven assembly is mounted in a Twinleaf MS-1LF magnetic shield. The Rb atoms are optically pumped with about 70 mW (intensity of about 16 mW/cm\({}^{2}\)) of circularly polarized light tuned to the center of the rubidium D\({}_{1}\) line. Potassium (and helium) atoms are then polarized by spin-exchange collisions with the Rb atoms. The hybrid pumping technique reduces inhomogeneities in the K polarization and improves the efficiency of spin-exchange pumping of the noble-gas atoms [62; 63].
The comagnetometer readout is realized by monitoring the polarization rotation of a linearly polarized probe beam detuned about 0.5\(\,\)\(\mathrm{nm}\) towards the shorter wavelength of the potassium D\({}_{1}\) line. Low-noise detection of the response to perturbations along the \(y\)-axis is achieved using lock-in detection and parametric modulation of the \(B_{z}\) field with a sine wave (800\(\,\)Hz, 35\(\,\)\(\mathrm{nT}\) peak-to-peak) [6; 64]. The polarization-rotation signal is demodulated with a lock-in amplifier (Zurich Instruments HF2LI) using the first and second harmonics of the modulation frequency. The first-harmonic signal features a linear relationship to the \(y\)-component of potassium polarization while demodulation at the second harmonic corresponds to measurements of potassium polarization along the \(x\)-axis, see, e.g., Appendix A in Ref. [61].
The comagnetometer is operated around the self-compensation point with an equilibrium compensation field of about 120\(\,\)\(\mathrm{nT}\), achieved after optimization of the
nuclear polarization as discussed in Ref. [61]. At this level of the compensation field, the decay rate of nuclear spins reached 20 s\({}^{-1}\).
Our comagnetometer setup can be rotated about the \(y\)-axis to controllably generate non-magnetic spin perturbations experimentally. To do so, the shield with the comagnetometer is mounted on a breadboard attached to a slewing bearing (iglidur(r)PRT-01). As shown in Fig. 1, the system is rotated by actuating a 70.5(1)-cm-long arm with a linear translation stage (Thorlabs MTS50-Z8). Because in the experiment the laser beams are transmitted through fibers and the expanders are attached to the shield, the alignment does not change during rotation (see the inset in Fig. 1). With this configuration, the applied rotation rate can be approximated by \(\Omega_{y}(t)\approx v(t)/L\), where \(v(t)\) is the velocity of the translation stage and \(L\) is the arm length. This is valid as long as the rotation angle (hence the translation-stage travel) is small enough. As the travel range of our translation stage is limited to 50 mm, the maximum angle applied to the system is about 64.61(9) mrad. With a maximum velocity of 2.4(2) mm/s, the highest achievable rotation rate is 3.4(3) mrad/s.
## IV Results and discussion
The frequency response to magnetic field and rotation was determined based on the response of the comagnetometer to a step perturbation, using the procedure described in Sec. II.1. For magnetic fields, the response was obtained by applying an 80-pT square pulse along the \(y\)-axis. The pulse duration was chosen to be sufficiently long (4 s) to reach the steady state regime before applying the next field value \(B_{y}\). The response to the falling edge of the pulse was utilized to determine the frequency response. The same routine was followed to determine the rotation response. However, in this case, the square pulse consisted of three steps: angular acceleration of the rotation stage to a constant rotation rate, sufficiently long (4 s) constant rotation to reach a steady state, followed by a sudden stop of the motion. The motor driving the rotation stage was accelerated to a speed of 2.0(2) mm/s, corresponding to a rotation rate of \(\Omega_{y}=2.8(3)\) mrad/s, which translates to \(\approx 14(1)\) pT of effective pseudomagnetic field for the noble-gas spins.
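The conversion from stage velocity to rotation rate and effective field is a one-liner; the numbers below reproduce the quoted \(\approx 14\) pT, using the arm length from the setup description and the \({}^{3}\)He gyromagnetic ratio.

```python
GAM_HE3 = 2.038e8            # 3He gyromagnetic ratio (magnitude) in rad s^-1 T^-1
L_ARM = 0.705                # lever-arm length in m

v = 2.0e-3                   # translation-stage speed in m/s
omega_y = v / L_ARM          # small-angle rotation rate in rad/s (~2.8 mrad/s)
b_eff = omega_y / GAM_HE3    # effective pseudomagnetic field for the noble-gas spins
print(f"Omega_y = {omega_y*1e3:.2f} mrad/s -> b_eff = {b_eff*1e12:.1f} pT")   # ~14 pT
```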
A single dataset contains the response to magnetic and rotation pulses. In total 71 datasets were collected for different magnetic detunings from the compensation point [Eq. (3)], ranging from \(-10\) to \(15\) nT. A graphical summary of the measurement sequence is presented with a flow chart in Fig. 2(b). Figure 2(a) shows an example time series of the response to magnetic and rotation perturbations for three different leading field conditions: below (upper plot), close to (middle plot), and above the compensation field (lower plot). In the results presented, one can note the characteristic spin dynamics for the self-compensating regime shown in the middle plot of Fig. 2(a). It manifests as a strong damping of the magnetic field response, along with the highest response to low-frequency rotation. This can be contrasted with the dynamics away from the compensation point as shown in the upper and lower plots of Fig. 2(a).
After the step perturbation, the time series data were numerically differentiated, Fourier transformed, and divided by the (known) transfer function of the lock-in amplifier to obtain the frequency response. The unit of the frequency response is given in volts per tesla. Tesla refers here to the unit of the effective magnetic field and is used for all perturbations listed in Table 1. The magnetic frequency response obtained for each tested value of the leading field was then fitted with the magnetic-field response model [Eqs. (14) and (15)].
The fitting results, along with the experimental data obtained for the time series shown in Fig. 2, are presented in the first and second columns of Fig. 3. The fitting routine, based on complex functions, accurately captures both the phase and amplitude responses of the system, as can be seen in the plots shown in the first column of Fig. 3. The second column shows the fitted amplitude response in a log-log plot to illustrate the good agreement over the full range of frequencies.
The third column in Fig. 3 presents the directly mea
Figure 1: Sketch of the experimental setup. A vapor cell containing \({}^{3}\)He, N\({}_{2}\), \({}^{87}\)Rb and K is situated inside a Twinleaf MS-1LF magnetic shield. The Rb atoms are optically pumped with circularly polarized light from a Toptica TA Pro laser on-resonant to the D1 transition. K and He spins are pumped by spin-exchange collisions with Rb. The readout is realized by measuring the polarization rotation of a probe beam detuned from the K D\({}_{1}\) line with a photodiode. The leading field is sinusoidally modulated at 800 Hz to up-convert the magnetic signal and suppress low-frequency noise. The polarization signal is then demodulated with a lock-in amplifier. The inset shows the directions and polarizations of the laser beams, magnetic field modulations and the directions of the generated electronic (\(M^{e}\)) and nuclear magnetization (\(M^{n}\)).
sured frequency response to rotations and compares the data with the reconstructed response based on the fitting parameters derived from the measured magnetic response and Eqs. (16) and (18). The rotation response reconstructed from the magnetic field response agrees well with the experimental data in the range from DC to 15 Hz. For higher frequencies, the experimental spectrum is masked by a series of resonances before being dominated by the noise floor of the instrument. We verified that these resonances are related to mechanical vibration modes of the magnetic shield assembly triggered by a sudden stop of rotation. The complete set of the results for magnetic field and rotation responses is presented as color maps in Fig. 4. As shown, the reconstructed rotation response agrees qualitatively and quantitatively with all visible features of the rotation response measurement up to around 15 Hz for all magnetic detunings from the compensation point. The observation that the measurement of the magnetic frequency response enables us to infer the frequency response to all other spin perturbations is the key result of this work. The protocol for this method is summarized in the flowchart in Fig. 2. Apart from rotation sensing, this also plays an important role in determining the sensitivity of dark-matter experiments and has, to the best of our knowledge, not been fully implemented in any experiment so far. As an example of the generality of this method, we provide the inferred response of the measured comagnetometer system to exotic spin couplings of neutrons, protons, and electrons in Fig. 4.
The presented color maps provide insights into the dynamics of the coupled evolution of noble gas and alkali-metal polarizations.
For large detunings from the compensation point, the Larmor resonances of both electron and nuclear polarizations are well resolved and their center frequencies depend linearly on the applied magnetic field. However, near the compensation point, the strong interaction between the polarizations merges both resonances into a hybrid response. In particular, one can note the canonical
Figure 2: (a) Time series of the comagnetometer response to step changes in rotation and magnetic field under different detunings from the compensation point: below the compensation point (top), at the compensation point (middle), and above the compensation point (bottom). (b) A flowchart illustrating the experimental procedure used to study the response of the comagnetometer to magnetic fields and rotations, for different detunings from the compensation point. Note that the acquisition of the magnetic and rotation responses is separated (in time) by about 8 s. The gray-shaded area highlights the necessary steps of the calibration procedure proposed and described in this work, see text.
self-compensating mechanism: around the compensation point, the response to low-frequency magnetic perturbations is minimal while the amplitude of the response to non-magnetic couplings is maximal. It is of interest to note that the hybrid electron-nuclear resonance occurs at a magnetic field not precisely at the compensation point but rather at \(\Delta_{B_{z}}\approx-2\) nT, where the effective magnetic field experienced by the electron polarization crosses zero.
The predicted response for exotic proton coupling (third row, middle plot) is similar to the response to rotations [Fig. 4 (second column, row two and three)]. This similarity arises due to the similar relative coupling strength between the alkali-metal and noble-gas polarizations for both interactions (see Table 1). The bottom right plot of Fig. 4 shows the ratio between the proton coupling and the rotation response. The ratio differs from unity only at high frequency and large detunings. For the chosen alkali species, the exotic coupling to neutrons affects only the noble-gas spins. This results in
Figure 3: Frequency response to magnetic and rotation perturbation for three different values of the magnetic field detuning \(\Delta_{B_{z}}\). The \(y\) axis shows the comagnetometer response in voltage (in mV) normalized to the applied perturbation (in pT). The first and third rows are below and above the compensation point, respectively, while the middle row is very close to the compensation point. The magnetic frequency response is fitted for each magnetic detuning. The resulting parameters are used to predict the response to the rotation frequency [Eq. (18)], which is then compared to the measured rotation frequency response, shown in the third column. The prediction shows excellent agreement up to about \(15\,\mathrm{Hz}\), beyond which acoustic resonances of the setup and the noise floor dominate the spectrum. The dashed lines illustrate the conventional constant frequency response estimation, see for example [51].
a difference in the response of these two couplings visible at high frequencies. The bottom left plot in Fig. 4 shows the inferred frequency-dependent response of the comagnetometer to oscillating exotic couplings to electrons' polarization. Clearly visible are the Larmor resonances of the electron spin for large absolute detunings, the broad resonance width, indicating the strong damping of the electron polarization, the nuclear resonances that become visible due to being driven by the modulated electron polarization, and the avoided crossing between electron and nuclear polarization close to zero detuning. These features are similar to those observed in the mag
Figure 4: Directly measured frequency responses of the comagnetometer to magnetic field and rotation, along with the fit of the magnetic response and the reconstructed responses to rotation and exotic spin perturbations. The reconstructed responses were estimated on the basis of the results of the magnetic field response fitting. The plots present the results obtained for different magnetic field detunings from the compensation point \(\Delta_{B_{z}}\), with the compensation point at \(\approx-120\) nT. The first row presents the directly measured response to magnetic field and rotation. The question mark in the third column represents the absence of experimental data for yet unknown exotic spin couplings. The first column of the second row presents the results of the magnetic response fitting. The second and third columns in the second row present the estimated responses to rotation and to a neutron perturbation, respectively. The third row presents the predicted responses to an exotic electron perturbation and an exotic proton perturbation, and the ratio between the proton and rotation responses. The color bar for all maps is expressed in \(\mu\)V/pT with the exception of the map for the ratio between rotation and proton coupling. Here, the numerical scale is the same as in the other plots, but the presented quantity is unitless.
netic response.
The presented calibration method provides information about the detuning from the compensation point, since it is one of the fitting parameters (\(\Delta_{B_{z}}\)) in the model, as seen in Eq. (15). Figure 5 presents the results for the fit parameter \(\Delta_{B_{z}}\) as a function of the applied leading field \(B_{z}\). For small detunings from the compensation point (\(|\Delta_{B_{z}}|\leq 2.5\) nT), the experimental results scale linearly with the applied leading field. The data are fitted linearly around the compensation field, shown in the inset, resulting in a slope value of \(-0.9953(2)\). This means that the leading field derived from the current through the coil and the dynamics of the comagnetometer are in agreement. This is an important cross-check on the correct calibration of the system and the fit parameters. For larger detunings (\(|\Delta_{B_{z}}|\geq 2.5\) nT), the slope deviates from the linear behaviour and is well described by a quadratic function (see fit in Fig. 5). We associate this deviation with a drift of the nuclear polarization due to the leading field change that in turn alters the equilibrium nuclear polarization. This phenomenon is discussed in Ref. [61].
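The consistency check of Fig. 5 is straightforward to automate once the fit results are collected; a sketch with illustrative array names:

```python
import numpy as np

def calibration_slopes(Bz_applied, dBz_fitted, window=2.5e-9):
    """Quadratic fit over the full range and a linear fit within +-window of the compensation point."""
    quad_coeffs = np.polyfit(Bz_applied, dBz_fitted, 2)
    near = np.abs(dBz_fitted) <= window
    slope, intercept = np.polyfit(Bz_applied[near], dBz_fitted[near], 1)
    return quad_coeffs, slope, intercept      # the slope should come out close to -1
```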
## V Conclusion
Here we have described a calibration procedure to reliably predict the frequency-dependent response of a noble-gas-alkali-metal comagnetometer to any spin perturbation, be it magnetic or non-magnetic in nature. The calibration procedure utilizes measurements of the response of the comagnetometer system to magnetic perturbations in conjunction with a fit to a multi-parameter model based on the Bloch equations as described in Ref. [49] and presented here in Eq. (14). The accuracy of the procedure was experimentally verified by successfully using it to predict the comagnetometer response to rotations. The predicted frequency-dependent response of the comagnetometer to rotations agreed well with the directly measured comagnetometer response to rotations for a wide range of detunings from the compensation point. The calibration procedure is valid as long as the system response remains in the linear regime (small angles between alkali and noble-gas polarizations and the leading field). The method is useful for applications of self-compensating comagnetometers both in fundamental physics tests and applied quantum gyroscopy, particularly due to the technical simplicity of applying well-controlled magnetic perturbations to a comagnetometer.
The demonstrated calibration routine outperforms calibration methods using a constant calibration factor in multiple ways. In particular, it provides the complete frequency response to any kind of spin perturbation rather than a single number estimated for the whole frequency range, as in the other method. This is a very important difference, because, as shown in Fig. 3, the frequency response is far from uniform, especially for frequencies beyond the nuclear Larmor frequency and for magnetic fields away from the compensation point (a condition that can easily occur during the course of a long-term experiment due to drifts). It should be stressed that a false estimation of the frequency response to exotic couplings results in false exotic-field parameters in case of discovery and misplaced limits for a null result. For gyroscopy, an under- or overestimation of the measured rotation results in navigation errors with potentially catastrophic consequences. Furthermore, the demonstrated calibration method requires no changes to the main system parameters. A small step in the transverse magnetic field and a few seconds of data acquisition suffice. The system remains in equilibrium working conditions during the entire calibration procedure. This is of particular importance for long-term searches for exotic physics and navigation tasks for two reasons. First, the system requires a long time to reach stable working conditions. If the calibration protocol results in a loss of nuclear magnetization, it can take hours to get back to equilibrium. Second, the calibration protocol is fast. The strong damping at the compensation point allows acquisition of all required data within seconds. This maximizes the duty cycle and the up-time of the system. And last but not least, the demonstrated calibration method determines a complete set of system parameters. This can be used to monitor the system performance over time and control the parameters in a feedback loop. This way, for example, the magnetic field detuning can be stabilized.
An immediate application of this calibration procedure is to the Advanced GNOME experiment, which will utilize a global network of comagnetometers to search for evidence of exotic physics. The calibration procedure will enable reliable long-term operation of the comagnetometer network to produce well-characterized data that
Figure 5: Fit parameter \(\Delta_{B_{z}}\) as a function of the applied external field \(B_{z}\). The data are fitted with a quadratic function, which reproduces the data well. The inset shows the magnified central region around the compensation point, where the data are fitted with a linear function. The slope of this function is close to \(-1\). These two parameters are completely independent and therefore an important indication for the quantitative agreement between model and experiment.
can be used to search for a variety of transient signals heralding beyond-the-Standard-Model physics [28].
## Acknowledgements
This work was supported by the German Federal Ministry of Education and Research (BMBF) within the Quantentechnologien program (FKZ 13N15064), by the Cluster of Excellence "Precision Physics, Fundamental Interactions, and Structure of Matter" (PRISMA+ EXC 2118/1) funded by the German Research Foundation (DFG) within the German Excellence Strategy (Project ID 39083149) and COST Action COSMIC WISPers CA21106, supported by COST (European Cooperation in Science and Technology). The work of D.F.J.K. was supported by U.S. National Science Foundation (NSF) Grant No. PHY-2110388. SP, MP and GL acknowledge support by the National Science Centre, Poland within the OPUS program (2020/39/B/ST2/01524).
|
2309.10317 | Projective dimension and regularity of edge ideals of some
vertex-weighted oriented unicyclic graphs | In this paper we provide some exact formulas for the projective dimension and
the regularity of edge ideals of vertex-weighted oriented unicyclic graphs.
These formulas are in function of the weight of the vertices, the numbers of
edges. We also give some examples to show that these formulas are related to
direction selection and the assumptions that $w(x)\geq 2$ for any vertex $x$
cannot be dropped. | Guangjun Zhu, Hong Wang | 2023-09-19T04:59:33Z | http://arxiv.org/abs/2309.10317v1 | Projective dimension and regularity of edge ideals of some vertex-weighted oriented unicyclic graphs
###### Abstract.
In this paper we provide some exact formulas for the projective dimension and the regularity of edge ideals of vertex-weighted oriented unicyclic graphs. These formulas are in function of the weight of the vertices, the numbers of edges. We also give some examples to show that these formulas are related to direction selection and the assumptions that \(w(x)\geq 2\) for any vertex \(x\) cannot be dropped.
2020 Mathematics Subject Classification: Primary 13F20; Secondary 05C20, 05C22, 05E40.
Key words and phrases: projective dimension, regularity, edge ideal, vertex-weighted oriented unicyclic graph, vertex-weighted rooted forest.
## 1. Introduction
Let \(S=k[x_{1},\ldots,x_{n}]\) be a polynomial ring in \(n\) variables over a field \(k\) and let \(I\subset S\) be a homogeneous ideal. There are two central invariants associated to \(I\), the regularity \(\operatorname{reg}\left(I\right):=\max\{j-i\mid\beta_{i,j}(I)\neq 0\}\) and the projective dimension \(\operatorname{pd}\left(I\right):=\max\{i\mid\beta_{i,j}(I)\neq 0\text{ for some }j\}\), that in a sense, they measure the complexity of computing the graded Betti numbers \(\beta_{i,j}(I)\) of \(I\). In particular, if \(I\) is a monomial ideal, its polarization \(I^{\mathcal{P}}\) has the same projective dimension and regularity as \(I\) and is squarefree. Thus one can associate \(I^{\mathcal{P}}\) to a graph or a hypergraph or a simplicial complex. Many authors have studied the regularity and Betti numbers of edge ideals of graphs, e.g. [1, 2, 4, 11, 13, 15, 21, 17, 23].
A _directed graph_ or _digraph_ \(D\) consists of a finite set \(V(D)\) of vertices, together with a collection \(E(D)\) of ordered pairs of distinct points called edges or arrows. A vertex-weighted directed graph is a triplet \(D=(V(D),E(D),w)\), where \(w\) is a weight function \(w:V(D)\to\mathbb{N}^{+}\), with \(\mathbb{N}^{+}=\{1,2,\ldots\}\). Sometimes, for short, we denote the vertex set \(V(D)\) and the edge set \(E(D)\) by \(V\) and \(E\), respectively. The weight of \(x_{i}\in V\) is \(w(x_{i})\), denoted by \(w_{i}\) or \(w_{x_{i}}\).
The edge ideal of a vertex-weighted digraph was first introduced by Gimenez et al [8]. Let \(D=(V,E,w)\) be a vertex-weighted digraph with the vertex set \(V=\{x_{1},\ldots,x_{n}\}\). We consider the polynomial ring \(S=k[x_{1},\ldots,x_{n}]\) in \(n\) variables over a field \(k\). The edge ideal of \(D\), denoted by \(I(D)\), is the ideal of \(S\) given by
\[I(D)=(x_{i}x_{j}^{w_{j}}\mid x_{i}x_{j}\in E).\]
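To illustrate the definition on a small example of our own (not taken from the cited references), consider the oriented \(3\)-cycle \(D\) with edges \(x_{1}x_{2}\), \(x_{2}x_{3}\), \(x_{3}x_{1}\) and weights \(w_{1}=2\), \(w_{2}=3\), \(w_{3}=2\). Each edge \(x_{i}x_{j}\) contributes the generator \(x_{i}x_{j}^{w_{j}}\), so

\[I(D)=(x_{1}x_{2}^{3},\ x_{2}x_{3}^{2},\ x_{3}x_{1}^{2})\subset k[x_{1},x_{2},x_{3}].\]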
Edge ideals of weighted digraphs arose in the theory of Reed-Muller codes as initial ideals of vanishing ideals of projective spaces over finite fields [18, 16]. If a vertex \(x_{i}\) of \(D\) is a source (i.e., has only arrows leaving \(x_{i}\)) we shall always assume \(w_{i}=1\) because in this case the definition of \(I(D)\) does not depend on the weight of \(x_{i}\). If \(w_{j}=1\) for all \(j\), then \(I(D)\) is the edge ideal of underlying graph \(G\) of \(D\). It has been extensively studied in the literature [11, 17, 20]. Especially the study of algebraic invariants corresponding to their minimal free resolutions has become popular(see [1, 2, 4, 13, 21, 22, 23]). In [23], the first three authors derive some exact formulas for the projective dimension and regularity of the edge ideals associated to some vertex-weighted digraphs such as rooted forests, oriented cycles. To the best of our knowledge, little is known about the projective dimension and the regularity of \(I(D)\) for some vertex-weighted digraphs.
In this article, we are interested in algebraic properties corresponding to the projective dimension and the regularity of the edge ideals of vertex-weighted oriented unicyclic graphs. By
using the approaches of Betti splitting and polarization, we derive some exact formulas for the projective dimension and the regularity of these edge ideals. The results are as follows:
**Theorem 1.1**.: _Let \(T_{j}=(V(T_{j}),E(T_{j}),w_{j})\) be a vertex-weighted rooted forest such that \(w(x)\geq 2\) if \(d(x)\neq 1\) for \(1\leq j\leq s\) and \(C_{n}\) a vertex-weighted oriented cycle with vertex set \(\{x_{1},x_{2},\ldots,x_{n}\}\) and \(w_{x_{i}}\geq 2\) for \(1\leq i\leq n\). Let \(D=C_{n}\cup(\bigcup\limits_{j=1}^{s}T_{j})\) be a vertex-weighted oriented graph obtained by attaching the root of \(T_{j}\) to vertex \(x_{i_{j}}\) of the cycle \(C_{n}\) for \(1\leq j\leq s\). Then_
1. \(\text{reg}\,(I(D))=\sum\limits_{x\in V(D)}w(x)-|E(D)|+1\)_,_
2. \(\text{pd}\,(I(D))=|E(D)|-1\)_._
**Theorem 1.2**.: _Let \(D=(V(D),E(D),w)\) be a vertex-weighted oriented unicyclic graph such that \(w(x)\geq 2\) if \(d(x)\neq 1\). Then_
1. \(\text{reg}\,(I(D))=\sum\limits_{x\in V(D)}w(x)-|E(D)|+1\)_,_
2. \(\text{pd}\,(I(D))=|E(D)|-1\)_._
Our paper is organized as follows. In section 2, we recall some definitions and basic facts used in the following sections. In section 3, we provide some exact formulas for the projective dimension and the regularity of the edge ideals of vertex-weighted oriented unicyclic graphs. Meanwhile, we give some examples to show the projective dimension and the regularity of these edge ideals are related to direction selection and the assumption that \(w(x)\geq 2\) for any vertex \(x\) cannot be dropped.
For all unexplained terminology and additional information, we refer to [14] (for the theory of digraphs), [3] (for graph theory), and [5, 12] (for the theory of edge ideals of graphs and monomial ideals). We gratefully acknowledge the use of the computer algebra system CoCoA ([6]) for our experiments.
## 2. Preliminaries
In this section, we gather together the needed definitions and basic facts, which will be used throughout this paper. However, for more details, we refer the reader to [1, 3, 7, 10, 12, 13, 14, 16, 19, 23].
A _directed graph_ or _digraph_\(D\) consists of a finite set \(V(D)\) of vertices, together with a collection \(E(D)\) of ordered pairs of distinct points called edges or arrows. If \(\{u,v\}\in E(D)\) is an edge, we write \(uv\) for \(\{u,v\}\), which denotes the directed edge whose direction is from \(u\) to \(v\), and \(u\) (resp. \(v\)) is called the _starting_ point (resp. the _ending_ point). An _oriented_ graph is a directed graph having no bidirected edges (i.e., any two adjacent vertices are joined by a single edge having a unique direction). In other words, an oriented graph \(D\) is a simple graph \(G\) together with an orientation of its edges. We call \(G\) the underlying graph of \(D\).
Every concept that is valid for graphs automatically applies to digraphs too. For example, the degree of a vertex \(x\) in a digraph \(D\), denoted by \(d(x)\), is simply the degree of \(x\) in \(G\). Likewise, a digraph is said to be connected if its underlying graph is connected. A digraph \(H\) is called an induced subgraph of digraph \(D\) if \(V(H)\subseteq V(D)\), and for any \(x,y\in V(H)\), \(xy\) is an edge in \(H\) if and only if \(xy\) is an edge in \(D\). For \(P\subset V(D)\), we denote \(D\setminus P\) the induced subgraph of \(D\) obtained by removing the vertices in \(P\) and the edges incident to these vertices. If \(P=\{x\}\) consists of a single element, then we write \(D\setminus x\) for \(D\setminus\{x\}\). For \(W\subseteq E(D)\), we define \(D\setminus W\) to be the subgraph of \(D\) with all edges in \(W\) deleted (but its vertices remained). When \(W=\{e\}\) consists of a single edge, we write \(D\setminus e\) instead of \(D\setminus\{e\}\). An _oriented_ path or _oriented_ cycle is an orientation of a path or cycle in which each vertex dominates its successor in the sequence. An _oriented_ acyclic graph is a simple digraph without oriented cycles. An _oriented tree_ or _polytree_ is an oriented acyclic graph formed by orienting the edges of undirected acyclic graphs. A _rooted tree_ is an oriented tree in which all edges are oriented either away from or towards the root. Unless specifically stated, a rooted tree in this article is an oriented tree
in which all edges are oriented away from the root. An _oriented forest_ is a disjoint union of oriented trees. A _rooted forest_ is a disjoint union of rooted trees.
A vertex-weighted oriented graph is a triplet \(D=(V(D),E(D),w)\), where \(V(D)\) is the vertex set, \(E(D)\) is the edge set and \(w\) is a weight function \(w:V(D)\to\mathbb{N}^{+}\), where \(\mathbb{N}^{+}=\{1,2,\ldots\}\). Sometimes, for short, we denote the vertex set \(V(D)\) and edge set \(E(D)\) by \(V\) and \(E\) respectively. The weight of \(x_{i}\in V\) is \(w(x_{i})\), denoted by \(w_{i}\) or \(w_{x_{i}}\). Given a vertex-weighted oriented graph \(D=(V,E,w)\) with the vertex set \(V=\{x_{1},\ldots,x_{n}\}\), we consider the polynomial ring \(S=k[x_{1},\ldots,x_{n}]\) in \(n\) variables over a field \(k\). The edge ideal of \(D\), denoted by \(I(D)\), is the ideal of \(S\) given by
\[I(D)=(x_{i}x_{j}^{w_{j}}\mid x_{i}x_{j}\in E).\]
If a vertex \(x_{i}\) of \(D\) is a source (i.e., has only arrows leaving \(x_{i}\)) we shall always assume \(w_{i}=1\) because in this case the definition of \(I(D)\) does not depend on the weight of \(x_{i}\).
For any homogeneous ideal \(I\) of the polynomial ring \(S=k[x_{1},\ldots,x_{n}]\), there exists a _graded minimal finite free resolution_
\[0\to\bigoplus_{j}S(-j)^{\beta_{p,j}(I)}\to\bigoplus_{j}S(-j)^{\beta_{p-1,j}(I )}\to\cdots\to\bigoplus_{j}S(-j)^{\beta_{0,j}(I)}\to I\to 0,\]
where the maps are exact, \(p\leq n\), and \(S(-j)\) is the \(S\)-module obtained by shifting the degrees of \(S\) by \(j\). The number \(\beta_{i,j}(I)\), the \((i,j)\)-th graded Betti number of \(I\), is an invariant of \(I\) that equals the number of minimal generators of degree \(j\) in the \(i\)th syzygy module of \(I\). Of particular interest are the following invariants which measure the "size" of the minimal graded free resolution of \(I\). The projective dimension of \(I\), denoted \(\operatorname{pd}\left(I\right)\), is defined to be
\[\operatorname{pd}\left(I\right):=\max\,\{i\mid\beta_{i,j}(I)\neq 0\}.\]
The regularity of \(I\), denoted \(\operatorname{reg}\left(I\right)\), is defined by
\[\operatorname{reg}\left(I\right):=\max\,\{j-i\mid\beta_{i,j}(I)\neq 0\}.\]
We now derive some formulas for \(\operatorname{pd}\left(I\right)\) and \(\operatorname{reg}\left(I\right)\) in some special cases by using some tools developed in [7].
**Definition 2.1**.: Let \(I\) be a monomial ideal, and suppose that there exist monomial ideals \(J\) and \(K\) such that \(\mathcal{G}(I)\) is the disjoint union of \(\mathcal{G}(J)\) and \(\mathcal{G}(K)\), where \(\mathcal{G}(I)\) denotes the unique minimal set of monomial generators of \(I\). Then \(I=J+K\) is a _Betti splitting_ if
\[\beta_{i,j}(I)=\beta_{i,j}(J)+\beta_{i,j}(K)+\beta_{i-1,j}(J\cap K)\,\text{ for all }\,i,j\geq 0,\]
where \(\beta_{i-1,j}(J\cap K)=0\,\text{ if }\,i=0\).
In [7], the authors describe some sufficient conditions for an ideal \(I\) to have a Betti splitting. We need the following lemma.
**Lemma 2.2**.: ([7, Corollary 2.7]) _Suppose that \(I=J+K\) where \(\mathcal{G}(J)\) contains all the generators of \(I\) divisible by some variable \(x_{i}\) and \(\mathcal{G}(K)\) is a nonempty set containing the remaining generators of \(I\). If \(J\) has a linear resolution, then \(I=J+K\) is a Betti splitting._
When \(I\) is a Betti splitting ideal, Definition 2.1 implies the following results:
**Corollary 2.3**.: _If \(I=J+K\) is a Betti splitting ideal, then_
1. \(\operatorname{reg}\left(I\right)=\text{max}\{\operatorname{reg}\left(J\right),\operatorname{reg}\left(K\right),\operatorname{reg}\left(J\cap K\right)-1\}\)_,_
2. \(\operatorname{pd}\left(I\right)=\text{max}\{\operatorname{pd}\left(J\right),\operatorname{pd}\left(K\right),\operatorname{pd}\left(J\cap K\right)+1\}\)_._
The following lemmas are often used in this article.
**Lemma 2.4**.: ([9, Lemma 1.3]) _Let \(R\) be a polynomial ring over a field and let \(I\) be a proper non-zero homogeneous ideal in \(R\). Then_
1. \(\operatorname{pd}\left(I\right)=\operatorname{pd}\left(S/I\right)-1\)_,_
2. \(\operatorname{reg}\left(I\right)=\operatorname{reg}\left(S/I\right)+1\)_._
**Lemma 2.5**.: ([10, Lemma 2.2 and Lemma 3.2 ]) _Let \(S_{1}=k[x_{1},\ldots,x_{m}]\), \(S_{2}=k[x_{m+1},\ldots,x_{n}]\) and \(S=k[x_{1},\ldots,x_{n}]\) be three polynomial rings, \(I\subseteq S_{1}\) and \(J\subseteq S_{2}\) be two proper non-zero homogeneous ideals. Then_
1. \(\mathit{pd}\,(S/(I+J))=\mathit{pd}\,(S_{1}/I)+\mathit{pd}\,(S_{2}/J)\)_,_
2. \(\mathit{reg}\,(S/(I+J))=\mathit{reg}\,(S_{1}/I)+\mathit{reg}\,(S_{2}/J)\)_._
From Lemma 2.4 and Lemma 2.5, we have
**Lemma 2.6**.: ([22, Lemma 3.1]) _Let \(S_{1}\!\!=\!k[x_{1},\ldots,x_{m}]\) and \(S_{2}\!\!=\!k[x_{m+1},\ldots,x_{n}]\) be two polynomial rings, \(I\subseteq S_{1}\) and \(J\subseteq S_{2}\) be two non-zero homogeneous ideals. Then_
1. \(\mathit{pd}\,(I+J)=\mathit{pd}\,(I)+\mathit{pd}\,(J)+1\)_,_
2. \(\mathit{reg}\,(I+J)=\mathit{reg}\,(I)+\mathit{reg}\,(J)-1\)_._
Let \(\mathcal{G}(I)\) denote the minimal set of generators of a monomial ideal \(I\subset S\) and let \(u\in S\) be a monomial, we set \(\mathrm{supp}(u)=\{x_{i}:x_{i}|u\}\). If \(\mathcal{G}(I)=\{u_{1},\ldots,u_{m}\}\), we set \(\mathrm{supp}(I)=\bigcup\limits_{i=1}^{m}\mathrm{supp}(u_{i})\). The following lemma is well known.
**Lemma 2.7**.: _Let \(I,J=(u)\) be two monomial ideals such that \(\mathit{supp}\,(u)\cap\mathit{supp}\,(I)=\emptyset\). If the degree of the monomial \(u\) is \(d\), then_
1. \(\mathit{reg}\,(J)=d\)_,_
2. \(\mathit{reg}\,(JI)=\mathit{reg}\,(I)+d\)_,_
3. \(\mathit{pd}\,(JI)=\mathit{pd}\,(I)\)_._
**Definition 2.8**.: Suppose that \(u=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\) is a monomial in \(S\). We define the _polarization_ of \(u\) to be the squarefree monomial
\[\mathcal{P}(u)=x_{11}x_{12}\cdots x_{1a_{1}}x_{21}\cdots x_{2a_{2}}\cdots x_{n 1}\cdots x_{na_{n}}\]
in the polynomial ring \(S^{\mathcal{P}}=k[x_{ij}\ |\ 1\leq i\leq n,1\leq j\leq a_{i}]\). If \(I\subset S\) is a monomial ideal with \(\mathcal{G}(I)=\{u_{1},\ldots,u_{m}\}\), the _polarization_ of \(I\), denoted by \(I^{\mathcal{P}}\), is defined as:
\[I^{\mathcal{P}}=(\mathcal{P}(u_{1}),\ldots,\mathcal{P}(u_{m})),\]
which is a squarefree monomial ideal in the polynomial ring \(S^{\mathcal{P}}\).
Here is an example of how polarization works.
**Example 2.9**.: Let \(I(D)=(x_{1}x_{2}^{3},x_{2}x_{3}^{2},x_{3}x_{4}^{4},x_{4}x_{1}^{5})\) be the edge ideal of a vertex-weighted oriented cycle \(D\), then the polarization of \(I(D)\) is the ideal \(I(D)^{\mathcal{P}}=(x_{11}x_{21}x_{22}x_{23},x_{21}x_{31}x_{32},\\ x_{31}x_{41}x_{42}x_{43}x_{44},x_{41}x_{11}x_{12}x_{13}x_{14}x_{15})\).
A monomial ideal \(I\) and its polarization \(I^{\mathcal{P}}\) share many homological and algebraic properties. The following is a very useful property of polarization.
**Lemma 2.10**.: ([12, Corollary 1.6.3]) _Let \(I\subset S\) be a monomial ideal and \(I^{\mathcal{P}}\subset S^{\mathcal{P}}\) its polarization. Then_
1. \(\beta_{ij}(I)=\beta_{ij}(I^{\mathcal{P}})\) _for all_ \(i\) _and_ \(j\)_,_
2. \(\mathit{reg}\,(I)=\mathit{reg}\,(I^{\mathcal{P}})\)_,_
3. \(\mathit{pd}\,(I)=\mathit{pd}\,(I^{\mathcal{P}})\)_._
The following lemma can be used for computing the projective dimension and the regularity of an ideal.
**Lemma 2.11**.: ([9, Lemma 1.1 and Lemma 1.2]) _Let \(0\to A\to B\to C\to 0\) be a short exact sequence of finitely generated graded \(S\)-modules. Then_
1. \(\mathit{reg}\,(B)\leq\mathit{max}\,\{\mathit{reg}\,(A),\mathit{reg}\,(C)\}\)_,_
2. \(\mathit{pd}\,(B)\leq\mathit{max}\,\{\mathit{pd}\,(A),\mathit{pd}\,(C)\}\)_._
## 3. Projective dimension and regularity of edge ideals of vertex-weighted oriented unicyclic graphs
In this section, we will provide some exact formulas for the projective dimension and the regularity of the edge ideals of vertex-weighted unicyclic graphs. Meanwhile, we give some examples to show the projective dimension and the regularity of the edge ideals of vertex-weighted oriented unicyclic graphs are related to direction selection and the assumption that \(w(x)\geq 2\) if \(d(x)\neq 1\) cannot be dropped. We shall start from the definition of the union of some digraphs.
**Definition 3.1**.: Let \(D_{i}=(V(D_{i}),E(D_{i}))\) be a digraph with the underlying graph \(G_{i}\) for \(1\leq i\leq k\). If \(e\) is an edge of \(G_{j}\) for \(i_{1}\leq j\leq i_{s}\) with \(1\leq i_{1}\leq\cdots\leq i_{s}\leq k\), then the direction of edge \(e\) is the same in \(D_{j}\) for \(i_{1}\leq j\leq i_{s}\). The union of digraphs \(D_{1},D_{2},\ldots,D_{k}\), written as \(\bigcup\limits_{i=1}^{k}D_{i}\), is the digraph with vertex set \(\bigcup\limits_{i=1}^{k}V(D_{i})\) and edge set \(\bigcup\limits_{i=1}^{k}E(D_{i})\).
The following two lemmas are needed to facilitate calculating the projective dimension and the regularity of the edge ideals of some graphs.
**Lemma 3.2**.: ([23, Theorem 3.5, Theorem 3.3 ]) _Let \(D=(V(D),E(D),w)\) be a vertex-weighted rooted forest such that \(w(x)\geq 2\) if \(d(x)\neq 1\). Then_
1. \(\text{reg}\,(I(D))=\sum\limits_{x\in V(D)}w(x)-|E(D)|+1\)_,_
2. \(\text{pd}\,(I(D))=|E(D)|-1\)_._
**Lemma 3.3**.: ([23, Theorem 4.1]) _Let \(D=(V(D),E(D),w)\) be a vertex-weighted oriented cycle such that \(w(x)\geq 2\) for any \(x\in V(D)\). Then_
1. \(\text{reg}\,(I(D))=\sum\limits_{x\in V(D)}w(x)-|E(D)|+1\)_,_
2. \(\text{pd}\,(I(D))=|E(D)|-1\)_._
Let \(D=(V,E)\) be a digraph and \(x\in V\), then we call \(N_{D}^{+}(x)=\{y:xy\in E\}\) and \(N_{D}^{-}(x)=\{y:yx\in E\}\) to be the out-neighbourhood and in-neighbourhood of \(x\), respectively. The neighbourhood of \(x\) is the set \(N_{D}(x)=N_{D}^{+}(x)\cup N_{D}^{-}(x)\).
Now we are ready to present the main result of this section.
**Theorem 3.4**.: _Let \(T_{j}=(V(T_{j}),E(T_{j}),w_{j})\) be a vertex-weighted rooted forest such that \(w(x)\geq 2\) if \(d(x)\neq 1\) for \(1\leq j\leq s\) and \(C_{n}\) a vertex-weighted oriented cycle with vertex set \(\{x_{1},x_{2},\ldots,x_{n}\}\) and \(w_{x_{i}}\geq 2\) for \(1\leq i\leq n\). Let \(D=C_{n}\cup(\bigcup\limits_{j=1}^{s}T_{j})\) be a vertex-weighted oriented graph obtained by attaching the root of \(T_{j}\) to vertex \(x_{i_{j}}\) of the cycle \(C_{n}\) for \(1\leq j\leq s\). Then_
1. \(\text{reg}\,(I(D))=\sum\limits_{x\in V(D)}w(x)-|E(D)|+1\)_,_
2. \(\text{pd}\,(I(D))=|E(D)|-1\)_._
Proof.: Let \(N_{T_{j}}^{+}(x_{i_{j}})\!=\!\{y_{j1},\ldots,y_{j,t_{j}}\}\) for \(1\leq j\leq s\). Assume that \(i_{1}\!<\!i_{2}\!<\!\cdots\!<\!i_{s}\). We consider two cases:
Case (1): If \(T_{j}\) is a rooted star graph for any \(1\leq j\leq s\), then we have
\[I(D)= (x_{1}x_{2}^{w_{2}},\ldots,x_{n-1}x_{n}^{w_{n}},x_{n}x_{1}^{w_{1} },x_{i_{1}}y_{11}^{w_{y_{11}}},\ldots,x_{i_{1}}y_{1,t_{1}}^{w_{y_{1,t_{1}}}},x_ {i_{2}}y_{21}^{w_{y_{21}}},\ldots,x_{i_{2}}y_{2,t_{2}}^{w_{y_{2,t_{2}}}},\ldots,\] \[x_{i_{s}}y_{s1}^{w_{s_{1}}},\ldots,x_{i_{s}}y_{s,t_{s}}^{w_{y_{s,t _{s}}}}).\]
Case (2): Suppose there exists some \(1\leq j\leq s\) such that \(T_{j}\) is a rooted tree but not a star digraph. We may assume that \(T_{j}\) is a rooted tree but not a star digraph for every \(1\leq j\leq s\), because the other cases follow the same line of argument. In this case, we have
\[I(D)= (x_{1}x_{2}^{w_{2}},\ldots,x_{n-1}x_{n}^{w_{n}},x_{n}x_{1}^{w_{1} },x_{i_{1}}y_{11}^{w_{y_{11}}},\ldots,x_{i_{1}}y_{1,t_{1}}^{w_{y_{1,t_{1}}}},x_ {i_{2}}y_{21}^{w_{y_{21}}},\ldots,x_{i_{2}}y_{2,t_{2}}^{w_{y_{2,t_{2}}}},\] \[\ldots,x_{i_{s}}y_{s1}^{w_{s_{1}}},\ldots,x_{i_{s}}y_{s,t_{s}}^{w _{y_{s,t_{s}}}})+I(T_{1}\setminus x_{i_{1}})+I(T_{2}\setminus x_{i_{2}})+\cdots+I (T_{s}\setminus x_{i_{s}}),\]
where \(I(T_{j}\setminus x_{i_{j}})\) is the edge ideal of the induced subgraph of \(T_{j}\) obtained by removing vertex \(x_{i_{j}}\) and the edges incident to \(x_{i_{j}}\) for \(j=1,\ldots,s\).
Case (1) can be shown by similar arguments as Case (2), so we only prove the statement holds in Case (2).
Let \(I(D)^{\mathcal{P}}\) be the polarization of \(I(D)\), then
\[I(D)^{\mathcal{P}}= (x_{11}\underset{j=1}{\overset{w_{2}}{\prod}}x_{2j},\ldots,x_{n-1,1}\prod_{j=1}^{w_{n}}x_{nj},x_{n1}\prod_{j=1}^{w_{1}}x_{1j},x_{i_{1}1}\prod_{j =1}^{w_{y_{11}}}y_{11,j},\ldots,x_{i_{1}1}\underset{j=1}{\overset{w_{y_{1,t_{1} }}}{\prod}}y_{1,t_{1},j},x_{i_{2}1}\underset{j=1}{\overset{w_{y_{21}}}{\prod}}y_{ 21,j},\] \[\ldots,x_{i_{2}1}\prod_{j=1}^{w_{y_{2,t_{2}}}}y_{2,t_{2},j},\ldots, x_{i_{s}1}\prod_{j=1}^{w_{y_{s1}}}y_{s1j},\ldots,x_{i_{s}1}\prod_{j=1}^{w_{y_{s,t_{s} }}}y_{s,t_{s},j})+I(T_{1}\setminus x_{i_{1}})^{\mathcal{P}}+I(T_{2}\setminus x _{i_{2}})^{\mathcal{P}}\] \[+ \cdots+I(T_{s}\setminus x_{i_{s}})^{\mathcal{P}}.\]
We set \(i_{1}=1\), \(K=(x_{n1}\underset{j=1}{\overset{w_{1}}{\prod}}x_{1j})\) and
\[J= (x_{11}\underset{j=1}{\overset{w_{2}}{\prod}}x_{2j},\ldots,x_{n-1,1}\underset{j=1}{\overset{w_{n}}{\prod}}x_{nj},x_{n1}\underset{j=1}{ \overset{w_{1}}{\prod}}x_{1j},x_{11}\underset{j=1}{\overset{w_{y_{11}}}{ \prod}}y_{11,j},\ldots,x_{11}\underset{j=1}{\overset{w_{y_{1,t_{1}}}}{\prod}}y _{1,t_{1},j},x_{i_{2}1}\prod_{j=1}^{w_{y_{21}}}y_{21,j},\ldots,\] \[x_{i_{2}1}\underset{j=1}{\overset{w_{y_{2,t_{2}}}}{\prod}}y_{2,t _{2},j},\ldots,x_{i_{s}1}\prod_{j=1}^{w_{y_{s1}}}y_{s1,j},\ldots,x_{i_{s}1} \underset{j=1}{\overset{w_{y_{s,t_{s}}}}{\prod}}y_{s,t_{s},j})+I(T_{1} \setminus x_{i_{1}})^{\mathcal{P}}+I(T_{2}\setminus x_{i_{2}})^{\mathcal{P}}\] \[+ \cdots+I(T_{s}\setminus x_{i_{s}})^{\mathcal{P}}\]
where the element \(x_{n1}\underset{j=1}{\overset{w_{1}}{\prod}}x_{1j}\) is understood to be omitted from the set of generators of the ideal \(J\).
Note that \(J\) is actually the polarization of the edge ideal \(I(D\setminus e)\) of the subgraph \(D\setminus e\) where \(e=x_{n}x_{1}\), and \(D\setminus e\) is a rooted tree, whose root is \(x_{1}\). By Lemmas 2.10 and 3.2, we obtain
\[\operatorname{reg}\left(J\right) =\operatorname{reg}\left(J^{\mathcal{P}}\right)=\sum\limits_{x\in V (D\setminus e)}w(x)-\left|E(D\setminus e)\right|+1\] \[=\sum\limits_{x\in V(D)}w(x)-(w_{1}-1)-(\left|E(D)\right|-1)+1\] \[=\sum\limits_{x\in V(D)}w(x)-\left|E(D)\right|+1+2-w_{1}\] \[=\sum\limits_{x\in V(D)}w(x)-\left|E(D)\right|+3-w_{1}\]
where the third equality holds because the vertex \(x_{1}\) is assigned weight one in the expression \(\sum\limits_{x\in V(D\setminus e)}w(x)\), and
\[\operatorname{pd}\left(J\right)=\operatorname{pd}\left(J^{\mathcal{P}}\right)= \left|E(D\setminus e)\right|-1=\left|E(D)\right|-2.\]
Now, we will compute \(\operatorname{reg}\left(J\cap K\right)\) and \(\operatorname{pd}\left(J\cap K\right)\). We distinguish into the following two cases:
(1) If \(s=1\), or \(s\geq 2\) and \(i_{s}\neq n\), then \(d(x_{n})=2\). In this case, we set \(J\cap K=KL\). We write \(L\) as follows:
\[L=L_{1}+L_{2}\]
where \(L_{1}=(\underset{j=1}{\overset{w_{2}}{\prod}}x_{2j},x_{n-1,1}\underset{j=2}{ \overset{w_{n}}{\prod}}x_{nj},\underset{j=1}{\overset{w_{y_{11}}}{\prod}}y_{ 11,j},\ldots,\underset{j=1}{\overset{w_{y_{1k}}}{\prod}}y_{1k,j})+I(D\setminus \{x_{1},x_{n}\})^{\mathcal{P}}\) is the polarization of the edge ideal of a rooted forest \(H\), here \(H\) is the union of the induced subgraph \(D\setminus\{x_{1},x_{n}\}\) of \(D\) and a vertex-weighted oriented graph \(H^{\prime}\) with the vertex set \(V(H^{\prime})=\{x_{21},x_{22},x_{n-1,1},x_{n2},y_{11,1},y_{11,2},\ldots,y_{1,k,1 },y_{1,k,2}\}\), the edge set \(E(H^{\prime})=\{x_{21}x_{22},x_{n-1,1},x_{n2},\)
\(y_{11,1}y_{11,2},\ldots,y_{1,k,1}y_{1,k,2}\) and a weight function \(w^{\prime}:V(H^{\prime})\rightarrow\mathbb{N}^{+}\) such that \(w^{\prime}(x_{21})=1\), \(w^{\prime}(x_{22})=w_{2}-1\), \(w^{\prime}(x_{n-1,1})=1\), \(w^{\prime}(x_{n2})=w_{n}-1\), \(w^{\prime}(y_{1,j,1})=1\), \(w^{\prime}(y_{1,j,2})=w_{y_{1,j}}-1\geq 1\) for any \(1\leq j\leq k\), and \(L_{2}=(\prod\limits_{j=1}^{w_{y_{1,k+1}}}y_{1,k+1,j},\ldots,\prod\limits_{j= 1}^{w_{y_{1,t_{1}}}}y_{1,t_{1},j})\) for any \(k+1\leq\ell\leq t_{1}\). Thus \(|E(H)|=|E(D)|-(t_{1}+3)+(k+2)=|E(D)|-(t_{1}-k+1)\). We consider the following two cases:
\((i)\) If \(k<t_{1}\), notice that the variables that appear in \(K\) and \(L\), and in \(L_{1}\) and \(L_{2}\) are different, respectively. Then using Lemmas 2.6, 2.7 and 3.2, we obtain
\[\operatorname{reg}\left(J\cap K\right) =(1+w_{1})+\operatorname{reg}\left(L\right)=(1+w_{1})+ \operatorname{reg}\left(L_{1}\right)+\operatorname{reg}\left(L_{2}\right)-1\] \[=(1+w_{1})+(\sum_{x\in V(H)}w(x)-|E(H)|+1)\] \[+((\sum_{j=k+1}^{t_{1}}w_{y_{1j}}-(t_{1}-k))+1)-1\] \[=(1+w_{1}+\sum_{x\in V(H)}w(x)+\sum_{j=k+1}^{t_{1}}w_{y_{1j}})\] \[-(|E(D)|-(t_{1}-k+1))-(t_{1}-k)+1\] \[=\sum_{x\in V(D)}w(x)-|E(D)|+2\]
where the last equality holds because of \(x_{n2}\in V(H^{\prime})\) with \(w^{\prime}(x_{n2})=w_{n}-1\) and \(x_{1}\notin V(H)\), and
\[\operatorname{pd}\left(J\cap K\right) =\operatorname{pd}\left(L\right)=\operatorname{pd}\left(L_{1}+L_ {2}\right)=\operatorname{pd}\left(L_{1}\right)+\operatorname{pd}\left(L_{2} \right)+1\] \[=|E(H)|-1+(t_{1}-k-1)+1\] \[=(|E(D)|-(t_{1}-k+1))-1+(t_{1}-k-1)+1\] \[=|E(D)|-2.\]
\((ii)\) If \(k=t_{1}\), then \(L_{2}=(0)\) and \(|E(H)|=|E(D)|-(t_{1}+3)+(t_{1}+2)=|E(D)|-1\). Again by Lemmas 2.6, 2.7 and 3.2, we obtain
\[\operatorname{reg}\left(J\cap K\right) =\operatorname{reg}\left(KL\right)=(1+w_{1})+\operatorname{reg} \left(L\right)\] \[=(1+w_{1})+\sum_{x\in V(H)}w(x)-|E(H)|+1\] \[=(1+w_{1}+\sum_{x\in V(H)}w(x))-(|E(D)|-1)+1\] \[=\sum_{x\in V(D)}w(x)-|E(D)|+2\]
where the last equality holds because of \(x_{n2}\in V(H^{\prime})\) with \(w^{\prime}(x_{n2})=w_{n}-1\) and \(x_{1}\notin V(H)\), and
\[\operatorname{pd}\left(J\cap K\right)=\operatorname{pd}\left(KL\right)= \operatorname{pd}\left(L\right)=|E(H)|-1=|E(D)|-2.\]
In brief, no matter case \((i)\) or \((ii)\), we always have
\[\operatorname{reg}\left(J\cap K\right) =\sum_{x\in V(D)}w(x)-|E(D)|+2;\] \[\operatorname{pd}\left(J\cap K\right) =|E(D)|-2.\]
Since \(K\) has a linear resolution, it follows that \(I(D)^{\mathcal{P}}=J+K\) is a Betti splitting. Thus by Lemma 2.10 and Corollary 2.3, we get
\[\operatorname{reg}\left(I(D)\right) =\operatorname{reg}\left(I(D)^{\mathcal{P}}\right)=\max\{ \operatorname{reg}\left(J\right),\operatorname{reg}\left(K\right),\operatorname {reg}\left(J\cap K\right)-1\}\] \[=\max\{\underset{x\in V(D)}{\sum}w(x)-|E(D)|+3-\ w_{1},w_{1}+1, \underset{x\in V(D)}{\sum}w(x)-|E(D)|+1\}\] \[=\underset{x\in V(D)}{\sum}w(x)-|E(D)|+1,\]
and
\[\operatorname{pd}\left(I(D)\right) =\operatorname{pd}\left(I(D)^{\mathcal{P}}\right)=\max\{ \operatorname{pd}\left(J\right),\operatorname{pd}\left(K\right),\operatorname {pd}\left(J\cap K\right)+1\}\] \[=\max\{|E(D)|-2,0,|E(D)|-2+1\}\] \[=|E(D)|-1.\]
(2) If \(i_{s}=n\), then \(d(x_{n})>2\). In this case, we still set \(J\cap K=KL\) and write \(L\) as follows:
\[L=L_{1}+L_{2}\]
where \(L_{1}=(\underset{j=1}{\overset{w_{2}}{\prod}}x_{2j},x_{n-1,1}\underset{j=2}{ \overset{w_{n}}{\prod}}x_{nj},\underset{j=1}{\overset{w_{1}}{\prod}}y_{11,j}, \ldots,\underset{j=1}{\overset{w_{1}}{\prod}}y_{1k,j},\underset{j=1}{ \overset{w_{y_{1}}}{\prod}}y_{s1,j},\ldots,\underset{j=1}{\overset{w_{y_{s}},\ell}{\prod}}y_{s,\ell,j})+I(D\backslash\{x_{1},x_{n}\})^{\mathcal{P}}\) is the polarization of the edge ideal of the graph \(H\), here \(H\) is the union of the induced subgraph \(D\setminus\{x_{1},x_{n}\}\) of \(D\) and the vertex-weighted oriented graph \(H^{\prime}\) with the vertex set \(V(H^{\prime})=\{x_{21},x_{22},x_{n-1},x_{n2},y_{11,1},y_{11,2},\ldots,y_{1k,1}, y_{1k,2},y_{s1,1},y_{s1,2},\ldots,y_{s,\ell,1},y_{s,\ell,2}\}\), the edge set \(E(H^{\prime})=\{x_{21}x_{22},x_{n-1}x_{n2},y_{11,1}y_{11,2},\ldots,y_{1k,1}y_{1 k,2},y_{s1,1}y_{s1,2},\ldots,y_{s,\ell,1}y_{s,\ell,2}\}\) and a weight function \(w^{\prime}:V(H^{\prime})\rightarrow\mathbb{N}^{+}\) such that \(w^{\prime}(x_{21})=w^{\prime}(x_{n-1,1})=w^{\prime}(y_{11,1})=\cdots=w^{\prime} (y_{1k,1})=w^{\prime}(y_{s1,1})=\cdots=w^{\prime}(y_{s,\ell,1})=1\), \(w^{\prime}(x_{22})=w_{2}-1,w^{\prime}(x_{n2})=w_{n}-1,w^{\prime}(y_{1j,2})=w_{ y_{1j}}-1\geq 1\) for any \(1\leq j\leq k\leq t_{1}\) and \(w^{\prime}(y_{s,p,2})=w_{y_{s,p}}-1\geq 1\) for any \(1\leq p\leq\ell\leq t_{s}\), and \(L_{2}=(\underset{j=1}{\overset{w_{y_{1,k+1}}}{\prod}}y_{1,k+1,j},\ldots, \underset{j=1}{\overset{w_{y_{1,t_{1}}}}{\prod}}y_{1,t_{1},j},\underset{j=1}{ \overset{w_{s,\ell+1}}{\prod}}y_{s,\ell+1,j},\ldots,\underset{j=1}{\overset{ w_{s,t_{s}}}{\prod}}y_{s,t_{s},j})\). In this case, \(H\) is a forest and \(|E(H)|=|E(D)|-(t_{s}-\ell)-(t_{1}-k)-1=|E(D)|-(t_{s}-\ell+t_{1}-k+1)\). We consider the following two cases:
\((i)\) If \(k<t_{1}\) or \(\ell<t_{s}\), notice again that the variables that appear in \(K\) and \(L\), and in \(L_{1}\) and \(L_{2}\) are different, respectively. Then using Lemmas 2.6, 2.7 and 3.2, we obtain
\[\operatorname{reg}\left(J\cap K\right) =(1+w_{1})+\operatorname{reg}\left(L\right)=(1+w_{1})+ \operatorname{reg}\left(L_{1}\right)+\operatorname{reg}\left(L_{2}\right)-1\] \[=(1+w_{1})+(\underset{x\in V(H)}{\sum}w(x)-|E(H)|+1)\] \[+((\underset{j=k+1}{\overset{t_{1}}{\sum}}w_{1j}-(t_{1}-k))+1)-1\] \[=(1+w_{1}+\underset{x\in V(H)}{\sum}w(x)+\underset{j=k+1}{ \overset{t_{1}}{\sum}}w_{1j})-(|E(D)|\] \[-(t_{1}-k+1))-(t_{1}-k)+1\] \[=\underset{x\in V(D)}{\sum}w(x)-|E(D)|+2\]
where the last equality holds because of \(x_{n2}\in V(H^{\prime})\) with \(w^{\prime}(x_{n2})=w_{n}-1\) and \(x_{1}\notin V(H)\), and
\[\operatorname{pd}\left(J\cap K\right)=\operatorname{pd}\left(L\right)= \operatorname{pd}\left(L_{1}+L_{2}\right)=\operatorname{pd}\left(L_{1}\right)+ \operatorname{pd}\left(L_{2}\right)+1\]
\[=|E(H)|-1+[\sum\limits_{j=k+1}^{t_{1}}\operatorname{pd}\left((y_{1j} ^{w_{y_{1j}}})\right)+\sum\limits_{j=\ell+1}^{t_{s}}\operatorname{pd}\left((y_{sj }^{w_{y_{sj}}})\right)+1]+1\] \[=[|E(D)|-(t_{s}-\ell+t_{1}-k+1)]-1+(t_{s}-\ell+t_{1}-k-1)+1\] \[=|E(D)|-2.\]
\((ii)\) If \(k=t_{1}\) and \(\ell=t_{s}\), then \(L_{2}=(0)\) and \(|E(H)|=|E(D)|-1\). From (ii) of case (1), we also have
\[\operatorname{reg}\left(J\cap K\right)=\sum\limits_{x\in V(D)}w(x)-|E(D)|+2,\]
\[\operatorname{pd}\left(J\cap K\right)=|E(D)|-2.\]
Similar arguments as case (1), we obtain
\[\operatorname{reg}\left(I(D)\right)=\sum\limits_{x\in V(D)}w(x)-|E(D)|+1,\]
\[\operatorname{pd}\left(I(D)\right)=|E(D)|-1.\]
The proof is completed.
**Theorem 3.5**.: _Let \(D=(V(D),E(D),w)\) be a vertex-weighted oriented unicyclic graph such that \(w(x)\geq 2\) if \(d(x)\neq 1\). Then_
1. \(\operatorname{reg}\left(I(D)\right)=\sum\limits_{x\in V(D)}w(x)-|E(D)|+1\)_,_
2. \(\operatorname{pd}\left(I(D)\right)=|E(D)|-1\)_._
Proof.: Let \(D_{1},\ldots,D_{m}\) be all connected components of \(D\). Thus by Lemma 2.6, we get
\[\operatorname{reg}\left(I(D)\right)=\operatorname{reg}\left(\sum\limits_{i=1} ^{m}I(D_{i})\right)=\sum\limits_{i=1}^{m}\operatorname{reg}\left(I(D_{i}) \right)-(m-1),\]
\[\operatorname{pd}\left(I(D)\right)=|E(D)|-1\]
where \(D\) is a connected vertex-weighted oriented unicyclic graph. The conclusion follows from Theorem 3.4.
An immediate consequence of the above theorem is the following corollary.
**Corollary 3.6**.: _Let \(D=(V(D),E(D),w)\) be a vertex-weighted oriented unicyclic graph with \(m\) connected components. Let \(w(x)\geq 2\) for any vertex \(x\) with \(d(x)\neq 1\). Then_
\[\operatorname{\emph{depth}}\left(I(D)\right)=m.\]
Proof.: It follows from Auslander-Buchsbaum formula.
The following example shows that the assumption that \(w(x)\geq 2\) if \(d(x)\neq 1\) in Theorem 3.4 and Theorem 3.5 cannot be dropped.
**Example 3.7**.: Let \(I(D)=(x_{1}x_{2}^{2},x_{2}x_{3}^{2},x_{3}x_{4}^{2},x_{4}x_{5}^{2},x_{5}x_{1}^{2},x_{1}x_{6},x_{6}x_{7},x_{7}x_{8}^{2})\) be the edge ideal of a vertex-weighted oriented unicyclic graph \(D=(V(D),E(D),w)\) with \(w_{1}=w_{2}=w_{3}=w_{4}=w_{5}=w_{8}=2\) and \(w_{6}=w_{7}=1\). By using CoCoA, we get \(\operatorname{reg}\left(I(D)\right)=8\) and \(\operatorname{pd}\left(I(D)\right)=6\). In contrast, the formulas of Theorem 3.4 would give \(\sum\limits_{i=1}^{8}w_{i}-|E(D)|+1=7\) and \(|E(D)|-1=7\), which differ from the computed values.
The following two examples show that the projective dimension and the regularity of the edge ideals of vertex-weighted oriented unicyclic graphs are related to direction selection in Theorem 3.4.
**Example 3.8**.: Let \(I(D)=(x_{1}x_{2}^{3},x_{2}x_{3}^{2},x_{4}x_{3}^{2},x_{4}x_{1}^{4},x_{1}x_{5}^{2})\) be the edge ideal of a vertex-weighted oriented unicyclic graph \(D=(V(D),E(D),w)\) with \(w_{1}=4\), \(w_{2}=3\), \(w_{3}=w_{5}=2\) and \(w_{4}=1\). By using CoCoA, we get \(\operatorname{reg}\left(I(D)\right)=9\) and \(\operatorname{pd}\left(I(D)\right)=3\). In contrast, the formulas of Theorem 3.4 would give \(\sum\limits_{i=1}^{5}w_{i}-|E(D)|+1=12-5+1=8\) and \(|E(D)|-1=5-1=4\), which differ from the computed values.
**Example 3.9**.: Let \(I(D)=(x_{1}x_{2}^{2},x_{2}x_{3}^{2},x_{3}x_{4}^{4},x_{1}x_{4}^{4},x_{2}x_{5}^{2},x_{5}x_{6}^{2},x_{3}x_{7}^{2},x_{7}x_{8}^{2})\) be the edge ideal of a vertex-weighted oriented unicyclic graph \(D=(V(D),E(D),w)\) with \(w_{1}=1\), \(w_{2}=w_{3}=w_{5}=w_{6}=w_{7}=w_{8}=2\) and \(w_{4}=4\). By using CoCoA, we get \(\operatorname{reg}\left(I(D)\right)=11\) and \(\operatorname{pd}\left(I(D)\right)=6\). In contrast, the formulas of Theorem 3.4 would give \(\sum\limits_{i=1}^{8}w_{i}-|E(D)|+1=17-8+1=10\) and \(|E(D)|-1=7\), which differ from the computed values.
_Acknowledgment_.: This research is supported by the Natural Science Foundation of Jiangsu Province (No. BK20221353). We gratefully acknowledge the use of the computer algebra system CoCoA ([6]) for our experiments.
|
2309.09646 | Concurrent Haptic, Audio, and Visual Data Set During Bare Finger
Interaction with Textured Surfaces | Perceptual processes are frequently multi-modal. This is the case of haptic
perception. Data sets of visual and haptic sensory signals have been compiled
in the past, especially when it comes to the exploration of textured surfaces.
These data sets were intended to be used in natural and artificial perception
studies and to provide training data sets for machine learning research. These
data sets were typically acquired with rigid probes or artificial robotic
fingers. Here, we collected visual, auditory, and haptic signals acquired when
a human finger explored textured surfaces. We assessed the data set via machine
learning classification techniques. Interestingly, multi-modal classification
performance could reach 97% when haptic classification was around 80%. | Alexis W. M. Devillard, Aruna Ramasamy, Damien Faux, Vincent Hayward, Etienne Burdet | 2023-09-18T10:30:27Z | http://arxiv.org/abs/2309.09646v1 | # Concurrent Haptic, Audio, and Visual Data Set During Bare Finger Interaction with Textured Surfaces
###### Abstract
Perceptual processes are frequently multi-modal. This is the case of haptic perception. Data sets of visual and haptic sensory signals have been compiled in the past, especially when it comes to the exploration of textured surfaces. These data sets were intended to be used in natural and artificial perception studies and to provide training data sets for machine learning research. These data sets were typically acquired with rigid probes or artificial robotic fingers. Here, we collected visual, auditory, and haptic signals acquired when a human finger explored textured surfaces. We assessed the data set via machine learning classification techniques. Interestingly, multi-modal classification performance could reach 97% when haptic classification was around 80%.
Haptic Data Set, Textured Surfaces, Machine Learning
## I Introduction
It is well known that human perception is often multi-sensory where different sources of information accessed through different sensory modalities are merged and integrated by the brain. This integration process is thought to increase the robustness of the perception of the properties of objects in the face of uncertainty, to resolve ambiguities, and to contribute to the perceptual stability of sensory scenes [1, 2, 3, 4].
In haptics, the senses which most frequently contribute to the perception of the mechanical properties of surfaces are vision and audition, in addition to touch. It stands to reason that the collection of data sets for use in perceptual studies, artificial perception, and machine learning research should include rich data from these three sensory modalities and their sub-modalities.
In vision, because an image depends on illumination, which affects shading even at small scales, richer information would be available from stereoscopic rather than cyclopean viewing [5]. In audition, the mechanical interaction in sliding contacts frequently produces audible sounds [6], which can be an important source of information [7]. In tool-mediated touch, the nature of the probe, the kinematics of scanning, and the effort applied all contribute to perceptual information.
Previous works focused on recording images of textures and associated acceleration signals produced by a rigid probe while exploring a rough surface [8, 9, 10, 11, 12]. A few studies have also recorded the sound produced during sliding interactions [13]. These studies, however, relied on rigid probes to record haptic signals such as load and acceleration. Yet other studies have recorded the load and the vibrations generated in the substrate when a bare finger slides on a textured surface [14, 15] or skin vibrations during sliding [16, 17]. To our knowledge, however, no rich data set is available with synchronised visual, audio, and haptic signals arising from a bare finger exploring surfaces. Such a data set would be useful for conducting the aforementioned perceptual, artificial perception, and machine learning studies.
This observation motivated us to create a multi-modal data set comprising the signals created when a bare finger explored varied textured surfaces. The measured signals were stereoscopic images of the surface, the position and speed of the fingertip in image coordinates, the load applied by the finger, the sound emitted, and the vibrations propagating in the finger. While the option of employing an artificial finger to carry out the recordings, e.g. [18, 19, 20], was available, we preferred to produce the recordings with real fingers.
To illustrate the opportunities afforded by this new data set, we compared the accuracy of classifiers trained with different combinations of data from different modalities to identify different surfaces. In what follows, Section II describes the data acquisition apparatus, the acquisition protocol, and the surfaces employed; Section III describes the performance of various machine learning classifiers; and Section IV discusses the results and suggests further work.
## II Methods
We recorded the interactions of one participant's finger sliding against ten different surfaces. For each surface, the recorded signals comprised high resolution visual data and five trials of one minute of haptic and audio data. The signals were:
* **Visual**: Stereoscopic images.
* **Audio**: Sound produced by the finger-surface interaction.
* **Vibration**: Vibrations propagating in the finger along three-axes of acceleration.
* **Kinematics**: Planar position and speed of the fingertip.
* **Force**: Normal and tangential force components.
### _Physical Setup_
The setup, see Fig. 1, comprised two 4K cameras, a directional microphone, a set of load cells placed under the touched texture and three 1-axis accelerometers mounted on a ring. These sensors were all connected to a single computer. The recordings were conducted in a soundproof room to focus on the sound generated by the finger interacting with the material and to limit background noise.
#### Iii-A1 Cameras
The cameras (Model QXC-700, Thustar, Dongguan, China) were fitted with integrated close-range lenses. They focused on the samples at a distance of 15 cm with an horizontal offset of 5 cm to provide stereoscopic views. The cameras were mounted on an aluminium profile structure to ensure positional stability.
_Images._ Each surface was photographed with 3840\(\times\)2160 resolution. To increase the variability in each image class, the position and intensity of the light sources were changed for each image of the same surface.
_Position and velocity._ The cameras were also used to capture the position of the fingertip interacting with a surface. The fingertip's nail was marked with nail paint to facilitate image processing and compute the position of the fingertip in image coordinates. The slow processing pipeline (4K images \(\rightarrow\) USB serial communication \(\rightarrow\) computer vision algorithms) precluded recording at high speed. The sampling rate was 15 Hz. To detect the fingertip's position, each image was first binarized via an HSV-color threshold, then a blob detection algorithm was applied to find the center of the nail-polish blob in the image. This image processing was implemented with existing OpenCV algorithms.
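A minimal sketch of this detection step is given below; the HSV bounds and the use of contour-based blob extraction are assumptions made for illustration, since the text only states that an HSV threshold and an OpenCV blob detector were used.

```python
import cv2
import numpy as np

# Assumed HSV bounds for the nail paint; the actual thresholds depend on the
# paint colour used during the recordings.
LOWER_HSV = np.array([140, 80, 80])
UPPER_HSV = np.array([170, 255, 255])

def fingertip_position(frame_bgr):
    """Return the (x, y) image coordinates of the painted nail, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)             # colour binarisation
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)                 # largest blob = nail paint
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])         # blob centroid
```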
#### Iii-A2 Load cells
Eight load cells (Model TAL230A-10Kg, HT Sensor Technology Co. Ltd., Xian, China) were placed below the surface support, see Fig.1.c) and connected to signal conditioning units (Model HX711, Avia Semiconductor Ltd., Xiamen, China). The signals were routed by a microcontroller that streamed the values coded on 23 bits at a rate of 80 Hz.
#### Iii-A3 Microphone
The directional microphone (Model 130A23, PCB Piezotronics Inc, Depew, NY, USA) had a sensitivity of 14 mV/Pa and was mounted on the same structure holding the cameras. The microphone pointed towards the centre of the texture to optimise sound capture.
#### Iii-A4 Accelerometers
The three accelerometers (Model 8730A, Kistler Instrumente AG, Winterthur, Switzerland) had a sensitivity of 10 mV/g and were mounted along three orthogonal axes on an adjustable ring. They measured the vibrations propagating through the finger during sliding.
#### Iii-A5 Signal conditioning
The accelerometers and the microphone were connected to a 4-channel laboratory amplifier and acquisition device (Model 5165A, Kistler Instrumente AG, Winterthur, Switzerland) sampling with 32 bits resolution. The audio and acceleration signals were recorded at 12.5 kHz. The synchronisation of these signals was critical. To ensure a precise and accurate temporal alignment, the synchronisation of the microphone and the accelerometers was managed directly by the signal conditioning unit at a hardware level. The data was then sent via a TCP socket to the main computer.
### _Surfaces_
We selected ten surfaces from the more than one hundred samples described in [21], subjectively chosen to represent a diversity of materials and surface topographies.
### _Recording protocol_
Fig. 1: Setup: (a) 4K cameras above a textured surface. (b) load sensing system (c). (d) Directional microphone with adjustable support. (e) Accelerometers affixed to the finger touching the surface. (f) Microphone and accelerometers connected to signal conditioning unit. (g) load sensing system connected to an Arduino. Setup installed in a sound-proofed room.
Fig. 2: Images of the ten surfaces selected from [21]. 1t: Glossy cardboard, 14t: Plastic film textured by trapped air bubbles, 18t: Insulation fibres, 23t: Leather chamois, 38t: Plastic foam, 54t: Kitchen cloth, 55t: Thick paper, 57t: Sanding cloth Siamor J P 180, 83t: Ribbed aluminium block and 108t: Glass with medium structure.
In each trial, the volunteer sat in front of the acquisition setup and waited for the start of the recording. At the beginning of each trial, the finger was placed above the texture without touching it, enabling the calibration of the load measurement.
To provide variability in the recorded movements, each trial included,
* 10 s of lateral back and forth sliding.
* 10 s of proximal/distal back and forth sliding.
* 10 s of clockwise circular sliding.
* 10 s of anti-clockwise circular sliding.
* 20 s of unconstrained sliding.
The volunteer made best efforts to keep the same speed and applied load during the constrained motion phase. During the last 20 s of unconstrained motion, the volunteer was free to vary the load and the scanning speed.
### _Data processing_
#### Ii-D1 Synchronisation
To ensure the synchronisation of each source of data, the computer kernel clock was used to add a timestamp to every chunk of data received. Each chunk was then streamed via a LabStreamingLayer architecture [22]. In parallel, the computer ran a multi-threaded process receiving all the data and feeding it to a PSQL database. The audio and acceleration streams at 12.5 kHz generated a large amount of data. To increase the database storage speed, we used the TimescaleDB PSQL extension, which is specialised for storing time series.
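The sketch below illustrates this receiving side, assuming the pylsl bindings of LabStreamingLayer; the stream name and the `store_chunk` helper (standing in for the TimescaleDB insert) are hypothetical and shown only for illustration.

```python
import time
from pylsl import StreamInlet, resolve_byprop

def store_chunk(host_stamp, samples):
    """Placeholder for the INSERT into the TimescaleDB hypertable."""
    pass

# Resolve one of the published LSL streams (the stream name is an assumption).
streams = resolve_byprop("name", "accelerometer", timeout=5.0)
inlet = StreamInlet(streams[0])

while True:
    samples, _lsl_stamps = inlet.pull_chunk(timeout=0.1)
    if samples:
        host_stamp = time.time()          # host/kernel clock timestamp for the chunk
        store_chunk(host_stamp, samples)
```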
#### Ii-D2 Image cropping
Although the cameras were positioned in close proximity to the surfaces, the captured images contained part of the setup holding the studied piece of texture. Images were cropped to keep only the region of interest centred on the texture. The new origin was subtracted from position signals to maintain the position values in image coordinates.
#### Ii-D3 Upsampling
The loads and positions were recorded at a lower sampling rate than the audio and acceleration signals. Force and position signals were up-sampled to 12.5 kHz to facilitate future data processing.
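A minimal sketch of this step is shown below; linear interpolation is an assumption, as the text does not state which interpolation scheme was used.

```python
import numpy as np

FS_HIGH = 12500.0                              # audio/acceleration sampling rate

def upsample(t_low, x_low, t_high):
    """Linearly interpolate samples x_low taken at times t_low onto times t_high."""
    return np.interp(t_high, t_low, x_low)

t_high = np.arange(0.0, 60.0, 1.0 / FS_HIGH)   # time base for one 60 s trial
# force_z_up = upsample(t_force, force_z, t_high)  # applied per recorded channel
```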
#### Ii-D4 Load representation
The surface samples had different masses. The initial load readings were subtracted from subsequent load signals. The load vector signals, denoted \(f=(f_{1},\ldots,f_{8})\in\mathbb{R}^{8}\), were mapped via least-squares optimisation to a force and torque vector \(F=(f_{x},f_{y},f_{z},\tau_{x},\tau_{y},\tau_{z})_{\mathcal{B}}\) represented in Cartesian space with origin at the image centre.
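A schematic of this mapping is sketched below, assuming a calibration matrix \(A\in\mathbb{R}^{6\times 8}\) relating the eight load-cell readings to the wrench at the image centre; how \(A\) was obtained (from the cell geometry or a calibration run) is not detailed in the text, so the fitting step is only one plausible option.

```python
import numpy as np

def loads_to_wrench(f, A):
    """f: (8,) load-cell readings, A: (6, 8) calibration matrix -> (fx, fy, fz, tx, ty, tz)."""
    return A @ f

def fit_calibration(f_samples, F_samples):
    """Fit A by ordinary least squares from calibration pairs.
    f_samples: (n, 8) raw readings, F_samples: (n, 6) known wrenches."""
    A_T, *_ = np.linalg.lstsq(f_samples, F_samples, rcond=None)  # solves f_samples @ A_T ~= F_samples
    return A_T.T
```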
## III Results
It was previously shown in the audio domain that classification was advantageously carried out with a set of features termed "Mel Frequency Cepstral Coefficients" (MFCC) [23]. A combination of these features and machine learning classifiers, such as the Support Vector Machine (SVM) or the Random Forest algorithm, allowed the authors to classify human emotions as audio classes. A similar approach was practiced in [24] where traditional machine learning classifiers were applied over a data set to perform classification.
### _Classification through stereoscopic images_
Image data augmentation was performed before the image data was passed through the classification network [25]. This step enlarged the data set and thereby improved classification performance. The classification of raw surface images was then performed using a neural network model, a simple 2D image classification model [26] shown in Fig.4. This network extracts the edges and contours of the image to identify similar features and correlate them within the same class, which leads to efficient differentiation between classes. The data set was split into training and test sets in a 70:30 ratio.
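A schematic stand-in for the small 2D CNN of Fig. 4 is sketched below; the layer widths and the input resolution are assumptions, since the text does not give them, and only the ten-class output corresponds to the ten surfaces.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(224, 224, 3), n_classes=10):
    """Small convolutional classifier; layer sizes are illustrative, not from the paper."""
    return models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```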
The neural network performed well on the image data set, with 87% accuracy on the training data and 83% on the test data. The loss curves in Fig.5 exhibit successful learning. The training and test curves remain close to each other, indicating the absence of over-fitting and under-fitting. The low training and test losses of 0.143 and 0.157, respectively, show the good performance of this classification model.
### _Classification using audio signals_
The raw audio data collected by the microphone contained redundant information. To remove this redundancy and improve classification performance, features were extracted from the raw data. Efficient features for audio classification were described in [23]. It was shown that features such as the Mel Frequency Cepstral Coefficients, spectral roll-off, chromagram, pitch, spectral centroid, spectral bandwidth, and RMS energy worked well for discriminating between different audio classes.
Fig. 4: Neural network model for image classification
Fig. 3: Schematics of the recording setup. Cameras recorded images of the material and tracked the fingertip. Load cells measured forces and torques applied by the finger to the surfaces. Accelerometers measured vibrations propagating through the finger. Microphone recorded the sounds produced by the interaction of the fingertip with the surface.
The data set was split into training and test data sets with a ratio of 70:30. We aimed to show that the audio signals recorded by the setup were sufficient to discriminate between the selected surfaces. To investigate the performance of classification models on audio data, several standard Machine Learning models were applied and their performance was assessed through the following parameters: Classification Accuracy, Precision, Recall, and F1 Score.
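A sketch of this feature-extraction and classification pipeline is shown below, using librosa and scikit-learn; averaging each descriptor over time and the classifier hyper-parameters are assumptions, and pitch estimation is omitted for brevity.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

FS = 12500  # audio sampling rate of the recordings

def audio_features(y, sr=FS):
    """Per-clip feature vector built from the descriptors listed above."""
    feats = [
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1),
        librosa.feature.spectral_rolloff(y=y, sr=sr).mean(axis=1),
        librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1),
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(axis=1),
        librosa.feature.spectral_bandwidth(y=y, sr=sr).mean(axis=1),
        librosa.feature.rms(y=y).mean(axis=1),
    ]
    return np.concatenate(feats)

# X: (n_clips, n_features) feature matrix, y_lab: surface labels, split 70:30.
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
```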
Table I shows the performance characteristics of each classifier applied to the audio data set. The Random Forest classifier performed the best for discriminating different classes of friction-induced sounds, leading to an accuracy of 71%.
### _Classification using acceleration signals_
The acceleration data captured by the three accelerometers was also split in a 70:30 ratio into training and testing data. The choice of feature sets that efficiently characterise friction-induced vibratory signals is still an open question. Since there are similarities between friction-induced vibratory signals and friction-induced sounds, we reused the feature set of the audio machine learning classification models for the acceleration signals. To this end, we had to project the three-dimensional acceleration vector onto a scalar. In the absence of a well-defined optimisation criterion, we projected the acceleration vectors onto their Euclidean norm.
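This projection amounts to the per-sample Euclidean norm, as in the short sketch below; the resulting scalar signal is then passed through the same feature pipeline as the audio.

```python
import numpy as np

def acceleration_magnitude(acc_xyz):
    """acc_xyz: (n_samples, 3) three-axis acceleration -> (n_samples,) Euclidean norm."""
    return np.linalg.norm(acc_xyz, axis=1)
```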
The classification performances of each model are shown in Table II. It is clearly seen that the Random Forest classifier again performed well in assessing the differences between different surfaces.
### _Multi-modal Classification_
_Classification through audio and acceleration signals._ Next, we investigated classification performance when audio and acceleration data were combined. Standard machine learning models were applied to a fusion of audio and haptic data after feature extraction. The fusion model boosted the classification accuracy of the classifiers to a great extent, as shown in Table III.
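A sketch of this feature-level fusion follows; simple concatenation of the per-trial feature vectors is an assumption, since the text only states that the features were fused before classification.

```python
import numpy as np

def fuse(audio_feats, accel_feats):
    """Concatenate per-trial audio and acceleration feature vectors."""
    return np.hstack([audio_feats, accel_feats])
```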
_Classification using acceleration and load signals._ In addition to accelerations, we also collected the load applied by the finger during sliding, since the applied load informs about the manner in which users explored the surfaces. Classification was performed on load data combined with acceleration data. Table IV shows the results. There was an overall enhancement in the ability of the classifiers to discriminate textures. Interestingly, the performance of multi-modal load-acceleration classification surpassed the performance of single-modality classification.
_Classification using audio, acceleration, and force signals._ A fusion model of audio data with load and acceleration data did not enhance performance much. This result can be explained by comparing the performance of the acceleration/audio classifier with that of the acceleration/load classifier. Load data contributed less to the classification process than audio data, see Table V.
### _Summary_
Fig. 6 summarises the classification results. It can be observed that the Random Forest algorithm consistently performed better than the other classification models for the audio, acceleration, acceleration/load, audio/acceleration and audio/acceleration/load classification models. Interestingly, multi-modal data classification works better than single-modal data classification. There was a significant increase in the accuracy of the acceleration classifier when audio data was introduced. Except for audio data classification, a similar pattern was observed in the performance of all classifiers (SVC, decision trees, Random Forest), where the classification accuracy of SVC was lower than that of decision trees, which in turn was outperformed by the Random Forest model.
Fig. 5: Loss curve for image classification
## IV Discussion
We built a rich data set of multi-modal sensory signals associated with exploring, with a bare finger, surfaces made of different materials and having varied topographies. This data set will be made publicly accessible. To assess the quality of this data set, we performed a series of tests with different types of machine learning classification algorithms. In a classification problem where the data set is unstructured or semi-structured and not clearly understood, Support Vector Classifiers (SVC) aid such classification and are also memory efficient. The choice to use decision trees and the Random Forest model was made because of their ease of interpretability and visualisation. However, Random Forest performs better as it prevents over-fitting on the data set, which is a significant advantage in a classification problem.
Image, audio and haptic classification performed well in spite of the unconstrained movements of the volunteer. Classification was possible with haptic signals, audio signals, and image data. The performance of the audio classifier was below that of the haptic and image classifiers. The sounds emitted during exploration were often insufficient to discriminate between surfaces. Different surfaces gave rise to highly variable sound levels since certain surfaces produced almost no sound when scanned with a real fingertip. This result would likely be different had a rigid probe been used to scan the surfaces.
Overall, the best performance was achieved by the Random Forest classifier compared to other classifiers. Random Forest classifiers make use of ensemble learning methods which work through a multitude of decision trees. Each mutually exclusive branch represents a subcategory of input features. The Random Forest algorithm was superior to the Decision Tree algorithm. The Random Forest algorithm includes bootstrap aggregation which decorrelates the decision trees corresponding to different training sets. Hence, over-fitting was prevented through voting to predict an output. This strategy was very successful in the classification of different surfaces.
Further work will involve data collection with multiple volunteers in order to understand the variability in the haptic and sound signals. A comparison of signals from different participants may help us understand how humans perceive textured surfaces in spite of great variability of the available signals and the strategies they use to achieve versatile and reliable perception.
|
2309.11009 | Controllable Dynamic Appearance for Neural 3D Portraits | Recent advances in Neural Radiance Fields (NeRFs) have made it possible to
reconstruct and reanimate dynamic portrait scenes with control over head-pose,
facial expressions and viewing direction. However, training such models assumes
photometric consistency over the deformed region e.g. the face must be evenly
lit as it deforms with changing head-pose and facial expression. Such
photometric consistency across frames of a video is hard to maintain, even in
studio environments, thus making the created reanimatable neural portraits
prone to artifacts during reanimation. In this work, we propose CoDyNeRF, a
system that enables the creation of fully controllable 3D portraits in
real-world capture conditions. CoDyNeRF learns to approximate illumination
dependent effects via a dynamic appearance model in the canonical space that is
conditioned on predicted surface normals and the facial expressions and
head-pose deformations. The surface normals prediction is guided using 3DMM
normals that act as a coarse prior for the normals of the human head, where
direct prediction of normals is hard due to rigid and non-rigid deformations
induced by head-pose and facial expression changes. Using only a
smartphone-captured short video of a subject for training, we demonstrate the
effectiveness of our method on free view synthesis of a portrait scene with
explicit head pose and expression controls, and realistic lighting effects. The
project page can be found here:
http://shahrukhathar.github.io/2023/08/22/CoDyNeRF.html | ShahRukh Athar, Zhixin Shu, Zexiang Xu, Fujun Luan, Sai Bi, Kalyan Sunkavalli, Dimitris Samaras | 2023-09-20T02:24:40Z | http://arxiv.org/abs/2309.11009v2 | # Controllable Dynamic Appearance for Neural 3D Portraits
###### Abstract
_Recent advances in Neural Radiance Fields (NeRFs) have made it possible to reconstruct and reanimate dynamic portrait scenes with control over head-pose, facial expressions and viewing direction. However, training such models assumes photometric consistency over the deformed region e.g. the face must be evenly lit as it deforms with changing head-pose and facial expression. Such photometric consistency across frames of a video is hard to maintain, even in studio environments, thus making the created reanimatable neural portraits prone to artifacts during reanimation. In this work, we propose CoDyNeRF, a system that enables the creation of fully controllable 3D portraits in real-world capture conditions. CoDyNeRF learns to approximate illumination dependent effects via a dynamic appearance model in the canonical space that is conditioned on predicted surface normals and the facial expressions and head-pose deformations. The surface normals prediction is guided using 3DMM normals that act as a coarse prior for the normals of the human head, where direct prediction of normals is hard due to rigid and non-rigid deformations induced by head-pose and facial expression changes. Using only a smartphone-captured short video of a subject for training, we demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls, and realistic lighting effects._
## 1 Introduction
The creation of photo-realistic human portraits with explicit control of head-pose and facial expressions remains a topic of active research in the computer graphics and computer vision communities. Fully controllable Neural 3D portraits are necessary in AR/VR applications where an immersive 3D experience is important. Recent advances in neural rendering and novel view synthesis [4, 5, 10, 11, 26, 27, 29, 30, 32, 35, 46, 49, 50] have demonstrated impressive image-based rendering of complex scenes and objects. Recently, these methods have also been extended to model and reanimate the human head as shown in [3, 13, 51]. These works typically employ a learnable deformation to map the deforming head to a canonical space, where the texture and geometry is predicted and rendered. The canonical space represents a mostly static appearance of the face, akin to a UV texture map, with an optional dependence on expressions [3, 13] in order to capture texture changes induced by them. The deformation, geometry, and texture are learnt via back-propagation using the photometric error with respect to the ground truth. In order for such a setup to successfully learn the appearance, deformation, and geometry, the training data must be photometrically consistent. More specifically, the color of a particular position on the human head must remain constant once mapped to the canonical space, regardless of the articulated head-pose and facial expression. However, in realistic lighting conditions, this is rarely the case.
In the real world, there is self-shadowing of the face, and the head casts its shadow on other parts of the scene as it rotates and facial expressions change. Similarly, spatially varying skin reflectance induces specularities that change with viewing angles, head poses, and facial expressions. The assumption of a static appearance in the canonical space is no longer true. As a result, deformable NeRFs trained on such data with a _static canonical appearance_ assumption suffer from registration error that leads to blurriness in the renderings and inaccurate reproduction of specularities, shading and shadows. Further, if these models use a canonical space dependent on expression parameters, such as [3, 13], the aforementioned illumination-dependent effects become entangled with them. This entanglement leads to artifacts in articulated expression and inaccurate reproduction of the illumination effects during reanimation.
Accurately reproducing illumination-dependent effects due to strongly non-ambient lighting requires a modification of the canonical space. It cannot be static; it must be dynamic, i.e., the canonical space must vary in appearance as illumination effects change in the deformed space. More specifically, in a dynamic portrait scene that is captured under constant but unknown non-ambient lighting, the following lighting effects must be reproduced in the canonical space: 1) specularities, which depend on viewing directions and surface normals; 2) shading, which depends on head-pose and facial expression deformations as they determine the relative orientation of the surface normals to the lighting; and 3) cast shadows, which depend on whether or not a strong light source is occluded with respect to a point. Capturing each of these effects, especially the cast shadows, using a physically based model is untenable in a deforming NeRF framework due to the exponentially increasing MLP evaluations that need to be performed. Our key insight is that, given enough training data, the implicit and explicit dependencies of the aforementioned illumination effects on the surface geometry, the head-pose, and the facial expression deformations can be approximated by an appropriately conditioned MLP. Based on this, we present CoDyNeRF, a method that uses a dynamic canonical appearance space, modelled by an MLP, to enable the creation of reanimatable and photorealistic neural 3D portraits using data captured in realistic lighting conditions. CoDyNeRF predicts illumination dependent effects directly in the canonical space by conditioning an MLP on the dynamic surface normals and 3DMM keypoints as well as expression and pose deformations. This conditioning makes it easier for the MLP to interpolate illumination dependent effects for novel facial expressions, head-poses and views without sacrificing the quality of facial expression and head-pose articulation.
However, a challenge remains. While 3DMM keypoints, facial expression and head-pose deformations are given by the 3DMM, the surface normals are not. Due to the lack of available ground truth geometry, it is hard to estimate accurate surface normals for each point in the scene, especially on the head and face, which are dynamic and undergo strong deformations with changing facial expression and head-pose. One possible solution is to use the normals given by the gradient of the NeRF's density field. However, as shown in Fig 4 and observed in prior work [43], these are often noisy and inaccurate. Instead, CoDyNeRF uses a carefully designed MLP that is able to leverage both 3DMM and scene normal priors to predict the surface normals. The MLP-predicted normals capture more detail than 3DMM-based normals and, due to the 3DMM prior, transform correctly as the head undergoes rigid and non-rigid deformations. These normals are then used to supervise the gradient density normals of the NeRF in order to ensure accurate reconstruction of the dynamic head geometry and, consequently, the accurate reproduction of illumination effects. Once trained, CoDyNeRF realistically reproduces shadowing, shading and specular effects during reanimation with explicit control of head pose, facial expression and camera viewpoint.
In summary, our contributions in this paper are as follows: 1) Using a dynamic canonical appearance, we are capable of
creating a fully reanimatable 3D neural portrait from data captured in strong, non-ambient lighting conditions.
2) We propose a method to predict accurate and detailed surface normals of the deforming human head, in addition to the static scene, which is critical for the dynamic appearance learning.
3) We enable a realistic re-animation of lighting and specularity effects on the human face with changing head-pose and facial expressions.
## 2 Related works
CoDyNeRF enables the realistic rendering of illumination effects for controllable neural portraits captured in challenging lighting conditions. It is closely related to recent work on neural rendering, novel view synthesis, 3D face modeling, and controllable face generation.
**Neural Scene Representations and Novel View Synthesis**. CoDyNeRF is related to recent work in neural rendering and novel view synthesis [3, 4, 5, 10, 11, 13, 23, 26, 27, 28, 29, 33, 35, 36, 40, 43, 45, 46, 47, 49, 50, 51]. Neural Radiance Fields (NeRF) use a Multi-Layer Perceptron (MLP), \(F\), to learn a volumetric representation of a scene. For every 3D point on a ray, NeRFs predict an associated color and density, which are volume rendered to give the final color. While NeRFs are able to reproduce specular effects, they do so at the cost of geometric fidelity of the scene [43]. RefNeRF [43] extends NeRFs to explicitly handle specularities of static scenes by improving the learnt surface normals, and consequently the scene geometry. While NeRFs such as RefNeRF are able to generate photo-realistic images for novel view synthesis, they are designed only for static scenes and are unable to represent scene dynamics. Our approach ensures the accurate reproduction of illumination effects when reanimating neural portraits since it is specifically designed to model the illumination effects of a dynamic scene.
**Dynamic Neural Scene Representations**. There has been an active effort to extend NeRFs to dynamic scenes. Most works do so by imposing temporal constraints either explicitly using scene flow [25, 26, 35, 46] or implicitly by using a canonical frame [32, 35]. Authors of [33] build upon [32] and model topologically changing deformations by lifting the canonical space to high dimensions. The deformation fields in these approaches are conditioned on learnt latent codes without specific physical or semantic meaning, and therefore not controllable in an intuitive manner. Further, such models only work with limited illumination change across frames, since learning a common registration becomes significantly difficult without any photometric consistency.
**Controllable Face Generation**. Generative Adversarial Networks (GANs) [12, 16, 17, 18, 19, 53] have enabled high-quality image generation and have inspired a large collection of work [2, 6, 7, 8, 22, 34, 38, 39, 41, 42] capable of face manipulation. However, it is challenging to enable high-quality view synthesis and 3D control of the portraits, as most of these works lack any 3D understanding and are purely image-based. Works such as [1, 9, 20, 21] fix this by using an intermediate 3D face representation, via a 3D Morphable Model, to reanimate face images/videos. While head poses and facial expressions are modelled with good detail in these models, thanks to the 3DMM, they are often unable to perform novel view synthesis as they focus on the face region but neglect the geometry and appearance of the scene.
In a similar vein, NerFACE [10] uses NeRF to model a 4D face avatar and allows pose/expression control on the head. However, no view synthesis can be performed on the scene and the subject is assumed to be uniformly lit throughout the capture process. Neural Head Avatars [13], IMAvatar [51] and RigNeRF [3] improve upon the results of NerFACE further by using a 3DMM prior more explicitly. Neural Head Avatars learns a per-vertex pose- and expression-conditioned deformation for the FLAME mesh along with a detailed texture, while IMAvatar [51] learns a per-point FLAME basis, used for registering different head-poses and facial expressions. RigNeRF uses the 3DMM deformation as a prior on its deformation field that maps points from the articulated space to the canonical space. Unlike our method, all three aforementioned methods require an ambiently lit face throughout the capture process and are unable to render expression- and head-pose-dependent illumination effects.
## 3 CoDyNeRF
In this section, we describe our method, CoDyNeRF, which enables the creation of reanimatable 3D portrait scenes from videos captured in the real world with non-ambient lighting conditions. A deformable Neural Radiance Field (NeRF) [30] with a per-point 3DMM-guided deformation field models facial expression and head-pose. It maps points from the deformed space to a dynamic canonical space of the model, where the volume density and the appearance are predicted. The dynamic canonical space is conditioned on the surface normals, head-pose and facial expression deformations along with other shading and shadowing based cues (Sec. 3.2). The surface normals, defined in the scene world co-ordinates, are dynamic and vary with head pose and facial expression. CoDyNeRF predicts these normals using an MLP that is trained with 3DMM normals and scene normals as a prior (Sec. 3.2.1). An overview of the full architecture is shown in Fig 2. Once trained, CoDyNeRF is not only able to control the facial expression and head pose of the subject but is also able to faithfully capture the varying illumination effects, such as specularities and shadows.
### A 3DMM-guided Deformable Neural Radiance Field
A neural radiance field (NeRF) is defined as a continuous function \(F:(\mathbf{\gamma}_{m}(\mathbf{x}(t_{i})),\mathbf{\gamma}_{n}(\mathbf{d}))\rightarrow( \mathbf{c}(\mathbf{x}(t_{i}),\mathbf{d}),\sigma(\mathbf{x}(t_{i})))\), that, given the position of a point in the scene \(\mathbf{x}(t_{i})=\mathbf{o}+t_{i}\mathbf{d}\) that lies on a ray originating at \(\mathbf{o}\) with direction \(\mathbf{d}\), outputs the color \(\mathbf{c}=(r,g,b)\) and the density \(\sigma\). \(F\) is usually represented as a multi-layer perceptron (MLP) and \(\mathbf{\gamma}_{m}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3+6m}\) is the positional encoding [30] defined as \(\mathbf{\gamma}_{m}(\mathbf{x})=(\mathbf{x},...,\sin(2^{k}\mathbf{x}(t_{i})), \cos(2^{k}\mathbf{x}(t_{i})),...)\) where \(m\) is the total number of frequency bands and \(k\in\{0,...,m-1\}\). The expected color of the pixel through which a camera ray passes is calculated via volume rendering as follows:
\[\begin{split} C=\sum_{i}w_{i}\,c(\mathbf{x}(t_{i}));\\ \text{where }w_{i}=\exp\Big(-\sum_{j<i}\sigma_{j}\left(t_{j+1}-t_{j}\right)\Big)\left(1-\exp(-\sigma_{i}\left(t_{i+1}-t_{i}\right))\right) \end{split} \tag{1}\]
The parameters of \(F\) are trained to minimize the L2 distance between the expected color and the ground-truth.
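To make the two ingredients above concrete, the following minimal NumPy sketch implements the positional encoding \(\mathbf{\gamma}_{m}\) and the volume-rendering weights of Eq. (1); the array shapes, sampling range and random inputs are illustrative assumptions and not part of the actual CoDyNeRF implementation.

```python
import numpy as np

def positional_encoding(x, m):
    """gamma_m(x): concatenate x with sin/cos features at m frequency bands."""
    feats = [x]
    for k in range(m):
        feats.append(np.sin(2.0 ** k * x))
        feats.append(np.cos(2.0 ** k * x))
    return np.concatenate(feats, axis=-1)           # shape (..., 3 + 6m)

def volume_render(sigmas, colors, t_vals):
    """Composite per-sample colors along a ray using the weights of Eq. (1).

    sigmas : (N,)   densities at the N samples
    colors : (N, 3) RGB predicted at the samples
    t_vals : (N+1,) sample boundaries along the ray
    """
    deltas = np.diff(t_vals)                         # t_{i+1} - t_i
    alphas = 1.0 - np.exp(-sigmas * deltas)          # per-segment opacity
    # transmittance: exp(-sum_{j<i} sigma_j * delta_j)
    trans = np.exp(-np.cumsum(np.concatenate([[0.0], sigmas[:-1] * deltas[:-1]])))
    weights = trans * alphas                         # w_i of Eq. (1)
    return (weights[:, None] * colors).sum(axis=0)   # expected pixel color C

# toy usage on a single ray with 64 samples
rng = np.random.default_rng(0)
t = np.linspace(2.0, 6.0, 65)
C = volume_render(rng.random(64), rng.random((64, 3)), t)
```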
NeRFs can be extended to model dynamic scenes by using a deformation field to map each 3D point of the scene to a canonical space, where the volumetric rendering takes place [3, 32, 33, 35]. The deformation field is also represented by an MLP \(D:\mathbf{x}\rightarrow\mathbf{x}_{\text{can}}\), defined as \(D(\mathbf{x},\omega_{i})=\mathbf{x}_{\text{can}}\), where \(\omega_{i}\) is a per-frame latent deformation code. Following prior work [3], we use a 3DMM prior on the deformation field as follows:
\[\begin{split}\hat{D}(\mathbf{x})&=\text{3DMMDef}(\mathbf{x},\beta_{i\text{exp}},\beta_{i\text{pose}})\\ &\quad+D(\mathbf{\gamma}_{a}(\mathbf{x}),\mathbf{\gamma}_{b}(\text{3DMMDef}(\mathbf{x},\beta_{i\text{exp}},\beta_{i\text{pose}})),\omega_{i})\\ \mathbf{x}_{\text{can}}&=\mathbf{x}+\hat{D}(\mathbf{x})\end{split} \tag{2}\]
where, \(\text{3DMMDef}(\mathbf{x},\beta_{i\text{exp}},\beta_{i\text{pose}})\) is the deformation prior given by the 3DMM, \(\beta_{i\text{exp}},\beta_{i\text{pose}}\) are the articulated facial expression and head-pose of frame \(i\), and \(\mathbf{\gamma}_{a},\mathbf{\gamma}_{b}\) are the positional encoding functions with \(a\) and \(b\) frequencies respectively. The deformation prior can be written as follows:
\[\text{3DMMDef}(\mathbf{x},\beta_{i\text{exp}},\beta_{i\text{pose}})=\frac{\text{3DMMDef}(\hat{\mathbf{x}},\beta_{i\text{exp}},\beta_{i\text{pose}})}{\exp(\text{DistToMesh}(\mathbf{x}))} \tag{3}\]
where, \(\hat{\mathbf{x}}\) is the closest point on the mesh to \(\mathbf{x}\), \(\text{DistToMesh}(\mathbf{x})=||\mathbf{x}-\hat{\mathbf{x}}||\) is the distance between \(\mathbf{x}\) and \(\hat{\mathbf{x}}\) and \(\text{3DMMDef}(\hat{\mathbf{x}},\beta_{i\text{exp}},\beta_{i\text{pose}})\) is the deformation of the vertex \(\hat{\mathbf{x}}\) as follows:
\[\text{3DMMDef}(\hat{\mathbf{x}},\beta_{i\text{exp}},\beta_{i\text{pose}})=\hat{\mathbf{x}}_{\text{FLAME}(\beta_{\text{exp,can}},\beta_{\text{pose,can}})}-\hat{\mathbf{x}}_{\text{FLAME}(\beta_{i\text{exp}},\beta_{i\text{pose}})} \tag{4}\]
where, \(\hat{\mathbf{x}}_{\text{FLAME}(\beta_{\text{exp,can}},\beta_{\text{pose,can}})}\) is the position of \(\hat{\mathbf{x}}\) in the canonical space and \(\hat{\mathbf{x}}_{\text{FLAME}(\beta_{i\text{exp}},\beta_{i\text{pose}})}\) is its position with head pose and facial expression parameters \(\{\beta_{i\text{exp}},\beta_{i\text{pose}}\}\).
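The following sketch illustrates, under simplifying assumptions, how the 3DMM-guided deformation of Eqs. (2)-(4) can be assembled: the nearest mesh vertex supplies the deformation prior, attenuated by the distance to the mesh, and a residual MLP (here a zero placeholder) adds the learned correction. The brute-force nearest-vertex search and the FLAME vertex count are illustrative only.

```python
import numpy as np

def dist_to_mesh(x, verts):
    """Distance from point x to the closest mesh vertex, and that vertex's index."""
    d = np.linalg.norm(verts - x, axis=-1)
    j = int(np.argmin(d))
    return d[j], j

def deformation_prior(x, verts_articulated, verts_canonical):
    """Deformation prior of Eqs. (3)-(4): the nearest vertex's offset to its
    canonical position, attenuated by exp(-distance to the mesh)."""
    dist, j = dist_to_mesh(x, verts_articulated)
    vertex_def = verts_canonical[j] - verts_articulated[j]   # Eq. (4)
    return vertex_def * np.exp(-dist)                        # Eq. (3)

def deform_to_canonical(x, verts_art, verts_can, residual_mlp):
    """Eq. (2): prior plus a learned residual predicted by an MLP."""
    prior = deformation_prior(x, verts_art, verts_can)
    return x + prior + residual_mlp(x, prior)

# toy usage with a random "mesh" and a zero residual standing in for D
verts_can = np.random.default_rng(1).random((5023, 3))
verts_art = verts_can + 0.01
x_can = deform_to_canonical(np.array([0.3, 0.2, 0.5]), verts_art, verts_can,
                            lambda x, p: np.zeros(3))
```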
### An Illumination-Aware Dynamic Canonical Appearance Model
In the canonical space, CoDyNeRF predicts the density and a dynamic RGB appearance. The dynamic RGB is conditioned on the surface normals, head-pose, and expression deformations along with other shading and shadowing cues, such as the reflection vector and the global location of the head. Below, we describe each aspect of the appearance model.
**A spatially conditioned density prediction model**. First, CoDyNeRF predicts the density at any point by conditioning on its position in the canonical space and its distance to the mesh:
\[\sigma(\mathbf{x}),\tau=F(\mathbf{\gamma}_{c}(\mathbf{x}_{\text{can}}),\text{ DistToMesh}(\mathbf{x})) \tag{5}\]
where, \(F\) is an MLP, \(\tau\) is a feature vector, \(\text{DistToMesh}(\mathbf{x})=||\mathbf{x}-\hat{\mathbf{x}}||\) is the distance of \(\mathbf{x}\) to the closest mesh vertex \(\hat{\mathbf{x}}\) and \(\mathbf{\gamma}_{c}\) is the positional encoding function with \(c\) frequencies.
Figure 2: **Overview of CoDyNeRF.** CoDyNeRF is a deformable NeRF architecture that consists of three learnable MLPs: a deformation MLP \(D\), a density MLP \(F\), and a dynamic appearance MLP \(\mathcal{T}\). Given an image, we shoot rays through each of its pixels. For every ray, we deform each point on it according to a 3DMM-guided deformation field, similar to prior work [3]. Next, the deformed point is given as input to the density MLP, \(F\), which predicts the density and neural features that are passed on to the dynamic appearance MLP \(\mathcal{T}\). \(\mathcal{T}\) then takes as input the normals, the reflection vector about the normal, and the pose and expression deformations, along with spherical harmonics shading and head landmark positions, to predict the dynamic RGB of the point. The final color of the pixel is calculated via volume rendering.
Additional conditioning on \(\text{DistToMesh}(\mathbf{x})\) modestly boosts both the training speed and quality of results by allowing \(F\) to distinguish between points in the canonical space that have never been deformed and points that have been deformed to the canonical space. We provide experiments supporting this in the supplementary section.
**An illumination-aware dynamic canonical appearance model.** Next, CoDyNeRF predicts the dynamic RGB conditioned on inputs that capture local geometry, surface properties and viewing direction. Since the captured neural portrait is a dynamic scene, the outgoing radiance at any point \(\mathbf{x}\) is implicitly dependent on the facial expression and head-pose, \(\{\beta_{\text{exp, pose}}\}\) (or \(\{\beta_{\text{e,p}}\}\)), since the surface properties and the incoming radiance depend on them. More specifically, at any point \(\mathbf{x}\), for a particular articulation of facial expression and head-pose \(\beta_{\text{e,p}}\), the outgoing radiance is given by the rendering equation as follows:
\[L_{r}\left(\mathbf{x},\omega_{o},\beta_{\text{e,p}}\right)=\int_{\omega_{i}} \rho\left(\mathbf{x},\omega_{i},\omega_{o},\beta_{\text{e,p}}\right)\left( \mathbf{n}\cdot\omega_{i}\right)L_{i}\left(\mathbf{x},\omega_{i},\beta_{\text {e,p}}\right)\mathrm{d}\omega_{i} \tag{6}\]
where, \(\rho\) is the articulation dependent BRDF, \(\mathbf{n}\) is the normal at \(\mathbf{x}\) and \(\omega_{i},\omega_{o}\) are the incoming and outgoing ray directions respectively. We approximate this integral in the canonical space by an MLP, \(\mathcal{T}\), as follows:
\[\left(R,G,B\right)=\mathcal{T}(\tau,\mathbf{n},\mathbf{R},\mathbf{v}_{lmk},3 \text{DMMDef}_{\text{exp}},3\text{DMMDef}_{\text{pose}},\phi_{i}) \tag{7}\]
where, \(\tau\) are features from the density prediction network from Eq. (5), \(\mathbf{n}\) is the surface normal, \(\mathbf{v}_{lmk}\) are the facial landmarks, \(\mathbf{R}=2(\mathbf{d}\cdot\mathbf{n})\mathbf{n}-\mathbf{d}\) is the reflection vector, \(\text{3DMMDef}_{\text{exp}}\coloneqq\text{3DMMDef}(\mathbf{x},\beta_{\text{exp}},\beta_{\text{pose, can}})\) is the expression-only deformation given by the 3DMM, \(\text{3DMMDef}_{\text{pose}}\coloneqq\text{3DMMDef}(\mathbf{x},\beta_{\text{exp, can}},\beta_{\text{pose}})\) is the head-pose-only deformation given by the 3DMM, and \(\phi_{i}\) is a per-frame latent vector that is learnt through optimization. Each input in Eq. (7) contains information that is essential to the prediction of accurate illumination effects. First, surface reflectance and absorption properties are captured by \(\tau\), which is predicted in the canonical space and thus is forced to only model deformation-independent properties of the surface. The surface normal \(\mathbf{n}\) is used to model shading effects and, along with the reflection vector \(\mathbf{R}\), specular effects. The face landmarks, \(\mathbf{v}_{lmk}\), along with the expression and head-pose deformations, \(\text{3DMMDef}_{\text{exp}}\) and \(\text{3DMMDef}_{\text{pose}}\), are used to model cast shadows, inter-reflections and any other illumination effects that depend on the global orientation of the head and on the deformations due to facial expressions and head-pose. The latent code \(\phi_{i}\) captures any appearance changes due to the camera.
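A minimal sketch of how the conditioning vector of the dynamic appearance MLP \(\mathcal{T}\) in Eq. (7) can be assembled is given below; the feature dimensions, the number of landmarks and the zero-valued toy inputs are illustrative assumptions, not the values used by CoDyNeRF.

```python
import numpy as np

def reflection_vector(d, n):
    """R = 2 (d . n) n - d: the viewing direction reflected about the normal."""
    return 2.0 * np.dot(d, n) * n - d

def appearance_inputs(tau, n, d, landmarks, def_exp, def_pose, phi):
    """Concatenate the conditioning of the dynamic appearance MLP T (Eq. (7))."""
    R = reflection_vector(d, n)
    return np.concatenate([tau, n, R, landmarks.ravel(), def_exp, def_pose, phi])

# toy usage; the sizes below are illustrative, not the paper's values
tau = np.zeros(128)             # canonical features from the density MLP (Eq. (5))
n = np.array([0.0, 0.0, 1.0])   # predicted surface normal
d = np.array([0.0, 0.6, -0.8])  # viewing direction
inp = appearance_inputs(tau, n, d, np.zeros((68, 3)),
                        np.zeros(3), np.zeros(3), np.zeros(32))
```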
#### 3.2.1 Prediction of Dynamic Surface Normals
Prediction of shading and specular effects (through the reflection vector \(\mathbf{R}\)) requires accurate surface normals. Within a NeRF, one straightforward way to calculate the normal at any point \(\mathbf{x}\) is to define it as the negative normalized gradient of the density field \(\sigma(\mathbf{x})\) w.r.t. \(\mathbf{x}\), i.e., \(\text{Grad}_{\mathbf{n}}(\mathbf{x})=-\frac{\nabla_{\mathbf{x}}\sigma(\mathbf{x})}{\left|\nabla_{\mathbf{x}}\sigma(\mathbf{x})\right|}\). However, as shown in Fig 4 and observed in prior work [43], unless these normals are regularized [43], they are incredibly noisy and rather unusable. To get around this, we use an MLP, \(\mathcal{N}\), to predict the normals. We design \(\mathcal{N}\) in a manner that allows it to exploit local priors from the 3D head mesh and the scene to predict the normals at each point of the dynamic neural portrait. More specifically, the normal at any point is given as follows:
\[\mathbf{n}=\mathcal{N}(\text{Mesh}_{\mathbf{n}}(\mathbf{x}),\text{Grad}_{ \mathbf{n}}(\mathbf{x}),\text{DistToMesh}(\mathbf{x})) \tag{8}\]
where, \(\text{Mesh}_{\mathbf{n}}(\mathbf{x})\) is the normal vector of the mesh vertex closest to \(\mathbf{x}\), \(\text{Grad}_{\mathbf{n}}(\mathbf{x})\) is the normal calculated from the negative gradient of the density w.r.t. the input point, and \(\text{DistToMesh}(\mathbf{x})\) is the distance of \(\mathbf{x}\) to the mesh. With these three inputs, \(\mathcal{N}\) is able to rely on the 3DMM mesh normals for points close to the head, while relying on the gradient normals \(\text{Grad}_{\mathbf{n}}(\mathbf{x})\) everywhere else. In the supplementary, we demonstrate the utility of each input to \(\mathcal{N}\) by ablating each one of them.
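The sketch below illustrates how the three inputs of Eq. (8) could be gathered for a query point; the finite-difference estimate of \(\text{Grad}_{\mathbf{n}}(\mathbf{x})\) stands in for the analytic gradient used in practice, and the toy Gaussian density blob is purely illustrative.

```python
import numpy as np

def gradient_normal(sigma_fn, x, eps=1e-3):
    """Grad_n(x) = -grad sigma / |grad sigma|, estimated with central differences."""
    grad = np.array([
        (sigma_fn(x + eps * e) - sigma_fn(x - eps * e)) / (2 * eps)
        for e in np.eye(3)
    ])
    return -grad / (np.linalg.norm(grad) + 1e-8)

def normal_inputs(x, sigma_fn, mesh_verts, mesh_normals):
    """Inputs of the normals MLP N in Eq. (8): the mesh normal of the closest
    vertex, the gradient-density normal, and the distance to the mesh."""
    d = np.linalg.norm(mesh_verts - x, axis=-1)
    j = int(np.argmin(d))
    return mesh_normals[j], gradient_normal(sigma_fn, x), d[j]

# toy usage with an isotropic Gaussian density blob and a one-vertex "mesh"
sigma = lambda p: float(np.exp(-np.dot(p, p)))
m_n, g_n, dist = normal_inputs(np.array([0.2, 0.1, 0.4]), sigma,
                               np.zeros((1, 3)), np.array([[0.0, 0.0, 1.0]]))
```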
We train \(\mathcal{N}\) through a combination of weak supervision on mesh and scene normals, and regularization losses. The prediction of \(\mathcal{N}\) is forced to be weakly consistent with the 3DMM on its vertices as follows:
\[\mathcal{R}_{Mesh,\mathbf{n}}=\lambda_{Mesh,\mathbf{n}}\sum_{v}\left\| \mathcal{N}(v)-\text{Mesh}_{\mathbf{n}}(v)\right\| \tag{9}\]
where, \(v\) are the vertices of the mesh and \(\lambda_{Mesh,\mathbf{n}}\) is the regularization constant. The normals predicted by \(\mathcal{N}\) are also forced to be forward facing by using the following regularization [43]:
\[\mathcal{R}_{dir,\mathbf{n}}=\sum_{i}w_{i}(\mathbf{x}_{i})\max\left(0, \mathcal{N}(\mathbf{x}_{i})\cdot\mathbf{d}_{i}\right)^{2} \tag{10}\]
where, \(\mathbf{x}_{i}\) are points along the ray passing through pixel \(i\) with direction \(\mathbf{d}_{i}\) and \(w_{i}(\mathbf{x}_{i})\) is the weight of \(\mathbf{x}_{i}\) per Eq. (1). In order to ensure the gradient density normals \(\text{Grad}_{\mathbf{n}}(\mathbf{x})\) are themselves accurate, we regularize both the normals predicted by \(\mathcal{N}\) and the gradient density normals to be consistent with each other:
\[\mathcal{R}_{\mathbf{n}}=\sum_{i}w_{i}(\mathbf{x}_{i})||\mathcal{N}(\mathbf{x}_{i})-\text{Grad}_{\mathbf{n}}(\mathbf{x}_{i})|| \tag{11}\]
Figure 3: **Normals Prediction Architecture.** The Normals prediction network takes as input the mesh normals of a given point \(\mathbf{x}\), its distance to the mesh and the normals given by gradient of the NeRF’s density field. The details of the architecture are discussed in Section 3.2.1.
As observed in [43], this regularization ensures that highly specular surfaces of the scene are not explained away as emissive lobes beneath the surface. The full loss on \(\mathcal{N}\) is:
\[\mathcal{L}_{\mathcal{N}}=\underbrace{\mathcal{R}_{Mesh,\mathbf{n}}}_{\text{consistency with mesh normals}}+\underbrace{\mathcal{R}_{dir,\mathbf{n}}}_{\text{normals face the camera}}+\underbrace{\mathcal{R}_{\mathbf{n}}}_{\text{consistency with gradient normals}} \tag{12}\]
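For reference, a small NumPy sketch of the three terms of \(\mathcal{L}_{\mathcal{N}}\) (Eqs. (9)-(12)) is shown below; the regularization constant and the random toy inputs are assumptions made for illustration only.

```python
import numpy as np

def mesh_normal_loss(pred_on_verts, mesh_normals, lam=1.0):
    """R_{Mesh,n} of Eq. (9): predictions on mesh vertices stay close to 3DMM normals."""
    return lam * np.linalg.norm(pred_on_verts - mesh_normals, axis=-1).sum()

def forward_facing_loss(pred_normals, ray_dirs, weights):
    """R_{dir,n} of Eq. (10): penalise normals that face away from the camera."""
    dots = np.einsum('ij,ij->i', pred_normals, ray_dirs)
    return (weights * np.maximum(0.0, dots) ** 2).sum()

def normal_consistency_loss(pred_normals, grad_normals, weights):
    """R_n of Eq. (11): predicted and gradient-density normals agree along the ray."""
    return (weights * np.linalg.norm(pred_normals - grad_normals, axis=-1)).sum()

def total_normal_loss(pred_v, mesh_n, pred_r, grad_r, dirs, w, lam=1.0):
    """L_N of Eq. (12): sum of the three regularizers."""
    return (mesh_normal_loss(pred_v, mesh_n, lam)
            + forward_facing_loss(pred_r, dirs, w)
            + normal_consistency_loss(pred_r, grad_r, w))

# toy usage: 10 mesh vertices and one ray with 64 samples
rng = np.random.default_rng(4)
L = total_normal_loss(rng.standard_normal((10, 3)), rng.standard_normal((10, 3)),
                      rng.standard_normal((64, 3)), rng.standard_normal((64, 3)),
                      rng.standard_normal((64, 3)), rng.random(64))
```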
**Sparsity regularization for fast calculation of \(\mathcal{R}_{dir,\mathbf{n}}\) and \(\mathcal{R}_{\mathbf{n}}\)**. Calculating both \(\mathcal{R}_{dir,\mathbf{n}}\) and \(\mathcal{R}_{\mathbf{n}}\) is very computationally expensive as it requires a second derivative calculation at each point along the ray (usually \(\sim\) 100 points for most standard NeRF architectures) for each sampled ray in the batch (typically around 1000 rays). A simple way to reduce the computational burden is to evaluate the above sum only on a subset of the points on a ray as follows
\[\mathcal{R}_{\mathbf{n}}= \sum_{i}w_{i}(\mathbf{x}^{\prime}_{i})||\mathcal{N}(\mathbf{x}^{ \prime}_{i})-\text{Grad}_{\mathbf{n}}(\mathbf{x}^{\prime}_{i})||; \tag{13}\] \[\text{where}\ \mathbf{x}^{\prime}_{i}\in\mathcal{S}_{i,k}\]
where, \(\mathcal{S}_{i,k}\) is the set of the top \(k\) points, sorted by weight \(w_{i}(\mathbf{x}^{\prime}_{i})\), of the ray passing through pixel \(i\). However, as the weights predicted by the NeRF are relatively broadly distributed, such regularization does not minimize Eq. (13) over the whole scene consistently. To ensure the predicted weights are more tightly distributed around the surface, we apply a Cauchy regularization to enforce sparsity [15]
\[\mathcal{R}_{cauchy}=\lambda_{c}\sum_{i}\log\left(1+\tfrac{\sigma(\mathbf{x}_{i})^{2}}{c}\right) \tag{14}\]
Similar to [15], we only apply this to the coarse MLP. Eq. (13) and Eq. (14) improve the underlying dynamic scene geometry and significantly improve the quality of the gradient density normals.
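A sketch of the sparsified consistency term of Eq. (13) and the Cauchy regularizer of Eq. (14) is given below; the choices of \(k\), \(\lambda_{c}\) and \(c\) are illustrative assumptions rather than the values used in our experiments.

```python
import numpy as np

def topk_normal_consistency(weights, pred_normals, grad_normals, k=8):
    """Eq. (13): evaluate the normal-consistency penalty only on the k samples
    with the largest volume-rendering weights along the ray."""
    idx = np.argsort(weights)[-k:]
    diff = np.linalg.norm(pred_normals[idx] - grad_normals[idx], axis=-1)
    return (weights[idx] * diff).sum()

def cauchy_sparsity(sigmas, lam_c=1e-4, c=0.5):
    """Eq. (14): Cauchy regulariser that concentrates density around the surface."""
    return lam_c * np.log1p(sigmas ** 2 / c).sum()

# toy usage on one ray of 64 samples
rng = np.random.default_rng(2)
w, s = rng.random(64), rng.random(64)
pn, gn = rng.standard_normal((64, 3)), rng.standard_normal((64, 3))
loss = topk_normal_consistency(w, pn, gn) + cauchy_sparsity(s)
```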
## 4 Experimental Results
In this section, we show results of head-pose control, facial expression control, and novel view synthesis using CoDyNeRF. For each scene, the model is trained on a short portrait video captured using a consumer smartphone.
**Baseline approaches**. We compare CoDyNeRF quantitatively and qualitatively to the following prior work for human head animation: 1) RigNeRF [3] is a method for reanimating neural portraits with full control of facial expressions, head pose and viewing direction. Similar to us, the authors use a volumetric representation to model the dynamic scene. 2) Neural Head Avatars [13] creates a neural head model by deforming a mesh with pose- and expression-dependent offsets along with an MLP-dependent texture space. 3) PointAvatar [52] uses a point cloud and an SDF as the geometric representation of the avatar in the canonical space. Additionally, PointAvatar separates the appearance of the avatar into an albedo and a normals-conditioned RGB shading. RigNeRF [3] and Neural Head Avatars [13] use an expression- and pose-dependent canonical space, while PointAvatar uses a normals-conditioned shading network to handle appearance variation during the capture. Unlike in [3], we _do not optimize_ the appearance or deformation latent code during testing. Since we want to evaluate the fidelity of reproduction of illumination-dependent effects for novel head-poses, facial expressions and views, we cannot assume access to testing frames. We always use the deformation and appearance code of the first frame during testing.
**Training and Evaluation Data**. The training and validation data were captured using an iPhone 13 Pro Max for all the experiments in the paper. In the first half of the capture, we ask the subject to enact a wide range of expressions and speech while trying to keep their head still as the camera is panned around them. In the second half, the camera is fixed at head level and the subject is asked to rotate their head as they enact a wide range of expressions. Camera parameters are calculated using COLMAP [37]. FLAME [24] parameters and spherical harmonics coefficients \(L_{lm}\) are obtained via standard photometric and landmark fitting [14]. All videos are between 50-80 seconds long; we use the first \(\sim\) 1200-1500 frames for training and the remaining 120-150 _held-out_ frames, with novel expressions and head-poses, for validation. Please find the full details of each experiment in the supplementary document.
### Evaluation on Test Data
We evaluate CoDyNeRF, along with three state-of-the-art baselines, Neural Head Avatars [13], RigNeRF [3], and PointAvatar [52], on held-out testing frames. These frames contain a variety of facial expressions and head-poses. In Fig 5, we show a qualitative comparison between CoDyNeRF and the baselines. We observe that outdoor-captured videos, where pose and expression deformations under sunlight create large appearance variations, pose a significant challenge to existing methods [3, 13, 52]. Neural Head
Figure 4: **Dynamic Surface Normals.** Surface normals directly influence the specularities and shading of the human head. Here we visualise the surface normals given by the density field of CoDyNeRF. As can be seen, the normals are able to capture the dynamic foreground with fine detail without sacrificing accuracy. In comparison, the normals of RigNeRF [3] are noisy and fail to capture the facial geometry accurately.
Avatars [13] and RigNeRF [3] entangle illumination-dependent effects with expression parameters in their canonical space and are not able to faithfully reconstruct the target appearance, creating artifacts in the overall appearance, including in shadows, specularities, and the mouth and eye regions. Neural Head Avatars [13] is able to generate some shadows and specularities, but they often lack detail and do not match the ground truth well. Further, due to their heavy reliance on an explicit face mesh, results from [13] are unable to accurately capture the head geometry in detail, resulting in an unnaturally deformed shape. RigNeRF [3], similar to NHA [13], uses a canonical space conditioned on expression and pose (through deformation features) and is able to generate some shadows and specularities. However,
Figure 5: **Qualitative comparison by reanimation with novel facial expression, head-pose, and camera view parameters.** Here we reanimate CoDyNeRF, RigNeRF [3] and Neural Head Avatars [13] with novel facial expressions and head-pose extracted from the ground-truth frame in Row 1. As can be observed in the highlighted red boxes, while RigNeRF [3] is able to generate some shadowing and specular effects on the face, they are incorrect. Additionally, RigNeRF’s results have significant artifacts around the mouth and face regions. Similarly, due to the use of an explicit mesh and the entanglement of illumination effects with expressions, Neural Head Avatars [13] is unable to recover accurate geometries or predict accurate illuminations effects. Since PointAvatar’s design does not take into account cast shadows and specularities, it struggles to reproduce them accurately as can be seen in columns (1) - (5). In contrast, our approach, CoDyNeRF, is able to accurately reproduce cast shadows and specularities without sacrificing the quality of facial expressions and head-pose articulation.
they are often inaccurate and visually unnatural. For example, the specularities in columns 1 and 2 of Fig 5 are incorrect, as is the specularity around the eyes in columns 3 and 4. Similarly, the shadow on the mouth in column 5 is incorrect. Additionally, RigNeRF [3] results have significant artifacts, especially around the mouth and the eyes: artifacts around the mouth can be seen in columns (1), (2), (3) and (5), and there are artifacts around the eyes in columns (2), (3) and (4) of Fig 5. PointAvatar [52] uses separate shading and texture networks to disentangle the albedo from the shading. However, its design does not take into account cast shadows and specularities, thus it is unable to learn and predict them*. PointAvatar is unable to reproduce the shadows and specularities on the forehead in columns (1) - (4) of Fig 5. It is also unable to predict the shadow cast by the nose on the mouth in column (5) of Fig 5. In contrast, CoDyNeRF can faithfully reproduce the shadows and specularities on the forehead in columns (1) - (4) and the shadow cast by the nose on the mouth in column (5) of Fig 5.
Footnote *: We analyse this further in the appendix section.
In contrast to prior work, CoDyNeRF is able to faithfully reproduce illumination effects and generate high quality renders. As can be seen in Fig 5, our method captures dynamic appearance details, such as shadow patterns and specularities, under varying pose and expression more accurately _without_ sacrificing the quality of expression and head-pose articulation. In Table 1, we provide quantitative evaluation of these four methods. We report image similarity measurement (PSNR), perceptual quality (LPIPS) and quality of facial appearance reconstruction (FaceMSE i.e MSE measured over
Table 1: Quantitative results for Subjects 1, 2, 3 and 4 on test data. PSNR and LPIPS are calculated over the full image, while FaceMSE is restricted to the MSE calculated over the face region. Our results are better than RigNeRF [3], Neural Head Avatars [13] and PointAvatar [52] across all subjects.
Table 2: Ablation of the dynamic appearance conditioning.
Figure 6: Applications of **CoDyNeRF**. CoDyNeRF enables the creation of Neural 3D Portraits captured in challenging lighting conditions with the accurate reproduction of illumination effects during novel expression and pose reanimation. In (a), we demonstrate that specularities, especially on the forehead and the nose, change realistically with changing viewing directions. In (b) we demonstrate the realistic reproduction of shading, shadows and specularities when the neural portrait is reanimated with a driving 3DMM (shown as white insets in the top left of each image). We see from results in the top row of (b) that the shading on the forehead and the specularities change realistically. Similarly, in the bottom row of (b) we see the nose being unshaded as the head rotates and the shadow cast by the ear changes realistically.
the face region only) on novel expressions and head-poses. As we can see, CoDyNeRF outperforms previous approaches across all subjects.
### Ablation of the Dynamic Canonical Appearance Model
In Table 2, we ablate the conditioning of the canonical appearance model on 3DMM-based inputs and normals. First, we measure the performance of the model with a static canonical space. This model does not use any surface normals or 3DMM-based information as input, and as can be seen in Table 2, it performs quite poorly. When the surface normals and the expression and pose parameters are added, the results are much better, but the model still lacks global head-position conditioning via the landmarks, i.e., \(v_{lmk}\), and thus is unable to reason about self-shadowing of the face. Our full model (third row), with conditioning from the surface normals, 3DMM expression, pose and the landmarks, performs the best.
## 5 Limitations and Conclusion
CoDyNeRF has certain limitations that we address in this section. Similar to prior work, CoDyNeRF is subject-specific and trains an individual model per scene. It also requires training sequences of at least 40 seconds in order to sample the expression and head-pose space sufficiently. CoDyNeRF also does not support relighting under novel lighting. Being a method that allows photorealistic facial reanimation, CoDyNeRF may have a potentially negative societal impact if misused. We discuss this further in the supplementary.
In conclusion, we present CoDyNeRF, a method that enables the creation of reanimatable Neural 3D Portraits captured in challenging lighting conditions. Using a dynamic canonical appearance representation, CoDyNeRF is able to accurately reproduce illumination effects with varying facial expression and head-pose. Additionally, we propose a dynamic normals prediction module that utilizes 3DMM priors, along with an importance-sampling-based regularizer, to predict accurate dynamic surface normals. We believe this work takes us a step closer to creating in-the-wild Neural 3D portraits captured with casual smartphone devices.
## 6 Acknowledgements
This work was supported in part by the CDC/NIOSH through grant U01 OH012476 and a gift from Adobe. |
2303.01241 | PANACEA: An Automated Misinformation Detection System on COVID-19 | In this demo, we introduce a web-based misinformation detection system
PANACEA on COVID-19 related claims, which has two modules, fact-checking and
rumour detection. Our fact-checking module, which is supported by novel natural
language inference methods with a self-attention network, outperforms
state-of-the-art approaches. It is also able to give automated veracity
assessment and ranked supporting evidence with the stance towards the claim to
be checked. In addition, PANACEA adapts the bi-directional graph convolutional
networks model, which is able to detect rumours based on comment networks of
related tweets, instead of relying on the knowledge base. This rumour detection
module assists by warning the users in the early stages when a knowledge base
may not be available. | Runcong Zhao, Miguel Arana-Catania, Lixing Zhu, Elena Kochkina, Lin Gui, Arkaitz Zubiaga, Rob Procter, Maria Liakata, Yulan He | 2023-02-28T21:53:48Z | http://arxiv.org/abs/2303.01241v1 | # PANACEA: An Automated Misinformation Detection System on COVID-19
###### Abstract
In this demo, we introduce a web-based misinformation detection system PANACEA on COVID-19 related claims, which has two modules, _fact-checking_ and _rumour detection_. Our _fact-checking_ module, which is supported by novel natural language inference methods with a self-attention network, outperforms state-of-the-art approaches. It is also able to give automated veracity assessment and ranked supporting evidence with the stance towards the claim to be checked. In addition, PANACEA adapts the bi-directional graph convolutional networks model, which is able to detect rumours based on comment networks of related tweets, instead of relying on the knowledge base. This _rumour detection_ module assists by warning the users in the early stages when a knowledge base may not be available.
## 1 Introduction
The dangers of misinformation have become even more apparent to the general public during the COVID-19 pandemic. Following _false_ treatment information has led to a high number of deaths and hospitalisations Islam et al. (2020). Manual verification can not scale to the amount of misinformation being spread, therefore there is a need to develop automated tools to assist in this process.
In this work, we focus on automating misinformation detection using information from credible sources as well as social media. We produce a web-based tool that can be used by the general public to inspect relevant information about the claims that they want to check, see supporting or refuting evidence, and social media propagation patterns.
For _false_ information, the commonly used and relatively reliable method for automated veracity assessment is to check the claim against a verified knowledge base, which we call _fact-checking_. Previous works such as EVIDENCEMINER Wang et al. (2020), PubMed1 and COVID-19 fact-checking sites recommended by the NHS2 are all designed to retrieve related documents/sentences from a reliable knowledge base. However, this approach leaves users to summarise a large amount of potentially conflicting evidence themselves. PANACEA, which is supported by novel natural language inference methods Arana-Catania et al. (2022), is instead able to provide automated veracity assessment and supporting evidence for the input claim. In addition, previous works retrieve results using entities in the input claim, and thus often include results related to a keyword in the input claim instead of the whole query, while PANACEA considers the whole query for better results. The supporting pieces of evidence are also ranked by their relevance score and classified according to their stance towards the input claim.
Footnote 1: [https://www.ncbi.nlm.nih.gov/pmc/](https://www.ncbi.nlm.nih.gov/pmc/)
Footnote 2: [https://library.hee.nhs.uk/covid-19/coronavirus-%28covid-19%29-misinformation](https://library.hee.nhs.uk/covid-19/coronavirus-%28covid-19%29-misinformation)
In addition to _false_ information, truthful information can also be misused to harm competitors or gain attention on social media Pennycook et al. (2020); Tsfati et al. (2020). However, the latter is harder to detect by checking reliable knowledge bases, as those are focused on _false_ information. Regarding this issue, previous work has analysed the spread of misinformation using features such as stance Zhu et al. (2021), sentiment, topics, geographical spread, the reliability of external links included in the tweet Sharma et al. (2020), origin and propagation networks Finn et al. (2014). However, it is still hard for users to identify rumours by directly looking at those features. Previous research shows that the propagation pattern is different between fake and real news, which offers additional features for early detection of misinformation on social media Zhao et al. (2020). PANACEA extends this by using tweets' propagation patterns to identify rumours. _Rumour detection_ is not as reliable as _fact-checking_, but it generalises the system to various situations that _fact-checking_
cannot cover: First, _true_ or _unverified_ information with _intent to harm_; Second, scenarios where no verified knowledge database is available. _Rumour detection_ cannot prove the truth of a claim but may alert the user about claims with a high risk of being misinformation.
Previous works have either retrieved tweets from a short fixed time period Sharma et al. (2020) or searched recent tweets Finn et al. (2014), which is limited by Twitter to only the last 7 days. We instead maintain an updated database consisting of an annotated tweet dataset with popular claims and an unlabelled stream of COVID-19 related tweets that are crawled and selected periodically to update the dataset. Besides building on the various analytic functionalities used in previous work, PANACEA improves the architecture of these elements and adds extra features to the updated dataset for more efficient results.
A screencast video introducing the system3, illustrating its use in the checking of a COVID-19 claim, and the demo4 are also available online. The system can be easily adapted to other claim topics.
Footnote 3: [https://www.youtube.com/watch?v=D1PN8_9oYso](https://www.youtube.com/watch?v=D1PN8_9oYso)
Footnote 4: [https://panacea2202.github.io/](https://panacea2202.github.io/)
PANACEA covers various types of misinformation detection related to COVID-19 with the following contributions:
* We built a new web-based system, PANACEA, which is able to perform both _fact-checking_ and _rumour detection_ with natural language claims submitted by users. The system includes visualisations of various statistical analyses of the results for a better user understanding.
* PANACEA performs automated veracity assessment and provides supporting evidence that can be ranked by various criteria, supported by novel natural language inference methods. The system is able to manage multiple user requests with low latency thanks to our development of a queuing system.
* PANACEA is able to perform automated rumour detection by exploiting state-of-the-art research on propagation patterns. The system uses an annotated dataset and streams of COVID-19 tweets are collected to maintain an updated database.
## 2 Datasets
The following datasets are used in the project:
**Knowledge Database**. This is used for fact-checking and includes COVID-19 related documents from selected reliable sources 5. The documents were cleaned and split into 300-token paragraphs to construct a reliable knowledge database, from which supporting documents are retrieved and visualised in our system.
Footnote 5: _Centers for Disease Control and Prevention_ (CDC), _European Centre for Disease Prevention and Control_ (ECDC), _WebMD_ and _World Health Organisation_ (WHO)
**PANACEA Dataset** (Arana-Catania et al., 2022), constructed from COVID-19 related data sources6, using BM25 and MonoT5 (Nogueira et al., 2020) to remove duplicate claims. This dataset includes 5,143 labelled claims (1,810 _False_ and 3,333 _True_), together with their respective text, source and claim sub-type.
**COVID-RV Dataset**. In order to fine-tune our model, we constructed a new COVID-19 related propagation tree dataset for rumour detection. Similar previous datasets are Twitter15 and Twitter16 Ma et al. (2018), which contain propagation trees of widespread tweets with rumour labels; however, they are not COVID-19 related. Our dataset has been constructed by extending COVID-RV Kochkina et al. (2023), including _the number of retweets_, _user id_, _post time_, _text_, _location_ and _tweet reply ids_ as metadata for each tweet. Each tree is annotated with a related claim chosen from our claim dataset and a stance label (_Support_ or _Refute_) towards that claim. The stance label for each tree is based purely on the content of the source tweet. In COVID-RV the conversations are annotated as either _True_ or _False_ based on the veracity of the claim and the stance of the source tweet towards it. Tweets supporting a false claim or challenging a true claim are annotated as _False_; tweets supporting a true claim or challenging a false claim are annotated as _True_. The Twitter15 and Twitter16 datasets also contain _Unverified_ conversations, which discuss claims that are neither confirmed nor denied.
**COVID Twitter Propagation Tree (Live)**. Besides the last dataset constructed for fine-tuning,
PANACEA also runs a crawler to collect a stream of COVID-19 tweets that are used to maintain an updated database. This live dataset is not annotated; instead, it is labelled by the pre-trained rumour detection model. As Twitter's search API does not allow retrieval of tweets beyond a one-week window, we retrieve COVID-19 related historical tweets based on the widely used dataset of COVID-19-TweetIDs (Chen et al., 2020), which contains more than 1 billion tweet IDs. Considering the size of the dataset, and for storage and retrieval efficiency, we filtered out less popular tweets with limited impact. To date, more than 12k propagation trees have been collected, starting from January 2020. For each tweet, its pseudo rumour label is generated by the trained model.
## 3 Architecture of PANACEA
Figure 1 shows an overview of PANACEA, including two functions: _fact-checking_ and _rumour detection_ for COVID-19. For _fact-checking_, there are three modules: (1) resource allocation system; (2) veracity assessment; and (3) supporting evidence retrieval. PANACEA also supports a unique function, _rumour detection_ by propagation patterns, which has the following modules: (1) tweet retrieval; (2) rumour detection; and (3) tweet meta-information analysis.
### Fact-Checking
**Resource Allocation System**. Users can input natural language claims into our system, and PANACEA provides autocompleted input guesses based on the current input and the claims dataset. Claim autocompletion can help users to input the claim faster, and the results for claims included within the claims dataset can be pre-computed for faster retrieval. However, if the user cannot find what they would like to check in the claims dataset, the new claim is passed to our model for real-time evaluation. Veracity assessment and evidence retrieval are based on our natural language inference model NLI-SAN (Arana-Catania et al., 2022), which needs GPU resources to run. Therefore, we built a queuing system that manages the resources and queues the claims while the GPUs are being used. The results are sent to the user. To avoid duplicate searches, a temporary copy of this result is saved in our database based on the user's IP address until the user searches for a new claim or the saved period expires.
**Veracity Assessment**. PANACEA is supported by NLI-SAN (Arana-Catania et al., 2022), which incorporates natural language inference results of claim-evidence pairs into a self-attention network. The input claim \(c\) is paired with each retrieved relevant piece of evidence \(e_{i}\) to form claim-evidence pairs, where the relevant evidence consists of the retrieved sentences described in the following paragraph. Each claim-evidence pair \((c,e_{i})\) is fed both into a RoBERTa-large7 model to obtain a representation \(S_{i}\) and into a RoBERTa-large-MNLI7 model to obtain a probability triplet \(I_{i}\) of the stance (_contradiction_, _neutrality_, or _entailment_) between the pair. Next, \(S_{i}\) is mapped to a Key \(K\) and a Value \(V\), while \(I_{i}\) is mapped to a Query \(Q\). \((Q,K,V)_{i}\) forms the input of the self-attention layer, and the outputs \(O_{i}\) for all the claim-evidence pairs are concatenated together. The output is then passed to an MLP layer to obtain the veracity assessment result (_True_ or _False_), as shown in Figure 2.
Footnote 7: [https://huggingface.co/](https://huggingface.co/)
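A minimal sketch of the NLI-SAN aggregation step described above is given below; the projection matrices, feature dimensions and random inputs are illustrative stand-ins for the RoBERTa-based encoders used in the actual system.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nli_san_head(S, I, Wq, Wk, Wv):
    """Aggregation step of NLI-SAN (sketch): for each claim-evidence pair i, the NLI
    probability triplet I_i is mapped to a query and the pair representation S_i to a
    key and a value; the attended outputs are concatenated for the final classifier.

    S : (n, d)  claim-evidence pair representations (RoBERTa-large in the system)
    I : (n, 3)  (contradiction, neutrality, entailment) probabilities per pair
    """
    Q, K, V = I @ Wq, S @ Wk, S @ Wv
    att = softmax(Q @ K.T / np.sqrt(K.shape[1]))
    O = att @ V                  # (n, d_v) attended features per pair
    return O.reshape(-1)         # concatenated input of the veracity MLP

# toy usage with 10 evidence sentences; dimensions and random weights are illustrative
rng = np.random.default_rng(3)
d, dv = 1024, 64
feat = nli_san_head(rng.standard_normal((10, d)), rng.dirichlet(np.ones(3), size=10),
                    rng.standard_normal((3, dv)), rng.standard_normal((d, dv)) / np.sqrt(d),
                    rng.standard_normal((d, dv)) / np.sqrt(d))
```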
**Supporting Evidence Retrieval**. This module includes three parts: document retrieval, sentence retrieval and corresponding metadata generation. Multi-stage retrieval is applied: first, the top 100 relevant documents are retrieved with BM25, which are then re-ranked by MonoT5 (Nogueira et al., 2020), and the top 10 documents are selected. For each of those documents, the top 3 sentences are selected. Both documents and sentences are ranked by their relevance score, which is the cosine similarity between the document/sentence embeddings and the input claim embedding. Each of those texts is represented through embeddings obtained using Sentence-Transformers with the pre-trained model MiniLM-L12-v2 (Wang et al., 2020). The corresponding metadata of the supporting documents, including type, source, relevance score, and stance towards the claim, are also shown, together with the ranked documents/sentences. Users can also
Figure 1: Architecture of PANACEA
filter or re-rank the result using the metadata. An example of documents retrieved is shown in Figure 2 and the corresponding detailed information visualisation is shown in Figure 3. On the details page, the whole document text is shown with the top 3 relevant sentences highlighted by their stance towards the input claim. The stance distribution, described in the veracity assessment module is also visualised.
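The multi-stage retrieval described above can be sketched as follows; here a single cosine-similarity ranker stands in for both the BM25/MonoT5 first stage and the MiniLM embedding ranking, and the candidate sizes (100 documents, top 10 documents, top 3 sentences) are those quoted in the text, with a smaller first-stage cut in the toy example.

```python
import numpy as np

def rank_by_cosine(query_emb, doc_embs, top_k):
    """Rank documents/sentences by cosine similarity of their embeddings to the claim."""
    q = query_emb / np.linalg.norm(query_emb)
    D = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    scores = D @ q
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]

def retrieve_evidence(claim_emb, doc_embs, sent_embs_per_doc,
                      bm25_top=100, doc_top=10, sent_top=3):
    """Multi-stage retrieval sketch: a first-stage ranker (BM25 + MonoT5 in the
    system, replaced here by the same cosine ranker) narrows the pool to
    `bm25_top` candidates, the top `doc_top` documents are kept, and for each the
    `sent_top` most similar sentences are returned with their scores."""
    cand, _ = rank_by_cosine(claim_emb, doc_embs, bm25_top)
    docs, doc_scores = rank_by_cosine(claim_emb, doc_embs[cand], doc_top)
    evidence = []
    for d, s in zip(cand[docs], doc_scores):
        sents, sent_scores = rank_by_cosine(claim_emb, sent_embs_per_doc[d], sent_top)
        evidence.append((int(d), float(s), list(zip(sents.tolist(), sent_scores.tolist()))))
    return evidence

# toy usage: 50 documents with 5 sentences each, 384-dim embeddings (as with MiniLM)
rng = np.random.default_rng(5)
docs = rng.standard_normal((50, 384))
sents = [rng.standard_normal((5, 384)) for _ in range(50)]
top = retrieve_evidence(rng.standard_normal(384), docs, sents,
                        bm25_top=20, doc_top=5, sent_top=3)
```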
### Rumour Detection
Another approach to detecting rumours that has been found to be effective [14, 15] is modelling user comments and propagation networks. Next we describe the relevant rumour detection modules of our system.
**Claim-related Tweets Retrieval**. Similar to the fact-checking module, this module includes an autocomplete function for the user's natural language input claim that guesses the input from our claims dataset. The results for existing claims are also pre-computed to retrieve tweets faster. For a claim that is not in our claim dataset, we use BM25 to retrieve the related propagation trees from the large Twitter propagation tree database maintained by the active Twitter crawler.
**Rumour Assessment and Data Analysis**. PANACEA adapts a bi-directional graph convolutional network model (BiGCN) [15] to perform rumour detection, which is trained on Twitter16 and fine-tuned on our annotated propagation trees. The reason we chose BiGCN is that it performs relatively better compared with
Figure 3: The detail page of user selected supporting document
Figure 2: Fact checking result with input claim: _coronavirus is genetically engineered._
other models in cross-dataset evaluation (Kochkina et al., 2023). For an input claim, the system gives the rumour detection result as the weighted average of the propagation trees' rumour assessment labels, \(\frac{\sum_{i\in T}n_{i}r_{i}}{\sum_{j\in T}n_{j}}\), where \(T\) is the set of retrieved propagation trees. We generate the sentiment label of each tweet with VADER8 and the stance of each tweet towards the input claim via natural language inference (Nie et al., 2020).
Footnote 8: [https://www.nltk.org/api/nltk.sentiment.vader.html](https://www.nltk.org/api/nltk.sentiment.vader.html)
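As a simple illustration of the aggregation above, the sketch below computes the weighted average assuming \(n_{i}\) is the number of tweets in tree \(i\) and \(r_{i}\in\{0,1\}\) its rumour label; this interpretation of the weights is our assumption based on the description, not an exact reproduction of the system code.

```python
def claim_rumour_score(trees):
    """Weighted average of per-tree rumour labels r_i in {0, 1}, weighted by the
    number of tweets n_i in each retrieved propagation tree."""
    num = sum(n * r for n, r in trees)
    den = sum(n for n, _ in trees)
    return num / den if den else 0.0

# toy usage: three retrieved trees of 120, 45 and 10 tweets
score = claim_rumour_score([(120, 1), (45, 0), (10, 1)])
```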
**Twitter Propagation Visualisation**. As shown in Figure 4, PANACEA has six modules, which use the metadata crawled from the tweets and generated from data analysis to visualise the propagation pattern:
1. _Tweet Count_, showing the total number of tweets related to the input claim against the posting date, and aiming to reflect the total influence and scale of discussion of the claim.
2. _Word Cloud_, showing the top 30 words in tweets refuting the input claim and the top 30 words in tweets supporting the input claim.
Figure 4: Rumour detection result with input claim: _vitamin c cures coronavirus_.
Stopwords, punctuation, and numbers are removed to reduce non-informative words.
3. _Discussion Topics_, building on Latent Dirichlet Allocation (LDA), where each topic is encoded by COVID-Twitter-BERT 9 and the representative tweet is selected by its embedding similarity with respect to the topic (a sketch of this step is given after this list). Principal component analysis (PCA) is applied to visualise each topic. The top 10 words and corresponding weights of the chosen topic are shown in a bar chart. Footnote 9: [https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2)
4. _Tweet Spread_, showing the influence of each original tweet in a scatter plot, where the radius denotes the number of tweets that are direct comments or retweets of the original tweet. The y-axis "Total Spread" also includes the number of indirect comments/retweets of the original tweet, such as retweets of retweets, etc.
5. _Propagation Graph_, showing the propagation graph between the source tweet and its comments, which depicts the information spread path. Five other claims are randomly chosen from popular claims for users to compare propagation patterns. This module aims to visualise propagation graphs in a straightforward way and help users see the difference between trees of different types.
6. _Tweet Map_, where tweets related to the input claim are plotted on the world map and coloured by their stance, with _red/yellow/blue_ representing _refute/neutral/support_. The difference in stance and popularity towards the input claim across regions can be easily seen, which reveals the local context and geo-location bias.
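A sketch of the _Discussion Topics_ step (item 3 above) is given below: LDA topics are fitted with scikit-learn, the top words per topic are listed, the representative tweet is picked by embedding similarity to the topic's mean embedding, and a 2D PCA projection is produced for the topic plot. The number of topics, the hard topic assignment and the toy tweets/embeddings are illustrative assumptions rather than the system's actual settings.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation, PCA

def topic_summary(tweets, tweet_embs, n_topics=5, top_words=10):
    """Fit LDA on the tweets, list each topic's top words, and pick the tweet whose
    embedding is most similar to the mean embedding of the topic's tweets."""
    vec = CountVectorizer(stop_words='english')
    X = vec.fit_transform(tweets)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    vocab = np.array(vec.get_feature_names_out())
    assign = lda.transform(X).argmax(axis=1)            # hard topic assignment
    summaries = []
    for t in range(n_topics):
        words = vocab[np.argsort(-lda.components_[t])[:top_words]]
        members = np.where(assign == t)[0]
        if len(members) == 0:
            summaries.append((list(words), None))
            continue
        centre = tweet_embs[members].mean(axis=0)
        rep = members[np.argmax(tweet_embs[members] @ centre)]
        summaries.append((list(words), int(rep)))
    coords = PCA(n_components=2).fit_transform(lda.transform(X))  # 2D topic plot
    return summaries, coords

# toy usage with a handful of tweets and random embeddings standing in for BERT
tweets = ["vitamin c cures covid", "masks reduce covid spread", "vaccine causes illness",
          "vitamin c does not cure covid", "wear masks in public", "vaccines are safe"]
embs = np.random.default_rng(6).standard_normal((len(tweets), 768))
summaries, coords = topic_summary(tweets, embs, n_topics=2, top_words=5)
```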
## 4 Evaluation Results
**Fact-Checking**. We investigate the performance of our system in document retrieval and veracity assessment in (Arana-Catania et al., 2022). Table 1 shows that, of the selected techniques, combining BM25 and MonoT5 is the most effective approach for document retrieval. In addition, Figure 5 shows that NLI-SAN achieves performance similar to KGAT (Liu et al., 2020), while having a simpler architecture for the application, and outperforms GEAR (Zhou et al., 2019).
## 5 Conclusion
This paper introduces a web-based system for _fact-checking_ and _rumour detection_ based on novel natural language processing models for COVID-19 misinformation detection. Going forward, we will keep updating the data, explore other methods for misinformation identification to improve the current system, and introduce more functions, as part of our continuing efforts to support the general public in identifying misinformation.
## Acknowledgements
This work was supported by the UK Engineering and Physical Sciences Research Council (grant no. EP/V048597/1). YH and ML are each supported by a Turing AI Fellowship funded by the UK Research and Innovation (grant no. EP/V020579/1, EP/V030302/1).
|
2309.12514 | Prospects for establishing limits on the SMEFT operators from the
production processes of three and four top quarks in hadron collisions | Numerical simulations of processes of three and four top quark
hadroproduction are carried out in the SMEFT model framework. The simulated
data are used to derive expected theoretical constraints on Wilson coefficients
of relevant SMEFT operators of dimension six. Obtained limits for both cases
are discussed and compared in terms of processes' sensitivity to possible BSM
contribution. Results show that operator $O_{tt}^1$ is better constrained by
the process of four top quark production, whereas other four operators
$O_{QQ}^1$, $O_{Qt}^1$, $O_{Qt}^8$ and $O_{QQ}^8$, are similarly constrained in
three and four top quark production processes. In all cases, the expected
limits taken from the simultaneous analysis of the production of three and four
top quarks are strengthened. Analytical expressions for the partial amplitudes
of the processes $tt\to tt$ and $t\bar{t}\to t\bar{t}$ caused by the operators
$O^1_{tt}$, $O^1_{QQ}$, $O^1_{Qt}$, $O^8_{Qt}$, $O^8_{QQ}$ were obtained for
the first time. Based on the expressions of the obtained partial amplitudes,
graphs of the perturbative unitarity boundary for the listed operators were
drawn. The question of how kinematic cuts motivated by partial unitarity affect
the resulting constraints on the Willson coefficients is addressed. It is shown
that in all cases the limits are getting somewhat worse if such cuts are
applied. | A. Aleshko, E. Boos, V. Bunichev, L. Dudko | 2023-09-21T22:41:20Z | http://arxiv.org/abs/2309.12514v3 | Prospects for establishing limits on the SMEFT operators from the production processes of three and four top quarks in hadron collisions
###### Abstract
Numerical simulations of three and four top quark hadroproduction processes are carried out in the SMEFT model framework. The simulated data are used to derive expected theoretical constraints on the Wilson coefficients of the relevant SMEFT operators of dimension six. The obtained limits for both cases are discussed and compared in terms of the processes' sensitivity to a possible BSM contribution. Results show that the operator \(O^{1}_{tt}\) is better constrained by the process of four top quark production, whereas the other four operators \(O^{1}_{QQ}\), \(O^{1}_{Qt}\), \(O^{8}_{Qt}\) and \(O^{8}_{QQ}\) are similarly constrained in the three and four top quark production processes. In all cases, the expected limits obtained from the simultaneous analysis of the production of three and four top quarks are strengthened. The question of how kinematic cuts motivated by partial unitarity affect the resulting constraints on the Wilson coefficients is addressed. It is shown that in all cases the limits become somewhat worse if kinematic cuts reflecting unitarity are applied.
## 1 Introduction
Currently, no experimental evidence of physics beyond the Standard Model (BSM) has been observed. In the pursuit of New Physics, researchers are inclined to try "indirect" approaches, in which one seeks BSM manifestations in the interactions of already known Standard Model particles. The conventional assumption here is that New Physics is on a scale beyond our direct reach at the moment, but should still manifest itself on lower scales in the form of modified SM interactions, which can be measured and analyzed. A convenient framework for such analyses is the Standard Model Effective Field Theory (SMEFT) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (see also reviews [11, 12] and references therein). The SMEFT approach is to parametrize BSM effects in a model-independent way in terms of higher-dimensional gauge-invariant operators. If only operators of dimension six are retained, then the SMEFT Lagrangian reads as follows:
\[L=L_{SM}+\sum\frac{c_{i}}{\Lambda^{2}}O^{d=6}_{i}, \tag{1}\]
where \(L_{SM}\) is the SM Lagrangian, \(\Lambda\) is the hypothetical scale of BSM physics, \(O^{d=6}_{i}\) are local composite SMEFT operators of dimension six, and \(c_{i}\) are dimensionless Wilson coefficients. We consider here only terms of dimension six since, in the processes under study, they are leading in the
expansion in the inverse scale \(\Lambda^{-1}\). In the SMEFT framework any observable, in particular the cross-section, can be parametrized in the following form:

\[\sigma=\sigma_{SM}+\sum_{k}\frac{c_{k}}{\Lambda^{2}}\sigma_{k}^{(1)}+\sum_{j\leq k}\frac{c_{j}c_{k}}{\Lambda^{4}}\sigma_{k,j}^{(2)}, \tag{2}\]
where \(\sigma_{SM}\) is the SM value, and \(\sigma^{(1)}\) and \(\sigma^{(2)}\) are coefficients representing the linear and quadratic (in terms of the EFT couplings) contributions of the SMEFT operators. Given the measurement of \(\sigma\) and the calculated values of \(\sigma_{SM}\), \(\sigma^{(1)}\) and \(\sigma^{(2)}\), one can estimate constraints on the Wilson coefficients \(c_{i}\). These constraints can in turn be used to calculate limits on physical parameters in various SM extensions.
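As a simple numerical illustration of this parametrization for a single operator, the sketch below evaluates Eq. 2 for a chosen Wilson coefficient; all numbers in it are placeholders, not results of this work.

```python
import numpy as np

def sigma_total(c, sigma_sm, sigma1, sigma2, Lambda=1.0):
    """Evaluate Eq. 2 for a single operator.

    c        : Wilson coefficient (dimensionless)
    sigma_sm : SM cross-section
    sigma1   : linear EFT coefficient (same units as sigma_sm * TeV^2)
    sigma2   : quadratic EFT coefficient (same units as sigma_sm * TeV^4)
    Lambda   : new-physics scale in TeV (conventionally 1 TeV)
    """
    return sigma_sm + (c / Lambda**2) * sigma1 + (c**2 / Lambda**4) * sigma2

# Placeholder numbers, for illustration only:
print(sigma_total(c=1.0, sigma_sm=13.2, sigma1=0.5, sigma2=2.0))
```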
A search for four top quark production was performed and upper limits on its cross section have been established [13, 14, 15] in proton-proton collisions at \(\sqrt{s}=13\) TeV. The process was observed recently, with a measured cross-section of \(17.7^{+6.0}_{-5.4}\) fb [16] by the CMS and \(22.5^{+6.6}_{-5.5}\) fb [17] by the ATLAS experiments. The SM NLO QCD cross section at 14 TeV has been calculated [18] for various choices of the factorisation and renormalisation scales. In addition, the computation of NLO QCD and EW corrections in the SM has been carried out in [19] at 13 and 100 TeV energies. The SM cross section at next-to-leading logarithmic accuracy including threshold non-logarithmic corrections (so-called NLL\({}^{\prime}\)) is now available for 13 and 13.6 TeV [20]. A complete tree-level QCD and electroweak analysis including the contributions of all five relevant SMEFT operators of dimension six was presented, and expected individual theoretical limits were given at various energies [21]. The CMS and ATLAS collaborations also presented experimental limits on the corresponding dimension-six operators [13, 17]. One should stress that all the computations of the four top quark production cross section at the corresponding energies are consistent with each other and with the mentioned experimental measurements within the claimed uncertainties.
One can also consider processes with 3, 5, etc. top quarks in the final state when searching for SMEFT manifestations besides four top production. In the current work, we consider the process of three top quark production and compare its sensitivity to the contributions of the SMEFT operators with that obtained in the relatively well-studied four top quark production process, and also investigate how the sensitivity improves when both processes are taken into account simultaneously.
Three top quark production has a LO SM cross-section of 1.9 fb [22, 23] and has not been experimentally observed yet. Nevertheless, one can estimate possible constraints on the relevant Wilson coefficients following from three top quark production by comparing the SM and SMEFT theoretical cross sections. We use the SM cross section as an effective "measurement" and compare it with the SMEFT cross section as given in Eq. 2. In order to compare the expected sensitivities obtained from the three and four top quark production processes, the same analysis procedure is applied to both processes to determine the limits. The expected limits obtained are also compared with the currently known experimental limits, which exist only for the production of four top quarks [13, 17].
One potentially serious problem arising in the SMEFT approach is that the contributions of EFT operators grow too fast with energy. An important consequence is that one should be very careful with the potential violation of unitarity. In the current work, we investigate this problem by studying the effect of the partial unitarity requirement on the extracted limits on the Wilson coefficients.
Thus, the work has two main goals: to estimate the theoretically expected limits on the Wilson coefficients obtained from the production of three and four top quarks, and to study the impact of the unitarity requirement. The paper is organized as follows. Section 2 contains computation details, cross sections and theory uncertainties. The partial unitarity limits
are discussed in Section 3. In Section 4 the methodology for obtaining the limits is described. The final results and comparisons are presented in the Summary.
## 2 Cross sections and theoretical uncertainties
### Simulation details
In the present work, most of the calculations are conducted using the MadGraph5_aMC@NLO package [24]. Some of the results were also cross-checked with the CompHEP package [25, 26]. For modeling the SMEFT operator contributions, we use the SMEFTatNLO model [27]. Simulations at LO and NLO are done using the NNPDF31_lo_as_0118 and NNPDF31_nlo_as_0118_luxqed [28] PDF sets, respectively. The value of \(\alpha_{s}\) is taken from the PDF set. The mass of the top quark is set to 172.5 GeV. To obtain the factorization/renormalization scale uncertainties, we perform calculations at various characteristic scale points as implemented in MadGraph. All other settings are taken as default in MadGraph unless stated otherwise.
### Computations in Standard Model
Representative LO diagrams for four top SM production are depicted in figure 1. In our study we use NLO corrections to the cross section of four top quark production as calculated in the SM [18, 19]. The numerical value of the cross-section strongly depends on the factorization/renormalization scale \(\mu_{F/R}\) and the PDF set chosen. In Table 1, SM cross sections at \(\sqrt{s}=13\) TeV are presented. We explore different choices of factorization/renormalization scales, which are often used for this process: \(H_{t}/2\), \(2m_{top}\), and \(m_{top}\). \(H_{t}/2\) is defined as half of the scalar sum of the transverse momenta of all final state particles, while \(m_{top}\) corresponds to the fixed scale equal to the mass of the top quark. With the current PDF set, the latter choice of scale provides a smaller K-factor, hence we will use it for all subsequent computations.
Figure 1: Representative Feynman diagrams of four top quarks LO production in the Standard Model.
Figure 2: Representative Feynman diagrams of three top quarks LO production in the Standard Model.
Three top quark production has also been studied in several works [22, 23, 29, 30]. Representative diagrams for three top LO SM production are shown in figure 2. A full set of diagrams can be found in [22]. The three top production is also strongly dependent on the choice of the QCD factorization scale, the main consequence of which is a relatively large scale uncertainty. To correctly compare the two processes we also want to use NLO corrections to the cross section of three top quark production, similar to the four top quark production. The calculation of triple top production with NLO corrections is tricky due to the presence of resonant diagrams in the real part of the correction. These are diagrams of the type \(pp\to t\bar{t}t\bar{t}\) with the decay \(t\to Wb\). The LO cross-section for the production of four top quarks is an order of magnitude greater than that for triple top production, which spoils the convergence of the perturbative series. From a computational point of view, this means that the simulation will produce very unstable results (or will not converge at all). To obtain precise results, the issue should be treated in the same way as for \(tWb\) production processes, which is beyond the scope of the current work. Therefore, for the current exploratory goal of this paper, we simply use the general Diagram Reduction procedure as implemented in the MadSTR plugin [31] for MadGraph. Table 2 contains results for three and four top quark SM production at c.o.m. energies of 13 and 14 TeV.
### SMEFT computations
The introduction of SMEFT operators adds new possible interaction vertices, which modifies SM cross-sections. In this work following four-fermion SMEFT operators were considered, which contribute to both three and four top quark production processes:
\[O^{1}_{tt} =(\bar{t}_{R}\gamma^{\mu}t_{R})(\bar{t}_{R}\gamma_{\mu}t_{R}), \tag{3}\] \[O^{1}_{QQ} =(\bar{Q}_{L}\gamma^{\mu}Q_{L})(\bar{Q}_{L}\gamma_{\mu}Q_{L}),\] \[O^{1}_{Qt} =(\bar{Q}_{L}\gamma^{\mu}Q_{L})(\bar{t}_{R}\gamma_{\mu}t_{R}),\] \[O^{8}_{Qt} =(\bar{Q}_{L}\gamma^{\mu}T^{A}Q_{L})(\bar{t}_{R}\gamma_{\mu}T^{A }t_{R}),\] \[O^{8}_{QQ} =(\bar{Q}_{L}\gamma^{\mu}T^{A}Q_{L})(\bar{Q}_{L}\gamma_{\mu}T^{A }Q_{L}),\]
The SMEFT model introduces several additional parameters, namely the NP scale \(\Lambda\) and
\begin{table}
\begin{tabular}{c|c|c|c} \hline Order & \multicolumn{3}{c}{LO} \\ \hline Scale, \(\mu_{F/R}\) & cross-section, \(\sigma\) pb & \(\delta_{scale}\) & \(\delta_{PDF}\) \\ \cline{2-4} \(m_{top}\) & 9.03 & +73\% -39\% & \(\pm\) 7.3\% \\ \(2m_{top}\) & 5.47 & +65.1\% -36.9\% & \(\pm\) 6.81\% \\ \(H_{t}/2\) & 3.93 & +60\% -35\% & \(\pm\) 6.4\% \\ \hline \hline Order & \multicolumn{3}{c}{QCD NLO} \\ \hline Scale, \(\mu_{F/R}\) & cross-section, \(\sigma\) pb & \(\delta_{scale}\) & \(\delta_{PDF}\) \\ \cline{2-4} \(m_{top}\) & 13.2 & +9.5\% -20.1\% & \(\pm\) 2.7\% \\ \(2m_{top}\) & 11.2 & +26.7\% -25.0\% & \(\pm\) 2.6\% \\ \(H_{t}/2\) & 8.4 & +29.6\% -25.1\% & \(\pm\) 2.5\% \\ \hline \end{tabular}
\end{table}
Table 1: Cross-sections of four top quark hadroproduction with the corresponding scale/PDF uncertainties.
Wilson coefficients \(c_{i}\) for each SMEFT operator. The scale \(\Lambda\) is conventionally set to 1 TeV, while the \(c_{i}\) are our "parameters of interest", on which we want to set bounds.
The main idea of this work is to use the theoretical value of the cross-section as an approximation for the experimental value in Eq. 2 to obtain theoretical constraints on the Wilson coefficients of operators listed in Eq. 3.
To extract values for the parametrization coefficients \(\sigma^{(1)}\) and \(\sigma^{(2)}\), a series of simulations using the SMEFT model was conducted. The latest versions of MadGraph allow for direct computation of the squared and interference SMEFT terms; however, this is not advised, as the computation of small interference parts can potentially be unstable. Therefore, the following procedure of coefficient extraction is utilized. Our approach is to calculate the cross-section with two opposite values of \(c_{i}\) (i.e. \(c_{i}\) and \(-c_{i}\)). In principle, one can take an arbitrary value of \(c_{i}\), since it only affects the coefficients in the set of linear equations generated by Eq. 2, which, when solved for \(\sigma^{(1)}\) and \(\sigma^{(2)}\), provide the same values regardless of the choice. One should note, however, that due to the choice of \(\Lambda\) the value of \(c_{i}\) is required to stay within the \([-4\pi,4\pi]\) range to ensure the stability of the perturbation series. We tested several choices of \(c_{i}\) and settled for the simple \(c_{i}=\pm 1\) for the sake of clarity. Hence, to obtain the coefficients \(\sigma^{(1)}\) and \(\sigma^{(2)}\) we calculate the cross-section twice with \(c_{i}=\pm 1\) and solve the linear equation set obtained from Eq. 2.
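A minimal sketch of this extraction step for a single operator is given below; it simply inverts the relation of Eq. 2 for two runs at \(c=\pm 1\). The cross-section values used in the example are placeholders, not the values obtained in this work.

```python
import numpy as np

def extract_eft_coeffs(sigma_plus, sigma_minus, sigma_sm, c=1.0, Lambda=1.0):
    """Solve Eq. 2 for sigma^(1) and sigma^(2) for a single operator,
    given two runs with Wilson coefficient +c and -c:

        sigma(+c) = sigma_sm + (c/L^2) * s1 + (c^2/L^4) * s2
        sigma(-c) = sigma_sm - (c/L^2) * s1 + (c^2/L^4) * s2
    """
    A = np.array([[ c / Lambda**2, c**2 / Lambda**4],
                  [-c / Lambda**2, c**2 / Lambda**4]])
    b = np.array([sigma_plus - sigma_sm, sigma_minus - sigma_sm])
    s1, s2 = np.linalg.solve(A, b)
    return s1, s2

# Illustrative placeholder numbers only:
s1, s2 = extract_eft_coeffs(sigma_plus=16.0, sigma_minus=12.0, sigma_sm=13.2)
print(s1, s2)
```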
The described procedure is conducted for all of the SMEFT operators listed in Eq. 3, as well as for both the three and four top production processes. The values of \(\sigma^{(1)}\) and \(\sigma^{(2)}\) thus obtained are used in Section 4 to derive limits on the Wilson coefficients \(c_{i}\). However, before moving to the calculation of the restrictions on the Wilson coefficients, an important point to discuss is the problem of a potential violation of perturbative unitarity.
## 3 Optical theorem and perturbative unitarity
Effective operators lead to cross sections that grow with energy, which eventually violates unitarity. For our calculations to be self-consistent, we must check that we do not consider kinematic regions where perturbative unitarity is violated. To estimate the admissible range of parameters, we apply the optical theorem, which follows from the unitarity of the S
\begin{table}
\begin{tabular}{l|c|c|c} \multicolumn{4}{c}{4 top production, LO} \\ \hline C.o.m. energy, TeV & SM cross.-sect. \(\sigma_{SM}\), fb & scale uncert., \% & PDF uncert., \% \\ \hline
13 & 9.02 & 73.4 & 7.32 \\
14 & 12.0 & 72.6 & 7.28 \\ \hline \multicolumn{4}{c}{4 top production, NLO} \\ \hline
13 & 13.2 & 20.1 & 2.5 \\
14 & 17.8 & 20.3 & 2.5 \\ \hline \hline \multicolumn{4}{c}{3 top production, LO} \\ \hline C.o.m. energy, TeV & SM cross.-sect. \(\sigma_{SM}\), fb & scale uncert., \% & PDF uncert., \% \\ \hline
13 & 1.16 & 30.1 & 7.1 \\
14 & 1.5 & 29.2 & 6.64 \\ \hline \multicolumn{4}{c}{3 top production, NLO} \\ \hline
13 & 1.78 & 20.0 & 3.3 \\
14 & 2.3 & 19.4 & 3.1 \\ \end{tabular}
\end{table}
Table 2: SM cross-sections for the three and four top quark production processes (\(pp\to 3top+X\) and \(pp\to t\bar{t}t\bar{t}\)) at LO and NLO.
matrix. The optical theorem states that the imaginary part of the forward scattering amplitude is proportional to the total cross section of the process:
\[\sigma=\frac{1}{s}\mbox{Im}\left(A(\theta=0)\right)=\frac{16\pi}{s}\sum_{l=0}^{ \infty}(2l+1)|a_{l}|^{2}, \tag{4}\]
where \(a_{l}\) is the partial-wave amplitude. Therefore, \(\mbox{Im}\,a_{l}=|a_{l}|^{2}\) and
\[|\mbox{Re}(a_{l})|^{2}+\left[\mbox{Im}(a_{l})-\frac{1}{2}\right]^{2}=\frac{1}{4}. \tag{5}\]

In particular, for the lowest partial wave this implies the well-known bound

\[|\mbox{Re}(a_{0})|<\frac{1}{2}. \tag{6}\]

The partial amplitude \(a_{0}\) is obtained from the \(2\to 2\) amplitude as

\[a_{0}=\frac{1}{16\pi\lambda}\left|\int\limits_{t_{-}}^{t_{+}}dt\cdot A\right| \tag{7}\]
where \(\lambda\) is the kinematic triangle (Källén) function and \(A\) is the amplitude of the process.
Effective four-fermion operators are obtained by functional integration over massive modes of intermediate vector fields. A four-particle vertex can be formed from three-particle vertices of the interaction of two fermions with an auxiliary massive vector field and propagators of this massive field. In this case, the denominator of the auxiliary field propagator is replaced by a constant equal to the scale of the new physics. The effective vertex of the interaction of right-handed polarized fermions with the auxiliary vector field has the form \(\gamma(1+\gamma_{5})/\sqrt{2}\), and that of left-handed polarized fermions - \(\gamma(1-\gamma_{5})/\sqrt{2}\).
Using the Weyl representation for spinors and the method of helicity amplitudes, the \(2\to 2\) amplitudes of the \(tt\to tt\) and \(t\bar{t}\to t\bar{t}\) processes were calculated for each anomalous operator. It should be noted that the \(tt\to tt\) process includes t- and u-channel components, and the \(t\bar{t}\to t\bar{t}\) process includes s- and t-channel components, which were taken into account in the calculation. For each process, out of 16 possible helicity amplitudes, the one that gives the greatest contribution was chosen. The resulting helicity amplitudes were expressed in terms of the Mandelstam invariant variables and integrated over the two-particle phase space to calculate the corresponding partial amplitudes.
Now let us take a closer look at each of the listed processes. The amplitude of the \(tt\to tt\) process caused by the operator \(O^{1}_{tt}\) is formed by two interacting right-handed fermionic currents and consists of one t-channel and one u-channel component. For simplicity, we fixed the angle \(\phi=0\). The helicity amplitudes of this process do not depend on the angle \(\theta\). To evaluate the applicability of the \(O^{1}_{tt}\) operator, we chose the dominant helicity amplitude, which is proportional to \(s\), the square of the invariant mass of the two top quarks. The remaining helicity amplitudes are proportional to the top-quark mass, and their relative contribution decreases with increasing s. Analytical expressions for the dominant helicity amplitude and the partial amplitude calculated on its basis are given in the Appendix in formulas (8) and (9), respectively. It can be seen that the partial amplitude of the process is proportional to the factor \((1+\beta)^{2}\), which tends to 4 for large s.
The process \(tt\to tt\) with the operator \(O^{1}_{QQ}\) is caused by the interaction of two left-handed currents and its dominant helicity amplitude looks similar to the case of the operator \(O^{1}_{tt}\). The corresponding expressions for the amplitudes are given in formulas (11) and (12).
For a process with the operator \(O^{1}_{Qt}\) the picture becomes more complicated. This process is caused by the interaction of the left and right currents of massive fermions. To the t-channel and u-channel amplitude components, two more are added, in which the right-handed and left-handed three-particle vertices are swapped. The dominant helicity amplitude of the process still does not depend on the angle \(\theta\) and is directly proportional to s (formula 13), however, due to a different combination of initial and final polarizations of fermions, as well as a different composition of the diagrams, the common factor of the dominant helicity amplitude is now proportional to \((1+\beta^{2})\). This factor tends to 2 as s increases, therefore the partial amplitude of the process \(tt\to tt\) with the operator \(O^{1}_{Qt}\) (formula 14) is half the corresponding amplitude with the operators \(O^{1}_{tt}\) or \(O^{1}_{QQ}\).
Now consider the process \(tt\to tt\) with the operators \(O^{8}_{QQ}\) and \(O^{8}_{Qt}\). The Lorentz structure of the corresponding amplitudes (formulas 15 and 17) is similar to the cases of the operators \(O^{1}_{QQ}\) and \(O^{1}_{Qt}\); however, the color factors \(\lambda^{a}_{j,i}\lambda^{a}_{k,l}/4\) additionally appear in the diagrams. In order to estimate the level of maximum influence of the operators \(O^{8}_{QQ}\) and \(O^{8}_{Qt}\), we took the largest possible value of the color factor \(\lambda^{a}_{j,i}\lambda^{a}_{k,l}/4=1/2\). Thus, the obtained partial amplitudes for the process \(tt\to tt\) with the operators \(O^{8}_{QQ}\) and \(O^{8}_{Qt}\) are two times smaller than the corresponding amplitudes with the operators \(O^{1}_{QQ}\) and \(O^{1}_{Qt}\).
The listed operators also contribute to the processes \(t\bar{t}\to t\bar{t}\). The amplitudes of this process include s-channel and t-channel components. The dominant helicity amplitude of the process \(t\bar{t}\to t\bar{t}\) with the participation of the operator \(O^{1}_{tt}\) (formula 19) has an angular dependence and is proportional to the factor \(-(1+\cos\theta)\), which is proportional to the Mandelstam variable u. Also, this helicity amplitude is proportional to the factor \((1+\beta)\), which tends to 2 as the value of s increases. Technically, the difference in the dominant helicity amplitudes for the processes \(tt\to tt\) and \(t\bar{t}\to t\bar{t}\) is due to a different redistribution of momenta between the initial and final states, as well as different combinations of initial and final polarizations. These differences lead to the fact that at large s the partial amplitudes for the processes \(tt\to tt\) with the operators \(O^{1}_{tt}\) and \(O^{1}_{QQ}\) are 4 times larger than the corresponding partial amplitudes for the processes \(t\bar{t}\to t\bar{t}\). At the same time, for large values of s, the partial amplitude with the operator \(O^{1}_{Qt}\) for the process \(t\bar{t}\to t\bar{t}\) coincides with the corresponding partial amplitude of the process \(tt\to tt\).
From the calculations performed, it is clear that the partial amplitudes for the processes \(t\bar{t}\to t\bar{t}\) do not exceed the partial amplitudes for the processes \(tt\to tt\), therefore, further, to estimate the limits of applicability of the operators, we will use only partial amplitudes for \(tt\to tt\) processes.
It should be noted that the considered dominant helicity amplitudes do not allow us to correctly estimate the contribution of processes with different operators. For such a comparison, it is necessary to calculate the full squared modulus of the amplitude based on all the helicity amplitudes of the process. We used the dominant helicity amplitudes only to estimate the limits of applicability of the corresponding anomalous operators. We also note that the use of top quark pair production processes to determine the unitarity limit of applicability of the four-fermion operators provides a more conservative estimate than directly using the processes of production of three or four top quarks. We extend the values of the unitarity limits obtained in the \(tt\to tt\) processes to all processes involving the four-fermion operators under study. We used the obtained restrictions on the invariant mass of a pair of top quarks to set cutoffs on the invariant mass of three and four top quarks in the corresponding processes.
Based on the calculated partial amplitudes for the operators under study, we plotted graphs
of the boundary of perturbative unitarity, where the invariant mass of a pair of top quarks is plotted along the x-axis, and the current upper experimental limits for the Wilson coefficients of anomalous operators are plotted along the y-axis. The intersection of the dotted lines corresponding to the accuracy of the Wilson coefficient measurements with the line of the unitarity boundary shows the value of the invariant mass of a pair of top quarks above which the effective field theory approach does not work. As experimental limits for the Wilson coefficients, their current values of measurement accuracy in the processes of production of three top quarks are taken.
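The boundary curve for a given operator can be obtained directly from the corresponding partial amplitude; for example, for \(O^{1}_{tt}\) in \(tt\to tt\), Eq. (9) of the Appendix with \(|a_{0}|=\frac{1}{2}\) gives \(|C^{1}_{tt}/\Lambda^{2}|\leq 4\pi/(s\,\beta(1+\beta)^{2})\). Below is a minimal Python sketch of this curve; it is illustrative only and does not reproduce the exact curves of Fig. 3.

```python
import numpy as np

M_T = 0.1725  # top-quark mass in TeV (the value used in the simulation)

def beta(s):
    """Velocity factor of a top quark pair with invariant mass squared s (TeV^2)."""
    return np.sqrt(1.0 - 4.0 * M_T**2 / s)

def unitarity_bound_Ott(m_tt):
    """Upper bound on |C_tt^1 / Lambda^2| (TeV^-2) from |a_0| = 1/2 for tt -> tt,
    using a_0 = (C/Lambda^2) * s * beta * (1 + beta)^2 / (8*pi) (Eq. 9)."""
    s = m_tt**2
    b = beta(s)
    return 4.0 * np.pi / (s * b * (1.0 + b)**2)

# Allowed |C/Lambda^2| lies below this boundary for each invariant mass:
for m in [0.5, 1.0, 1.5, 2.0, 3.0]:  # TeV
    print(f"m_tt = {m:4.1f} TeV  ->  |C/Lambda^2| < {unitarity_bound_Ott(m):6.2f} TeV^-2")
```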
In Fig. 3, the red color marks the values at which the partial amplitude \(a_{0}=\frac{1}{2}\). The green color marks the zone of parameters allowed from the point of view of perturbative unitarity. The dashed horizontal line indicates the experimental limits on the corresponding Wilson coefficients for different LHC operating modes. Dashed vertical lines with an arrow indicate the limits on the invariant mass of two top quarks for the corresponding LHC modes. When evaluating the unitarity limitation on the operators \(O^{1}_{tt}\), \(O^{1}_{QQ}\), \(O^{1}_{Qt}\), \(O^{8}_{Qt}\) at the LHC energy of 13 TeV, the current values of the measurement accuracy of the Wilson coefficients obtained in the CMS experiment [13] were used. At the same time, when estimating the corresponding limitations for the LHC energies of 27 and 100 TeV, the expected values of the measurement accuracy, obtained theoretically, were used. For the operator \(O^{8}_{QQ}\) only theoretical values of the measurement accuracy were used.
From Fig. 3 it is clear that the lower the value of the upper experimental limit on the Wilson coefficient of an operator, the higher the value of the invariant mass of the process allowed by the condition of perturbative unitarity. At the same time, the upper limit on the Wilson coefficient of an operator depends on the cross section of the process under study. The larger the cross section of a hypothetical process involving an anomalous operator, the better the accuracy with which it can be measured. In the absence of manifestations of new physics, this leads to lower values of the experimental limits on the value of the Wilson coefficient.
For example, consider the behavior of the unitarity limits for the 4t operators in the LHC 13 TeV mode. Fig. 3 (top left) shows the unitarity bound for the operator \(O^{1}_{tt}\). With an experimental limit on the interaction parameter of this operator equal to 2 \(TeV^{-2}\), we obtain an upper unitarity limit on the invariant mass of the process equal to 1.5 TeV. The same values are obtained for the operator \(O^{1}_{QQ}\) in Fig. 3 (upper right). For the operator \(O^{1}_{Qt}\) (Fig. 3, middle left) the picture changes. Since the partial amplitude for this operator is two times smaller than for the operators \(O^{1}_{tt}\) and \(O^{1}_{QQ}\), the line of the unitarity boundary moves further from the coordinate axes. At the same time, the cross section for processes with four top quarks involving this operator is smaller than the corresponding cross sections for the operators \(O^{1}_{tt}\) and \(O^{1}_{QQ}\), therefore the accuracy of the experimental measurement of \(C^{1}_{Qt}/\Lambda^{2}\) is coarser and equal to 3.5 TeV\({}^{-2}\). It turns out that the intersection of this coarser experimental limit with the shifted unitarity boundary again gives a unitarity limit on the invariant mass equal to 1.5 TeV. This trend continues for the \(O^{8}_{Qt}\) operator (Fig. 3, middle right). The partial amplitude corresponding to this operator is additionally two times smaller, and the measurement accuracy of \(C^{8}_{Qt}/\Lambda^{2}\) is even coarser and is equal to 6.5 TeV\({}^{-2}\). This leads to the fact that the unitarity constraint on the invariant mass of a process with this operator is again around the value of 1.5 TeV. Thus, the overall unitarity limit for all considered 4t operators in the LHC 13 TeV mode is close to the value of 1.5 TeV.
It is also clear from Fig. 3 that although changing the LHC operating mode from 13 TeV to 27 TeV and 100 TeV leads to an expansion of the unitarity limits of the operators, this expansion is not very significant and amounts to 2 TeV and 3 TeV, respectively. This situation is due to the fact that the cross sections for processes with four top quarks are accumulated at invariant masses not exceeding 10 TeV.
It should be noted that in the case of the considered 4-fermion operators, the partial amplitude
Figure 3: Perturbative unitarity limit \(a_{0}=\frac{1}{2}\) (red line) for various anomalous operators. Upper left: \(O^{1}_{tt}\), upper right: \(O^{1}_{QQ}\), middle left: \(O^{1}_{Qt}\), middle right: \(O^{8}_{Qt}\), bottom: \(O^{8}_{QQ}\). The green zone corresponds to the allowed area. The dashed horizontal line indicates the experimental limits on the corresponding Wilson coefficients for different LHC energy values. Dashed vertical lines with an arrow indicate the limits on the invariant mass of two top quarks for the corresponding LHC energy values.
grows proportionally to s, which limits the use of these operators already for invariant masses of the two top quarks of about 1-3 TeV. For comparison, the unitarity limitation obtained for single top production with anomalous operators contributing to the Wtb vertex [32], as well as with FCNC operators [33], is at the level of 10 TeV and higher.
## 4 Setting limits on Wilson coefficients
Having the values of the coefficients \(\sigma^{(1)}\) and \(\sigma^{(2)}\) and their uncertainties, calculating constraints on \(c_{i}\) is a matter of choosing a suitable statistical model. In the statistical model, we compare the total cross section calculated with Eq. 2 with the measured or SM-expected cross section, with the corresponding uncertainties. We consider three types of uncertainties. The first type is the theory uncertainty, which has been discussed in Sec. 2; the numbers are provided in Tables 1 and 2. The other two are the experimental systematic and statistical uncertainties. Based on the recent measurements of four top-quark production in the CMS [16] and ATLAS [17] experiments, one can extrapolate the statistical uncertainty and estimate the experimental systematic relative uncertainty to be the same for all calculations. This approach is more or less conservative, since the analysis methodology usually improves and one can expect smaller systematic uncertainties in the future. Since the number of expected events is \(n\sim\sigma L\), the extrapolation of the statistical relative uncertainty can be taken as \(\delta(n_{1})/\delta(n_{2})=\sqrt{(\sigma_{2}L_{2})/(\sigma_{1}L_{1})}\). Based on the CMS result [16] \(\sigma_{\rm{4top}}=17.7^{+3.7}_{-3.5}({\rm{stat}})^{+2.3}_{-1.9}({\rm{sys}})\) fb, we estimate the experimental uncertainties as listed in Table 3. The integrated luminosity (L) for the calculations has been taken as 138 fb\({}^{-1}\) for 13 TeV (available experimental results), 3 ab\({}^{-1}\) for 14 TeV (HL-LHC), 15 ab\({}^{-1}\) for 27 TeV (HE-LHC) and 25 ab\({}^{-1}\) for 100 TeV (FCC).
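As an illustration of this extrapolation, the sketch below scales the 13 TeV relative statistical uncertainty of the 4t process (24%, Table 3) to the HL-LHC, using the NLO cross sections of Table 2 and the integrated luminosities quoted above.

```python
import numpy as np

def extrapolate_stat_uncertainty(rel_stat_ref, sigma_ref, lumi_ref, sigma_new, lumi_new):
    """Scale a relative statistical uncertainty as 1/sqrt(N) with N ~ sigma * L:
    delta_1 / delta_2 = sqrt((sigma_2 * L_2) / (sigma_1 * L_1))."""
    return rel_stat_ref * np.sqrt((sigma_ref * lumi_ref) / (sigma_new * lumi_new))

# Reference: 4t at 13 TeV, sigma ~ 17.7 fb, L = 138 fb^-1, ~24% relative stat. uncertainty.
# Target: 4t at 14 TeV (HL-LHC), sigma ~ 17.8 fb (NLO, Table 2), L = 3000 fb^-1.
rel_hl_lhc = extrapolate_stat_uncertainty(0.24, 17.7, 138.0, 17.8, 3000.0)
print(f"HL-LHC relative statistical uncertainty ~ {rel_hl_lhc:.1%}")  # ~5%, cf. Table 3
```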
### Cross checks of the methodology
For the first fit, a statistical model based on the chi-square distribution was used. Later, the results were validated using the established EFTfitter [34] and SMEFiT [35] packages. The first comparison between the different statistical models was done for the linear terms in Eq. 2, where only the \(\sigma^{(1)}\) terms are taken into account. Based on the calculated cross sections of the four and three top-quark production processes in the SM and with the EFT contribution, one can estimate the expected results with different statistical approaches. The 95% CL exclusion limits on the Wilson coefficients \(C_{k}/\Lambda^{2}(TeV^{-2})\) achieved with the fit are shown in Table 4. The two processes \(pp\to 4top\) (4t) and \(pp\to 3top+X\) (3t) at \(\sqrt{s}=13\) TeV are considered separately and as a combination in one statistical model. Two types of statistical models are realized: the 1D approach with a variation of only one Wilson coefficient in the statistical model, and the 5D approach with a simultaneous variation of all five coefficients in the same model. The combination of the 4t and 3t processes has been considered in a dedicated statistical model marked as (3+4t). For some of
\begin{table}
\begin{tabular}{l c c|c c} & \multicolumn{2}{c}{\(pp\to 4top\)} & \multicolumn{2}{c}{\(pp\to 3top+X\)} \\ & \(\delta sys\), \(\%\) & \(\delta stat\), \(\%\) & \(\delta sys\), \(\%\) & \(\delta stat\), \(\%\) \\ \hline
13 TeV & 13 & 24 & 13 & 76 \\
14 TeV & 13 & 5 & 13 & 14 \\
27 TeV & 13 & 0.6 & 13 & 2 \\
100 TeV & 13 & 0.1 & 13 & 0.3 \\ \end{tabular}
\end{table}
Table 3: Estimated experimental systematic and statistical uncertainties for processes of three and four top quark production at energies of 13, 14, 27 and 100 TeV
the couplings, there is no sensitivity (flat posterior distribution) with linear terms in the range from -150 to 150 for the value of the coefficient. In this case, the sign "-" is shown in the table. The comparison in Table 4 for the linear EFT terms demonstrates a significant improvement of the \(C^{1}_{QQ}\), \(C^{1}_{Qt}\) and \(C^{8}_{QQ}\) limits for the scenarios where triple top-quark production has been taken into account. The limits in Table 4 demonstrate a very weak sensitivity of the linear EFT terms to the considered operators; the limits are far beyond the perturbative unitarity bound of \(4\pi\).
In the scenarios with quadratic EFT terms, where the \(\sigma^{(2)}\) in Eq. 2 are taken into account, the limits are significantly tighter than in the linear-only scenarios. The corresponding limits with quadratic terms are shown in Table 5 for the different statistical models at \(\sqrt{s}=13\) TeV. Tables 4 and 5 demonstrate good agreement between the SMEFiT, EFTfitter and \(\chi^{2}\) results. For the quadratic fit, the \(\chi^{2}\) test provides slightly wider limits. For the linear fit, the multi-dimensional variation leads to much weaker limits than the one-dimensional statistical model, which is comparable with previous results for the linear fit [36]. In the scenario with quadratic EFT terms, the multi-dimensional statistical model leads to tighter limits than the one-dimensional statistical model. Such behavior is comparable with previous results for the quadratic fits [37]. In Fig. 4 the posterior distributions of the probability density function for the Wilson coefficients are provided for the one-dimensional (left plots) and multi-dimensional (right plots) statistical models; the example is taken for four-top-quark production at \(\sqrt{s}=13\) TeV with quadratic EFT terms. The distributions are calculated with the SMEFiT package. Due to the quadratic terms, the distributions in the multi-dimensional models are sharper and the integral reaches the 0.95 quantile faster. Depending on the uncertainties, such behavior can give tighter limits in the multi-dimensional statistical model than in the one-dimensional one.
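For reference, a minimal sketch of the simple one-dimensional \(\chi^{2}\) scan used as a cross-check is shown below. The EFT coefficients and the single combined relative uncertainty in it are placeholders, not the values used in the actual fits.

```python
import numpy as np

def chi2_interval(sigma_sm, s1, s2, rel_unc, delta_chi2=3.84, c_range=(-20, 20), n=20001):
    """One-dimensional chi^2 scan for a single Wilson coefficient (Lambda = 1 TeV).
    The SM cross-section plays the role of the 'measurement'; the 95% CL interval
    corresponds to Delta chi^2 < 3.84 for one parameter."""
    c = np.linspace(*c_range, n)
    sigma_th = sigma_sm + c * s1 + c**2 * s2      # Eq. 2 with Lambda = 1 TeV
    unc = rel_unc * sigma_sm                      # combined uncertainty on the 'measurement'
    chi2 = ((sigma_th - sigma_sm) / unc) ** 2
    allowed = c[chi2 < delta_chi2]
    return allowed.min(), allowed.max()

# Illustrative placeholder inputs only:
lo, hi = chi2_interval(sigma_sm=13.2, s1=0.5, s2=2.0, rel_unc=0.3)
print(f"95% CL interval: [{lo:.2f}, {hi:.2f}]")
```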
The achieved upper limits in Table 5 for the scenarios with quadratic EFT terms demonstrate rather similar sensitivity of four- and triple-top quark production processes to the considered EFT operators. The combination of four- and triple-top quark production in one statistical
\begin{table}
\begin{tabular}{l|c c c c} Stat. model & \(C^{1}_{tt}\) & \(C^{1}_{QQ}\) & \(C^{1}_{Qt}\) & \(C^{8}_{Qt}\) & \(C^{8}_{QQ}\) \\ \hline \hline \(\chi^{2}\),1D,4t & [-27,27] & [-39,39] & - & [-28,28] & - \\ EFTfitter,1D,4t & [-27,27] & [-39,39] & - & [-28,28] & [-130,130] \\ SMEFiT,1D,4t & [-27,27] & [-39,39] & - & [-28,28] & [-130,130] \\ \hline \(\chi^{2}\),1D,3t & - & [-27,27] & [-64,64] & [-123,121] & [-43,43] \\ EFTfitter,1D,3t & - & [-27,27] & [-64,64] & [-123,121] & [-43,43] \\ SMEFiT,1D,3t & - & [-27,27] & [-64,64] & [-123,121] & [-43,43] \\ \hline \(\chi^{2}\),1D,3+4t & [-27,27] & [-22,22] & [-64,64] & [-27,27] & [-42,42] \\ EFTfitter,1D,3+4t & [-27,27] & [-22,22] & [-64,64] & [-27,27] & [-42,42] \\ SMEFiT,1D,3+4t & [-27,27] & [-22,22] & [-64,64] & [-27,27] & [-42,42] \\ \hline EFTfitter,5D,4t & [-138,138] & [-141,141] & - & [-138,138] & - \\ SMEFiT,5D,4t & [-138,138] & [-141,141] & - & [-138,138] & - \\ \hline EFTfitter,5D,3t & - & [-125,125] & - & - & - \\ SMEFiT,5D,3t & - & [-125,125] & - & - & - \\ \hline EFTfitter,5D,3+4t & [-132,132] & [-123,123] & - & [-140,140] & - \\ SMEFiT,5D,3+4t & [-132,132] & [-123,123] & - & [-140,140] & - \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of the expected limits on \(C_{k}/\Lambda^{2}[\mathrm{TeV}^{-2}]\) estimated for \(pp\to 4top\) (4t) and \(pp\to 3top+X\) (3t) cross sections with only linear \(\sigma^{(1)}\) EFT terms. The statistical models with only one Wilson coefficient are marked 1D and the models with all five coefficients considered simultaneously are marked 5D. The models with the combination of the 4t and 3t processes are marked 3+4t.
model leads to better sensitivity, in general.
The theoretical constraints agree rather well with the experimental limits obtained from four top-quark production in the CMS [13] and ATLAS [17] experiments. Since there is good agreement between the results from the EFTfitter and SMEFiT packages, the EFT limits in the next sections are shown only from the SMEFiT package.
Figure 4: Posterior distributions of the probability density function for the Wilson coefficients in the 1D (left plots) and 5D (right plots) statistical models, in the scenario where the quadratic terms are taken into account.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} Stat. model & \(C_{tt}^{1}\) & \(C_{QQ}^{1}\) & \(C_{Qt}^{1}\) & \(C_{Qt}^{8}\) & \(C_{QQ}^{8}\) \\ \hline \hline \(\chi^{2}\),1D,4t & [-1.3,1.2] & [-2.5,2.3] & [-2.2,2.2] & [-6.3,5.2] & [-5.6,5.5] \\ EFTfitter,1D,4t & [-1.1,1.1] & [-2.2,2.1] & [-2.0,2.0] & [-5.7,4.6] & [-5.1,4.9] \\ SMEFiT,1D,4t & [-1.1,1.1] & [-2.2,2.1] & [-2.0,2.0] & [-5.7,4.6] & [-5.0,4.8] \\ \hline \(\chi^{2}\),1D,3t & [-4.2,4.1] & [-2.9,3.2] & [-2.9,3.0] & [-6.0,6.3] & [-5.8,6.8] \\ EFTfitter,1D,3t & [-3.8,3.7] & [-2.5,2.9] & [-2.6,2.7] & [-5.4,5.6] & [-5.2,6.1] \\ SMEFiT,1D,3t & [-3.7,3.7] & [-2.5,2.9] & [-2.6,2.7] & [-5.3,5.6] & [-5.1,6.1] \\ \hline \(\chi^{2}\),1D,3+4t & [-1.3,1.2] & [-2.2,2.2] & [-2.1,2.1] & [-5.2,4.7] & [-4.8,5.0] \\ EFTfitter,1D,3+4t & [-1.1,1.1] & [-2.0,2.0] & [-1.8,1.9] & [-4.7,4.2] & [-4.3,4.5] \\ SMEFiT,1D,3+4t & [-1.1,1.0] & [-2.0,2.0] & [-1.8,1.8] & [-4.7,4.2] & [-4.2,4.5] \\ \hline EFTfitter,5D,4t & [-0.94,0.92] & [-2.3,2.2] & [-1.7,1.6] & [-4.9,3.7] & [-5.2,5.1] \\ SMEFiT,5D,4t & [-0.95,0.90] & [-1.8,1.7] & [-1.6,1.6] & [-4.8,3.6] & [-4.2,4.0] \\ \hline EFTfitter,5D,3t & [-3.3,3.7] & [-2.1,2.4] & [-2.2,2.5] & [-4.6,5.4] & [-4.3,5.5] \\ SMEFiT,5D,3t & [-3.1,3.0] & [-2.0,2.4] & [-2.1,2.2] & [-4.3,4.6] & [-4.2,5.1] \\ \hline EFTfitter,5D,3+4t & [-0.98,0.92] & [-1.8,1.8] & [-1.5,1.6] & [-4.0,3.4] & [-3.9,4.0] \\ SMEFiT,5D,3+4t & [-0.95,0.90] & [-1.6,1.6] & [-1.5,1.5] & [-4.0,3.3] & [-3.5,3.7] \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of the expected limits on \(C_{k}/\Lambda^{2}[\mathrm{TeV}^{-2}]\) estimated for \(pp\to 4top\) (4t) and \(pp\to 3top+X\) (3t) cross sections with the quadratic \(\sigma^{(2)}\) EFT terms taken into account. The statistical models with only one Wilson coefficient are marked 1D and the models with all five coefficients considered simultaneously are marked 5D. The models with the combination of the 4t and 3t processes are marked 3+4t.
### Electroweak and QCD contributions in triple top-quark production
The importance of the electroweak (EW) contribution to the triple top quark production process has been shown in [22]. The EW diagrams contribute at the same rate as the diagrams with gluons (QCD); moreover, the interference between the EW and QCD diagrams is of almost the same magnitude as the QCD or EW contribution and is negative. If one takes into account only the QCD diagrams (which is, e.g., the default setting in MadGraph), the total cross section will be almost correct due to the cancellation of the negative interference terms and the EW contribution, but the kinematic properties can be significantly different. In this subsection, we compare how the inclusion of the EW contribution changes the EFT limits with respect to the limits where only the QCD contribution has been taken into account. The EFT limits shown in Table 6 are calculated with the triple top quark production process with only QCD diagrams (marked as QCD) and in the complete case with QCD, EW diagrams and their interference (marked as QCD+EW). Two statistical models are shown, with 1D and 5D variations. The quadratic EFT terms with the \(\sigma^{(2)}\) coefficients are taken into account.
Direct comparison of the expected limits in Table 6 demonstrates a notable improvement in the sensitivity for the calculations with the correct simulation of all QCD, EW and interference contributions.
### Limits with unitarity bound cuts
The unitarity bounds considered in Sec. 3 have to be taken into account through additional requirements on the invariant mass of all top-quark pairs. Such additional requirements have been applied in the calculation of the cross sections and the \(\sigma^{(1)}\), \(\sigma^{(2)}\) EFT terms. The achieved limits on \(C_{k}/\Lambda^{2}(TeV^{-2})\) are provided in Table 7 and are marked as (cut) when the requirement is applied; for comparison, the results without the unitarity bound requirements (nocut) are also shown.
The expected limits in Table 7 achieved with the additional unitarity bound cuts are, as expected, worse than without such cuts, but not significantly so. Since unitarity is a necessary requirement, the simulation for the final results in the next section obeys the unitarity bound cuts calculated in Sec. 3.
The general comparison of the separate and combined results for the four and three top quark production processes demonstrates a significant improvement in sensitivity for the linear EFT terms, especially for the \(O^{1}_{QQ}\), \(O^{1}_{Qt}\) and \(O^{8}_{QQ}\) operators. In the scenarios with quadratic EFT terms, the Wilson coefficient of the operator \(O^{1}_{tt}\) is better constrained by the process of four top quark production. However, in the case of the other operators, the three top quark production
\begin{table}
\begin{tabular}{l|c c c c c} model & \(C^{1}_{tt}\) & \(C^{1}_{QQ}\) & \(C^{1}_{Qt}\) & \(C^{8}_{Qt}\) & \(C^{8}_{QQ}\) \\ \hline \hline
3t,QCD,1D & [-3.6,3.7] & [-2.6,2.5] & [-2.5,2.5] & [-6.3,4.8] & [-6.5,5.3] \\
3t,QCD+EW,1D & [-3.4,3.4] & [-2.3,2.7] & [-2.4,2.5] & [-5.0,5.2] & [-4.8,5.6] \\ \hline
3t,QCD,5D & [-3.0,3.0] & [-2.2,2.1] & [-2.0,2.1] & [-5.3,3.9] & [-5.5,4.2] \\
3t,QCD+EW,5D & [-2.8,2.8] & [-1.9,2.2] & [-1.9,2.1] & [-4.1,4.3] & [-3.8,4.7] \\ \end{tabular}
\end{table}
Table 6: Comparison of the expected limits on \(C_{k}/\Lambda^{2}[\mathrm{TeV}^{-2}]\) estimated for the \(pp\to 3top+X\) (3t) cross sections with quadratic \(\sigma^{(2)}\) EFT terms. Two simulation models are considered: QCD diagrams only, and QCD plus EW diagrams together with their interference. The statistical models with only one Wilson coefficient are marked 1D and the models with all five coefficients considered simultaneously are marked 5D.
apparently provides similar limits. The combination of four and three top quark production processes gains in sensitivity for most of the considered operators.
## 5 Summary
In the present study, numerical simulations of the processes of three and four top quark production were carried out. Simulation results are presented for both the Standard Model and dimension-six SMEFT. For the latter case, the parametrization EFT terms \(\sigma^{(1)}\) and \(\sigma^{(2)}\) in Eq. 2 were extracted and used to derive expected constraints on the Wilson coefficients \(C_{k}/\Lambda^{2}(TeV^{-2})\) of the respective SMEFT operators. Theoretical limits are obtained for both channels of three and four top quark production and are shown in Table 8. Constraints obtained from the combined statistics of both channels are also presented. The results in the summary Table 8 have been calculated taking into account the EW contribution in three top quark production at LO, the QCD NLO contribution in four top quark production, and the partial unitarity requirements discussed in the text above. The expected limits correspond to the univariate variation of each Wilson coefficient separately. The results show that in the case of the operator \(O^{1}_{tt}\), better constraints on the corresponding Wilson coefficient are obtained from the process of four top quark production. For the other four operators \(O^{1}_{QQ}\), \(O^{1}_{Qt}\), \(O^{8}_{Qt}\) and \(O^{8}_{QQ}\), both processes have similar sensitivity. The combined statistics of the three and four top quark production processes provide the most precise constraints in all considered cases. The sensitivity to triple top quark production in present collider experiments is largely limited by the statistical uncertainty. For future colliders, such as the HL-LHC, the statistical uncertainty will be comparable to or much smaller than the systematic uncertainty, and the sensitivity to the BSM contribution in triple top quark production will be limited by the theoretical calculations and experimental uncertainties. In Table 8 a conservative estimation of the uncertainties is used for future colliders; also, the partial unitarity bounds reduce the sensitivity for the considered EFT operators. As a result, we do not observe a dramatic increase in the sensitivity of the three and four top quark production processes to the EFT operators with a significant increase in the energy of future colliders.
\begin{table}
\begin{tabular}{l|l|l|l|l|l} model & \(C^{1}_{tt}\) & \(C^{1}_{QQ}\) & \(C^{1}_{Qt}\) & \(C^{8}_{Qt}\) & \(C^{8}_{QQ}\) \\ \hline \hline
4t,nocut,1D & [-1.1,1.1] & [-2.2,2.1] & [-2.0,2.0] & [-5.7,4.6] & [-5.0,4.8] \\
4t,cut,1D & [-1.2,1.2] & [-2.4,2.3] & [-2.2,2.2] & [-6.8,5.0] & [-6.0,5.7] \\ \hline
3t,nocut,1D & [-3.7,3.7] & [-2.5,2.9] & [-2.6,2.7] & [-5.3,5.6] & [-5.1,6.1] \\
3t,cut,1D & [-4.3,4.2] & [-2.9,3.2] & [-3.1,3.2] & [-6.9,7.3] & [-6.4,7.7] \\ \hline
3+4t,nocut,1D & [-1.1,1.0] & [-2.0,2.0] & [-1.8,1.8] & [-4.7,4.2] & [-4.2,4.5] \\
3+4t,cut,1D & [-1.2,1.2] & [-2.2,2.2] & [-2.1,2.1] & [-5.8,4.8] & [-5.2,5.4] \\ \hline
4t,nocut,5D & [-0.95,0.90] & [-1.8,1.7] & [-1.6,1.6] & [-4.8,3.6] & [-4.2,4.0] \\
4t,cut,5D & [-1.0,1.0] & [-2.0,1.9] & [-1.8,1.9] & [-5.7,4.1] & [-4.6,4.4] \\ \hline
3t,nocut,5D & [-3.1,3.0] & [-2.0,2.4] & [-2.1,2.2] & [-4.3,4.6] & [-4.2,5.1] \\
3t,cut,5D & [-3.5,3.4] & [-2.3,2.7] & [-2.5,2.7] & [-5.6,6.1] & [-5.1,6.5] \\ \hline
3+4t,nocut,5D & [-0.95,0.90] & [-1.6,1.6] & [-1.5,1.5] & [-4.0,3.3] & [-3.5,3.7] \\
3+4t,cut,5D & [-1.0,1.0] & [-1.8,1.8] & [-1.7,1.7] & [-4.8,3.8] & [-4.1,4.3] \\ \end{tabular}
\end{table}
Table 7: Comparison of the expected limits on \(C_{k}/\Lambda^{2}[\mathrm{TeV}^{-2}]\) estimated for \(pp\to 4top\) (4t) and \(pp\to 3top+X\) (3t) cross sections with quadratic \(\sigma^{(2)}\) EFT terms. The limits are shown for the case of unitarity bound cuts applied at the simulation level (cut) and without such cuts (nocut). The statistical models with only one Wilson coefficient are marked 1D and the models with all five coefficients considered simultaneously are marked 5D.
Overall, the production of three top quarks seems to be quite an interesting target for BSM studies. Despite the lower cross-section (as compared to the more widely discussed four top quark production), it can potentially provide interesting opportunities for constraining New Physics.
## Acknowledgments
This work was supported by the Russian Science Foundation [grant number 22-12-00152].
## Appendix A
Leading helicity and partial amplitudes for process \(tt\to tt\):
operator case \(O^{1}_{tt}\):
\[A=\left(\frac{C^{1}_{tt}}{\Lambda^{2}}\right)\cdot 2\cdot s\cdot(1+\beta)^{2} \tag{8}\]
\[a_{0}=\frac{1}{16\pi\cdot s\cdot\beta}\left|\int\limits_{4M^{2}_{t}-s}^{0}dt\cdot A\right|=\left(\frac{C^{1}_{tt}}{\Lambda^{2}}\right)\cdot\frac{s}{8\pi}\cdot\beta(1+\beta)^{2} \tag{9}\]
where:
\[\beta=\sqrt{1-\frac{4M^{2}_{t}}{s}} \tag{10}\]
\begin{table}
\begin{tabular}{l|c c|c c c} Energy, model & \(C^{1}_{tt}\) & \(C^{1}_{QQ}\) & \(C^{1}_{Qt}\) & \(C^{8}_{Qt}\) & \(C^{8}_{QQ}\) \\ \hline \hline
13 TeV, 4t & [-1.2, 1.2] & [-2.4, 2.3] & [-2.2, 2.2] & [-6.8, 5.0] & [-6.0, 5.7] \\
13 TeV, 3t & [-4.3, 4.2] & [-2.9, 3.2] & [-3.1, 3.2] & [-6.9, 7.3] & [-6.4, 7.7] \\
13 TeV, 3+4t & [-1.2, 1.2] & [-2.2, 2.2] & [-2.1, 2.1] & [-5.8, 4.8] & [-5.2, 5.4] \\ \hline
14 TeV, 4t & [-1.1, 1.0] & [-2.1, 2.0] & [-1.9, 1.9] & [-5.8, 4.2] & [-5.2, 4.9] \\
14 TeV, 3t & [-2.5, 2.5] & [-1.6, 2.0] & [-1.8, 1.9] & [-3.9, 4.4] & [-3.7, 5.1] \\
14 TeV, 3+4t & [-1.1, 1.0] & [-1.5, 1.7] & [-1.5, 1.6] & [-3.8, 3.6] & [-3.5, 4.3] \\ \hline
27 TeV, 4t & [-0.90, 0.83] & [-1.7, 1.6] & [-1.6, 1.6] & [-4.9, 3.6] & [-4.4, 4.2] \\
27 TeV, 3t & [-2.0, 2.0] & [-1.3, 1.5] & [-1.4, 1.6] & [-3.3, 3.9] & [-2.7, 4.1] \\
27 TeV, 3+4t & [-0.88, 0.83] & [-1.2, 1.3] & [-1.3, 1.3] & [-3.2, 3.2] & [-2.6, 3.5] \\ \hline
100 TeV, 4t & [-0.68, 0.66] & [-1.3, 1.3] & [-1.2, 1.2] & [-3.8, 3.0] & [-3.7, 3.6] \\
100 TeV, 3t & [-1.3, 1.4] & [-0.89, 1.0] & [-1.0, 1.1] & [-2.1, 2.6] & [-1.8, 2.7] \\
100 TeV, 3+4t & [-0.67, 0.64] & [-0.85, 0.94] & [-0.93, 0.94] & [-2.1, 2.3] & [-1.8, 2.5] \\ \hline \end{tabular}
\end{table}
Table 8: Comparison of the expected limits on \(C_{k}/\Lambda^{2}[\text{TeV}^{-2}]\) estimated for \(pp\to 4top\) (4t) and \(pp\to 3top+X\) (3t) cross sections with quadratic \(\sigma^{(2)}\) EFT terms. The limits are shown with applied unitarity bound requirements. The results have been achieved in one dimensional statistical model with a variation of each coefficient separately.
operator case \(O^{1}_{QQ}\):
\[A=\left(\frac{C^{1}_{QQ}}{\Lambda^{2}}\right)\cdot 2\cdot s\cdot(1+\beta)^{2} \tag{11}\]
\[a_{0}=\left(\frac{C^{1}_{QQ}}{\Lambda^{2}}\right)\cdot\frac{s}{8\pi}\cdot \beta(1+\beta)^{2} \tag{12}\]
operator case \(O^{1}_{Qt}\):
\[A=\left(\frac{C^{1}_{Qt}}{\Lambda^{2}}\right)\cdot 2\cdot s\cdot(1+\beta^{2}) \tag{13}\]
\[a_{0}=\left(\frac{C^{1}_{Qt}}{\Lambda^{2}}\right)\cdot\frac{s}{8\pi}\cdot \beta(1+\beta^{2}) \tag{14}\]
operator case \(O^{8}_{Qt}\):
\[A=\left(\frac{C^{8}_{Qt}}{\Lambda^{2}}\right)\cdot\frac{\lambda^{a}_{j,i} \lambda^{a}_{k,l}}{4}\cdot 2\cdot s\cdot(1+\beta^{2})=\left(\frac{C^{8}_{Qt}}{ \Lambda^{2}}\right)\cdot s\cdot(1+\beta^{2}) \tag{15}\]
\[a_{0}=\left(\frac{C^{8}_{Qt}}{\Lambda^{2}}\right)\cdot\frac{s}{16\pi}\cdot \beta(1+\beta^{2}) \tag{16}\]
operator case \(O^{8}_{QQ}\):
\[A=\left(\frac{C^{8}_{QQ}}{\Lambda^{2}}\right)\cdot\frac{\lambda^{a}_{j,i} \lambda^{a}_{k,l}}{4}\cdot 2\cdot s\cdot(1+\beta)^{2}=\left(\frac{C^{8}_{QQ}}{ \Lambda^{2}}\right)\cdot s\cdot(1+\beta)^{2} \tag{17}\]
\[a_{0}=\left(\frac{C^{8}_{QQ}}{\Lambda^{2}}\right)\cdot\frac{s}{16\pi}\cdot \beta(1+\beta)^{2} \tag{18}\]
Leading helicity and partial amplitudes for process \(t\bar{t}\to t\bar{t}\):
operator case \(O^{1}_{tt}\):
\[A=\left(\frac{C^{1}_{tt}}{\Lambda^{2}}\right)\cdot 2\cdot u\cdot\frac{(1+\beta) }{\beta} \tag{19}\]
\[a_{0}=\left(\frac{C_{tt}^{1}}{\Lambda^{2}}\right)\cdot\frac{s}{16\pi}\cdot\beta^{2}(1+\beta) \tag{20}\]
operator case \(O_{QQ}^{1}\):
\[A=\left(\frac{C_{QQ}^{1}}{\Lambda^{2}}\right)\cdot 2\cdot u\cdot\frac{(1+\beta)}{\beta} \tag{21}\]
\[a_{0}=\left(\frac{C_{QQ}^{1}}{\Lambda^{2}}\right)\cdot\frac{s}{16\pi}\cdot \beta^{2}(1+\beta) \tag{22}\]
operator case \(O_{Qt}^{1}\):
\[A=\left(\frac{C_{Qt}^{1}}{\Lambda^{2}}\right)\cdot\left(2\cdot u\cdot\frac{(1 -\beta^{2})}{\beta^{2}}-2\cdot s\cdot(1+\beta^{2})\right) \tag{23}\]
\[a_{0}=\left(\frac{C_{Qt}^{1}}{\Lambda^{2}}\right)\cdot\left(\frac{s}{8\pi} \cdot\beta(1+\beta^{2})-\frac{s}{16\pi}\cdot\beta(1-\beta^{2})\right) \tag{24}\]
operator case \(O_{Qt}^{8}\):
\[A=\left(\frac{C_{Qt}^{8}}{\Lambda^{2}}\right)\cdot\frac{\lambda_{j,i}^{a}\lambda_{k,l}^{a}}{4}\cdot\left(2\cdot u\cdot\frac{(1-\beta^{2})}{\beta^{2}}-2\cdot s\cdot(1+\beta^{2})\right) \tag{25}\]

\[a_{0}=\left(\frac{C_{Qt}^{8}}{\Lambda^{2}}\right)\cdot\left(\frac{s}{16\pi}\cdot\beta(1+\beta^{2})-\frac{s}{32\pi}\cdot\beta(1-\beta^{2})\right) \tag{26}\]
operator case \(O_{QQ}^{8}\):
\[A=\left(\frac{C_{QQ}^{8}}{\Lambda^{2}}\right)\cdot\frac{\lambda_{j,i}^{a}\lambda_{k,l}^{a}}{4}\cdot 2\cdot u\cdot\frac{(1+\beta)}{\beta} \tag{27}\]
\[a_{0}=\left(\frac{C_{QQ}^{8}}{\Lambda^{2}}\right)\cdot\frac{s}{32\pi}\cdot \beta^{2}(1+\beta) \tag{28}\]
|
2309.14165 | Towards End-User Development for IoT: A Case Study on Semantic Parsing
of Cooking Recipes for Programming Kitchen Devices | Semantic parsing of user-generated instructional text, in the way of enabling
end-users to program the Internet of Things (IoT), is an underexplored area. In
this study, we provide a unique annotated corpus which aims to support the
transformation of cooking recipe instructions to machine-understandable
commands for IoT devices in the kitchen. Each of these commands is a tuple
capturing the semantics of an instruction involving a kitchen device in terms
of "What", "Where", "Why" and "How". Based on this corpus, we developed machine
learning-based sequence labelling methods, namely conditional random fields
(CRF) and a neural network model, in order to parse recipe instructions and
extract our tuples of interest from them. Our results show that while it is
feasible to train semantic parsers based on our annotations, most
natural-language instructions are incomplete, and thus transforming them into
formal meaning representation, is not straightforward. | Filippos Ventirozos, Sarah Clinch, Riza Batista-Navarro | 2023-09-25T14:21:24Z | http://arxiv.org/abs/2309.14165v1 | Towards End-User Development for IoT: A Case Study on Semantic Parsing of Cooking Recipes for Programming Kitchen Devices
###### Abstract
Semantic parsing of user-generated instructional text, in the way of enabling end-users to program the Internet of Things (IoT), is an underexplored area. In this study, we provide a unique annotated corpus which aims to support the transformation of cooking recipe instructions to machine-understandable commands for IoT devices in the kitchen. Each of these commands is a tuple capturing the semantics of an instruction involving a kitchen device in terms of "What", "Where", "Why" and "How". Based on this corpus, we developed machine learning-based sequence labelling methods, namely conditional random fields (CRF) and a neural network model, in order to parse recipe instructions and extract our tuples of interest from them. Our results show that while it is feasible to train semantic parsers based on our annotations, most natural-language instructions are incomplete, and thus transforming them into formal meaning representation, is not straightforward.
## 1 Introduction
The Internet of Things (IoT) has emerged as a powerful ecosystem for supporting people in their day-to-day activities. IoT refers to "internet-connected devices which can see, hear, think and perform jobs by having them talk with each other, to share information and to synchronise pronouncements" Shah and Yaqoob (2016). Some examples of those _devices_ are sensors and actuators, mobile phones, tablets and smart home appliances.
With the proliferation of IoT devices, programming them to perform user-defined jobs (or tasks) has become a challenge. Given that the users of such devices would have varying requirements and preferences, employing professionals to write programs for controlling them will, in most cases, not be a viable, cost-effective option Schmidt (2015). It is practically impossible for professional designers and developers to keep up with the expectations of end-users, especially considering the wide range of possible applications and contexts of use Paterno and Santoro (2017).
To eliminate the aforementioned bottleneck, studies in end-user development (EUD) have emerged, aimed at enabling end-users to "program" IoT devices, without requiring them to learn any programming languages Chin et al. (2006); Paterno and Santoro (2017). One EUD approach is known as meta-design, whereby systems which were not designed a priori, are programmed based on the participation of end-users Fischer and Giaccardi (2006). In a related method called programming by demonstration or programming by example, a machine carries out particular tasks by learning from user-provided demonstration (or instructions), rather than by receiving machine commands Paterno and Santoro (2017).
In this work, we investigate the feasibility of using programming by demonstration to enable end-users to program IoT devices, on the basis of written (textual) instructions. Specifically, we focus on the case of IoT devices in the kitchen, and have developed a method for semantic parsing of cooking recipes and converting them into IoT commands, to enable machines to carry out kitchen-related tasks.
Compared to other areas in the household, the kitchen is believed to have been left behind in terms of automation Reichel et al. (2011). A study by Coskun et al. (2018) found that end-users would like to effortlessly interact with kitchen appliances, as this would enhance their cooking experience and make their food preparation tasks more efficient and flow more seamlessly. However, currently available _smart_ (i.e. IoT-enabled) kitchen devices Neumann et al. (2017); Schneider (2007); Reichel et al. (2011) require users to "program" them by specifying values for various settings manually. We seek to eliminate this manual effort by enabling devices to learn this knowledge from cooking recipes,
using a semantic parsing method. To the best of our knowledge, this work constitutes the first approach underpinned by natural language processing (NLP) methods, that is aimed at reducing the effort from end-users in programming IoT devices in the kitchen.
The contributions of the work reported in this paper include: (a) a first-of-its-kind corpus, in which natural-language recipe instructions have been labelled with semantics derived from IoT semantic control theory; (b) machine learning approaches for semantic parsing of cooking recipes; and (c) an analysis of the "completeness" of recipe instructions, answering the question of whether recipes contain all of the necessary information for generating fully formed, machine-understandable IoT commands.
## 2 Related Work
Previous studies have investigated the feasibility of enabling end-users to program IoT devices through gestures and spoken instructions Chen and Li (2017); Mofrad and Mosse (2018). However, the use of natural-language text as the basis of end-user programming has been underexplored. Frameworks such as InterPlay Messer et al. (2006) and AppsGate Coutaz and Crowley (2016) were developed to allow end-users to control devices in a smart home, but using structured (template-based) instructions, rather than natural language. Perera et al. (2015) collected natural-language household-related reminders from end-users, in the form of sticky notes. They then studied their linguistic features but did not attempt to automatically transform them into any formal, machine-understandable instructions.
In this work, we are focusing on semantic parsing of cooking recipes, a specific type of instructional text, i.e., natural-language descriptions of steps for performing specific tasks. Semantic parsing--the transformation of unstructured, natural language into a structured, formal meaning representation Andreas et al. (2013)--lends itself well to the task of generating machine-understandable IoT commands from texts. For instance, it has been applied to the conversion of natural-language text queries to machine-understandable code such as regular expressions Kushman and Barzilay (2013) and queries in the Structured Query Language (SQL) format Yu et al. (2018).
Instructional text contains procedural knowledge, which is crucial in facilitating task automation by machines. However, instructional texts tend to be incomplete in that they leave it to the reader to infer missing script knowledge, i.e., standard event sequences that comprise typical everyday activities Wanzare et al. (2016). For this reason, much of previous work analysing cooking recipes has focussed on understanding temporal, causal and dependency relationships between events, i.e., food preparation steps Abend et al. (2015); Yordanova (2015); Jermsurawong and Habash (2015). Our work, however, aims to analyse cooking recipes at a finer-grained level. That is, we seek to perform semantic parsing on each recipe instruction (involving a kitchen device), to convert it into a formalised, machine-understandable representation.
Most similar to our research is the work of Zhang et al. (2012). In their proposed approach, handcrafted rules were developed to identify instruction components (e.g., instrument, purpose, quantitative and spatial parameters) based on the outputs of the Stanford dependency parser. In contrast, our work builds upon machine learning-based methods, described in Section 3.4, for recognising instruction components. Furthermore, our target formalisation (i.e., the representation into which natural language is to be transformed) is grounded on a convention that was found to be suitable for capturing IoT semantics Milis et al. (2019).
## 3 Methodology
In this section, we present the methods underpinning our proposed approach, starting with our chosen formal representation, followed by data collection, the annotation task and the development of semantic parsers.
### Formal Representation
Choosing a target formal representation that captures IoT semantics, into which natural-language instructions will be transformed, is a crucial first step. We adopted the 5W1H representation Niitsuma et al. (2009); Zhang et al. (2014), which can be used to describe any event or situation in terms of answers to the questions "What", "Where", "When", "Who", "Why" and "How" (hence its name). We draw inspiration from a previous study which demonstrated the suitability of 3W1H for representing concepts related to the control of IoT devices Milis et al. (2019), adapting it to the kitchen environment in particular. To this end, we established
the following definitions, with examples based on the sample natural-language instruction _"Increase the temperature of the oven to 400 degrees Fahrenheit"_:
* Where: the specific kitchen device where the instruction should be carried out, e.g., _"oven"_
* What: a property of the device of interest, e.g., _"temperature"_
* Why: the purpose or role, e.g., _"Increase"_
* How: the value of the property and its unit of measurement, e.g., _"400 Fahrenheit"_
We note that for our purposes, there are only three out of the traditional five Ws. The answer to the question "Who" remains consistent -- it always refers to the reader of the instruction who is expected to execute it. Additionally, the "When" is deemed outside the purview of our tuple as per (Milis et al., 2019). Any temporal references encountered are labelled as "How". For example, in the statement _"bake for 40 minutes"_, the duration _"40 minutes"_ is attributed to the "How" component of the tuple.
We posit that the above representation is sufficient for capturing the semantics of any given recipe instruction, and thus develop an annotation task based on it, described in more detail in Section 3.3 below.
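To make the target representation concrete, the following sketch (our own illustration, not part of the annotation scheme itself) shows how the sample instruction above could be serialised as a machine-readable tuple; the field names simply mirror the definitions given above.

```python
# A minimal, hypothetical serialisation of the Where/What/Why/How tuple for the
# sample instruction; field names follow the definitions in Section 3.1.
instruction = "Increase the temperature of the oven to 400 degrees Fahrenheit"

iot_command = {
    "where": "oven",                              # target kitchen device
    "what": "temperature",                        # property of the device
    "why": "Increase",                            # purpose or role
    "how": {"value": 400, "unit": "Fahrenheit"},  # property value and unit
}

# Such a tuple could then be mapped onto a concrete device command, e.g.
# set_property(device="oven", property="temperature", value=400) (illustrative).
print(iot_command)
```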
### Data Collection and Pre-processing
For our work, we sought to construct an annotated corpus of recipes based on a subset of the publicly available RecipeQA dataset (Yagcioglu et al., 2018), which contains around 20,000 unique cooking recipes in English. As a first step, we extracted titles and instructions from the recipes in the dataset (leaving out any images and the lines of text corresponding to question answering). The following pre-processing steps were then performed: (a) conversion from UTF-8 to ASCII character encoding, to eliminate any noise from special characters; (b) conversion of acronyms that are typically used in cooking to their respective full forms (e.g., from _"w/o"_ to _"without"_) using a lookup list; and (c) removal of emoticons, extraneous punctuation marks and consecutive spaces.
Out of the 20,000 recipes, we chose only a subset mentioning kitchen devices. To this end, we compiled a dictionary of words related to kitchen devices to guide our selection. Specifically, we searched for relevant vocabularies and identified the DogOnt ontology (Bonino and Corno, 2008) as the only ontology that catalogues home automation devices (including IoT devices in the kitchen). Additionally, we retrieved all of the hyponyms of _"kitchen appliance"_ in WordNet (Miller, 1995), and mapped them to device classes in DogOnt, in order to organise them hierarchically. The result is a total of 10 upper-level kitchen device classes, with _"oven"_ and _"stove"_ having the highest number of subclasses (six and eight, respectively). To further enrich our dictionary, we made use of GloVe word embedding vectors (Pennington et al., 2014) pretrained on Common Crawl1, to obtain 100 words that are most similar to each of the 10 higher-level device classes. These words were manually reviewed and those which are not related (by synonymity or device purpose) were removed. In the work described in this paper, we focussed on only two device classes, namely, _"oven"_ and _"fridge"2_.
Footnote 1: commoncrawl.org
Footnote 2: The dictionaries for these two device classes can be found at github.com/FilipposVentirozos/Towards-End-User-Development-for-IoT.
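As an illustration of this dictionary-building step, the sketch below combines WordNet hyponyms with nearest neighbours from a pretrained GloVe model. The synset name and the smaller "glove-wiki-gigaword-100" vectors are stand-ins chosen for illustration (the experiments used Common Crawl vectors), and the resulting candidate lists would still require the manual review described above.

```python
# Sketch of device-dictionary construction: WordNet hyponyms of "kitchen appliance"
# plus GloVe nearest neighbours for each upper-level device class.
# Requires: nltk.download("wordnet") and a gensim model download.
from nltk.corpus import wordnet as wn
import gensim.downloader as api

# 1) Hyponyms of "kitchen appliance" from WordNet (synset name assumed).
appliance = wn.synset("kitchen_appliance.n.01")
hyponyms = {lemma.name().replace("_", " ")
            for syn in appliance.closure(lambda s: s.hyponyms())
            for lemma in syn.lemmas()}

# 2) The 100 most similar words to each device class, to be manually reviewed.
vectors = api.load("glove-wiki-gigaword-100")  # stand-in for Common Crawl GloVe
device_dictionary = {}
for device in ["oven", "fridge"]:
    neighbours = [w for w, _ in vectors.most_similar(device, topn=100)]
    device_dictionary[device] = sorted(
        {device}
        | {h for h in hyponyms if device in h}  # hyponyms mentioning the class
        | set(neighbours)
    )

print(device_dictionary["oven"][:10])
```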
### Annotation Task
Our annotation scheme was derived from the work of Milis et al. (2019), as explained in Section 3.1. For every recipe instruction, any information pertaining to location was labelled as "Where"; in our case, this is a mention of a kitchen device. In the same sentence, the sequence of words corresponding to each of "What", "Why" and "How" was also annotated. This task, which we cast as a sequence labelling task, was facilitated by the _doccano_ annotation tool (Nakayama et al., 2018). It comes with a user-friendly interface for sequence labelling, allowing annotators to select any word sequence and to label it as one of "Where", "What", "Why" and "How".
#### 3.3.1 Annotation Guidelines
We established the annotation guidelines before any of the annotators were asked to carry out the tasks. The key points are outlined below.
* The value of "Where" should be in our compiled dictionary.
* Only instructions directly related to a kitchen device are of interest. For instance, in the instruction _"After 10 minutes, take out the dish from the oven, stir, and then set it back inside"_,
the instructions of interest are those pertaining to setting the oven timer and the (implicit) opening of the oven, rather than the stirring of the dish.
* Only instructions where the device is used to cook or produce the dish are annotated. For example, in the instruction "_Take out eggs from the fridge_", the fridge was used for storage and not as a device for producing the dish; hence, this instruction is not annotated. This guideline would also exclude comments such as "_my microwave is 1000 Watt_".
The annotations were carried out by three annotators. One annotator (the first author of this paper) completed the sequence labelling task for 221 recipes, while the second and third annotators completed 65 and 49 recipes, respectively. A common subset of 10 recipes (containing 67 instructions) were annotated by all of them, with an inter-annotator agreement (IAA) of 59% in terms of F1-score.
### Semantic Parsing
Below, we describe in detail our machine learning-based approaches to semantic parsing, i.e., the sequence labelling task of recognising the values of "Where", "What", "Why" and "How".
#### 3.4.1 Conditional Random Fields
Firstly, we developed sequence labelling models based on conditional random fields (CRFs) Okazaki (2007). Recipe instructions were tokenised, lemmatised and tagged with parts of speech (POS) using the spaCy library Honnibal and Montani (2017). The characters of each token were then converted to lowercase. Afterwards, the following features were extracted for each token: the token itself, its lemma, its first letter, its first three letters and its POS tag. Additionally, we also made use of binary features: whether a token is capitalised, exists in our compiled dictionary, consists of alphabetic characters, and whether it is a stop word. Furthermore, we explored extracting the above features for a token's surrounding words (within a window of three), as well as its head token (as provided by the spaCy syntactic dependency parser).
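A rough sketch of this feature extraction is given below; the helper function and the exact feature names are ours and only approximate the feature set used in the experiments.

```python
# Sketch of token-level feature extraction for the CRF, following the
# description above; feature names are illustrative.
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
DEVICE_DICTIONARY = {"oven", "stove", "fridge", "microwave"}  # toy subset

def token_features(doc, i, window=3):
    tok = doc[i]
    feats = {
        "bias": 1.0,
        "lower": tok.lower_,
        "lemma": tok.lemma_.lower(),
        "prefix1": tok.lower_[:1],
        "prefix3": tok.lower_[:3],
        "pos": tok.pos_,
        "is_capitalised": tok.text[0].isupper(),
        "in_device_dict": tok.lower_ in DEVICE_DICTIONARY,
        "is_alpha": tok.is_alpha,
        "is_stop": tok.is_stop,
        # features of the syntactic head token
        "head_lower": tok.head.lower_,
        "head_pos": tok.head.pos_,
    }
    # features of surrounding words within the window
    for offset in range(-window, window + 1):
        j = i + offset
        if offset != 0 and 0 <= j < len(doc):
            feats[f"{offset}:lower"] = doc[j].lower_
            feats[f"{offset}:pos"] = doc[j].pos_
    return feats

doc = nlp("Increase the temperature of the oven to 400 degrees Fahrenheit")
X = [token_features(doc, i) for i in range(len(doc))]
print(X[5])  # features for the token "oven"
```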
#### 3.4.2 Neural Network
We sought to determine how a neural network-based approach would perform in our sequence labelling task. To accomplish this we employed an open-source named entity recognition tool called NeuroNER Dernoncourt et al. (2017). This tool is underpinned by a type of recurrent neural network (RNN) known as long short-term memory (LSTM) Dernoncourt et al. (2016). We configured it to make use of spaCy for tokenisation (same as in our CRF-based method described above), as well as pre-trained GloVe embeddings Pennington et al. (2014) for learning word sequence representations. We trained a new NeuroNER model on our own annotated corpus, using the default hyper-parameters for the neural network3.
Footnote 3: github.com/Franck-Dernoncourt/NeuroNER
## 4 Evaluation and Results
### Evaluation Method
We collected all of the recipes about a given device (according to the manual annotations) and tokenised them using the spaCy tool Honnibal and Montani (2017). These recipes were then concatenated, resulting in a total of 195 and 120 recipes mentioning the oven and fridge, respectively. Each word is labelled according to the IOB scheme Ramshaw and Marcus (1995), with four semantic labels: "Where", "What", "Why" and "How". In total there are nine possible labels: "B-Where", "I-Where", "B-What", "I-What", "B-Why", "I-Why", "B-How", "I-How" and "O".
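For illustration, a plausible IOB labelling of the earlier sample instruction under this scheme (our own example, not taken from the gold standard) is shown below.

```python
# Illustrative IOB labelling of the sample instruction under the nine-label
# scheme; this is our own example, not a gold-standard annotation.
tokens = ["Increase", "the", "temperature", "of", "the", "oven",
          "to", "400", "degrees", "Fahrenheit"]
labels = ["B-Why", "O", "B-What", "O", "O", "B-Where",
          "O", "B-How", "I-How", "I-How"]

for tok, lab in zip(tokens, labels):
    print(f"{tok:12s} {lab}")
```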
We partitioned the dataset into training, validation, and test subsets using splits of 70%, 15%, and 15% respectively4. To guarantee a representative distribution in each subset, stratification was conducted based on the nine designated labels. Additionally, this stratification process ensured an even distribution of false positive recipes. False positive recipes refer to those containing a "Where" term from the dictionary, yet upon annotation, were not perceived to contain instructions related to the targeted devices.
Footnote 4: The datasets can be found at github.com/FilipposVentirozos/Towards-End-User-Development-for-IoT.
Our proposed methodologies were subjected to evaluation using the test set. The validation set was used in the following manner: (1) to determine the epoch for stopping the training of the neural network; and (2) to select optimal features and tune hyper-parameters for CRFs. For the latter, we measured the F1-score Sasaki (2007) by subtracting groups of features (see Section 3.4.1) in the following order: (1) parent set of features, (2) features of up to three surrounding words, (3) features of up to two surrounding words, and (4) features of only one surrounding word. In addition, we performed three-fold cross-validation on the training and validation set combined to find the best hyper-parameters for the CRF. For that, a random search algorithm (Pedregosa et al., 2011) was used with 80 possible combinations of the hyper-parameters \(c1\), \(c2\) and \(min\_freq\), where the first two stand for L1 and L2 regularisation, respectively, and the last one is a cut-off threshold for the occurrence frequency of a feature5.
Footnote 5: See sklearn-crfsuite.readthedocs.io/en/latest/api.html for further details
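A minimal sketch of this hyper-parameter search, in the spirit of the sklearn-crfsuite documentation, is shown below; the sampling distributions and the scoring set-up are our assumptions.

```python
# Sketch of the 3-fold randomised hyper-parameter search over c1, c2 and
# min_freq for the CRF; distributions and scorer settings are assumptions.
import scipy.stats
import sklearn_crfsuite
from sklearn_crfsuite import metrics
from sklearn.metrics import make_scorer
from sklearn.model_selection import RandomizedSearchCV

labels = ["B-Where", "I-Where", "B-What", "I-What",
          "B-Why", "I-Why", "B-How", "I-How"]  # "O" excluded from scoring

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100,
                           all_possible_transitions=True)
param_space = {
    "c1": scipy.stats.expon(scale=0.5),   # L1 regularisation
    "c2": scipy.stats.expon(scale=0.05),  # L2 regularisation
    "min_freq": [0, 1, 2, 3, 5],          # feature frequency cut-off
}
f1_scorer = make_scorer(metrics.flat_f1_score, average="micro", labels=labels)

search = RandomizedSearchCV(crf, param_space, cv=3, n_iter=80,
                            scoring=f1_scorer, n_jobs=-1)
# X_trainval / y_trainval: lists of per-sentence feature dicts / label lists.
# search.fit(X_trainval, y_trainval)
# print(search.best_params_)
```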
### Results
#### 4.2.1 Exploratory Analysis
Figure 1 provides a depiction of the per-device distribution of word sequences labelled as "Where", "What", "Why" and "How". There were a total of 2,131 oven-related and 323 fridge-related labels.
#### 4.2.2 Prediction
Tables 1 and 2 present the predictions against the test set for the oven and the fridge, respectively. For the CRF approach, the best F1-score was achieved when all of the groups of features were used; see Section 4.1. It was noticeable, however, that adding more features caused the model score to plateau; hence, the predefined groups of features were considered sufficient.
## 5 Discussion
In Figure 1, one can notice the discrepancy in frequency between the "How" label and the rest for each device. It is worth noting at this point that part of this discrepancy arises because the "How" element was often labelled as a whole phrase, whereas the rest tended to have only one word labelled. Taking this adjustment into consideration, the labels "How" and "Where" seem to be the key identifiers for an event. However, according to Milis et al. (2019) we need all four of them to describe an event. Based on our observation, recipes rely on common sense to define events; for instance, if we take the phrase _"preheat the oven on 200 degrees C"_, we ourselves would understand that the reason to do this is to increase the temperature, from which we can infer that to increase is the "Why" and the object that we want to increase (i.e. the temperature) is the "What". Hence, there needs to be a mapping from "Where" and "How" to the full quadruple.
Table 1: Comparative results for conditional random fields and the neural network approach on the test set, with respect to the oven. The best performance is shown in bold. The last column displays the number of true labelled words in the test set.

| Label | CRF Suite F1 score | NeuroNER F1 score | N. gold-standard samples |
|---|---|---|---|
| How | 0.550 | 0.573 | 226 |
| What | 0.000 | 0.000 | 3 |
| Where | 0.704 | 0.778 | 34 |
| Why | 0.944 | 0.878 | 19 |
| micro avg | 0.598 | **0.624** | 282 |

Table 2: Comparative results for conditional random fields and the neural network approach on the test set, with respect to the fridge. The best performance is shown in bold. The last column displays the number of true labelled words in the test set.

| Label | CRF Suite F1 score | NeuroNER F1 score | N. gold-standard samples |
|---|---|---|---|
| How | 0.488 | 0.760 | 29 |
| Where | 0.538 | 0.727 | 11 |
| Why | 0.000 | 0.000 | 0 |
| micro avg | 0.507 | **0.750** | 40 |

Figure 1: Distribution of the values for the four slots ("Where", "What", "Why" and "How"). The pink and blue bars correspond to the annotations for the two devices, oven and fridge, respectively.

During annotation, it was discovered that there are a number of code statements which link the
events with each other. For instance, _"cook until golden brown"_ would refer to a "while" condition, _"insert into the microwave and heat with 10 second intervals until melted"_ refers also to a "while" loop until a condition is met, and _"if you have a 1000W, cook for 30 seconds, unless with a 800W you should let it for 40 seconds"_ is a clear "if" statement. The keywords which indicate code statements such as in the above examples "until", "intervals" and "if" were not labelled. It was noted that often these words link together the events that could relate to IoT commands and functionality.
The linkage of cooking events and device events could also help to infer device settings from imprecise instructions. During the annotation process it came to our attention that many recipes were not written using precise language. A workaround to this would be cross-referencing amongst other recipes. That is, mentions of devices could be linked to cooking events and through these, similar device instructions could be inferred.
Finally, it was shown in our evaluation of semantic parsing (Section 4.2.2) that the best performing approach for each device was the neural network-based approach. We believe that this is due to the CRF model not being able to capture long dependencies between words. Indeed, in using the CRF model, when more than three neighbouring words were inserted, the performance (in terms of F1-score) plateaued.
## 6 Conclusion
We created a first-of-its-kind annotated corpus that links instructional text to IoT devices. The annotations in the corpus were inspired by the 3W1H methodology that has been used in previous studies to describe IoT events. We also demonstrated methods for semantic parsing and compared them. We concluded that the neural network approach was superior. We also observed that many recipe instructions are written in imprecise language, leading to missing information that can be potentially inferred.
## Acknowledgements
We would like to express our gratitude to ARM Ltd and the UK Engineering and Physical Sciences Research Council (EPSRC), Grant No. EP/S513842/1 (Studentship 2109081), for their generous financial support, which was instrumental in facilitating this research study.
|
2308.13523 | On the Particle and Field nature of $γ^μ$ matrices in the Dirac
Equation and the Nature's intrinsic fifth force | The Dirac equation is a cornerstone of modern particle physics, which
integrates special relativity and quantum mechanics into a consistent
framework, yielding the prediction of electron and its antiparticle
counterpart, positron. The Dirac equation also lays the foundation of quantum
electrodynamics, such that QED phenomenon is supported by fundamental Dirac
Algebras calculation. In this article, we will introduce new perspectives of
the $\gamma^\mu$ matrix in the Dirac Algebra, by realizing the $\gamma^\mu$
matrices are actual formal quantum fields, the excitation of $\gamma^\mu$
fields correspond to a new particle with both boson and fermion nature. Thus,
we show that $\gamma^{\mu}$ is a particle in nature, and can be referred as the
nature's intrinsic fifth force. The $\gamma^\mu$ field also serves as the
boson-fermion connector in QED interaction. | B. T. T. Wong | 2023-07-31T10:57:34Z | http://arxiv.org/abs/2308.13523v2 | On the Particle and Field nature of \(\gamma^{\mu}\) matrices in the Dirac Equation and the Nature's intrinsic fifth force
###### Abstract
The Dirac equation is a cornerstone of modern particle physics, which integrates special relativity and quantum mechanics into a consistent framework, yielding the prediction of the electron and its antiparticle counterpart, the positron. The Dirac equation also lays the foundation of quantum electrodynamics, such that QED phenomena are supported by fundamental Dirac algebra calculations. In this article, we will introduce new perspectives on the \(\gamma^{\mu}\) matrices in the Dirac algebra, by realizing that the \(\gamma^{\mu}\) matrices are actual formal quantum fields, whose excitations correspond to a new particle with both boson and fermion nature. Thus, we show that \(\gamma^{\mu}\) is a particle in nature, and can be referred to as nature's intrinsic fifth force. The \(\gamma^{\mu}\) field also serves as the boson-fermion connector in QED interactions.
## 1 Introduction
The particles of the Standard Model (SM) are mainly classified into spin-0 and spin-1 gauge bosons, which are described by the excitations of gauge fields, and spin-1/2 fermions, which are described by the quantization of Dirac spinor fields. The fermions include leptons and quarks, which are the matter building blocks; the spin-one gauge bosons include the photon, which is responsible for the electromagnetic interaction; the \(W^{\pm}\) and \(Z\) bosons, which are responsible for the weak interaction; and eight gluons, which are the force carriers of the strong interaction. Finally, the spin-zero Higgs boson, the only scalar particle in the SM, is responsible for accounting for the masses of the three electroweak bosons, as well as the fermion masses, through the process of spontaneous symmetry breaking [1, 2, 3, 4, 5, 6]. Gravity, on the other hand, is manifested as spacetime geometry, which is governed by Einstein's theory of general relativity. Altogether, the electromagnetic force, strong force, weak force and gravity constitute the four fundamental forces in nature.
The Dirac equation, proposed by Dirac in 1928 [7], describes the dynamics of a relativistic fermion in a way compatible with special relativity and quantum mechanics, and reads as follows
\[(i\gamma^{\mu}\partial_{\mu}-m)\psi=0\,, \tag{1}\]
where the \(\gamma^{\mu}\)s are \(4\times 4\) matrix objects that satisfy the Dirac algebra,
\[\{\gamma^{\mu},\gamma^{\nu}\}=2\eta^{\mu\nu}{\bf 1}_{4\times 4} \tag{2}\]
with \(\eta^{\mu\nu}\) the inverse of the flat Minkowski metric. Mathematically, the set of gamma matrices \(\gamma^{\mu}=\{\gamma^{0},\gamma^{1},\gamma^{2},\gamma^{3}\}\) generates a basis representation of the Clifford algebra \({\rm Cl}_{1,3}(\mathbb{R})\), and
its commutator \([\gamma^{\mu},\gamma^{\nu}]=\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu}\) is the basis for the Lorentz algebra which facilitates three spatial rotations and three Lorentz boosts [8]. Physically, we know that \(ie\gamma^{\mu}\) acts as a Feynman vertex in momentum space. However, generally speaking, the intrinsic physical meaning of the \(\gamma^{\mu}\) matrix is not very clearly known.
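As a quick numerical illustration (ours, not part of the original argument), the Dirac algebra 2 can be verified directly in the standard Dirac representation with signature \((+,-,-,-)\):

```python
# Numerical check of the Dirac algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} 1
# in the standard Dirac representation, signature (+,-,-,-).  Our own check.
import numpy as np

s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
# gamma^0 = sigma_3 (x) 1,  gamma^i = (i sigma_2) (x) sigma_i
gamma = [np.kron(s[3], s[0])] + [np.kron(1j * s[2], si) for si in s[1:]]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for mu in range(4):
    for nu in range(4):
        anticommutator = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticommutator, 2 * eta[mu, nu] * np.eye(4))
print("{gamma^mu, gamma^nu} = 2 eta^{mu nu} 1 verified.")
```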
In this paper, we will give a novel physical interpretation of the constant gamma matrices \(\gamma^{\mu}\) as quantum fields whose excitations correspond to particles. Secondly, we will explicitly demonstrate Wick's theorem for the \(\gamma^{\mu}\) matrices. Finally, we will work out the formalism of gamma fields in differential-geometric terms, and we find that the electromagnetic field tensor \(F^{\mu\nu}\) can be represented by gamma matrices. The integral formalism of gamma matrices is also studied.
## 2 Constant classical background field \(\gamma^{\mu}\)
In this section, we will give an intensive study of the well-known \(\gamma^{\mu}\) matrix appearing in the Dirac equation. Explicitly, we would like to investigate the physical meaning of the gamma matrix, beyond the known fact that it forms a basis of the Clifford algebra.
First of all, the \(\gamma^{\mu}\) matrix transforms like a vector field, and it is a constant vector field,
\[\gamma^{\prime\mu}=\Lambda^{\mu}_{\nu}\gamma^{\nu}\,, \tag{3}\]
where \(\Lambda^{\mu}_{\nu}\) is the Lorentz transformation.
Since \(\gamma^{\mu}\) is a constant vector field, it admits a global transformation instead of a local transformation. We know that it globally transforms as
\[V\equiv\Lambda_{\frac{1}{2}}=e^{-\frac{i}{2}\omega_{\mu\nu}S^{\mu\nu}}\,, \tag{4}\]
where \(\omega_{\mu\nu}\) is the infinitesimal global parameter and \(S^{\mu\nu}=\frac{i}{4}[\gamma^{\mu},\gamma^{\nu}]\) is the generator of the Lorentz algebra for the Lorentz group \(SO(1,3)\). We then have the following set of global transformation rules under the global SO(1,3) Lorentz group
\[\begin{cases}\gamma^{\prime\mu}=V^{-1}\gamma^{\mu}V\\ \psi^{\prime}=V\psi\,.\end{cases} \tag{5}\]
Let us compare this with the local transformation of a non-abelian gauge field [9]
\[\begin{cases}A^{\prime\mu}=UA^{\mu}U^{-1}+\frac{i}{g}U\partial^{\mu}U^{-1}\\ \psi^{\prime}=U\psi\end{cases} \tag{6}\]
where \(U(x)=e^{i\alpha^{a}(x)t^{a}}\) is an element of an SU(N) Lie group. There is a remarkable similarity between 5 and 6. The only difference is that the former is global while the latter is local, and the former belongs to the global non-abelian SO(1,3) group while the latter belongs to a general non-abelian Lie group. Notice that the second term of the first equation, i.e. the inhomogeneous term appearing in 6, vanishes here because we have a global transformation; explicitly we should write
\[\gamma^{\prime\mu}=V^{-1}\gamma^{\mu}V+\frac{i}{g^{\prime}}V^{-1}\partial^{ \mu}V \tag{7}\]
but \(\partial^{\mu}V=0\) as \(\partial^{\mu}\omega_{\alpha\beta}=0\) given that \(\omega_{\mu\nu}\) is independent of spacetime.
Another difference is that \(V\) is non-unitary, due to the fact that the SO(1,3) group generators are non-hermitian, while \(U\) is unitary because the corresponding Lie algebra generators are hermitian.
Therefore, we can see that the \(\gamma^{\mu}\) matrix behaves like a non-abelian gauge field, at least classically. When we investigate the tensor notation, one finds \(\gamma^{\mu}_{ab}\) versus \(A^{\mu}_{ij}\equiv A^{\mu a}t^{a}_{ij}\). Similarly, the field strength is defined by
\[F_{\mu\nu}=\partial_{\mu}\gamma_{\nu}-\partial_{\nu}\gamma_{\mu}+\gamma_{\mu} \gamma_{\nu}-\gamma_{\nu}\gamma_{\mu}=-4iS_{\mu\nu}\,, \tag{8}\]
contrasting to non-abelian gauge field strength
\[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+A_{\mu}A_{\nu}-A_{\nu} A_{\mu}\,. \tag{9}\]
Compactly,
\[F=d\gamma+\gamma\wedge\gamma=\gamma\wedge\gamma \tag{10}\]
and
\[F=dA+A\wedge A\,. \tag{11}\]
The Lagrangian is given by
\[{\cal L}=-16{\rm Tr}S_{\mu\nu}S^{\mu\nu}=-16\times-192=3072 \tag{12}\]
which is a constant. The number \(-192\) denotes the spin field strength.
In non-Abelian field strength, it transforms as
\[F^{\prime}_{\mu\nu}=UF_{\mu\nu}U^{-1}\,. \tag{13}\]
We can show that the spin tensor also transforms globally in this way:
\[\begin{split} S^{\prime}_{\mu\nu}&=\frac{i}{4}[ \gamma^{\prime}_{\mu},\gamma^{\prime}_{\nu}]\\ &=\frac{i}{4}[\gamma^{\prime}_{\mu}\gamma^{\prime}_{\nu}-\gamma^{ \prime}_{\nu}\gamma^{\prime}_{\mu}]\\ &=\frac{i}{4}[\Lambda^{-1}_{\frac{1}{2}}\gamma_{\mu}\Lambda_{ \frac{1}{2}}\Lambda^{-1}_{\frac{1}{2}}\gamma_{\nu}\Lambda_{\frac{1}{2}}- \Lambda^{-1}_{\frac{1}{2}}\gamma_{\nu}\Lambda_{\frac{1}{2}}\Lambda^{-1}_{ \frac{1}{2}}\gamma_{\mu}\Lambda_{\frac{1}{2}}]\\ &=\Lambda^{-1}_{\frac{1}{2}}\bigg{(}\frac{i}{4}[\gamma_{\mu} \gamma_{\nu}-\gamma_{\nu}\gamma_{\mu}]\bigg{)}\Lambda_{\frac{1}{2}}\\ &=\Lambda^{-1}_{\frac{1}{2}}S_{\mu\nu}\Lambda_{\frac{1}{2}}\,. \end{split} \tag{14}\]
Therefore, the gamma field behaves as a global gauge field.
## 3 Canonical Quantization
It is known that a fermionic spinor field can be quantized by the equal-time anticommutation relations,
\[\{\psi_{a}(\mathbf{x}),\psi^{\dagger}_{b}(\mathbf{y})\}=\delta^{3}(\mathbf{x}-\mathbf{y}) \delta_{ab}\,. \tag{15}\]
and
\[\{\psi_{a}(\mathbf{x}),\psi_{b}(\mathbf{y})\}=\{\psi^{\dagger}_{a}(\mathbf{x}),\psi^{ \dagger}_{b}(\mathbf{y})\}=0 \tag{16}\]
And indeed we find that the constant gamma field is naturally quantized, through the Dirac algebra of the \(\gamma^{\mu}\) field and its hermitian conjugate \(\gamma^{\mu\dagger}\),
\[\{\gamma^{\mu}_{ac},\gamma^{\nu\dagger}_{cb}\}=2\delta^{\mu\nu}\delta_{ab} \tag{17}\]
The factor of 2 is just a normalization. If we restrict to the 3 spatial dimensions, we obtain an expression in analogy to the anti-commutation relation of the fermionic field in 15,
\[\{\gamma^{i}_{ab},\gamma^{j\dagger}_{bc}\}=2\delta^{ij}\delta_{ac}\,, \tag{18}\]
and also
\[\{\gamma^{i}_{ab},\gamma^{j}_{bc}\}=\{\gamma^{i\dagger}_{ab},\gamma^{j\dagger}_ {bc}\}=0 \tag{19}\]
for \(i\neq j\).
Therefore, we see that \(\mu\) has the role of \(x\) and \(\nu\) has the role of \(y\), and in 3 dimensions \(i\) has the role of \(\mathbf{x}\) and \(j\) has the role of \(\mathbf{y}\).
Next we would like to carry out the canonical quantization of the \(\gamma^{\mu}\) field using fermionic creation and annihilation operators. Let us first take a look at the canonical quantization of the gauge field. The quantized U(1) gauge field is
\[\hat{A}^{\mu}(\mathbf{x})=\int\frac{d^{3}\mathbf{p}}{\sqrt{(2\pi)^{3}}\sqrt{2E_{\mathbf{p} }}}\sum_{\lambda}(\epsilon^{\mu}_{\lambda}\hat{a}_{\mathbf{p}}+\epsilon^{*\mu}_{ \lambda}\hat{a}^{\dagger}_{-\mathbf{p}})e^{i\mathbf{p}\cdot\mathbf{x}}\,, \tag{20}\]
where \(\epsilon^{\mu}_{\lambda}\) is the polarization vector. For fermions, we need two different sets of creation and annihilation operators. We make the following ansatz for the quantization of the \(\gamma^{\mu}\) field, because the field is independent of position and the momenta are discrete modes:
\[\hat{\gamma}^{\mu}_{ab}=\sum_{\mathbf{p}}\sum_{\lambda}\frac{1}{\sqrt{(2\pi)^{3}}} (\epsilon^{\mu}_{\lambda}\theta_{ab}\hat{b}_{\mathbf{p}}+\epsilon^{*\mu}_{\lambda }\theta^{*}_{ab}\hat{c}^{\dagger}_{-\mathbf{p}})\,, \tag{21}\]
and
\[\hat{\gamma}^{\mu\dagger}_{bc}=\sum_{\mathbf{p}}\sum_{\lambda}\frac{1}{\sqrt{(2 \pi)^{3}}}(\epsilon^{*\mu}_{\lambda}\theta_{bc}\hat{b}^{\dagger}_{-\mathbf{p}}+ \epsilon^{\mu}_{\lambda}\theta_{ab}\hat{c}_{\mathbf{p}})\,, \tag{22}\]
We take the constant tensor \(\theta_{ab}\) to satisfy the following normalization
\[\theta_{ab}\theta^{*}_{bc}=\delta_{ac}\quad\text{and}\quad\theta^{*}_{ab} \theta_{bc}=\delta_{ac} \tag{23}\]
And the creation and annihilation operators satisfy the following anti-commutation relations
\[\{\hat{b}_{\mathbf{p}},\hat{b}^{\dagger}_{\mathbf{p^{\prime}}}\}=(2\pi)^{3}\delta_{ \mathbf{p}\mathbf{p^{\prime}}}\quad\text{and}\quad\{\hat{c}_{\mathbf{p}},\hat{c}^{\dagger }_{\mathbf{p^{\prime}}}\}=(2\pi)^{3}\delta_{\mathbf{p}\mathbf{p^{\prime}}}\,, \tag{24}\]
while all other anti-commutation relations are zero. Now, we compute the anti-commutator of the field:
\[\begin{split}\{\hat{\gamma}^{\mu}_{ac},\hat{\gamma}^{\mu^{ \dagger}}_{cb}\}&=\sum_{\mathbf{p}}\sum_{\mathbf{p^{\prime}}}\sum_{ \lambda}\sum_{\rho}\frac{1}{(2\pi)^{3}}(\epsilon^{\mu}_{\lambda}\epsilon^{*\nu }_{\rho}\theta_{ac}\theta^{*}_{cb}\{\hat{b}_{\mathbf{p}},\hat{b}^{\dagger}_{-\mathbf{ p^{\prime}}}\}+\epsilon^{*\mu}_{\lambda}\epsilon^{\nu}_{\rho}\theta^{*}_{ac} \theta_{cb}\{\hat{c}^{\dagger}_{-\mathbf{p}},\hat{c}_{\mathbf{p^{\prime}}}\})\\ &=\sum_{\mathbf{p}}\sum_{\mathbf{p^{\prime}}}\sum_{\lambda}\sum_{\rho} \frac{1}{(2\pi)^{3}}(\epsilon^{\mu}_{\lambda}\epsilon^{*\nu}_{\rho}\theta_{ac }\theta^{*}_{cb}\{\hat{b}_{\mathbf{p}},\hat{b}^{\dagger}_{-\mathbf{p^{\prime}}}\}+ \epsilon^{*\mu}_{\lambda}\epsilon^{\nu}_{\rho}\theta^{*}_{ac}\theta_{cb}\{ \hat{c}_{\mathbf{p^{\prime}}},\hat{c}^{\dagger}_{-\mathbf{p}}\})\\ &=\sum_{\mathbf{p}}\sum_{\mathbf{p^{\prime}}}\sum_{\lambda}\sum_{\rho} \frac{1}{(2\pi)^{3}}(\epsilon^{\mu}_{\lambda}\epsilon^{*\nu}_{\rho}\delta_{ab}( 2\pi)^{3}\delta_{\mathbf{p},-\mathbf{p^{\prime}}}+(2\pi)^{3}\epsilon^{*\mu}_{\lambda} \epsilon^{\nu}_{\rho}\delta_{ab}\delta_{\mathbf{p^{\prime}},-\mathbf{p}})\\ &=\sum_{\lambda}\sum_{\rho}(\epsilon^{\mu}_{\lambda}\epsilon^{*\nu }_{\rho}+\epsilon^{*\mu}_{\lambda}\epsilon^{\nu}_{\rho})\delta_{ab}\\ &=\sum_{\lambda}(\epsilon^{\mu}_{\lambda}\epsilon^{*\nu}_{\lambda }+\epsilon^{*\mu}_{\lambda}\epsilon^{\nu}_{\lambda})\delta_{ab}\\ &=-2\eta^{\mu\nu}\delta_{ab}\end{split} \tag{25}\]
where in the third line we have used the fact that
\[\sum_{\mathbf{p},\mathbf{p}^{\prime}}\delta_{\mathbf{p},-\mathbf{p}^{\prime}}=1 \tag{26}\]
as only the term when \(\mathbf{p}=\mathbf{p}^{\prime}=0\) survives. In the fifth line, we have used the fact from polarization sum of massless particle that
\[\sum_{\lambda}\epsilon^{\mu}_{\lambda}\epsilon^{*\nu}_{\lambda}=\sum_{\lambda }\epsilon^{*\mu}_{\lambda}\epsilon^{\nu}_{\lambda}=-\eta^{\mu\nu} \tag{27}\]
As the canonical quantization only concerns about the spatial dimensions, this suffices to give
\[\framebox{$\{\hat{\gamma}^{i}_{ac},\hat{\gamma}^{j\dagger}_{cb}\}=2\delta^{ij}\delta_{ab}$} \tag{28}\]
and it is easy to show that
\[\framebox{$\{\hat{\gamma}^{i}_{ac},\hat{\gamma}^{j}_{cb}\}=\{\hat{\gamma}^{i\dagger}_{ac},\hat{\gamma}^{j\dagger}_{cb}\}=0$}\,. \tag{29}\]
as all other anti-commutation relations other than 24 are zero.
Therefore we see that the constant field \(\gamma^{\mu}\) behaves like both a boson and a fermion, in that it satisfies the global gauge transformation rule while its quantization satisfies anti-commutation relations. This special property can in fact be demonstrated through the QED interaction
\[{\cal L}_{\rm QED,interaction}=-e\bar{\psi}_{a}\gamma^{\mu}_{ab}A_{\mu}\psi_{b}\,, \tag{30}\]
so the gamma matrix field \(\gamma^{\mu}\) connects both the fermion (Dirac spinor) and the gauge boson \(A_{\mu}\). It thus makes sense that the gamma field carries both bosonic and fermionic properties, as it connects to both types of particle. We term the gamma field the boson-fermion connector. Furthermore, \(ie\gamma^{\mu}\) is known as the QED vertex, in which it connects two spinor fermions and one boson.
## 4 Correlation function of Gamma \(\gamma^{\mu}\) fields
As we have seen in the last section, the \(\gamma^{\mu}\) matrices serve as constant quantum fields; therefore \(\gamma^{\mu}\) itself is considered as a particle with undefined spin, as it is neither a fermion nor a boson. Now we will work out the \(n\)-point Green's function of the \(\gamma^{\mu}\) fields.
The \(n\)-point Green's function of the \(\gamma^{\mu}\) fields is simply the trace of \(n\) gamma matrices. The reason will become apparent when we work out some examples. First notice that the trace of an odd number of gamma matrices is zero, so the \(n\)-point correlation function of the fields vanishes for odd \(n\).
For example, we have
\[\langle 0|\gamma^{\mu}|0\rangle={\rm Tr}\gamma^{\mu}=0\,. \tag{31}\]
The vacuum expectation value of fields is zero
\[\langle 0|\hat{\psi}(x)|0\rangle=0 \tag{32}\]
Next
\[\langle 0|\hat{\gamma}^{\mu}\hat{\gamma}^{\nu}|0\rangle={\rm Tr}(\gamma^{\mu }\gamma^{\nu})=4\eta^{\mu\nu}\,. \tag{33}\]
And we see that
\[\langle 0|{\rm T}\hat{\phi}(x)\hat{\phi}(y)|0\rangle=\Delta_{F}(x-y) \tag{34}\]
So we can see that \(\mu\) maps to \(x\) and \(\nu\) maps to \(y\). Furthermore, we see that
\[\langle 0|\mathrm{T}\hat{\gamma}^{\mu}\hat{\gamma}^{\nu}\hat{\gamma}^{\rho}\hat{ \gamma}^{\sigma}|0\rangle=\mathrm{Tr}(\hat{\gamma}^{\mu}\hat{\gamma}^{\nu}\hat{ \gamma}^{\rho}\hat{\gamma}^{\sigma})=4(\eta^{\mu\nu}\eta^{\rho\sigma}-\eta^{ \mu\rho}\eta^{\nu\sigma}+\eta^{\mu\sigma}\eta^{\nu\rho})\,, \tag{35}\]
so this is just the sum over contractions of the indices. The factor of \(4\) is unimportant and can be normalized later. Comparing with the contraction of ordinary fields,
\[\begin{split}&\langle 0|\hat{\phi}(x_{1})\hat{\phi}(x_{2})\hat{ \phi}(x_{3})\hat{\phi}(x_{4})|0\rangle\\ &=\Delta_{F}(x_{1}-x_{2})\Delta_{F}(x_{3}-x_{4})+\Delta_{F}(x_{1} -x_{3})\Delta_{F}(x_{2}-x_{4})+\Delta_{F}(x_{1}-x_{4})\Delta_{F}(x_{2}-x_{3}) \,.\end{split} \tag{36}\]
So we map \(\mu\) to \(x_{1}\), \(\nu\) to \(x_{2}\), \(\rho\) to \(x_{3}\) and \(\sigma\) to \(x_{4}\). Everything is similar except for the minus sign of the second term and the front factor of \(4\).
Let us investigate 35 diagrammatically: each of the three terms corresponds to one pairwise contraction of the Lorentz indices, namely \((\mu\nu)(\rho\sigma)\), \((\mu\rho)(\nu\sigma)\) and \((\mu\sigma)(\nu\rho)\), in direct analogy with the three Wick contractions in 36.
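These two- and four-point functions are just the standard trace identities, which can be checked numerically; the following sketch (ours) uses the Dirac representation with signature \((+,-,-,-)\):

```python
# Numerical check of the trace identities 33 and 35, which play the role of
# the 2- and 4-point functions of the gamma field.  Our own check, using the
# Dirac representation and signature (+,-,-,-).
import itertools
import numpy as np

s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
gamma = [np.kron(s[3], s[0])] + [np.kron(1j * s[2], si) for si in s[1:]]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for mu, nu in itertools.product(range(4), repeat=2):
    assert np.isclose(np.trace(gamma[mu] @ gamma[nu]), 4 * eta[mu, nu])

for mu, nu, rho, sig in itertools.product(range(4), repeat=4):
    direct = np.trace(gamma[mu] @ gamma[nu] @ gamma[rho] @ gamma[sig])
    pairing_sum = 4 * (eta[mu, nu] * eta[rho, sig]
                       - eta[mu, rho] * eta[nu, sig]
                       + eta[mu, sig] * eta[nu, rho])
    assert np.isclose(direct, pairing_sum)
print("2- and 4-gamma trace identities verified.")
```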
### \(N\)-point correlation function of \(\gamma^{\mu}\) fields and Wick's theorem
Now we study the rigorous definition of the \(N\)-point correlation function of the constant gamma quantum fields. From the above intuitive demonstration, we have already shown that the \(N\)-point correlation function is given by the sum over all possible contractions. However, we are still left with the sign problem, in particular the \(-\eta^{\mu\rho}\eta^{\nu\sigma}\) term. Now we will rigorously deal with all these issues. First let us use again the 4-point example and reconsider the contraction as follows. Since \(\mu\neq\nu\), we have \(\gamma^{\mu}\gamma^{\nu}=-\gamma^{\nu}\gamma^{\mu}\). Consider the canonical way of contracting, which we fix as the contraction of the first two pairs. For notational convenience, we will drop the hat symbol.
\[\begin{split}\langle 0|\mathrm{T}\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma}|0\rangle&=\frac{1}{16}\mathrm{Tr}(\gamma^{\mu}\gamma^{\nu})\mathrm{Tr}(\gamma^{\rho}\gamma^{\sigma})+(-1)\,\frac{1}{16}\mathrm{Tr}(\gamma^{\mu}\gamma^{\rho})\mathrm{Tr}(\gamma^{\nu}\gamma^{\sigma})+(-1)^{2}\,\frac{1}{16}\mathrm{Tr}(\gamma^{\mu}\gamma^{\sigma})\mathrm{Tr}(\gamma^{\nu}\gamma^{\rho})\\ &=\eta^{\mu\nu}\eta^{\rho\sigma}-\eta^{\mu\rho}\eta^{\nu\sigma}+\eta^{\mu\sigma}\eta^{\nu\rho}\end{split} \tag{37}\]
where for each contraction we normalized by the dimension \(D=4\), and each factor of \((-1)\) accounts for one anticommutation needed to bring the contracted gamma matrices next to each other.
In general, we can find that the \(N\)-point correlation function of the gamma field can be formalized by our generalized Wick's theorem as follow.
\[\langle 0|\mathrm{T}\gamma^{\mu_{1}}\gamma^{\mu_{2}}\cdots\gamma^{\mu_{N}}|0 \rangle=\frac{1}{D^{N/2}}\sum_{\begin{subarray}{c}i_{1},i_{2},\cdots,i_{N} \in S\\ i_{1}<i_{2}<\cdots<i_{N}\end{subarray}}\Theta(\mu_{i_{1}},\mu_{i_{2}},\cdots, \mu_{i_{N}})\langle 0|\gamma^{\mu_{i_{1}}}\gamma^{\mu_{i_{2}}}\cdots\gamma^{\mu_{i_{N} }}|0\rangle\,. \tag{38}\]
Here \(S\) denotes the set of all possible moves under the time order constraint \(i_{1}<i_{2}<\cdots<i_{N}\). \(\Theta(\mu_{i_{1}},\mu_{i_{2}},\cdots,\mu_{i_{N}})\) is a function of indices which gives factors of \((-1)^{m}\) where \(m\) is the number of moves. Explicitly
\[\Theta(\mu_{i_{1}},\mu_{i_{2}},\cdots,\mu_{i_{N}})=\prod_{\begin{subarray}{c}j =1\\ m_{j}\leq N-2\end{subarray}}^{N/2-1}(-1)^{m_{j}}\,. \tag{39}\]
This explicitly gives
\[\boxed{\langle 0|\mathrm{T}\gamma^{\mu_{1}}\gamma^{\mu_{2}}\cdots\gamma^{\mu_ {N}}|0\rangle=\frac{1}{D^{N/2}}\sum_{\begin{subarray}{c}i_{1},i_{2},\cdots,i_ {N}\in S\\ i_{1}<i_{2}<\cdots<i_{N}\end{subarray}}\prod_{\begin{subarray}{c}j=1\\ m_{j}\leq N-2\end{subarray}}^{N/2-1}(-1)^{m_{j}}\langle 0|\gamma^{\mu_{i_{a}}}\cdots \gamma^{\mu_{i_{l}}}|0\rangle}. \tag{40}\]
This evaluates to give
\[\langle 0|\mathrm{T}\gamma^{\mu_{1}}\gamma^{\mu_{2}}\cdots\gamma^{ \mu_{N}}|0\rangle =\frac{1}{D^{N/2}}\sum_{\begin{subarray}{c}i_{1},i_{2},\cdots,i_{N} \in S\\ i_{1}<i_{2}<\cdots<i_{N}\end{subarray}}\prod_{\begin{subarray}{c}j=1\\ m_{j}\leq N-2\end{subarray}}^{N/2-1}(-1)^{m_{j}}\prod_{\begin{subarray}{c}i_{a},i _{b}=1,2,\cdots,N\\ i_{a}<i_{b}\end{subarray}}^{N/2}\langle 0|\gamma^{\mu_{i_{a}}}\gamma^{\mu_{i_{b}}}|0\rangle \tag{41}\] \[=\sum_{\begin{subarray}{c}i_{1},i_{2},\cdots,i_{N}\in S\\ i_{1}<i_{2}<\cdots<i_{N}\end{subarray}}\prod_{\begin{subarray}{c}j=1\\ m_{j}\leq N-2\end{subarray}}^{N/2-1}(-1)^{m_{j}}\prod_{\begin{subarray}{c}i_{a},i _{b}=1,2,\cdots,N\\ i_{a}<i_{b}\end{subarray}}^{N/2}\eta^{\mu_{i_{a}}\mu_{i_{b}}}\,.\]
Let us illustrate this with an example: take \(N=6\), with the 6-point correlation function \(\langle 0|\mathrm{T}\gamma^{\mu_{1}}\gamma^{\mu_{2}}\gamma^{\mu_{3}}\gamma^{\mu_{4}}\gamma^{\mu_{5}}\gamma^{\mu_{6}}|0\rangle\). There are a total of 15 possible contractions. For example, by direct computation one of the terms is
\[-\eta^{\mu_{1}\mu_{4}}\eta^{\mu_{2}\mu_{5}}\eta^{\mu_{3}\mu_{6}}\,. \tag{42}\]
Now we want to recover the minus sign. We have
\[\begin{split}&\frac{1}{D^{3}}\langle 0|\gamma^{\mu_{i_{1}}}\gamma^{\mu_{i_{ 2}}}\gamma^{\mu_{i_{3}}}\gamma^{\mu_{i_{4}}}\gamma^{\mu_{i_{5}}}\gamma^{\mu_{i_ {6}}}|0\rangle\\ &=\frac{1}{D^{3}}(-1)^{2}\langle 0|\gamma^{\mu_{i_{1}}}\gamma^{\mu_{i_ {4}}}\gamma^{\mu_{i_{2}}}\gamma^{\mu_{i_{3}}}\gamma^{\mu_{i_{5}}}\gamma^{\mu_{ i_{6}}}|0\rangle\\ &=\frac{1}{D^{3}}(-1)^{2}(-1)^{1}\langle 0|\gamma^{\mu_{i_{1}}} \gamma^{\mu_{i_{4}}}\gamma^{\mu_{i_{2}}}\gamma^{\mu_{i_{5}}}\gamma^{\mu_{i_{3}} }\gamma^{\mu_{i_{6}}}|0\rangle\\ &=-\eta^{\mu_{1}\mu_{4}}\eta^{\mu_{2}\mu_{5}}\eta^{\mu_{3}\mu_{ 6}}\,.\end{split} \tag{43}\]
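As a cross-check of this generalised Wick's theorem, one can compare the normalised trace of \(N\) gamma matrices with the signed sum over pairings; the sketch below (ours) uses the parity of the flattened pairing to reproduce the \((-1)^{m_{j}}\) factors of 41:

```python
# Cross-check of the generalised Wick's theorem for gamma fields: the
# normalised trace of N gamma matrices versus the signed sum over pairings.
# Our own sketch; the parity of the flattened pairing plays the role of the
# (-1)^{m_j} factors.  Dirac representation, signature (+,-,-,-).
import itertools
import numpy as np

s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
gamma = [np.kron(s[3], s[0])] + [np.kron(1j * s[2], si) for si in s[1:]]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def pairings(indices):
    """All ways of splitting the positions into unordered pairs."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for k, partner in enumerate(rest):
        for tail in pairings(rest[:k] + rest[k + 1:]):
            yield [(first, partner)] + tail

def parity_sign(pairing):
    """Sign from the parity of the flattened pairing (i1, j1, i2, j2, ...)."""
    flat = [p for pair in pairing for p in pair]
    inversions = sum(1 for a in range(len(flat)) for b in range(a + 1, len(flat))
                     if flat[a] > flat[b])
    return -1 if inversions % 2 else 1

def wick_value(mus):
    return sum(parity_sign(pr) * np.prod([eta[mus[i], mus[j]] for i, j in pr])
               for pr in pairings(tuple(range(len(mus)))))

# Compare with the direct (normalised) trace for all 6-index combinations.
for mus in itertools.product(range(4), repeat=6):
    direct = np.trace(np.linalg.multi_dot([gamma[m] for m in mus])).real / 4
    assert np.isclose(direct, wick_value(mus))
print("Generalised Wick's theorem verified for N = 6.")
```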
## 5 Geometrical Formalism
In this section, we will give a mathematical formalism for the gamma quantum fields in geometrical terms. First, recall that by definition a \(p\)-form reads
\[\alpha=\frac{1}{p!}a_{\mu_{1}\mu_{2}\cdots\mu_{p}}\omega^{\mu_{1}}\wedge \omega^{\mu_{2}}\wedge\cdots\wedge\omega^{\mu_{p}} \tag{44}\]
where \(a_{\mu_{1}\mu_{2}\cdots\mu_{p}}\) is a totally anti-symmetric tensor.
In analogy, we construct a quantized \(p\)-form built from the gamma quantum fields. We will drop the hat notation for convenience,

\[\alpha=\frac{1}{p!}a_{\mu_{1}\mu_{2}\cdots\mu_{p}}\gamma^{\mu_{1}}\gamma^{\mu_{2}}\cdots\gamma^{\mu_{p}} \tag{45}\]
In particular, for \(a_{\mu_{1}\mu_{2}\cdots\mu_{p}}=\epsilon_{\mu_{1}\mu_{2}\cdots\mu_{p}}\) the Levi-Civita tensor with \(p=4\), it follows that the \(\gamma^{5}\) matrix is in fact a matrix 4-form,

\[-i\gamma^{5}=\frac{1}{4!}\epsilon_{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}\gamma^{\mu_{1}}\gamma^{\mu_{2}}\gamma^{\mu_{3}}\gamma^{\mu_{4}} \tag{46}\]
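This identification can also be checked numerically; the sketch below (ours) assumes the convention \(\epsilon_{0123}=+1\) for the Levi-Civita symbol and the Dirac representation:

```python
# Numerical check of Eq. (46): -i gamma^5 as the Levi-Civita contraction of
# four gamma matrices.  Our own check; assumes epsilon_{0123} = +1 and the
# Dirac representation with gamma^5 = i gamma^0 gamma^1 gamma^2 gamma^3.
import itertools
import math
import numpy as np

s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
gamma = [np.kron(s[3], s[0])] + [np.kron(1j * s[2], si) for si in s[1:]]
gamma5 = 1j * np.linalg.multi_dot(gamma)

def levi_civita(perm):
    """+1 / -1 for even / odd permutations of (0,1,2,3), else 0."""
    if len(set(perm)) < 4:
        return 0
    inversions = sum(1 for a in range(4) for b in range(a + 1, 4)
                     if perm[a] > perm[b])
    return -1 if inversions % 2 else 1

four_form = sum(levi_civita(mus) * np.linalg.multi_dot([gamma[m] for m in mus])
                for mus in itertools.product(range(4), repeat=4)) / math.factorial(4)

assert np.allclose(four_form, -1j * gamma5)
print("gamma^5 is recovered as the matrix 4-form of Eq. (46).")
```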
Next we have the following theorem for forms. Suppose \(\alpha\) is a \(q\)-form and \(\beta\) is a \(p\)-form, then
\[\alpha\wedge\beta=(-1)^{pq}\beta\wedge\alpha\,. \tag{47}\]
For the case of gamma matrices, we have
\[\alpha\beta=(-1)^{pq}\beta\alpha\,. \tag{48}\]
In explicit tensor notation,
\[\alpha_{ab}\beta_{bc}=(-1)^{pq}\beta_{ab}\alpha_{bc}\,. \tag{49}\]
Now consider \(\alpha\) as a \(p\)-form and \(\beta\) as a \(q\)-form. Let
\[\alpha=\frac{1}{p!}a_{\mu_{1}\mu_{2}\cdots\mu_{p}}\omega^{\mu_{1}}\wedge\omega ^{\mu_{2}}\wedge\cdots\wedge\omega^{\mu_{p}}\quad\text{and}\quad\beta=\frac{1 }{q!}b_{\mu_{p+1}\mu_{p+2}\cdots\mu_{p+q}}\omega^{\mu_{p+1}}\wedge\omega^{\mu_ {p+2}}\wedge\cdots\wedge\omega^{\mu_{p+q}}\,. \tag{50}\]
Then
\[\begin{split}\alpha\wedge\beta&=\frac{1}{p!q!}a_{ \mu_{1}\mu_{2}\cdots\mu_{p}}b_{\mu_{p+1}\mu_{p+2}\cdots\mu_{p+q}}\omega^{\mu_ {1}}\wedge\omega^{\mu_{2}}\wedge\cdots\wedge\omega^{\mu_{p}}\wedge\omega^{\mu_ {p+1}}\wedge\omega^{\mu_{p+2}}\wedge\cdots\wedge\omega^{\mu_{p+q}}\\ &=\frac{(p+q)!}{p!q!}a_{[\mu_{1}\mu_{2}\cdots\mu_{p}}b_{\mu_{p+1} \mu_{p+2}\cdots\mu_{p+q}]}\omega^{\mu_{1}}\wedge\omega^{\mu_{2}}\wedge\cdots \wedge\omega^{\mu_{p}}\wedge\omega^{\mu_{p+1}}\wedge\omega^{\mu_{p+2}}\wedge \cdots\wedge\omega^{\mu_{p+q}}\\ &\equiv(\alpha\wedge\beta)_{\mu_{1}\mu_{2}\cdots\mu_{p}\mu_{p+1} \mu_{p+2}\cdots\mu_{p+q}}\omega^{\mu_{1}}\wedge\omega^{\mu_{2}}\wedge\cdots \wedge\omega^{\mu_{p}}\wedge\omega^{\mu_{p+1}}\wedge\omega^{\mu_{p+2}} \wedge\cdots\wedge\omega^{\mu_{p+q}}\end{split} \tag{51}\]
Therefore we have
\[(\alpha\wedge\beta)_{\mu_{1}\mu_{2}\cdots\mu_{p}\mu_{p+1}\mu_{p+2}\cdots\mu_{p+q}}= \frac{(p+q)!}{p!q!}a_{[\mu_{1}\mu_{2}\cdots\mu_{p}}b_{\mu_{p+1}\mu_{p+2}\cdots \mu_{p+q}]} \tag{52}\]
In terms of gamma field basis
\[\alpha\beta=(\alpha\beta)_{\mu_{1}\mu_{2}\cdots\mu_{p}\mu_{p+1}\mu_{p+2}\cdots \mu_{p+q}}\gamma^{\mu_{1}}\gamma^{\mu_{2}}\cdots\gamma^{\mu_{p}\gamma}\gamma^{ \mu_{p+1}}\gamma^{\mu_{p+2}}\cdots\gamma^{\mu_{p+q}} \tag{53}\]
Now we will study a very important concept, the Hodge dual. Suppose \(\alpha\) is a \(p\)-form; the definition of its dual is given by [10]
\[\star\alpha=\frac{1}{p!(n-p)!}\epsilon_{\nu_{1}\cdots\nu_{p}\mu_{1}\cdots\mu_{ n-p}}a^{\nu_{1}\cdots\nu_{p}}\omega^{\mu_{1}}\wedge\cdots\wedge\omega^{\mu_{n-p}}\,. \tag{54}\]
such that \(\star\alpha\) is a \(n-p\) form. Next, by definition,
\[\star(\omega^{\nu_{1}}\wedge\cdots\wedge\omega^{\nu_{p}})=\frac{1}{(n-p)!}\sqrt{|g|}\,|g_{p}|^{-1}\epsilon_{\nu_{1}\cdots\nu_{p}\mu_{1}\cdots\mu_{n-p}}\omega^{\mu_{1}}\wedge\cdots\wedge\omega^{\mu_{n-p}} \tag{55}\]
where \(g_{p}\) is the determinant of the metric tensor associated with the space of the \(p\)-form of \(\alpha\), and \(g\) is the determinant of the metric in the \(n\)-dimensional space. For our case, we have
\[\star\alpha=\frac{1}{p!(n-p)!}\epsilon_{\nu_{1}\cdots\nu_{p}\mu_{1}\cdots\mu_{ n-p}}a^{\nu_{1}\cdots\nu_{p}}\gamma^{\mu_{1}}\cdots\gamma^{\mu_{n-p}}\,. \tag{56}\]
and
\[\star(\gamma^{\nu_{1}}\cdots\gamma^{\nu_{p}})=\frac{1}{(n-p)!}\sqrt{|g|}\,|g_{p}|^{-1}\epsilon_{\nu_{1}\cdots\nu_{p}\mu_{1}\cdots\mu_{n-p}}\gamma^{\mu_{1}}\cdots\gamma^{\mu_{n-p}} \tag{57}\]
The metric is defined by
\[ds^{2}=g_{\mu\nu}\gamma^{\mu}\gamma^{\nu} \tag{58}\]
for which
\[g_{\mu\nu}=\frac{1}{4}{\rm Tr}(\gamma_{\mu}\gamma_{\nu})=\frac{1}{4}\sum_{ab }\gamma_{\mu ab}\gamma_{\nu ba}\,. \tag{59}\]
It is easy to see that the factor \(\sqrt{|g|}\,|g_{p}|^{-1}=1\). Now consider the \(p=1\) case; then
\[\star\gamma^{\nu_{1}}=\frac{1}{(4-1)!}(1)\epsilon_{\nu_{1}\mu_{1}\mu_{2}\mu_{3 }}\gamma^{\mu_{1}}\gamma^{\mu_{2}}\gamma^{\mu_{3}}\,. \tag{60}\]
We then have
\[\star\gamma^{0}=\frac{1}{3!}(3!)\epsilon_{0123}\gamma^{1}\gamma^{2}\gamma^{3}\,. \tag{61}\]
Therefore we have
\[\framebox{$\star\gamma^{0}=\gamma^{1}\gamma^{2}\gamma^{3}=-i\gamma^{0}\gamma^ {5}$\,.} \tag{62}\]
Next,
\[\star\gamma^{1}=\epsilon_{1023}\gamma^{0}\gamma^{2}\gamma^{3} \tag{63}\]
Thus
\[\framebox{$\star\gamma^{1}=-\gamma^{0}\gamma^{2}\gamma^{3}=-i\gamma^{1}\gamma ^{5}$\,.} \tag{64}\]
Then
\[\star\gamma^{2}=\epsilon_{2013}\gamma^{0}\gamma^{1}\gamma^{3}\,. \tag{65}\]
Thus
\[\framebox{\star\gamma^{2}=\gamma^{0}\gamma^{1}\gamma^{3}=-i\gamma^{2}\gamma^{5}} \tag{66}\]
Finally
\[\star\gamma^{3}=\epsilon_{3012}\gamma^{0}\gamma^{1}\gamma^{2}\,. \tag{67}\]
Therefore,
\[\framebox{\star\gamma^{3}=-\gamma^{0}\gamma^{1}\gamma^{2}=-i\gamma^{3}\gamma^{5 }}\,. \tag{68}\]
In other words we have
\[\framebox{\star\gamma^{\mu}=-i\gamma^{\mu}\gamma^{5}} \tag{69}\]
Since \(\gamma^{\mu}\) is a vector field and \(\gamma^{\mu}\gamma^{5}\) is an axial vector field, we see that the dual of the vector field is the axial vector field. And by \(\{\gamma^{5},\gamma^{\mu}\}=0\), we have
\[\star\gamma^{\mu}=i\gamma^{5}\gamma^{\mu} \tag{70}\]
Therefore we have the hodge dual operator as
\[\framebox{\star=i\gamma^{5}} \tag{71}\]
As from \(\star\gamma^{0}=\gamma^{1}\gamma^{2}\gamma^{3}\), it follows that
\[\gamma^{0}\star\gamma^{0}=\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3} \tag{72}\]
It follows that
\[\gamma^{5}=i\gamma^{0}\star\gamma^{0} \tag{73}\]
In general, we find
\[\framebox{\gamma^{5}=i\gamma^{\mu}\star\gamma^{\mu}} \tag{74}\]
Since
\[\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}=\gamma^{3}\gamma^{2}\gamma^{1}\gamma^{0}, \tag{75}\]
therefore we find that \(\gamma^{5}\) is a dual invariant in 4 dimensions. In general
\[\gamma^{0}\gamma^{1}\cdots\gamma^{n}=(-1)^{\frac{n(n+1)}{2}}\gamma^{n}\cdots \gamma^{1}\gamma^{0}\,. \tag{76}\]
Next, we can promote this to other basis elements. Notice that
\[\star(\gamma^{\nu_{1}}\gamma^{\nu_{2}})=\frac{1}{(4-2)!}\sqrt{|g|}|g_{p}|^{-1 }\epsilon_{\nu_{1}\nu_{2}\mu_{1}\mu_{2}}\gamma^{\mu_{1}}\gamma^{\mu_{2}}\,. \tag{77}\]
Then for example
\[\begin{split}\star(\gamma^{0}\gamma^{1})&=\frac{1}{ 2!}(1)\epsilon_{01\mu_{1}\mu_{2}}\gamma^{\mu_{1}}\gamma^{\mu_{2}}\\ &=\frac{1}{2!}(2!)(1)\epsilon_{0123}\gamma^{2}\gamma^{3}\,.\end{split} \tag{78}\]
Thus
\[\star(\gamma^{0}\gamma^{1})=\gamma^{2}\gamma^{3}\,. \tag{79}\]
The charge conjugation operator is given by \(C=i\gamma^{2}\gamma^{0}\) and the time reversal operator by \(T=i\gamma^{1}\gamma^{3}\). We find that
\[\star(\gamma^{2}\gamma^{0})=\epsilon_{2013}\gamma^{1}\gamma^{3}=\gamma^{1} \gamma^{3}\,. \tag{80}\]
Therefore we prove that the charge conjugation and time reversal operators are dual to each other,
\[\begin{array}{c}\framebox{$\star C=T$}\end{array} \tag{81}\]
Finally, we would like to find out the dual of the \(\gamma^{5}\) operator. By definition,
\[\star\gamma^{5}=\star i\star\gamma^{0}\star\gamma^{1}\star\gamma^{2}\star\gamma ^{3}\,. \tag{82}\]
Explicitly, using the above results, we have
\[\begin{array}{c}\star\gamma^{5}=(\pm 1)(\gamma^{1}\gamma^{2}\gamma^{3})(- \gamma^{0}\gamma^{2}\gamma^{3})(\gamma^{0}\gamma^{1}\gamma^{3})(-\gamma^{0} \gamma^{1}\gamma^{2})\\ =(\pm 1)(-1)^{3}(-1)^{2}(-1)^{3}\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\, \gamma^{3}\gamma^{2}\gamma^{1}\gamma^{0}\,\gamma^{0}\gamma^{1}\gamma^{2}\gamma ^{3}\\ =(\pm 1)\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\,\gamma^{0}\gamma^{1}\gamma^ {2}\gamma^{3}\,\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\\ =\pm 1(-i\gamma^{5})^{3}\\ =\pm i\gamma^{5}\end{array} \tag{83}\]
Therefore, we have the following eigen-matrix equation
\[\star\gamma^{5}=\pm i\gamma^{5} \tag{84}\]
Therefore, we find that \(\gamma^{5}\) is in fact an eigen-matrix of the Hodge dual operator. Next we find
\[\star\gamma^{5}\star\gamma^{5}=\star(\gamma^{5})^{\dagger}\star\gamma^{5}=\pm i \mathbf{1}\,. \tag{85}\]
With the dual of the vector field identified as the axial vector field, we are now interested in calculating the axial vector field strength. First we would like to compute the Dirac algebra of the axial vector field:
\[\begin{array}{c}\{\star\gamma^{\mu},\star\gamma^{\nu}\}=\{-i\gamma^{\mu} \gamma^{5},-i\gamma^{\mu}\gamma^{5}\}\\ =-(\gamma^{\mu}\gamma^{5}\gamma^{\nu}\gamma^{5}+\gamma^{\nu}\gamma^{5}\gamma^ {\mu}\gamma^{5})\\ =(\gamma^{\mu}\gamma^{5}\gamma^{5}\gamma^{\nu}+\gamma^{\nu}\gamma^{5}\gamma^ {5}\gamma^{\mu})\\ =\{\gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^{\mu}\}\\ =\{\gamma^{\mu},\gamma^{\nu}\}\\ =2\eta^{\mu\nu}I\,.\end{array} \tag{86}\]
Hence the axial field and vector field both share the same Dirac algebra. Now consider the dual axial gauge field strength \(\star F_{\mu\nu}\):
\[\star F=d\star\gamma+\star\gamma\wedge\star\gamma\,. \tag{87}\]
Explicitly,
\[\begin{array}{c}\star F^{\mu\nu}=\partial^{\mu}\star\gamma^{\nu}-\partial^ {\nu}\star\gamma^{\mu}+\star\gamma^{\mu}\star\gamma_{\nu}-\star\gamma^{\nu} \star\gamma^{\mu}\\ =-(\gamma^{\mu}\gamma^{5}\gamma^{\nu}\gamma^{5}-\gamma^{\nu}\gamma^{5}\gamma^ {\mu}\gamma^{5})\\ =\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu}\\ =-4iS^{\mu\nu}\end{array} \tag{88}\]
Therefore we have
\[\star F^{\mu\nu}=F^{\mu\nu} \tag{89}\]
and simply
\[\framebox{$\star F=F$}\,. \tag{90}\]
Hence \(F\) is a dual invariant. It follows that the Lagrangian density is
\[\mathcal{L}=\star F_{\mu\nu}\star F^{\mu\nu}=F_{\mu\nu}F^{\mu\nu}\,. \tag{91}\]
### Dirac gamma representation of EM-field tensor
Notice that the object \(\gamma^{\mu}\gamma^{\nu}\) can be decomposed into a symmetric part and an anti-symmetric part, where the former is the symmetric metric tensor and the latter is proportional to the anti-symmetric tensor,
\[\gamma^{\mu}\gamma^{\nu}=\frac{1}{2}(\gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^ {\mu})+\frac{1}{2}(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu})=\eta^{\mu \nu}-i\sigma^{\mu\nu}\,. \tag{92}\]
Next we construct the field strength tensor by Dirac matrices as follow:
\[F^{\mu\nu}=\frac{1}{2}(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu})= \begin{pmatrix}0&\gamma^{0}\gamma^{1}&\gamma^{0}\gamma^{2}&\gamma^{0}\gamma^{3} \\ -\gamma^{0}\gamma^{1}&0&\gamma^{1}\gamma^{2}&\gamma^{1}\gamma^{3}\\ -\gamma^{0}\gamma^{2}&-\gamma^{1}\gamma^{2}&0&\gamma^{2}\gamma^{3}\\ -\gamma^{0}\gamma^{3}&-\gamma^{1}\gamma^{3}&-\gamma^{2}\gamma^{3}&0\end{pmatrix} \tag{93}\]
But as we find that
\[\gamma^{1}\gamma^{2}=\star(\gamma^{0}\gamma^{3}),\quad\gamma^{1}\gamma^{3}=- \star(\gamma^{0}\gamma^{2})\quad\text{and}\quad\gamma^{2}\gamma^{3}=\star( \gamma^{0}\gamma^{1}) \tag{94}\]
Therefore we find that the field strength tensor by Dirac matrices as
\[F^{\mu\nu}=\begin{pmatrix}0&\gamma^{0}\gamma^{1}&\gamma^{0}\gamma^{2}&\gamma^ {0}\gamma^{3}\\ -\gamma^{0}\gamma^{1}&0&\star(\gamma^{0}\gamma^{3})&-\star(\gamma^{0}\gamma^{ 2})\\ -\gamma^{0}\gamma^{2}&-\star(\gamma^{0}\gamma^{3})&0&\star(\gamma^{0}\gamma^{1} )\\ -\gamma^{0}\gamma^{3}&\star(\gamma^{0}\gamma^{2})&-\star(\gamma^{0}\gamma^{1} )&0\end{pmatrix} \tag{95}\]
This has exactly the same form as the EM-field tensor, which, in the \((-+++)\) metric convention, is
\[F^{\mu\nu}=\begin{pmatrix}0&E_{x}&E_{y}&E_{z}\\ -E_{x}&0&B_{z}&-B_{y}\\ -E_{y}&-B_{z}&0&B_{x}\\ -E_{z}&B_{y}&-B_{x}&0\end{pmatrix} \tag{96}\]
where we identify
\[E_{x}=\gamma^{0}\gamma^{1},\quad E_{y}=\gamma^{0}\gamma^{2},\quad E_{z}=\gamma ^{0}\gamma^{3} \tag{97}\]
and
\[B_{x}=\star(\gamma^{0}\gamma^{1}),\quad B_{y}=-\star(\gamma^{0}\gamma^{2}), \quad B_{z}=\star(\gamma^{0}\gamma^{3})\,. \tag{98}\]
It directly follows that
\[\boxed{B_{x}=\star E_{x},\quad B_{y}=-\star E_{y}\quad\text{and}\quad B_{z}= \star E_{z}}. \tag{99}\]
Therefore we have shown that the electric field and the magnetic field are Hodge dual to each other. This result makes sense, as the magnetic field is an axial vector while the electric field is a polar vector, and axial and polar vectors are dual to each other.
Define the normal direct sum \(\oplus\), here called the right direct sum \(\oplus_{R}\), as the sum along the diagonal blocks, and define the left direct sum \(\oplus_{L}\) as the sum along the off-diagonal blocks. Under the Dirac representation, explicitly we have
\[E_{x}=\gamma^{0}\gamma^{1}=\begin{pmatrix}0&\sigma_{x}\\ \sigma_{x}&0\end{pmatrix},\quad E_{y}=\gamma^{0}\gamma^{2}=\begin{pmatrix}0& \sigma_{y}\\ \sigma_{y}&0\end{pmatrix},\quad E_{z}=\gamma^{0}\gamma^{3}=\begin{pmatrix}0& \sigma_{z}\\ \sigma_{z}&0\end{pmatrix}\,. \tag{100}\]
Therefore, for electric fields, we have
\[E^{i}=\gamma^{0}\gamma^{i}=\sigma^{i}\oplus_{L}\sigma^{i} \tag{101}\]
And for magnetic field,
\[B_{x}=\gamma^{2}\gamma^{3}=\begin{pmatrix}-i\sigma_{x}&0\\ 0&-i\sigma_{x}\end{pmatrix},\quad B_{y}=-\gamma^{1}\gamma^{3}=\begin{pmatrix}i \sigma_{y}&0\\ 0&i\sigma_{y}\end{pmatrix},\quad B_{z}=\gamma^{1}\gamma^{2}=\begin{pmatrix}-i \sigma_{z}&0\\ 0&-i\sigma_{z}\end{pmatrix}\,. \tag{102}\]
Thus we have
\[B^{j}=(-1)^{j+1}i(\sigma^{j}\oplus_{R}\sigma^{j})\,. \tag{103}\]
Then it follows that
\[\begin{cases}-i(\sigma_{x}\oplus_{R}\sigma_{x})&=\star(\sigma_{x}\oplus_{L}\sigma_{x})\\ -i(\sigma_{y}\oplus_{R}\sigma_{y})&=-\star\left(\sigma_{y}\oplus_{L}\sigma_{y}\right)\\ -i(\sigma_{z}\oplus_{R}\sigma_{z})&=\star(\sigma_{z}\oplus_{L}\sigma_{z})\end{cases} \tag{104}\]
Therefore we see that
\[\boxed{\star\oplus_{L}=\pm i\oplus_{R}}. \tag{105}\]
This basically means that the left direct sum is Hodge dual to the right direct sum, up to a factor of \(\pm i\).
## 6 Integral Formalism for Gamma fields
Since we now know that the gamma field behaves as both a gauge field and a fermionic field, it is straightforward to study its integral formalism. For non-abelian gauge fields, according to [11], the path-dependent gauge field phase factor from \(A=X\) to \(B=X+dx\) is given by
\[\phi_{AB}=\phi_{X(X+dx)}=I+A^{a}_{\mu}(t^{a})^{i}_{\ j}dx^{\mu}\equiv A^{\ i}_{ \mu\ j}dx^{\mu} \tag{106}\]
Furthermore, it is observed that the non-Abelian gauge field is traceless,
\[\mathrm{Tr}\,A_{\mu}=0\,. \tag{107}\]
And now consider a small loop ABCDA, then the phase factor becomes
\[\phi_{ABCDA}=I+F^{a}_{\mu\nu}(t^{a})^{i}_{\ j}dx^{\mu}\wedge dx^{\nu}\equiv I+ F^{\ i}_{\mu\nu}\,_{j}dx^{\mu}\wedge dx^{\nu}\,, \tag{108}\]
where
\[F^{\ i}_{\mu\nu}\,_{j}=\partial_{\mu}A^{\ i}_{\nu\ j}-\partial_{\nu}A^{\ i}_{ \mu\ j}+A^{\ i}_{\mu\ k}A^{\ k}_{\nu\ j}-A^{\ i}_{\nu\ k}A^{\ k}_{\mu\ j} \tag{109}\]
is the curvature tensor. According to differential geometry, the curvature is defined by
\[\Omega^{i}_{\ j}=\frac{1}{2}F^{\ i}_{\mu\nu}\,_{j}dx^{\mu}\wedge dx^{\nu}\,, \tag{110}\]
and as we know that the first Chern class is defined by
\[c_{1}=\frac{i}{2\pi}\mathrm{Tr}\,\Omega\,, \tag{111}\]
Since the SU(N) Lie group generators \(t^{a}\) are traceless, \(F^{\ i}_{\mu\nu\ i}=0\); therefore the SU(N) principal bundle on the manifold is trivial.
Now we promote this idea to study the case of gamma fields. The phase is given by
\[\phi_{AB}=\phi_{X(X+dx)}=I+\gamma_{\mu\;b}^{a}dx^{\mu} \tag{112}\]
and notice that, similarly to (107), this amounts to
\[\mbox{Tr}\,\gamma_{\mu}=0\,. \tag{113}\]
Then for a differential loop, we have
\[\phi_{ABCDA}=I+s_{\mu\nu\;b}^{\;\;\;a}dx^{\mu}dx^{\nu}\,, \tag{114}\]
where
\[s_{\mu\nu\;b}^{\;\;a}=\partial_{\mu}\gamma_{\nu\;b}^{\;\;a}-\partial_{\nu} \gamma_{\mu\;b}^{\;\;a}+\gamma_{\mu\;c}^{\;\;a}\gamma_{\nu\;b}^{\;\;c}-\gamma_ {\nu\;c}^{\;\;a}\gamma_{\mu\;b}^{\;\;c}=\gamma_{\mu\;c}^{\;\;a}\gamma_{\nu\;b}^ {\;\;c}-\gamma_{\nu\;c}^{\;\;a}\gamma_{\mu\;b}^{\;\;c}\,. \tag{115}\]
The curvature is defined by
\[\Omega^{i}_{\;\;j}=\frac{1}{2}s_{\mu\nu\;j}^{\;\;i}dx^{\mu}\wedge dx^{\nu}\,, \tag{116}\]
But since \(s_{\mu\nu\;a}^{\;\;a}=\mbox{Tr}\,(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu})=0\), this also yields a vanishing first Chern class. To relate the above result to the case of general relativity, recall that the curvature is defined by
\[R_{\mu\nu\;\sigma}^{\;\;\;\rho}=\partial_{\mu}\Gamma_{\nu\;\sigma}^{\;\;\rho}- \partial_{\nu}\Gamma_{\mu\;\sigma}^{\;\;\rho}+\Gamma_{\mu\;\lambda}^{\;\;\rho} \Gamma_{\nu\;\sigma}^{\;\;\lambda}-\Gamma_{\nu\;\lambda}^{\;\;\rho}\Gamma_{\mu \;\sigma}^{\;\;\lambda} \tag{117}\]
The curvature is defined by
\[\Omega^{\rho}_{\;\;\sigma}=\frac{1}{2}R_{\mu\nu\;\sigma}^{\;\;\;\rho}dx^{\mu} \wedge dx^{\nu}\,. \tag{118}\]
Notice that the trace of the Christoffel connection yields,
\[\mbox{Tr}\,\Gamma_{\mu}=\Gamma_{\mu\;\rho}^{\;\;\rho}=\partial_{\mu}\ln\sqrt{ -g}\,. \tag{119}\]
Therefore, the first Chern class is given by
\[c_{1}=\frac{1}{2\pi}(\Gamma_{\mu\;\lambda}^{\;\;\rho}\Gamma_{\nu\;\rho}^{\; \;\lambda}-\Gamma_{\nu\;\lambda}^{\;\;\rho}\Gamma_{\mu\;\rho}^{\;\;\lambda}) dx^{\mu}\wedge dx^{\nu}\,. \tag{120}\]
Furthermore, the Chern number, which is defined by the integral of the curvature 2-form, gives the topologically invariant winding number
\[c=\frac{1}{2\pi}\int(\Gamma_{\mu\;\lambda}^{\;\;\rho}\Gamma_{\nu\;\rho}^{\; \;\lambda}-\Gamma_{\nu\;\lambda}^{\;\;\rho}\Gamma_{\mu\;\rho}^{\;\;\lambda}) dx^{\mu}\wedge dx^{\nu}\,. \tag{121}\]
We would like to study the case when the trace of the connection is zero, i.e., \(\mbox{Tr}\,\Gamma_{\mu}=\partial_{\mu}\ln\sqrt{-g}=0\). Thus \(\ln\sqrt{-g}=c\), and \(g_{\mu\nu}=c_{\mu\nu}\) is a constant metric. Hence a traceless Christoffel connection amounts to vanishing curvature, and thus to a vanishing Chern class and Chern number. In this sense the trivial gauge bundle for \(A_{\mu}\) and \(\gamma_{\mu}\) corresponds to the flat metric.
## 7 Conclusion
In this short article, we provide a completely new perspective on the \(\gamma^{\mu}\) matrices, which can be considered as both a global gauge field and a fermionic field; quantization of the gamma field yields new particles, which constitute nature's intrinsic fifth force. Wick's theorem for gamma fields is established and reduces to the usual trace algebra of gamma matrices. Next, we have constructed a formalism for gamma matrices in the language of differential geometry. We then find that the electromagnetic field tensor can be isomorphically expressed in terms of the gamma matrix representations. Finally, we study the integral formalism of gamma fields. |
2305.19794 | From DK-STP to Non-square General Linear Algebras and General Linear
Groups | A new matrix product, called dimension-keeping semi-tensor product (DK-STP),
is proposed. Under DK-STP, the set of $m\times n$ matrices becomes a semi-group
$G({m\times n},\mathbb{F})$, and a ring, denoted by $R(m\times n,\mathbb{F})$.
Moreover, the Lie bracket can also be defined, which turns the ring into a Lie
algebra, called non-square (or STP) general linear algebra, denoted by
$\mathrm{gl}(m\times n, \mathbb{F})$. Then the action of semi-group $G(m\times
n,\mathbb{F})$ on dimension-free Euclidian space, denoted by
$\mathbb{R}^{\infty}$, is discussed. This action leads to discrete-time and
continuous time S-systems. Their trajectories are calculated, and their
invariant subspaces are revealed. As byproduct of this study, some important
concepts for square matrices, such as eigenvalue, eigenvector, determinant,
invertibility, etc., have been extended to non-square matrices. Particularly,
it is surprising that the famous Cayley-Hamilton theory can also been extended
to non-square matrices. Finally, a Lie group, called the non-square (or STP)
general Lie group and denoted by $\mathrm{GL}(m\times n,\mathbb{F})$, is
constructed, which has $\mathrm{gl}(m\times n,\mathbb{F})$ as its Lie algebra.
Their relations with classical Lie group $\mathrm{GL}(m,\mathbb{F})$ and Lie
algebra $\mathrm{gl}(m,\mathbb{F})$ are revealed. | Daizhan Cheng | 2023-05-31T12:34:48Z | http://arxiv.org/abs/2305.19794v2 | # From DK-STP to Non-square General Linear Algebra and General Linear Group
###### Abstract
A new matrix product, called dimension keeping semi-tensor product (DK-STP), is proposed. Under DK-STP, the set of \(m\times n\) matrices becomes a semi-group (\(G(m\times n,\mathbb{F})\)), and a ring, denoted by \(R(m\times n,\mathbb{F})\). Then the action of semi-group \(G(m\times n,\mathbb{F})\) on dimension-free Euclidian space, denoted by \(\mathbb{R}^{\infty}\), is discussed. This action leads to discrete-time and continuous time \(\mathbf{S}\)-systems. Their trajectories are calculated, and their invariant subspaces are revealed. Through this action, some important concepts for square matrices, such as eigenvalue, eigenvector, determinant, invertibility, etc., have been extended to non-square matrices. Particularly, it is surprising that the famous Cayley-Hamilton theory can also been extended to non-square matrices. Finally, the Lie bracket can also be defined, which turns the set of \(m\times n\) matrices into a Lie algebra, called non-square (or STP) general linear algebra, denoted by \(gl(m\times n,\mathbb{F})\). Moreover, a Lie group, called the non-square (or STP) general Lie group and denoted by \(GL(m\times n,\mathbb{F})\), is constructed, which has \(gl(m\times n,\mathbb{F})\) as its Lie algebra. Their relationship with classical Lie group \(GL(m,\mathbb{F})\) and Lie algebra \(gl(m,\mathbb{F})\) has also been revealed.
DK-STP, NS-matrix ring, (semi-)group action, dimension free Euclidian space, non-square general linear algebra, non-square general linear group.
## I Preliminaries
The past two decades have witnessed the development of STPs, which generalize the classical matrix (including vector) products to dimension-free matrix products [21, 36]. These STPs have received various applications, including Boolean networks [34], finite games [14], dimension-varying systems [11], engineering problems [30], finite automata [41], coding [43], etc. In addition to thousands of papers, there are already many STP monographs [5, 6, 7, 8, 11, 12, 16, 17, 19, 20, 31, 33, 35, 42], and books with STP chapter or appendix [2, 39].
Roughly speaking, up to now there are mainly three kinds of STPs: the matrix-matrix (MM-) STP, the matrix-vector (MV-) STP, and the vector-vector (VV-) STP, which are defined as follows (please refer to Appendix-1 for notations; a small numerical sketch is given right after the definition).
**Definition I.1**:
* **MM-STP-1:**
* Let \(A\in\mathcal{M}_{m\times n}\), \(B\in\mathcal{M}_{p\times q}\) and \(t=\mathrm{lcm}(n,p)\). The first type MM-STP of \(A\) and \(B\) is defined as [6, 7] \[A\ltimes B:=\left(A\otimes I_{t/n}\right)\left(B\otimes I_{t/p}\right).\] (1)
**MM-STP-2:**
The second type MM-STP of \(A\) and \(B\) is defined as [11]
\[A\odot B:=\left(A\otimes J_{t/n}\right)\left(B\otimes J_{t/p}\right). \tag{2}\]
* **MV-STP-1:**
Let \(A\in\mathcal{M}_{m\times n}\), \(x\in\mathbb{R}^{p}\) and \(t=\mathrm{lcm}(n,p)\). The first type MV-STP of \(A\) and \(x\) is defined as [11]
\[A\vec{\times}x:=\left(A\otimes I_{t/n}\right)\left(x\otimes\mathbf{1}_{t/p} \right). \tag{3}\]
**MV-STP-2:**
The second type MV-STP of \(A\) and \(x\) is defined as [11]
\[A\vec{\odot}x:=\left(A\otimes J_{t/n}\right)\left(x\otimes\mathbf{1}_{t/p} \right). \tag{4}\]
* **VV-STP:**
Let \(x\in\mathbb{R}^{m}\), \(y\in\mathbb{R}^{n}\) and \(t=\mathrm{lcm}(m,n)\). The VV-STP of \(x\) and \(y\) is defined as [10]
\[x\;\vec{\cdot}\;y:=(x\otimes\mathbf{1}_{t/m})^{T}(y\otimes\mathbf{1}_{t/n})\in\mathbb{R}. \tag{5}\]
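To make these definitions concrete, the following is a minimal numpy sketch (illustrative only; the function names are ours) of the left MM-STP (1) and the VV-STP (5):

```python
import numpy as np
from math import lcm

def mm_stp1(A, B):
    """Left MM-STP (1): (A kron I_{t/n})(B kron I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

def vv_stp(x, y):
    """VV-STP (5): (x kron 1_{t/m})^T (y kron 1_{t/n}), t = lcm(m, n)."""
    m, n = len(x), len(y)
    t = lcm(m, n)
    return float(np.kron(x, np.ones(t // m)) @ np.kron(y, np.ones(t // n)))

A = np.arange(6).reshape(2, 3)     # 2 x 3
B = np.arange(8).reshape(2, 4)     # 2 x 4: the dimension matching condition fails
print(mm_stp1(A, B).shape)         # (4, 12), since t = lcm(3, 2) = 6
print(vv_stp(np.array([1.0, 2.0]), np.array([1.0, 0.0, 1.0])))   # 6.0
```

When the dimension matching condition holds, both functions reduce to the ordinary matrix product and inner product.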
In addition to the aforementioned STPs, there are still some other STPs. First, in the previous STPs the main objects, such as the matrix \(A\) and the vector \(x\), lie on the left, so they are also called the left STPs. It is also very natural to put the main objects on the right; the resulting STPs are then called the right STPs. The left STPs are taken as the default STPs, because they have some nice properties superior to the right ones [7]. Precisely, we have [6, 7]
* **Right MM-STP-1:** \[A\rtimes B:=\left(I_{t/n}\otimes A\right)\left(I_{t/p}\otimes B\right).\] (6)
* **Right MM-STP-2:** \[A\odot_{r}B:=\left(J_{t/n}\otimes A\right)\left(J_{t/p}\otimes B\right). \tag{7}\]
* **Right MV-STP-1:** \[A\vec{\otimes}x:=\left(I_{t/n}\otimes A\right)\left(\mathbf{1}_{t/p}\otimes x \right).\] (8)
* **Right MV-STP-2:** \[A\vec{\otimes}_{r}x:=\left(J_{t/n}\otimes A\right)\left(\mathbf{1}_{t/p}\otimes x\right). \tag{9}\]
* **Right VV-STP:** \[x\ \vec{*}\ y:=(\mathbf{1}_{t/m}\otimes x)^{T}(\mathbf{1}_{t/n}\otimes y)\in \mathbb{R}.\] (10)
Second, instead of \(I_{n}\) and \(J_{n}\), which are called matrix multipliers, or \(\mathbf{1}_{n}\), which is called a vector multiplier, may we choose other kinds of multipliers to generate other kinds of STPs? The answer is "yes," but so far the other choices have proved less useful [9].
For two matrices \(A,B\), if the column number of \(A\) equals the row number of \(B\), then the classical matrix product is defined. In this case we say that \(A\) and \(B\) satisfy dimension matching condition. All the STPs, including MM-STPs, MV-STPs, and VV-STPs, are generalizations of the corresponding classical products in linear algebra. That is, when the required dimension matching condition is satisfied, they coincide with the classical matrix (vector) products.
Moreover, a significant advantage of the STPs is that they keep the fundamental properties of the classical MM, MV, or VV products available. This advantage makes the usage of STPs very convenient. Hence they have received wide application in many fields.
The basic idea behind all STPs is the same and can be described as follows: when the dimension matching condition for the factor elements (matrices or vectors) is not satisfied, we use certain matrices, such as \(I_{n}\), to enlarge the matrices, or certain vectors, such as \(\mathbf{1}_{n}\), to enlarge the vectors, through the Kronecker product. Eventually, the enlarged matrices or vectors satisfy the dimension matching condition, and then the conventional product of the enlarged matrices or vectors is taken as the STP of the original matrices or vectors. Roughly speaking, the enlargements change the sizes of the matrices or vectors, but they do not change the "information" contained in them. This fact makes STPs meaningful; that is, the STP represents the "product" of the original matrices or vectors in a certain sense.
In addition to many engineering or dynamic system related applications of STP, a challenging theoretical problem is how to describe the action of matrices of various dimensions on vector spaces of various dimensions. Because STPs have removed the dimension restriction of matrix-matrix or matrix-vector products, this action becomes dimension-varying (or overall, dimension-free). To explore such dimension-free actions, we first introduce some new concepts, which provide a framework for such dimension-free actions.
Consider the set of matrices with arbitrary dimensions as [10]
\[\mathcal{M}=\bigcup_{m=1}^{\infty}\bigcup_{n=1}^{\infty}\mathcal{M}_{m\times n}, \tag{11}\]
and the dimension-free Euclidian space is defined as
\[\mathbb{R}^{\infty}:=\bigcup_{n=1}^{\infty}\mathbb{R}^{n}. \tag{12}\]
Then \(G:=(\mathcal{M},\ltimes)\) becomes a monoid (i.e., a semi-group with identity); the action of \(\mathcal{M}\) on \(\mathbb{R}^{\infty}\), as \(\vec{\times}:\mathcal{M}\times\mathbb{R}^{\infty}\rightarrow\mathbb{R}^{\infty}\) (or \(\vec{\odot}:\mathcal{M}\times\mathbb{R}^{\infty}\rightarrow\mathbb{R}^{\infty}\)), forms an S-system [32]; and \(\vec{\cdot}\) defines an inner product over \(\mathbb{R}^{\infty}\).1 Recently, this kind of system has been developed into dynamic systems over a dimension-free manifold [18].
Footnote 1: Precisely speaking, the inner product over \(\mathbb{R}^{\infty}\), defined in [10] is
\[<x,y>_{\lnot}:=\frac{1}{t}x\,\vec{\cdot}y. \tag{13}\]
The purpose of this paper is to propose a new STP, called the dimension keeping STP (DK-STP) and denoted by \(\times\). Dimension keeping means that if the two factor matrices have the same dimension, say, both are in \(\mathcal{M}_{m\times n}\), then their product remains of the same dimension. This surprising property makes the semi-group \(G(m\times n,\mathbb{R}):=(\mathcal{M}_{m\times n},\,\times)\) a ring, denoted by \(R(m\times n,\mathbb{F})\).
The group action of \(G(m\times n,\mathbb{R})\) on \(\mathbb{R}^{\infty}\) is then explored. Based on this action, the corresponding dynamic systems are also proposed and investigated in detail. As byproducts, some basic concepts of square matrices have been extended to non-square matrices, namely eigenvalues, eigenvectors, determinant, invertibility, etc. The Cayley-Hamilton theorem has also been extended to non-square matrices.
Finally, a Lie bracket is defined to produce a Lie algebra, called non-square (or STP) general linear algebra, denoted by \(gl(m\times n,\mathbb{F})\). Certain properties are obtained. Starting from this Lie algebra, its corresponding non-square (or STP) general linear group, denoted by \(GL(m\times n,\mathbb{F})\) can be deduced. The outline of this paper is depicted in Figure 1.
The rest of this paper is outlined as follows:
Section 2 defines the DK-STP. A matrix, called bridge matrix, is defined. Using it a formula to calculate the DK-STP is obtained. Some elementary properties are also provided. Section 3 investigates further properties of the DK-STP. The ring \(R(m\times n,\mathbb{F})\) of \(\mathcal{M}_{m\times n}\) is considered in Section 4. Its sub-ring, the ring homomorphism and isomorphism of \(R(m\times n,\mathbb{F})\) are also investigated. Section 5 considers the action of \(G(m\times n,\mathbb{R})\) on dimension-free pseudo-vector space \(\mathbb{R}^{\infty}\). A matrix \(A\in\mathcal{M}_{m\times n}\) is considered as an operator on \(\mathbb{R}^{\infty}\). Then the operator norm, invariant subspace, etc. are considered. Moreover, the Cayley-Hamilton theorem has then been generalized to non-square matrices. As byproduct, the square restriction of non-square \(A\) is obtained, which leads to eigenvalues, eigenvectors, determinant, invertibility, etc. of non-square (NS-) matrices. Section 6 proposes the NS- (or STP) general linear algebra. Related topics such as sub-algebra, algebraic homomorphism, isomorphism, and Killing form etc. are discussed. Section 7 constructs general linear group for NS-matrices, which is denoted by \(GL(m\times n,\mathbb{F})\). It has clearly demonstrated that \(GL(m\times n,\mathbb{F})\) is second countable and Hausdorff topological space, \(mn\)-dimensional manifold, and a Lie group. Finally, it is proved that \(gl(m\times n,\mathbb{F})\) is its Lie algebra. Section 8 is a concluding remark, including some challenging problems which remain for further study.
A list of notations is presented in the Appendix-1 at the bottom of this paper.
## II DK-STP
**Definition II.1**: _Let \(A\in\mathcal{M}_{m\times n}\) and \(B\in\mathcal{M}_{p\times q}\), \(t=\mathrm{lcm}(n,p)\). The DK-STP of \(A\) and \(B\), denoted by \(A\times B\in\mathcal{M}_{m\times q}\), is defined as follows._
\[A\times B:=\left(A\otimes\mathbf{1}_{t/n}^{T}\right)\left(B\otimes\mathbf{1}_ {t/p}\right). \tag{11}\]
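A minimal numpy sketch of (11) (illustrative only; the function name `dk_stp` is ours) shows the dimension-keeping property directly:

```python
import numpy as np
from math import lcm

def dk_stp(A, B):
    """DK-STP (11): (A kron 1_{t/n}^T)(B kron 1_{t/p}), t = lcm(n, p); result is m x q."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.ones((1, t // n))) @ np.kron(B, np.ones((t // p, 1)))

A = np.array([[1., 2., -1., 4.], [3., 1., 0., -2.], [5., -2., 4., -1.]])   # 3 x 4
B = np.random.rand(3, 5)                     # 3 x 5
print(dk_stp(A, B).shape)                    # (3, 5): the result is always m x q
B2 = np.random.rand(3, 4)
print(dk_stp(A, B2).shape)                   # (3, 4): same-dimension factors keep their dimension
C = np.random.rand(4, 5)
assert np.allclose(dk_stp(A, C), A @ C)      # coincides with the ordinary product when n = p
```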
**Remark II.2**:
1. _It is easy to verify that when the dimension matching condition is satisfied, i.e.,_ \(n=p\)_, the DK-STP coincides with classical matrix product. Hence, similarly to two kinds of MM-STPs, the DK-STP is also a generalization of classical matrix product._
2. _The two kinds of MM-STPs are not suitable for matrix-vector product, because in general the results are not vectors. Hence they can not realize linear mappings over vector spaces, and the two corresponding MV-STPs have been established to perform linear mappings. Unlike them, DK-STP can realize MM-product and MV-product simultaneously._
3. _Comparing with the MM-STP defined in Definition_ 1.1 _[_6, 7_]__, this DK-STP has minimum size_ \(m\times q\)_, no matter whether the dimension matching condition is satisfied. That is why the product is named as dimension keeping STP._
4. _If two matrices_ \(A\) _and_ \(B\) _have the same dimension, the dimension of their DK-STP remains the same. This is a nice property._
**Remark II.3**:
1. _It is natural to define the right DK-STP as follows:_ _Let_ \(A\in\mathcal{M}_{m\times n}\) _and_ \(B\in\mathcal{M}_{p\times q}\)_,_ \(t=\mathrm{lcm}(n,p)\)_. The right DK-STP of_ \(A\) _and_ \(B\)_, denoted by_ \(A\times B\in\mathcal{M}_{m\times q}\)_, is defined as follows._ \[A\times B:=\left(\mathbf{1}_{t/n}^{T}\otimes A\right)\left(\mathbf{1}_{t/p} \otimes B\right).\] (12)
2. _To be more general, let_ \(W_{k}\in\mathbb{R}^{k}\)_,_ \(k=1,2,\cdots\) _be a set of column vectors, called weights, where_ \(W_{1}=1\)_, and_ \(W_{k}>0\)_,_ \(\forall k>0\)_. Then we can define the weighted (left) DK-STP as_ \[A\times_{w}B:=\left(A\otimes W_{t/n}^{T}\right)\left(B\otimes W_{t/p}\right).\] (13)
3. _Similarly, we can define the weighted right DK-STP as_ \[A\times_{w}B:=\left(W_{t/n}^{T}\otimes A\right)\left(W_{t/p}\otimes B\right).\] (14)
**Example II.4**: _The weight vectors can be chosen arbitrarily. The following are some examples._
1. _Taking average, a reasonable definition for_ \(W_{k}\) _is_ \[W_{k}=\frac{1}{k}\mathbf{1}_{k},\quad k\geq 1.\] (15)
Fig. 1: Outline of This Paper
2. Taking a normal distribution for \(W_{k}\). Note that \[\phi(u)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{u}e^{-\frac{x^{2}}{2}}dx. \tag{16}\] Define \[\begin{split} W_{2k}&:=\left(\phi(-0.1k),\phi(-0.1(k-1)),\cdots,\phi(-0.1),\phi(-0.1),\cdots,\phi(-0.1k)\right),\\ W_{2k+1}&:=\left(\phi(-0.1k),\phi(-0.1(k-1)),\cdots,\phi(-0.1),\phi(0),\phi(-0.1),\cdots,\phi(-0.1k)\right),\quad k=1,2,\cdots.\end{split} \tag{17}\] Then \[\begin{split}& W_{1}=(1),\\ & W_{2}=(0.4602,0.4602),\\ & W_{3}=(0.4602,0.5,0.4602),\\ & W_{4}=(0.4207,0.4602,0.4602,0.4207),\\ &\cdots.\end{split} \tag{18}\] A short snippet generating these weight vectors is given below.
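The weight vectors in (17)-(18) can be generated directly from the standard normal CDF. The following is a minimal Python sketch (illustrative only; the helper names `phi` and `W` are ours), using the error function from the standard library:

```python
import numpy as np
from math import erf, sqrt

def phi(u):
    # Standard normal CDF: phi(u) = (1/sqrt(2*pi)) * integral_{-inf}^{u} exp(-x^2/2) dx, cf. (16)
    return 0.5 * (1.0 + erf(u / sqrt(2.0)))

def W(k):
    """Weight vector W_k of (17); W_1 = 1 by convention."""
    if k == 1:
        return np.array([1.0])
    half = [phi(-0.1 * j) for j in range(k // 2, 0, -1)]   # phi(-0.1*floor(k/2)), ..., phi(-0.1)
    middle = [phi(0.0)] if k % 2 == 1 else []
    return np.array(half + middle + half[::-1])

for k in (1, 2, 3, 4):
    print(k, np.round(W(k), 4))
# 1 [1.]
# 2 [0.4602 0.4602]
# 3 [0.4602 0.5    0.4602]
# 4 [0.4207 0.4602 0.4602 0.4207]
```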
Using the VV-STP, defined in (5), an alternative definition of DK-STP can be obtained.
**Definition II.5**: _Let \(A,B\in\mathcal{M}\), with \(A\in\mathcal{M}_{m\times n}\) and \(B\in\mathcal{M}_{p\times q}\). The DK-STP of \(A\) and \(B\), denoted by \(C=A\mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$\times$}}}B\in\mathcal{M}_{m \times q}\), is defined as follows._
\[c_{i,j}=\mathrm{Row}_{i}(A)\mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$ \cdot$}}}\mathrm{Col}_{j}(B),\quad i\in[1,m],\;j=[1,q]. \tag{19}\]
The equivalence of the two definitions can be verified by a straightforward computation.
**Proposition II.6**: _Definition II.1 and Definition II.5 are equivalent._
**Remark II.7**:
1. _The corresponding alternative definition of the right DK-STP is as follows: Let \(A,B\in\mathcal{M}\), with \(A\in\mathcal{M}_{m\times n}\) and \(B\in\mathcal{M}_{p\times q}\). The right DK-STP of \(A\) and \(B\), denoted by \(C=A\times B\in\mathcal{M}_{m\times q}\), is defined as follows._ \[c_{i,j}=\mathrm{Row}_{i}(A)\;\vec{*}\;\mathrm{Col}_{j}(B),\quad i\in[1,m],\;j\in[1,q]. \tag{20}\]
2. _It is also easy to verify that the definition (_12_) is equivalent to the definition (_19_)._
3. _Define the weighted VV-STP as follows: Let_ \(x\in\mathbb{R}^{m}\)_,_ \(y\in\mathbb{R}^{n}\)_,_ \(t=\mathrm{lcm}(m,n)\)_. Then the weighted VV-STP is defined by_ \[x\mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$\cdot$}}}w\mathbin{\raisebox{- 1.075pt}{\scalebox{0.8}{$\cdot$}}}y:=\left(x\otimes W_{t/m}\right)^{T}\left(y \otimes W_{t/n}\right).\] (21)
4. _Let_ \(A,B\in\mathcal{M}\)_, with_ \(A\in\mathcal{M}_{m\times n}\) _and_ \(B\in\mathcal{M}_{p\times q}\)_. The alternative definition of weighted DK-STP denoted by_ \(C=A\mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$\times$}}}_{w}B\in\mathcal{M}_{m \times q}\)_, is defined as follows._ \[c_{i,j}=\mathrm{Row}_{i}(A)\mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$ \cdot$}}}\mathrm{\vec{\ }}_{w}\mathrm{Col}_{j}(B),\quad i\in[1,m],\;j=[1,q].\] (22)
5. _It is easy to verify that definition (_13_) is equivalent to definition (_21_)._
6. _Define the weighted right VV-STP as follows: Let_ \(x\in\mathbb{R}^{m}\)_,_ \(y\in\mathbb{R}^{n}\)_,_ \(t=\mathrm{lcm}(m,n)\)_. Then the weighted VV-STP is defined by_ \[x\mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$\cdot$}}}w\mathbin{\raisebox{- 1.075pt}{\scalebox{0.8}{$\cdot$}}}y:=\left(W_{t/m}\otimes x\right)^{T}\left(W_{t/ n}\otimes y\right).\] (23)
7. _Let_ \(A,B\in\mathcal{M}\)_, with_ \(A\in\mathcal{M}_{m\times n}\) _and_ \(B\in\mathcal{M}_{p\times q}\)_. The alternative definition of weighted right DK-STP denoted by_ \(C=A\mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$\times$}}}_{w}B\in\mathcal{M}_{m \times q}\)_, is defined as follows._ \[c_{i,j}=\mathrm{Row}_{i}(A)\mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$ \cdot$}}}\mathrm{\vec{\ }}_{w}\mathrm{Col}_{j}(B),\quad i\in[1,m],\;j=[1,n].\] (24)
8. _It is easy to verify that definition (_14_) is equivalent to definition (_23_)._
Definition II.5 implies that the block-multiplication rule for DK-STP is available.
**Lemma II.8**: _Let \(x\in\mathbb{R}^{m}\), \(y\in\mathbb{R}^{n}\), and \(r=\gcd(m,n)\). Divide both \(x\) and \(y\) into \(r\) equal parts as \(x=(x_{1},x_{2},\cdots,x_{r})\) and \(y=(y_{1},y_{2},\cdots,y_{r})\). Then_
\[x\mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$\cdot$}}}y=\sum_{i=1}^{r}x_{i} \mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$\cdot$}}}y_{i}. \tag{25}\]
_Proof._ Note that \(x_{i}\in\mathbb{F}^{m/r}\) and \(y_{i}\in\mathbb{F}^{n/r}\). Let \(t=\mathrm{lcm}(m,n)\) and \(t^{\prime}=\mathrm{lcm}(m/r,n/r)\). Then \(t^{\prime}=t/r\). Hence
\[\frac{t}{m}=\frac{t^{\prime}}{m/r}\;;\quad\frac{t}{n}=\frac{t^{\prime}}{n/r}.\]
Using it, a straightforward computation verifies (24). \(\Box\)
Using Lemma II.8, the following result is easily verifiable.
**Proposition II.9**: _Let \(A\in\mathcal{M}_{m\times n}\), \(B\in\mathcal{M}_{p\times q}\), and \(r=\gcd(p,q)\). Split \(A\) into \(r\) equal size rows and \(B\) into \(r\) equal size columns as_
\[A=\begin{bmatrix}A_{1,1}&A_{1,2}&\cdots&A_{1,r}\\ A_{2,1}&A_{2,2}&\cdots&A_{2,r}\\ \vdots&&&\\ A_{\ell,1}&A_{\ell,2}&\cdots&A_{\ell,r}\end{bmatrix}\]
_and_
\[B=\begin{bmatrix}B_{1,1}&B_{1,2}&\cdots&B_{1,\mu}\\ B_{2,1}&B_{2,2}&\cdots&B_{2,\mu}\\ \vdots&&&\\ B_{r,1}&B_{r,2}&\cdots&A_{r,\mu}\end{bmatrix},\]
_where \(\ell\) and \(\mu\) could be arbitrary. The row size of \(A_{i,j}\) (as well as the column size of \(B_{i,j}\)) do not need to be equal. Then the block multiplication rule is correct. That is, Let \(A\mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$\times$}}}B=C=(C_{i,j})\). Then_
\[C_{i,j}=\sum_{k=1}^{r}A_{i,k}\mathbin{\raisebox{-1.075pt}{\scalebox{0.8}{$ \times$}}}B_{k,j},\quad i\in[1,\ell],j\in[1,\mu].\]
**Remark II.10**:
1. _It is easy to verify that Lemma II.8 also holds for the weighted VV-STP; hence Proposition II.9 is also true for the weighted DK-STP._
2. _It is also easy to verify that Lemma II.8 does not hold for the right VV-STP or the weighted right VV-STP; hence Proposition II.9 is not true for the right DK-STP or the weighted right DK-STP._
To explore further properties, we need the following lemma.
**Lemma II.11**: _Let \(A\in\mathcal{M}_{m\times n}\). Then_
1. \[A\otimes\mathbf{1}_{\alpha}^{T}=A\left(I_{n}\otimes\mathbf{1}_{\alpha}^{T} \right).\] (25)
2. \[A\otimes\mathbf{1}_{\beta}=\left(I_{m}\otimes\mathbf{1}_{\beta}\right)A.\] (26)
_Proof._
1. \[\begin{array}{l}\text{RHS of (25)}=A\left(I_{n}\otimes\mathbf{1}_{\alpha}^{T}\right)\\ =\left(A\otimes 1\right)\left(I_{n}\otimes\mathbf{1}_{\alpha}^{T}\right)\\ =A\otimes\mathbf{1}_{\alpha}^{T}=\text{LHS of (25)}.\end{array}\] (27)
2. \[\begin{array}{l}\text{RHS of (26)}=\left(I_{m}\otimes\mathbf{1}_{\beta}\right)A\\ =\left(I_{m}\otimes\mathbf{1}_{\beta}\right)\left(A\otimes 1\right)\\ =A\otimes\mathbf{1}_{\beta}=\text{LHS of (26)}.\end{array}\]
\(\Box\)__
Using Lemma II.11, we have the following proposition.
**Proposition II.12**: _Let \(A\in\mathcal{M}_{m\times n}\) and \(B\in\mathcal{M}_{p\times q}\), \(t=\operatorname{lcm}(n,p)\). Then_
\[\begin{array}{lcl}A\operatorname{\times}B&=&A\left(I_{n}\otimes\mathbf{1}_ {t/n}^{T}\right)\left(I_{p}\otimes\mathbf{1}_{t/p}\right)B\\ &:=&A\Psi_{n\times p}B,\end{array} \tag{28}\]
_where_
\[\Psi_{n\times p}=\left(I_{n}\otimes\mathbf{1}_{t/n}^{T}\right)\left(I_{p} \otimes\mathbf{1}_{t/p}\right)\in\mathcal{M}_{n\times p} \tag{29}\]
_is called a (left) bridge matrix of dimension \(n\times p\)._
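The factorization (28)-(29) is easy to check numerically. The following sketch is illustrative and self-contained (the helper names are ours); it builds the bridge matrix and compares both sides of (28) on random data:

```python
import numpy as np
from math import lcm

def psi(n, p):
    """Left bridge matrix (29): (I_n kron 1_{t/n}^T)(I_p kron 1_{t/p}), t = lcm(n, p)."""
    t = lcm(n, p)
    return np.kron(np.eye(n), np.ones((1, t // n))) @ np.kron(np.eye(p), np.ones((t // p, 1)))

def dk_stp(A, B):
    # DK-STP (11), repeated so this snippet runs on its own
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.ones((1, t // n))) @ np.kron(B, np.ones((t // p, 1)))

A, B = np.random.rand(3, 4), np.random.rand(6, 2)
assert np.allclose(dk_stp(A, B), A @ psi(4, 6) @ B)   # checks (28)
print(psi(4, 3))   # reproduces the 4 x 3 bridge matrix used later in Example V.12
```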
**Remark II.13**: _Similar arguments show the following corresponding results._

1. For the right DK-STP,
\[A\times B=A\left(\mathbf{1}_{t/n}^{T}\otimes I_{n}\right)\left(\mathbf{1}_{t/p}\otimes I_{p}\right)B:=A\Phi_{n\times p}B, \tag{30}\]
where
\[\Phi_{n\times p}=\left(\mathbf{1}_{t/n}^{T}\otimes I_{n}\right)\left(\mathbf{1}_{t/p}\otimes I_{p}\right)\in\mathcal{M}_{n\times p} \tag{31}\]
is called a right bridge matrix of dimension \(n\times p\).
2. For the weighted DK-STP,
\[A\times_{w}B=A\left(I_{n}\otimes W_{t/n}^{T}\right)\left(I_{p}\otimes W_{t/p}\right)B:=A\Psi_{n\times p}^{w}B, \tag{32}\]
where
\[\Psi_{n\times p}^{w}=\left(I_{n}\otimes W_{t/n}^{T}\right)\left(I_{p}\otimes W_{t/p}\right)\in\mathcal{M}_{n\times p} \tag{33}\]
is called a weighted bridge matrix of dimension \(n\times p\).
3. For the right weighted DK-STP,
\[A\times_{w}B=A\left(W_{t/n}^{T}\otimes I_{n}\right)\left(W_{t/p}\otimes I_{p}\right)B:=A\Phi_{n\times p}^{w}B, \tag{34}\]
where \(\Phi_{n\times p}^{w}=\left(W_{t/n}^{T}\otimes I_{n}\right)\left(W_{t/p}\otimes I_{p}\right)\in\mathcal{M}_{n\times p}\) is called a right weighted bridge matrix of dimension \(n\times p\).
The following are some easily verifiable properties come from definitions:
**Proposition II.14**:
1. _If_ \(n=p\)_, then_ \[A\operatorname{\times}B=AB.\]
2. \[\Psi_{\alpha\times\beta}^{T}=\Psi_{\beta\times\alpha}.\] (35)
3. \[(A\operatorname{\times}B)^{T}=B^{T}\operatorname{\times}A^{T}.\] (36)
4. \[\operatorname{rank}(A\operatorname{\times}B)\leq min(\operatorname{rank}(A), \operatorname{rank}(B)).\] (37)
**Remark II.15**: _Proposition II.14 remains true for the right DK-STP, the weighted DK-STP, and the weighted right DK-STP. That is,_
1. _If_ \(n=p\)_, then_ \[\begin{array}{l}A\operatorname{\times}B=AB;\\ A\operatorname{\times}_{w}B=AB;\\ A\operatorname{\times}_{w}B=AB.\end{array}\]
2. \[\begin{array}{l}\Phi_{\alpha\times\beta}^{T}=\Phi_{\beta\times\alpha};\\ \left[\Psi_{\alpha\times\beta}^{w}\right]^{T}=\Psi_{\beta\times\alpha}^{w};\\ \left[\Phi_{\alpha\times\beta}^{w}\right]^{T}=\Phi_{\beta\times\alpha}^{w}. \end{array}\] (38)
3. \[\begin{array}{l}(A\operatorname{\times}B)^{T}=B^{T}\operatorname{\times}A^{T}; \\ (A\operatorname{\times}_{w}B)^{T}=B^{T}\operatorname{\times}_{w}A^{T};\\ (A\operatorname{\times}_{w}B)^{T}=B^{T}\operatorname{\times}_{w}A^{T}.\end{array}\] (39)
4. \[\begin{array}{l}\operatorname{rank}(A\operatorname{\times}B)\leq min( \operatorname{rank}(A),\operatorname{rank}(B));\\ \operatorname{rank}(A\operatorname{\times}_{w}B)\leq min(\operatorname{ rank}(A),\operatorname{rank}(B));\\ \operatorname{rank}(A\operatorname{\times}_{w}B)\leq min(\operatorname{rank}(A), \operatorname{rank}(B)).\end{array}\] (40)
## III Properties of DK-STP
**Proposition III.1**:
1. (Distributivity) Let \(B,C\in\mathcal{M}_{m\times n}\). Then \[\begin{split} A\times(B+C)&=A\times B+A\times C,\\ (B+C)\times A&=B\times A+C\times A.\end{split} \tag{41}\]
2. (Associativity) Let \(A,B,C\in\mathcal{M}\). Then \[(A\times B)\times C=A\times(B\times C). \tag{42}\]
_Proof._
The proof of (i) is straightforward.
To prove (ii), let \(A\in\mathcal{M}_{m\times n}\), \(B\in\mathcal{M}_{p\times q}\), and \(C\in\mathcal{M}_{r\times s}\). Using Proposition II.12, we have
\[(A\times B)\times C=(A\Psi_{n\times p}B)\Psi_{q\times r}C=A\Psi_{n\times p}(B\Psi_{q\times r}C)=A\times(B\times C).\]
**Remark III.2**: A similar argument shows that Proposition III.1 remains true for the right DK-STP, the weighted DK-STP, and the weighted right DK-STP, respectively.
**Proposition III.3**: _Given \(A,B,C,D\in\mathcal{M}\)._
1. _If \((B,C)\) satisfy the dimension matching condition (i.e., \(|\operatorname{Col}(B)|=|\operatorname{Row}(C)|\)), then_ \[A\times(BC)=(A\times B)C. \tag{43}\]
2. _If \((A,B)\) satisfy the dimension matching condition, then_ \[(AB)\times C=A(B\times C). \tag{44}\]
3. _If both \((A,B)\) and \((C,D)\) satisfy the dimension matching condition, then_ \[(AB)\times(CD)=A(B\times C)D. \tag{45}\]
_Proof._
1. First, assume \(B\in\mathcal{M}_{*\times s}\) and \(C\in\mathbb{R}^{s}\). Then \[A\times(BC)=A\times\left(\sum_{i=1}^{s}\operatorname{Col}_{i}(B)c_{i}\right)=\sum_{i=1}^{s}c_{i}\,A\times\operatorname{Col}_{i}(B)=(A\times B)C.\] Next, assume \(C\in\mathcal{M}_{*\times t}\). Then \[\begin{split}A\times(BC)&=A\times\left[B\operatorname{Col}_{1}(C),\cdots,B\operatorname{Col}_{t}(C)\right]\\ &=\left[A\times\left(B\operatorname{Col}_{1}(C)\right),\cdots,A\times\left(B\operatorname{Col}_{t}(C)\right)\right]\\ &=\left[(A\times B)\operatorname{Col}_{1}(C),\cdots,(A\times B)\operatorname{Col}_{t}(C)\right]\\ &=(A\times B)C.\end{split}\]
2. Using (36) and (i), we have \[[(AB)\times C]^{T}=C^{T}\times(AB)^{T}=C^{T}\times(B^{T}A^{T})=\left[C^{T}\times B^{T}\right]A^{T}.\] Taking the transpose yields (44).
3. (45) follows from (43)-(44) immediately.
\(\Box\)
**Remark III.4**: A similar argument shows that Proposition III.3 remains true for the right DK-STP, the weighted DK-STP, and the weighted right DK-STP, respectively.
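The mixed-product identities (43)-(45) can be confirmed on random data. The following sketch is illustrative and self-contained (the `dk_stp` helper repeats the sketch implementation of (11)); the dimensions are chosen so that the required dimension matching conditions hold:

```python
import numpy as np
from math import lcm

def dk_stp(A, B):
    # DK-STP (11), repeated so this snippet runs on its own
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.ones((1, t // n))) @ np.kron(B, np.ones((t // p, 1)))

rng = np.random.default_rng(0)
A = rng.random((3, 5))
B = rng.random((5, 6))   # (A, B) satisfy the dimension matching condition
C = rng.random((2, 4))
D = rng.random((4, 7))   # (C, D) satisfy the dimension matching condition
X = rng.random((4, 9))   # an arbitrary extra factor

assert np.allclose(dk_stp(X.T, C @ D), dk_stp(X.T, C) @ D)        # (43)
assert np.allclose(dk_stp(A @ B, X), A @ dk_stp(B, X))            # (44)
assert np.allclose(dk_stp(A @ B, C @ D), A @ dk_stp(B, C) @ D)    # (45)
print("Identities (43)-(45) verified on random data.")
```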
## IV DK-STP Ring
Recall that if \(A,B\in\mathcal{M}_{m\times n}\) then \(A\times B\in\mathcal{M}_{m\times n}\). This fact makes \(\times\) a dimension-invariant operator over \(\mathcal{M}_{m\times n}\). Taking Proposition III.1 into consideration, the following claim is obvious.
Then \[\mathcal{M}_{m\times(n\setminus\mathbf{F})}\subset\mathcal{M}_{m\times n}\] is a sub-ring.
3. Let \(\mathbf{r}=(r_{1},\cdots,r_{\alpha})\subset[1,m]\) and \(\mathbf{s}=(s_{1},\cdots,s_{\beta})\subset[1,n]\). \[\begin{split}&\mathcal{M}_{(m\setminus\mathbf{r})\times(n \setminus\mathbf{s})}:=\left\{A\in\mathbf{M}_{m\times n}\mid\right.\\ &\left.\mathrm{Col}_{r}(A)=0,\mathrm{Row}_{s}=0\quad r\in \mathbf{r},s\in\mathbf{s}\right\}.\end{split}\] (48) Then \[\mathcal{M}_{(m\setminus\mathbf{r})\times(n\setminus\mathbf{s})}\subset \mathcal{M}_{m\times n}\] is a sub-ring.
4. The above arguments are also true for \(R_{\times}\left(m\times n,\mathbb{F}\right)\), \(R_{\times}^{w}\left(m\times n,\mathbb{F}\right)\),\(R_{\times}^{w}\left(m\times n,\mathbb{F}\right)\).
Next, we consider the ring homomorphism.
**Definition IV.6**: _[_25_]_ _Let \((R_{i},+_{i},\ast_{i})\), \(i=1,2\) be two rings._
1. \(\phi:R_{1}\to R_{2}\) _is a ring homomorphism, if_ \[\phi(r+_{1}s)=\phi(r)+_{2}\phi(s),\quad r,s\in R_{1}.\] (49) _and_ \[\phi(r\ast_{1}s)=\phi(r)\ast_{2}\phi(s),\quad r,s\in R_{1}.\] (50)
2. \(\phi:R_{1}\to R_{2}\) _is a ring isomorphism, if it is a one-to-one and onto homomorphism. Moreover, its inverse_ \(\phi^{-1}:R_{2}\to R_{1}\) _is also a ring homomorphism._
3. _If_ \(\phi:R\to R\) _is a ring isomorphism, it is called an automorphism._
**Lemma IV.7**:
1. \[\mathbf{1}_{p\times q}=\mathbf{1}_{p}\otimes\mathbf{1}_{q}^{T}=\mathbf{1}_{q}^{T}\otimes\mathbf{1}_{p}. \tag{51}\]
2. \[\mathbf{1}_{p\times q}\otimes\mathbf{1}_{s}=\mathbf{1}_{s}\otimes\mathbf{1}_{p\times q}. \tag{52}\]
3. \[\mathbf{1}_{p\times q}\otimes\mathbf{1}_{s}^{T}=\mathbf{1}_{s}^{T}\otimes\mathbf{1}_{p\times q}. \tag{53}\]
4. \[J_{s}^{2}=J_{s}. \tag{54}\]
Proof.:
1. It can be proved by a straightforward verification.
2. Using (i), we have \[\begin{split}&\mathbf{1}_{p\times q}\otimes\mathbf{1}_{s}=\left(\mathbf{1}_{p}\otimes\mathbf{1}_{q}^{T}\right)\otimes\mathbf{1}_{s}\\ &=\mathbf{1}_{p}\otimes\mathbf{1}_{s}\otimes\mathbf{1}_{q}^{T}\\ &=\mathbf{1}_{s}\otimes\mathbf{1}_{p\times q}.\end{split}\]
3. The proof is similar to the one for (ii).
4. It can be verified by a straightforward calculation.
As special cases of (51)-(53) we have
\[J_{p}=\frac{1}{p}\mathbf{1}_{p}\otimes\mathbf{1}_{p}^{T}=\frac{1}{p}\mathbf{1 }_{p}^{T}\otimes\mathbf{1}_{p}. \tag{55}\]
\[J_{p}\otimes\mathbf{1}_{s}=\mathbf{1}_{s}\otimes J_{p}. \tag{56}\]
\[J_{p}\otimes\mathbf{1}_{s}^{T}=\mathbf{1}_{s}^{T}\otimes J_{p}. \tag{57}\]
The following Theorem is fundamental for \(R(m\times n,\mathbb{F})\) homomorphism.
**Theorem IV.8**:
1. _Let_ \(\pi_{1}:\mathcal{M}_{m\times n}\rightarrow\mathcal{M}_{sm\times sn}\) _be defined by_ \[\pi_{1}:A\mapsto J_{s}\otimes A.\] Then_ \(\pi_{1}\) _is a ring homomorphism._
2. _Let_ \(\pi_{2}:\mathcal{M}_{m\times n}\rightarrow\mathcal{M}_{sm\times sn}\) _be defined by_ \[\pi_{2}:A\mapsto A\otimes J_{s}.\] _Then_ \(\pi_{2}\) _is a ring homomorphism._
3. \[\pi_{1}(R(m\times n,\mathbb{F}))\cong\pi_{2}(R(m\times n,\mathbb{F})).\] (58)
Proof.: Let \(A,B\in\mathcal{M}_{m\times n}\).
1. Since \(\pi_{i}\), \(i=1,2\) are linear mappings, it is obvious that \[\pi_{i}(A+B)=\pi_{i}(A)+\pi_{i}(B),\quad i=1,2.\] Set \(t=\mathrm{lcm}(m,n)\), then \[\begin{split}&\pi_{1}(A)\mathbin{\raisebox{-1.0pt}{$\times $}}\pi_{1}(B)=\\ &(J_{s}\otimes A\otimes\mathbf{1}_{t/n}^{T})(J_{s}\otimes B \otimes\mathbf{1}_{t/p})\\ &=J_{s}^{2}\otimes(A\mathbin{\raisebox{-1.0pt}{$\times$}}B)=J_{s} \otimes(A\mathbin{\raisebox{-1.0pt}{$\times$}}B)=\pi_{1}(A\mathbin{ \raisebox{-1.0pt}{$\times$}}B).\end{split}\] \(\mathbf{(ii)}\) \[\begin{split}&\pi_{2}(A)\mathbin{\raisebox{-1.0pt}{$\times$}}\pi_{2}(B) \\ &=\left(A\otimes J_{s}\otimes\mathbf{1}_{t/n}^{T}\right)\left(B \otimes J_{s}\otimes\mathbf{1}_{t/p}\right)\\ &=\left(A\otimes\mathbf{1}_{t/n}^{T}\otimes J_{s}\right)\left(A \otimes\mathbf{1}_{t/n}^{T}\otimes J_{s}\right)\\ &=\left(A\otimes\mathbf{1}_{t/n}^{T}\right)\left(B\otimes \mathbf{1}_{t/p}\right)\otimes J_{s}\\ &=\pi_{2}(A\mathbin{\raisebox{-1.0pt}{$\times$}}B).\end{split}\] We conclude that \[R(m\times n,\mathbb{F})\simeq R(sm\times sn,\mathbb{F}).\]
3. Define \[\varphi(A\otimes J_{s})=J_{s}\otimes A,\quad A\in\mathcal{M}_{m\times n}.\]
We show that \(\varphi\) is an isomorphism.
\[\varphi[(A\otimes J_{s})+(B\otimes J_{s})]\] \[=\varphi[(A+B)\otimes J_{s}]\] \[=J_{s}\otimes(A+B)\] \[=\varphi[A\otimes J_{s}]+\varphi[B\otimes J_{s}],\]
and
\[\varphi[(A\otimes J_{s})\,\raisebox{-1.72pt}{\scalebox{0.7}{$ \times$}}\,(B\otimes J_{s})]\] \[=\varphi[(A\otimes J_{s}\otimes\mathbf{1}_{t/n}^{T})(B\otimes J _{s}\otimes\mathbf{1}_{t/p})]\] \[=\varphi[(A\otimes\mathbf{1}_{t/n}^{T}\otimes J_{s})(B\otimes \mathbf{1}_{t/p}\otimes J_{s})]\] \[=\varphi[(A\otimes\mathbf{1}_{t/n}^{T})(B\otimes\mathbf{1}_{t/p }))\otimes J_{s}]]\] \[=J_{s}\otimes((A\otimes\mathbf{1}_{t/n}^{T})(B\otimes\mathbf{1}_ {t/p}))\] \[=(J_{s}\otimes A\otimes\mathbf{1}_{t/n}^{T})(J_{s}\otimes B \otimes\mathbf{1}_{t/p}))\] \[=\varphi(A)\,\raisebox{-1.72pt}{\scalebox{0.7}{$\times$}}\,\varphi(B).\] Hence \(\varphi\) is a ring homomorphism. To see \(\varphi\) is an isomorphism, one sees easily that \(\varphi\) is one-to-one and onto. Hence, we have only to show that \(\varphi^{-1}\) is also an homomorphism. We have \[\varphi^{-1}[(J_{s}\otimes A)+(J_{s}\otimes B)]\] \[=\varphi^{-1}[J_{s}\otimes(A+B)]\] \[=(A+B)\otimes J_{s}\] \[=\varphi^{-1}[J_{s}\otimes A]+\varphi^{-1}[J_{s}\otimes B],\] and \[\varphi^{-1}[(J_{s}\otimes A)\,\raisebox{-1.72pt}{\scalebox{0.7}{$ \times$}}\,(J_{s}\otimes B)]\] \[=\varphi^{-1}[(J_{s}\otimes A\otimes\mathbf{1}_{t/n}^{T})(J_{s} \otimes B\otimes\mathbf{1}_{t/p})]\] \[=\varphi^{-1}[J_{s}^{2}\otimes(A\otimes\mathbf{1}_{t/n}^{T})(B \otimes\mathbf{1}_{t/p})]\otimes J_{s}\] \[=((A\otimes\mathbf{1}_{t/n}^{T})(B\otimes\mathbf{1}_{t/p})) \otimes J_{s}\] \[=(A\otimes\mathbf{1}_{t/n}^{T}\otimes J_{s})(B\otimes\mathbf{1}_ {t/p}\otimes J_{s})\] \[=\varphi^{-1}(A)\,\raisebox{-1.72pt}{\scalebox{0.7}{$\times$}}\, \varphi^{-1}(B).\] Hence \(\varphi^{-1}\) is also a ring homomorphism. We conclude that \(\varphi\) is a ring isomorphism.
\(\Box\)
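The homomorphism property of \(\pi_{1}\) in Theorem IV.8 can also be checked numerically. The sketch below is illustrative; here \(J_{s}\) is the \(s\times s\) matrix with every entry \(1/s\), cf. (55), and `dk_stp` repeats the sketch implementation of (11):

```python
import numpy as np
from math import lcm

def dk_stp(A, B):
    # DK-STP (11), repeated so this snippet runs on its own
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.ones((1, t // n))) @ np.kron(B, np.ones((t // p, 1)))

def J(s):
    # J_s: the s x s matrix with every entry 1/s, cf. (55); note J_s @ J_s = J_s
    return np.full((s, s), 1.0 / s)

rng = np.random.default_rng(1)
A, B, s = rng.random((2, 3)), rng.random((2, 3)), 4
pi1 = lambda M: np.kron(J(s), M)   # the map A -> J_s kron A of Theorem IV.8(i)

# pi1 respects both the sum and the DK-STP, i.e. it behaves as a ring homomorphism
assert np.allclose(pi1(A + B), pi1(A) + pi1(B))
assert np.allclose(dk_stp(pi1(A), pi1(B)), pi1(dk_stp(A, B)))
print("pi1 verified as a ring homomorphism on random data.")
```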
**Remark IV.9**: _It is easy to verify that the results of Theorem IV.8 are also true for \(R_{\,\raisebox{-1.72pt}{\scalebox{0.7}{$\times$}}}\,(m\times n,\mathbb{F})\), \(R_{\,\raisebox{-1.72pt}{\scalebox{0.7}{$\times$}}}^{w}\,(m\times n,\mathbb{F})\), and \(R_{\,\raisebox{-1.72pt}{\scalebox{0.7}{$\times$}}}^{w}\,(m\times n,\mathbb{F})\)._
The following theorem is fundamental for \(R(m\times n,\mathbb{F})\) isomorphism.
**Theorem IV.10**: _Consider the ring \(R(m\times n,\mathbb{R})\). Let \(t=\operatorname{lcm}(m,n)\), \(r=\gcd(m,n)\), \(a=m/r\), \(b=n/r\), and \(M_{r}\in\mathcal{O}_{r}\) be an orthogonal matrix, i.e., \(M_{r}^{T}=M_{r}^{-1}\). Then \(\psi:\mathcal{M}_{m\times n}\to\mathcal{M}_{m\times n}\), defined by_
\[A\mapsto(M_{r}\otimes I_{a})\,A\left(M_{r}^{T}\otimes I_{b}\right),\quad A \in\mathcal{M}_{m\times n}, \tag{59}\]
_is a ring automorphism._
_Proof._
First, we show \(\psi\) is a ring homomorphism. It is obvious that
\[\psi(A+B)=\psi(A)+\psi(B),\quad A,B\in\mathcal{M}_{m\times n}.\]
We prove
\[\psi(A\,\raisebox{-1.72pt}{\scalebox{0.7}{$\times$}}\,B)=\psi(A)\,\raisebox{-1. 72pt}{\scalebox{0.7}{$\times$}}\,\psi(B),\quad A\in\mathcal{M}_{m\times n}. \tag{60}\]
Note that
\[\psi(A\,\raisebox{-1.72pt}{\scalebox{0.7}{$\times$}}\,B)=[(M_{r} \otimes I_{a})A]\,\raisebox{-1.72pt}{\scalebox{0.7}{$\times$}}\,[B(M_{r}^{T} \otimes I_{b})]\] \[=(M_{r}\otimes I_{a})[A\,\raisebox{-1.72pt}{\scalebox{0.7}{$ \times$}}\,B](M_{r}^{T}\otimes I_{b})\] \[=[(M_{r}\otimes I_{a})A\Psi_{n\times m}B(M_{r}^{T}\otimes I_{b})],\]
and
\[\psi(A)\,\raisebox{-1.72pt}{\scalebox{0.7}{$\times$}}\,\psi(B)=[(M _{r}\otimes I_{a})A(M_{r}^{T}\otimes I_{b})\Psi_{n\times m}\] \[\quad[(M_{r}\otimes I_{a})B(M_{r}^{T}\otimes I_{b})]\] \[=(M_{r}\otimes I_{a})A[(M_{r}^{T}\otimes I_{b})\Psi_{n\times m}( M_{r}\otimes I_{a})B(M_{r}^{T}\otimes I_{b}).\]
Hence, to prove (60), it is enough to show
\[(M_{r}^{T}\otimes I_{b})\Psi_{n\times m}(M_{r}\otimes I_{a})=\Psi_{n\times m}. \tag{61}\] \[(M_{r}^{T}\otimes I_{b})\Psi_{n\times m}(M_{r}\otimes I_{a})\] \[=(M_{r}^{T}\otimes I_{b}\otimes\mathbf{1}_{a}^{T})(M_{r}\otimes I _{a}\otimes\mathbf{1}_{b})\] \[=I_{r}\otimes(I_{b}\otimes\mathbf{1}_{a}^{T})(I_{a}\otimes\mathbf{1 }_{b})\] \[=(I_{r}\otimes I_{b}\otimes\mathbf{1}_{a}^{T})(I_{r}\otimes I_{a} \otimes\mathbf{1}_{b})\] \[=(I_{n}\otimes\mathbf{1}_{a}^{T})(I_{m}\otimes\mathbf{1}_{b})\] \[=\Psi_{n\times m}.\]
Next, we show that \(\psi\) is a ring isomorphism. It is enough to show that \(\psi^{-1}\) exists and is also a ring homomorphism. Define \(\phi:\mathcal{M}_{m\times n}\to\mathcal{M}_{m\times n}\) by
\[A\mapsto\left(M_{r}^{T}\otimes I_{a}\right)A\left(M_{r}\otimes I_{b}\right), \quad A\in\mathcal{M}_{m\times n}.\]
Then it is obvious that \(\phi\) is also a ring homomorphism. Moreover, \(\phi=\psi^{-1}\). \(\Box\)
**Remark IV.11**: _When \(m=n\), Theorem IV.10 becomes the following: \((\mathcal{M}_{n\times n},\times,+)\) is a ring. Let \(M_{n}\in GL(n,\mathbb{R})\) be an orthogonal matrix. Then \(\phi:\mathcal{M}_{n\times n}\to\mathcal{M}_{n\times n}\), defined by_
\[\phi:A\mapsto M_{n}AM_{n}^{T},\]
_is a ring automorphism. This is a well known fact._
_From the proof of Theorem IV.10 it is obvious that if the \(r\) is replaced by any \(s>1\) and \(s|r\), the theorem remains true._
**Remark IV.12**: _Theorem IV.10 can be extended to other rings._
1. _Similar argument as for Theorem IV.10 shows that_ \(\psi\) _is also an automorphism for_ \(R_{\,\raisebox{-1.72pt}{\scalebox{0.7}{$\times$}}}^{w}\,(m\times n,\mathbb{R})\)_._
2. _Define_ \(\phi:\mathcal{M}_{m\times n}\to\mathcal{M}_{m\times n}\) _by_ \(A\mapsto(I_{a}\otimes M_{r})\,A\left(I_{b}\otimes M_{r}^{T}\right)\)_. Then a similar argument shows
that \(\phi\) is an automorphism for both \(R_{\,\raisebox{-1.0pt}{$\chi$}}\left(m\times n,\mathbb{R}\right)\) and \(R_{\,\raisebox{-1.0pt}{$\chi$}}^{w}\left(m\times n,\mathbb{R}\right)\).
3. When the complex case is considered, i.e., \(\mathbb{F}=\mathbb{C}\). We have only to replace \(M_{r}\in\mathcal{O}_{r}\) by \(M_{r}\in\mathcal{U}_{r}\) and \(M_{r}^{T}\) by \(M_{r}^{H}\). Then a similar argument shows that: \(\psi\) is an automorphism for both \(R_{\,\raisebox{-1.0pt}{$\chi$}}\left(m\times n,\mathbb{C}\right)\) and \(R_{\,\raisebox{-1.0pt}{$\chi$}}^{w}\left(m\times n,\mathbb{C}\right)\); \(\phi\) is an automorphism for both \(R_{\,\raisebox{-1.0pt}{$\chi$}}\left(m\times n,\mathbb{C}\right)\) and \(R_{\,\raisebox{-1.0pt}{$\chi$}}^{w}\left(m\times n,\mathbb{C}\right)\). The following is an obvious isomorphism.
**Proposition IV.13**:
1. _Let_ \(\mathbb{F}=\mathbb{R}\)_. Then the transpose_ \(A\mapsto A^{T}\) _as a mapping_ \(\varphi_{T}:M_{m\times n}\to M_{n\times m}\) _is an isomorphism. That is,_ \[R(m\times n,\mathbb{R})\cong R(n\times m,\mathbb{R}).\] (62)
2. _Let_ \(\mathbb{F}=\mathbb{C}\)_. Then the conjugate transpose_ \(A\mapsto A^{H}\) _as a mapping_ \(\varphi_{H}:M_{m\times n}\to M_{n\times m}\) _is an isomorphism. That is,_ \[R(m\times n,\mathbb{C})\cong R(n\times m,\mathbb{C}).\] (63)
**Remark IV.14**: _Proposition IV.13 is obviously true for \(R_{\,\raisebox{-1.0pt}{$\chi$}}\left(m\times n,\mathbb{F}\right)\), \(R_{\,\raisebox{-1.0pt}{$\chi$}}^{w}\left(m\times n,\mathbb{F}\right)\), and \(R_{\,\raisebox{-1.0pt}{$\chi$}}^{w}\left(m\times n,\mathbb{F}\right)\)._
## V Group Action of STP Semi-group on \(\mathbb{R}^{\infty}\)
### _Dimension-Free Euclidian Space_
This subsection presents a brief review of the dimension-free Euclidian space, which provides a state space for dimension-varying dynamic systems. Dimension-varying linear (control) systems have been investigated in [9, 10, 12], and dimension-varying nonlinear (control) systems in [18].
Recall that the dimension-free Euclidian space is constructed by \(\mathbb{R}^{\infty}=\bigcup_{n=1}^{\infty}\mathbb{R}^{n}\).
**Definition V.1**: _[_18_]_ _Assume \(x,y\in\mathbb{R}^{\infty}\), which are specified as \(x\in\mathbb{R}^{m}\) and \(y\in\mathbb{R}^{n}\), and \(\operatorname{lcm}(m,n)=t\). Then the addition (subtraction) of \(x\) and \(y\) is defined by_
\[x\vec{\pm}y:=\left(x\otimes\mathbf{1}_{t/m}\right)\pm\left(y\otimes\mathbf{1} _{t/n}\right)\in\mathbb{R}^{t}. \tag{64}\]
With the addition (subtraction), defined by (64) and conventional scalar product, \(\mathbb{R}^{\infty}\) becomes a pseudo-vector space, which satisfies all the requirements of a vector space except that \(x\vec{-}y=0\) does not imply \(x=y\)[1].
**Definition V.2**: _[_18_]_ _Assume \(x,y\in\mathbb{R}^{\infty}\), which are specified as \(x\in\mathbb{R}^{m}\) and \(y\in\mathbb{R}^{n}\), and \(\operatorname{lcm}(m,n)=t\)._
1. _The inner product of_ \(x,\ y\) _is defined by_ \[\begin{array}{l}\left\langle x,y\right\rangle_{\mathcal{V}}:=\frac{1}{t}x \vec{\cdot}y\\ =\frac{1}{t}\left(x\otimes\mathbf{1}_{t/m}\right)^{T}\left(y\otimes\mathbf{1} _{t/n}\right).\end{array}\] (65)
2. _The norm of_ \(x\) _is defined by_ \[\|x\|_{\mathcal{V}}:=\sqrt{\|\left\langle x,x\right\rangle_{\mathcal{V}}\|}.\] (66)
3. _The distance of_ \(x\) _and_ \(y\) _is defined by_ \[d_{\mathcal{V}}(x,y):=\|x\vec{-}y\|_{\mathcal{V}}.\] (67)
With the distance, defined by (67), \(\mathbb{R}^{\infty}\) becomes a topological space with the distance deduced topology. (But it is not Hausdorff.)
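These operations are straightforward to implement. The following numpy sketch (illustrative; the function names are ours) realizes the addition (64), the inner product (65), the norm (66), and the distance (67), and illustrates why \(\mathbb{R}^{\infty}\) is only a pseudo-vector space:

```python
import numpy as np
from math import lcm

def vadd(x, y):
    """Addition (64): embed both vectors in R^t, t = lcm(m, n), then add componentwise."""
    m, n = len(x), len(y)
    t = lcm(m, n)
    return np.kron(x, np.ones(t // m)) + np.kron(y, np.ones(t // n))

def vinner(x, y):
    """Inner product (65): (1/t) (x kron 1_{t/m})^T (y kron 1_{t/n})."""
    m, n = len(x), len(y)
    t = lcm(m, n)
    return float(np.kron(x, np.ones(t // m)) @ np.kron(y, np.ones(t // n))) / t

def vnorm(x):
    """Norm (66)."""
    return float(np.sqrt(abs(vinner(x, x))))

def vdist(x, y):
    """Distance (67): the norm of the difference taken in the common space R^t."""
    return vnorm(vadd(x, -np.asarray(y)))

x, y = np.array([1.0, 2.0]), np.array([1.0, 2.0, 3.0])
print(vadd(x, y), vinner(x, y), vdist(x, y))
# Vectors of different dimensions can be at distance zero, e.g. d(x, x kron 1_2) = 0,
# which is why R^infinity is only a pseudo-vector space and the quotient space Omega is introduced.
print(vdist(x, np.kron(x, np.ones(2))))   # 0.0
```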
\(x\) and \(y\) are said to be equivalent, if \(x\vec{-}y=0\) (or equivalently, \(d_{\mathcal{V}}(x,y)=0\)), denoted by \(x\leftrightarrow y\). Define
\[\Omega:=\mathbb{R}^{\infty}/\leftrightarrow. \tag{68}\]
Denote by
\[\bar{x}:=\{y\in\mathbb{R}^{\infty}\mid y\leftrightarrow x\}.\]
\[\bar{x}\vec{\pm}\bar{y}:=\overline{x\vec{\pm}\bar{y}},\quad\bar{x},\bar{y}\in\Omega. \tag{69}\]
Then (69) is properly defined. Moreover, with this addition (subtraction) and conventional scalar product, \(\Omega\) becomes a vector space. Moreover, define
\[\begin{cases}\left\langle\bar{x},\bar{y}\right\rangle_{\mathcal{V}}:=\left\langle x,y\right\rangle_{\mathcal{V}},\quad x\in\bar{x},\ y\in\bar{y},\\ \|\bar{x}\|_{\mathcal{V}}:=\|x\|_{\mathcal{V}},\quad x\in\bar{x},\\ d_{\mathcal{V}}(\bar{x},\bar{y}):=d_{\mathcal{V}}(x,y),\quad x\in\bar{x},\ y \in\bar{y}.\end{cases} \tag{70}\]
The inner product, norm, and distance on \(\Omega\) are all properly defined, which turn \(\Omega\) a (Hausdorff) topological vector space [28].
Recall that \(\mathcal{M}:=\bigcup_{m=1}^{\infty}\bigcup_{n=1}^{\infty}\mathcal{M}_{m\times n}\) is the set of matrices, which acts on \(\mathbb{R}^{\infty}\) by \(x\mapsto A\vec{\times}x\in\mathbb{R}^{\infty}\) as defined by (3), (or \(x\mapsto A\vec{\circ}x\in\mathbb{R}^{\infty}\) as defined by (5)). Hereafter, for statement ease, only the first type of action is considered. In fact, almost all the arguments are also applicable to the second type action.
The action (3) (as well as (5)) is called a group action, which means \((\mathcal{M},\ltimes)\) is a monoid, and the action satisfies the following properties:
\[(A\ltimes B)\vec{\times}x=A\vec{\times}(B\vec{\times}x); \tag{71}\]
and
\[E\vec{\times}x=x, \tag{72}\]
where \(E=1\in\mathcal{M}\) is the identity.
Using action (3), a dynamic system can be defined as
\[x(t+1)=A(t)\vec{\times}x(t),\quad x(0)=x_{0}\in\mathbb{R}^{\infty}, \tag{73}\]
which is called a semi-group system (or S-system) [32].
Moreover, it is also a dynamic system, that is, \(\mathbb{R}^{\infty}\) is a topological space, and for a fixed \(A\), \(x\mapsto A\vec{\times}x\) (as a mapping:
\(\mathbb{R}^{\infty}\to\mathbb{R}^{\infty}\)) is continuous [37]. The only drawback is that the state space \(\mathbb{R}^{\infty}\) is not Hausdorff. To overcome this, we turn to the quotient space as follows.
Let \(A,B\in\mathcal{M}\). \(A,B\) are said to be equivalent, denoted by \(A\sim B\), if there exist identity matrices \(I_{m}\) and \(I_{n}\), such that
\[A\otimes I_{m}=B\otimes I_{n}.\]
Denote by
\[\langle A\rangle:=\{B\in\mathcal{M}\mid B\sim A\},\]
and the set of equivalence classes is denoted by
\[\Sigma:=\mathcal{M}/\sim. \tag{74}\]
Then the action of \(\mathcal{M}\) on \(\mathbb{R}^{\infty}\) can be transferred to the action of \(\Sigma\) on \(\Omega\) by
\[<A>\vec{\kappa}\bar{x}:=\left\langle A\vec{\kappa}x\right\rangle,\quad A\in \langle A\rangle\,,\ x\in\bar{x}. \tag{75}\]
The dynamic system (73) can also be transferred to \(\Omega\) as
\[\bar{x}(t+1)=\langle A(t)\rangle\vec{\kappa}\bar{x}(t),\quad\bar{x}(0)\in \Omega,\ \langle A(t)\rangle\in\Sigma, \tag{76}\]
which of course needs to be well posed; this can indeed be proved.
Now (76) is a dynamic system over the vector space \(\Omega\), which is also a Hausdorff space [18]. A detailed argument can be found in [11, 18].
### _DK-STP based Group Action of Matrices on \(\mathbb{R}^{\infty}\)_
In previous subsection one sees that unlike classical linear system over \(\mathbb{R}^{n}\), to construct dynamic systems over \(\mathbb{R}^{\infty}\), we need both MM-STP and MV-STP. Fortunately, when DK-STP is used, we go back to the classical situation, where one operator \(\,\times\,\) is enough for both matrix-matrix product and matrix-vector product. This is an advantage of DK-STP.
Consider the following dynamic system
\[x(t+1)=A(t)\,\times\,x(t),\quad x(0)=x_{0}\in\mathbb{R}^{\infty},\ A(t)\in \mathcal{M}. \tag{77}\]
According to Proposition III.1, it is clear that (77) is an S-system. To see that it is also a dynamic system, we have to estimate the norm of \(A\in\mathcal{M}\) with respect to the DK-STP.
**Definition V.3**: _Let \(A\in\mathcal{M}_{m\times n}\subset\mathcal{M}\). The DK-norm of \(A\), denoted by \(\left\|A\right\|_{\,\times\,}\), is defined by_
\[\left\|A\right\|_{\,\times\,}=\sup_{0\neq x\in\mathbb{R}^{\infty}}\frac{\|A \,\times\,x\|_{\mathcal{V}}}{\|x\|_{\mathcal{V}}}. \tag{78}\]
This norm can be calculated as follows.
**Proposition V.4**: _The DK-norm of \(A\), defined by (78) is_
\[\left\|A\right\|_{\,\times\,}:=\sqrt{\frac{1}{m}\sum_{j=1}^{m}\|\operatorname{ Row}_{j}(A)\|_{\mathcal{V}}^{2}}. \tag{79}\]
Proof.: Denote by
\[y=(y_{1},\cdots,y_{m})^{T}=A\,\times\,x.\]
Using Schwarz inequality, we have
\[y_{j}=\operatorname{Row}_{j}^{T}(A)\,\vec{\cdot}\,x\leq\|\operatorname{Row}_{ j}(A)\|_{\mathcal{V}}\|x\|_{\mathcal{V}},\quad j\in[1,m].\]
Hence
\[\|y\|_{\mathcal{V}}\leq\frac{1}{\sqrt{m}}\sqrt{\sum_{j=1}^{m}\|\operatorname{Row}_{j}(A)\|_{\mathcal{V}}^{2}\,\|x\|_{\mathcal{V}}^{2}}=\sqrt{\frac{1}{m}\sum_{j=1}^{m}\|\operatorname{Row}_{j}(A)\|_{\mathcal{V}}^{2}}\;\|x\|_{\mathcal{V}}.\]
That is,
\[\left\|A\right\|_{\,\times\,}\leq\sqrt{\frac{1}{m}\sum_{j=1}^{m}\| \operatorname{Row}_{j}(A)\|_{\mathcal{V}}^{2}}.\]
Note that the equality is attained at \(x=\mathbf{1}_{n}\), which implies that
\[\left\|A\right\|_{\,\times\,}\geq\sqrt{\frac{1}{m}\sum_{j=1}^{m}\| \operatorname{Row}_{j}(A)\|_{\mathcal{V}}^{2}}.\]
We conclude that (78) satisfies (79).
Proposition V.4 ensures that (77) is a dynamic system.
To compare the DK-STP based dynamic system with MV-STP based dynamic system, we recall the following.
**Definition V.5**: _[_11_]_ _Consider the MV-STP based dynamic system (73) and assume \(A(t)=A\). A matrix \(A\in\mathcal{M}\), specified by \(A\in\mathcal{M}_{m\times n}\), is a (dimension) bounded operator if for any \(x_{0}\in\mathbb{R}^{\infty}\) there exist a \(T_{0}>0\) and an Euclidian space \(R^{n}\), depending on \(x_{0}\), such that the trajectory \(x(t,x_{0})\in\mathbb{R}^{n}\) for \(t\geq T_{0}\). Otherwise, \(A\in\mathcal{M}_{m\times n}\) is a (dimension) unbounded operator._
The following proposition shows how to judge if a matrix is bounded or not.
**Proposition V.6**: _[_11_]_ _Consider S-system (73) with constant \(A(t)=A\in\mathcal{M}_{m\times n}\). If \(m|n\), \(A\) is (dimension) bounded. Otherwise, \(A\) is (dimension) unbounded. Moreover, if \(A\) is (dimension) unbounded, then_
\[\lim_{n\to\infty}\dim(x(t,x_{0}))=\infty.\]
Unlike MV-STP based dynamic system, if we consider the DK-STP based system (77) with \(A(t)=A\), then the following proposition is obvious.
**Proposition V.7**: _Consider the DK-STP based system (77) with \(A(t)=A\). Assume \(A\in\mathcal{M}_{m\times n}\), then the trajectory \(x(t,x_{0})\in\mathbb{R}^{m}\), \(t\geq 1\). That is, \(\mathbb{R}^{m}\) is its invariant subspace._
Proof.: For any \(x(0)=x_{0}\in\mathbb{R}^{\infty}\), it follows from definition that \(x(1)\in\mathbb{R}^{m}\) and all \(x(t)\), \(t\geq 1\) remains in \(\mathbb{R}^{m}\).
Proposition V.7 ensures that under operator \(\,\times\,\) any \(A\in\mathcal{M}\) is a dimension bounded operator for system (77).
**Remark V.8**:
* Consider an S-system \[x(t+1)=A(t)\!\times\!x(t),\quad x(0)=x_{0}\in\mathbb{R}^{\infty},\;A(t)\in \mathcal{M}.\] (80) Similar arguments show that for \(A\in\mathcal{M}\) (79) remains true. Hence (80) is also a dynamic system.
* For weighted DK STPs, the corresponding S-systems can be constructed as \[x(t+1)=A(t)\!\times\!_{w}x(t),\quad x(0)=x_{0}\in\mathbb{R}^{\infty},\;A(t)\in \mathcal{M},\] (81) and \[x(t+1)=A(t)\!\times\!_{w}x(t),\quad x(0)=x_{0}\in\mathbb{R}^{\infty},\;A(t) \in\mathcal{M}\] (82) respectively. It is easy to verify that both (81) and (82) are dynamic systems.
**Definition V.9**: _Assume \(A\in\mathcal{M}_{m\times n}\). The restriction of \(A\) on \(\mathbb{R}^{m}\) is denoted by \(\Pi_{A}\), called the square restriction of \(A\). Precisely speaking, \(A|_{\mathbb{R}^{m}}=\Pi_{A}\), that is,_
\[A\!\times\!x=\Pi_{A}x,\quad\forall x\in\mathbb{R}^{m}. \tag{83}\]
**Proposition V.10**: _For each \(A\in\mathcal{M}_{m\times n}\), there exists a unique \(\Pi_{A}\in\mathcal{M}_{m\times m}\), such that (83) holds._
_Proof._ By the linearity, \(A|_{\mathbb{R}^{m}}\) is uniquely determined by its action on a basis of \(\mathbb{R}^{m}\). Consider
\[A\!\times\!\delta_{m}^{i}:=\xi_{i},\quad i\in[1,m].\]
Then we have
\[A\!\times\!I_{m}=[\xi_{1},\cdots,\xi_{m}]:=\Xi.\]
Using Proposition III.3, we have that
\[A\!\times\!x=A\!\times\!(I_{m}x)=(A\!\times\!I_{m})x=\Xi x.\]
It follows that \(\Xi\) is the unique \(\Pi_{A}\), satisfying (83). We, therefore, have
\[\Pi_{A}=A\!\times\!I_{m}=A\Psi_{n\times m}. \tag{84}\]
\(\Box\)
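For readers who want to experiment with the square restriction, a minimal numpy sketch of (84) is given below; the construction of the bridge matrix \(\Psi_{n\times m}=(I_{n}\otimes\mathbf{1}_{t/n}^{T})(I_{m}\otimes\mathbf{1}_{t/m})\) with \(t=\operatorname{lcm}(n,m)\) is an assumption inferred from the instance \(\Psi_{3\times 2}\) computed in Example VI.12, and the test matrix and vector are the ones of Example V.12 below.

```python
import numpy as np
from math import lcm

def psi(n, m):
    """Bridge matrix Psi_{n x m} = (I_n (x) 1_{t/n}^T)(I_m (x) 1_{t/m}), t = lcm(n, m).
    Assumed construction; it reproduces Psi_{3x2} and Psi_{4x3} used in the examples."""
    t = lcm(n, m)
    left = np.kron(np.eye(n), np.ones((1, t // n)))
    right = np.kron(np.eye(m), np.ones((t // m, 1)))
    return left @ right

A = np.array([[1, 2, -1, 4],
              [3, 1,  0, -2],
              [5, -2, 4, -1]], dtype=float)   # A in M_{3x4}
m, n = A.shape

Pi_A = A @ psi(n, m)          # square restriction, eq. (84)
x = np.array([2., -1., 3.])   # x in R^3

print(Pi_A)                   # [[ 5.  2. 11.] [10.  2. -6.] [13.  4.  1.]]
print(Pi_A @ x)               # [41.  0. 25.], i.e. the DK-STP product of A and x
```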
**Remark V.11**: _Similar argument shows that_
* (For Right DK-STP:) Assume \(A\in\mathcal{M}_{m\times n}\). The restriction of \(A\) on \(\mathbb{R}^{m}\), called the right square restriction of \(A\) and denoted by \(\,\Pi_{A}\), satisfying_ \[A\!\times\!x=\Pi_{A}x,\quad\forall x\in\mathbb{R}^{m},\] (85) _is_ \[\Pi_{A}=A\Phi_{n\times m}.\] (86)
* (For Left Weighted DK-STP:) Assume \(A\in\mathcal{M}_{m\times n}\). The restriction of \(A\) on \(\mathbb{R}^{m}\), called the left weighted square restriction of \(A\) and denoted by \(\Pi_{A}^{w}\), satisfying \[A\!\times\!_{w}x=\Pi_{A}^{w}x,\quad\forall x\in\mathbb{R}^{m},\] (87) is \[\Pi_{A}^{w}=A\Psi_{n\times m}^{w}.\] (88)
* (For Right Weighted DK-STP:) Assume \(A\in\mathcal{M}_{m\times n}\). The restriction of \(A\) on \(\mathbb{R}^{m}\), called the right weighted square restriction of \(A\) and denoted by \(\,\Pi_{A}^{w}\), satisfying \[A\!\times\!_{w}x=\Pi_{A}^{w}x,\quad\forall x\in\mathbb{R}^{m},\] (89) is \[\Pi_{A}^{w}=A\Phi_{n\times m}^{w}.\] (90)
**Example V.12**: _Given_
\[A=\begin{bmatrix}1&2&-1&4\\ 3&1&0&-2\\ 5&-2&4&-1\end{bmatrix}\]
* Using (84), we have \[\Psi_{4\times 3}=\begin{bmatrix}3&0&0\\ 1&2&0\\ 0&2&1\\ 0&0&3\end{bmatrix},\] and \[\Pi_{A}=A\!\times\!I_{3}=A\Psi_{4\times 3}=\begin{bmatrix}5&2&11\\ 10&2&-6\\ 13&4&1\end{bmatrix}.\] Let \(x=(2,-1,3)^{T}\). Then a numerical computation shows that \[A\!\times\!x=\Pi_{A}x=(41,0,25)^{T}.\]
* Consider the right DK-STP, then \[\Phi_{4\times 3}=\begin{bmatrix}1&1&1\\ 1&1&1\\ 1&1&1\\ 1&1&1\end{bmatrix},\] and \[\Pi_{A}=A\Phi_{4\times 3}=\begin{bmatrix}6&6&6\\ 2&2&2\\ 6&6&6\end{bmatrix}.\]
* Consider a weighted left DK-STP by using normal distribution for weights. From Example V.4 we have \[W_{3}=[0.4602,0.5,0.4602]^{T},\] \[W_{4}=[0.4207,0.4602,0.4603,0.4207]^{T}.\]
Then
\[\Psi_{4\times 3}^{w}=\begin{bmatrix}0.6355&0&0\\ 0.1936&0.4221&0\\ 0&0.4221&0.1936\\ 0&0&0.6355\end{bmatrix},\]
and
\[\Pi_{A}^{w}=A\Psi_{4\times 3}^{w}=\begin{bmatrix}1.0227&0.4221&2.3484\\ 2.1001&0.4221&-1.2710\\ 2.7902&0.8443&0.1389\end{bmatrix}.\]
4. Considering a weighted right DK-STP by using the same weights as in (iii), we have \[\Phi_{4\times 3}^{w}=\begin{bmatrix}0.1936&0.2301&0.2118\\ 0.1936&0.1936&0.2301\\ 0.2301&0.1936&0.1936\\ 0.2118&0.2301&0.1936\end{bmatrix},\] and \[\Pi_{A}^{w}=A\Phi_{4\times 3}^{w}=\begin{bmatrix}1.1979&1.3441&1.2528 \\ 0.3509&0.4237&0.4782\\ 1.2894&1.3076&1.1795\end{bmatrix}.\]
### _Generalized Cayley-Hamilton Theorem_
The following result is called the generalized Cayley-Hamilton theorem, which extends Cayley-Hamilton theorem to arbitrary matrices.
**Theorem V.13**: _(Generalized Cayley-Hamilton Theorem) Let \(A\in\mathcal{M}_{m\times n}\) and \(r=\min(m,n)\). Set_
\[\Pi(A):=\begin{cases}\Pi_{A},\quad r=m,\\ \Pi_{A^{T}},\quad r=n,\end{cases}\]
_and denote by \(p(x)=x^{r}+p_{r-1}x^{r-1}+\cdots+p_{0}\) the characteristic polynomial of \(\Pi(A)\). Then_
\[A^{<r+1>}+p_{r-1}A^{<r>}+\cdots+p_{0}A=0. \tag{91}\]
_Proof._ First, assume \(m\leq n\). By definition and using (84), we have
\[\Pi_{A}^{m}+p_{m-1}\Pi_{A}^{m-1}+\cdots+p_{0}I_{m}\] \[=(A\Psi_{(n,m)})^{m}+p_{m-1}(A\Psi_{(n,m)})^{m-1}+\cdots+p_{0}I_{m}\] \[=A^{<m>}\Psi_{(n,m)}+p_{m-1}A^{<m-1>}\Psi_{(n,m)}+\cdots+p_{0}I_{ m}=0.\]
Multiplying \(A\) on right side yields
\[A^{<m+1>}+p_{m-1}A^{<m>}+\cdots+p_{0}A=0,\]
which verifies (91).
Next, assume \(n<m\). Similar argument leads to
\[(A^{T})^{<n+1>}+p_{n-1}A^{<n>}+\cdots+p_{0}A^{T}=0.\]
Taking transpose on both sides yields (91). \(\Box\)
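A quick numerical check of (91) can be scripted as follows; the DK-STP power \(A^{<k>}=A(\Psi_{n\times m}A)^{k-1}\) and the bridge-matrix construction used here are assumptions consistent with the identity \(A^{<m>}\Psi_{n\times m}=\Pi_{A}^{m}\) appearing in the proof above.

```python
import numpy as np
from math import lcm

def psi(n, m):
    t = lcm(n, m)
    return np.kron(np.eye(n), np.ones((1, t // n))) @ np.kron(np.eye(m), np.ones((t // m, 1)))

def dk_power(A, k):
    """DK-STP power A^{<k>} = A (Psi_{n x m} A)^{k-1} for A in M_{m x n} (assumed definition)."""
    m, n = A.shape
    P = psi(n, m)
    out = A
    for _ in range(k - 1):
        out = out @ P @ A
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))          # m = 3 <= n = 4, so r = m
m, n = A.shape
Pi_A = A @ psi(n, m)

p = np.poly(Pi_A)                        # [1, p_{m-1}, ..., p_0], char. polynomial of Pi_A
# Generalized Cayley-Hamilton, eq. (91): A^{<m+1>} + p_{m-1} A^{<m>} + ... + p_0 A = 0
residual = sum(p[j] * dk_power(A, m + 1 - j) for j in range(m + 1))
print(np.max(np.abs(residual)))          # ~1e-14, i.e. zero up to rounding
```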
**Remark V.14**:
1. _From the proof one sees easily that: using the characteristic function of_ \(\Pi_{A}\) _we can have a (_91_) with_ \(r=m\) _and using the characteristic function of_ \(\Pi_{A^{T}}\) _we can have another (_91_) with_ \(r=n\)_. Both of them are correct, but we prefer to choose the lower order one._
2. _To see this is a generalization of the classical Cayley-Hamilton Theorem, it is clear that when_ \(m=n\) _(_91_) degenerates to_ \[A^{n+1}+p_{n-1}A^{n}+\cdots+p_{0}A=0.\] (92) _Deleting_ \(A\) _yields the classical Cayley-Hamilton Theorem. (Even if_ \(A\) _is singular, it still can be deleted from (_92_), because selecting a nonsingular sequence_ \(\{A_{n}\}\) _with_ \(\lim_{n\to\infty}A_{n}=A\) _proves the required equality.)_
3. _We may define a formal identity_ \(I_{m\times n}\in R(m\times n,\mathbb{F})\)_, which satisfies_ \[I_{m\times n}\mathbin{\raisebox{0.0pt}{\scalebox{1.2}{$\times$}}}A=A\mathbin{ \raisebox{0.0pt}{\scalebox{1.2}{$\times$}}}I_{m\times n}=A,\quad\forall A\in \mathcal{M}_{m\times n}.\] (93) _Then (_91_) can be written as_ \[A^{<r>}+p_{r-1}A^{<r-1>}+\cdots+p_{0}I_{m\times n}=0.\] (94) _But remember that here_ \(I_{m\times n}\) _is not a matrix, so (_94_) is only a convenient formal expression._
4. _Let_ \(p(x)=x^{r}+p_{r-1}x^{r-1}+\cdots+p_{0}\) _be any annihilating polynomial of_ \(\Pi_{A}\) _or_ \(\Pi_{A^{T}}\)_. Then (_91_) remains available. Note that now in general_ \(r\neq\min(m,n)\)_. Particularly, we are interested in the minimal annihilating polynomial._
**Remark V.15**: _Similar results are all correct for right DK-STP, weighted left DK-STP, and weighted right DK-STP. For instance, we consider the right DK-STP. We have Generalized Cayley-Hamilton Theorem for right DK-STP as follows: Let \(A\in\mathcal{M}_{m\times n}\) and \(r=\min(m,n)\). Set_
\[\Pi\left(A\right):=\left\{\begin{array}{l}\Pi_{A},\quad r=m,\\ \Pi_{A^{T}},\quad r=n,\end{array}\right.\]
_and denote by \(q(x)=x^{r}+q_{r-1}x^{r-1}+\cdots+q_{0}\) the characteristic polynomial of \(\Pi\left(A\right)\). Then_
\[A^{(r+1)}+q_{r-1}A^{(r)}+\cdots+q_{0}A=0. \tag{95}\]
_We give a numerical example._
**Example V.16**: _Using MatLab, a randomly chosen matrix \(A\in\mathcal{M}_{3\times 4}\) is_
\[A=\begin{bmatrix}0.9572&0.1419&0.7922&0.0357\\ 0.4854&0.4218&0.9595&0.8491\\ 0.8003&0.9157&0.6557&0.9340\end{bmatrix}.\]
1. _(Generalized Cayley-Hamilton Formula Based on (Left) DK-STP)_
We have \[\Pi_{A}=A\Psi_{4\times 3}=\begin{bmatrix}3.0134&1.8682&0.8993\\ 1.8779&2.7625&3.5069\\ 3.3166&3.1430&3.4577\end{bmatrix}.\] The characteristic function of \(\Pi_{A}\) is \[f(x)=x^{3}-ax^{2}+bx-c,\] where \[a=9.2336,\quad b=10.7830,\quad c=2.2366.\] Then it is ready to calculate that \[f(A)=A^{<4>}-aA^{<3>}+bA^{<2>}-cA=0.\]
2. (Generalized Cayley-Hamilton Formula Based on Right DK-STP) We have \[\Pi_{A}=A\Phi_{4\times 3}=\begin{bmatrix}1.9270&1.9270&1.9270\\ 2.7158&2.7158&2.7158\\ 3.3057&3.3057&3.3057\end{bmatrix}\] The characteristic function of \(\Pi_{A}\) is \[g(x)=x^{3}-ax^{2}+bx-c,\] where \[a=7.9485,\quad b=0,\quad c=0.\] Then it is ready to calculate that \[g(A)=A^{(4)}-aA^{(3)}=0.\]
**Definition V.17**: _Let \(A\in\mathcal{M}_{m\times n}\)._
1. _The_ \(\Pi\)_-determinant of_ \(A\) _is defined by_ \[Det(A):=\begin{cases}\det(\Pi_{A}),\quad m\leq n,\\ \det(\Pi_{A^{T}}),\quad m>n.\end{cases}\] (96)
2. \(A\) _is said to be_ \(\Pi\)_-invertible, if_ \(Det(A)\neq 0\)_._
**Proposition V.18**: _The \(\Pi\)-determinant \(Det\) has following properties._
1. \(Det\) _is a generalization of_ \(\det\)_. That is,_ \[Det(A)=\det(A),\quad A\in\mathcal{M}_{n\times n}.\] (97)
2. \[Det(A\times B)=Det(A)Det(B),\quad A,B\in\mathcal{M}.\] (98)
3. _If_ \(Det(A)\neq 0\)_, then_ \(A\) _is of full rank. That is, if_ \(A\in\mathcal{M}_{m\times n}\)_, then_ \[Det(A)\neq 0\Rightarrow\operatorname{rank}(A)=\min(m,n).\] (99)
The converse is not true.
Proof.:
1. Since for a square matrix \(A\) we have \(\Pi_{A}=A\), the conclusion follows.
2. Let \(A\in\mathcal{M}_{m\times n}\). Since \(\Pi_{A}=A\Psi_{n\times m}\), if \(\operatorname{rank}(A)<\min(m,n)\), then \(\Pi_{A}\) is singular. Hence, \(Det(A)=0\), and (99) is verified. The following counterexample shows that the converse is not true. Consider \[A=\begin{bmatrix}1&-2&1\\ 1&0&0\end{bmatrix},\] which is of full rank. But \[\Pi_{A}=A\Psi_{3,2}=A\begin{bmatrix}2&0\\ 1&1\\ 0&2\end{bmatrix}=\begin{bmatrix}0&0\\ 2&0\end{bmatrix},\] which is singular.
**Theorem V.19**: _Assume \(A\in\mathcal{M}_{m\times n}\) is \(\Pi\)-invertible, then there exists a unique \(B\in\mathcal{M}_{m\times n}\), (specified by \(B_{1}\) for \(m\leq n\) and \(B_{2}\) for \(m>n\)) called the \(\Pi\)-inverse of \(A\), such that_
\[\begin{cases}B_{1}\,\times\,A\,\times\,I_{m}=I_{m},\quad m\leq n,\\ I_{n}\,\times\,A\,\times\,B_{2}=I_{n},\quad m>n.\end{cases} \tag{100}\]
Proof.: First, assume \(m\leq n\):
Let \(x^{m}+p_{m-1}x^{m-1}+\cdots+p_{0}\) be the characteristic function of \(\Pi_{A}\). Since \(A\) is \(\Pi\)-invertible, \(p_{0}\neq 0\). Then
\[\begin{split}&(A\Psi_{n\times m})^{m}+p_{m-1}(A\Psi_{n\times m})^ {m-1}+\cdots+p_{0}I_{m}\\ &=A^{<m>}\Psi_{n\times m}+p_{m-1}A^{<m-1>}\Psi_{n\times m}+\cdots\\ &\quad+p_{1}A\Psi_{n\times m}+p_{0}I_{m}\\ &=\left[(A^{<m>}+p_{m-1}A^{<m-1>}+\cdots+p_{1}A)\right]\Psi_{n\times m} \quad+p_{0}I_{m}\\ &=\left\{\left[A^{<m-1>}+p_{m-1}A^{<m-2>}+\cdots+p_{2}A\right]\right. \left.\times A\right.\\ &\quad+\left.p_{1}(\Psi_{m\times n}\Psi_{n\times m})^{-1}\Psi_{m\times n} \Psi_{n\times m}A\right\}\Psi_{n\times m}+p_{0}I_{m}\\ &=\left\{\left[A^{<m-1>}+p_{m-1}A^{<m-2>}+\cdots+p_{2}A\right. \\ &\quad+\left.p_{1}(\Psi_{m\times n}\Psi_{n\times m})^{-1}\Psi_{m\times n} \right]\Psi_{n\times m}A\right\}\Psi_{n\times m}+p_{0}I_{m}\end{split} \tag{101}\]
Set
\[\begin{split}& B_{1}:=A^{(-1)}=-\frac{1}{p_{0}}\left[A^{<m-1>}+p_{m -1}A^{<m-2>}+\cdots+p_{2}A\right.\\ &\quad+\left.p_{1}(\Psi_{m\times n}\Psi_{n\times m})^{-1}\Psi_{m \times n}\right],\end{split} \tag{102}\]
which is called the \(\Pi\)-inverse of \(A\). Then we have
\[(B_{1}\,\times\,A)\Psi_{n\times m}=B_{1}\,\times\,A\,\times\,I_{m}=I_{m}.\]
Next, assume \(m>n\):
Let the characteristic function of \(\Pi_{A^{T}}\) be \(x^{n}+q_{n-1}x^{n-1}+\cdots+q_{0}\). Using the previous proof, we have that
\[B_{2}^{T}\,\times\,A^{T}\,\times\,I_{n}=I_{n}.\]
That is,
\[I_{n}\,\times\,A\,\times\,B_{2}=I_{n},\]
where
\[\begin{array}{l}B_{2}^{T}:=(A^{T})^{(-1)}=-\frac{1}{q_{0}}\left[(A^{T})^{<n-1>}+ q_{n-1}(A^{T})^{<n-2>}+\cdots\right.\\ \left.+\cdots+q_{2}(A^{T})+q_{1}(\Psi_{n\times m}\Psi_{m\times n})^{-1}\Psi_{n \times m}\right].\end{array} \tag{102}\]
That is,
\[\begin{array}{l}B_{2}:=(A)^{(-1)}=-\frac{1}{q_{0}}\left[A^{<n-1>}+q_{n-1}A^{< n-2>}+\cdots\right.\\ \left.+q_{2}A+q_{1}\Psi_{m\times n}(\Psi_{n\times m}\Psi_{m\times n})^{-1} \right].\end{array} \tag{103}\]
\(\Box\)
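The explicit \(\Pi\)-inverse formula (102) can also be checked numerically; the sketch below assumes the same bridge-matrix construction and DK-STP power as in the earlier snippet, with \((\Psi_{m\times n}\Psi_{n\times m})^{-1}\Psi_{m\times n}\) playing the role of the formal-identity term.

```python
import numpy as np
from math import lcm

def psi(n, m):
    t = lcm(n, m)
    return np.kron(np.eye(n), np.ones((1, t // n))) @ np.kron(np.eye(m), np.ones((t // m, 1)))

def dk_power(A, k):                      # A^{<k>} = A (Psi A)^{k-1}, assumed DK-STP power
    m, n = A.shape
    P = psi(n, m)
    out = A
    for _ in range(k - 1):
        out = out @ P @ A
    return out

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))          # m = 2 < n = 3, generically Pi-invertible
m, n = A.shape
P_nm, P_mn = psi(n, m), psi(m, n)
Pi_A = A @ P_nm

p = np.poly(Pi_A)                        # [1, p_{m-1}, ..., p_0]
p0, p1 = p[-1], p[-2]

# Pi-inverse B_1 from (102); the p_1 term uses (Psi_mn Psi_nm)^{-1} Psi_mn
B1 = -(1.0 / p0) * (
    sum(p[j] * dk_power(A, m - 1 - j) for j in range(m - 1))   # A^{<m-1>} + ... + p_2 A
    + p1 * np.linalg.inv(P_mn @ P_nm) @ P_mn
)

print(B1 @ P_nm @ A @ P_nm)              # B_1 x~ A x~ I_m  ->  I_2 up to rounding, cf. (100)
```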
**Remark V.20**: _If we use formal identity, then \(B_{1}\) and \(B_{2}\) can be expressed as_
\[\begin{array}{l}B_{1}:=A^{(-1)}=-\frac{1}{p_{0}}\left[A^{<m-1>}+p_{m-1}A^{< m-2>}+\cdots\right.\\ \left.+p_{2}A+p_{1}I_{m\times n}\right],\end{array} \tag{104}\]
_and_
\[\begin{array}{l}B_{2}:=(A)^{(-1)}=-\frac{1}{q_{0}}\left[A^{<n-1>}+q_{n-1}A^{< n-2>}+\cdots\right.\\ \left.+q_{2}A+q_{1}I_{n\times m}\right].\end{array} \tag{105}\]
_Moreover, (100) can be expressed in a more elegant form as_
\[\begin{cases}A\times B_{1}\times I_{m}=B_{1}\times A\times I_{m}=I_{m},\quad m\leq n,\\ I_{n}\times B_{2}\times A=I_{n}\times A\times B_{2}=I_{n},\quad m>n.\end{cases} \tag{106}\]
**Remark V.21**: _The formal identity \(I_{m\times n}\) is considered as the identity of the ring \(R(m\times n,\mathbb{F})\). Precisely speaking, it is the identity of the semi-group \((\mathcal{M}_{m\times n},\,\times\,)\). We do not consider it as a member of the group \((\mathcal{M}_{m\times n},+)\), because it may cause some trouble._
As a convention, we define
\[\Pi_{I_{m\times n}}:=I_{m}. \tag{107}\]
To express dimension-free eigenvectors, we set
\[\mathbb{C}^{\infty}:=\bigcup_{n=1}^{\infty}\mathbb{C}^{n}.\]
**Definition V.22**: _Let \(A\in\mathcal{M}_{m\times n}\) be a real or complex matrix._
1. \(\lambda\in\mathbb{C}\) _is called a_ \(\Pi\)_-eigenvalue of_ \(A\)_, if_ \[\begin{cases}Det(A-\lambda I_{m\times n})=0,\quad m\leq n,\\ Det(A^{T}-\lambda I_{n\times m})=0,\quad n\leq m.\end{cases}\] (108)
_The set of eigenvalues is denoted by_ \(\sigma(A)\)_._
2. _Let_ \(\lambda\in\sigma(A)\)_._ \(0\neq x\in\mathbb{C}^{\infty}\) _is called an eigenvector w.r.t. eigenvalue_ \(\lambda\)_, if_ \[\begin{cases}A\times x=\lambda x,\quad m\leq n,\\ A^{T}\times x=\lambda x,\quad n\leq m.\end{cases}\]
The following result is an immediate consequence of the above definition.
**Proposition V.23**: _Let \(A\in\mathcal{M}_{m\times n}\) be a real or complex matrix._
1. \(\lambda\in\mathbb{C}\) _is a_ \(\Pi\)_-eigenvalue of_ \(A\)_, if and only if,_ \(\lambda\) _is an eigenvalue of_ \(\Pi(A)\)_._
2. \(x\in\mathbb{C}^{\infty}\) _is a_ \(\Pi\)_-eigenvector of_ \(A\) _w.r.t._ \(\lambda\)_, if and only if,_ \(x\) _is an eigenvector of_ \(\Pi(A)\) _w.r.t._ \(\lambda\)_. Hence_ \(0\neq x\in\mathbb{C}^{r}\) _and_ \(r=\min(m,n)\)_._
**Remark V.24**: _Corresponding to the right DK-STP, the left weighted DK-STP, and the right weighted DK-STP, the \(\Pi\)-eigenvalues and \(\Pi\)-eigenvectors, \(\Pi_{w}\)-eigenvalues and \(\Pi_{w}\)-eigenvectors, and \(\Pi_{w}\)-eigenvalues and \(\Pi_{w}\)-eigenvectors can also be defined. Moreover, Proposition V.23 remains true in each case._
### _DK-STP based Continuous-time Systems_
This subsection considers DK-STP based continuous-time dynamic systems, which is defined as
\[\dot{x}(t)=A\,\times\,x(t),\quad x(t),x(0)=x_{0}\in\mathbb{R}^{\infty},A\in\mathcal{M}. \tag{109}\]
A straightforward computation verified its solution.
**Proposition V.25**: _The solution of (109) is_
\[x(t)=x_{0}\vec{+}\sum_{i=1}^{\infty}\frac{t^{i}}{i!}A^{<i>}\,\times\,x_{0}. \tag{110}\]
It is natural to define an "exponential" function as
\[Exp(A):=I_{m\times n}+\sum_{i=1}^{\infty}\frac{1}{i!}A^{<i>}. \tag{111}\]
Since \(I_{m\times n}\) is a formal identity, we consider \(Exp(A)\) as a linear operator as
\[Exp(A):\mathbb{R}^{\infty}\rightarrow\mathbb{R}^{\infty}.\]
Though \(Exp\) is very similar to an exponential function, precisely speaking it is not an exponential function from \(\mathcal{M}_{m\times n}\) to \(\mathcal{M}_{m\times n}\). Hence, we denote it by \(Exp\), but not \(\exp\).
Using this notation, the solution of (109) can be expressed as
\[x(t)=Exp(At)\,\times\,x_{0}. \tag{112}\]
Assume \(A\in\mathcal{M}_{m\times n}\); then no matter what the dimension of \(x_{0}\) is, we have \(x_{1}:=x(1)\in\mathbb{R}^{m}\). Hence the solution (110) can be expressed as follows:
\[\begin{array}{l}x(t)=x_{0}\vec{+}\sum_{i=1}^{\infty}\frac{t^{i}}{i!}A^{<i>}\,\times\,x_{0}=x_{0}\vec{+}x_{1}\vec{+}\sum_{i=2}^{\infty}\frac{t^{i}}{i!}\Pi_{A}^{i-1}x_{1},\end{array} \tag{113}\]
where \(x_{1}=A\,\times\,x_{0}\in\mathbb{R}^{m}\).
Decompose
\[x_{1}=\xi_{0}+\eta_{0},\]
where
\[\xi_{0}\in\mathrm{Span}\{\mathrm{Col}(\Pi_{A})\},\quad\eta_{0}\in\mathrm{Span}^{ \perp}\{\mathrm{Col}(\Pi_{A})\}.\]
Then there exists a \(\xi\in\mathbb{R}^{m}\) such that
\[\Pi_{A}\xi=\xi_{0}.\]
If \(\Pi_{A}\) is nonsingular, then
\[\xi=\Pi_{A}^{-1}\xi_{0}=\Pi_{A}^{-1}x_{1}.\]
Then (113) becomes
\[\begin{array}{rcl}x(t)&=&x_{0}\vec{+}\sum\limits_{i=1}^{\infty}\frac{t^{i}}{i!}A^{<i>}\,\times\,x_{0}\\ &=&x_{0}\vec{+}x_{1}\vec{+}\sum\limits_{i=2}^{\infty}\frac{t^{i}}{i!}\Pi_{A}^{i}\xi\\ &=&x_{0}\vec{+}[x_{1}-\xi-\Pi_{A}\xi+\exp(\Pi_{A}t)\xi].\end{array} \tag{114}\]
Note that (114) is a closed-form (or finite term) solution.
**Remark V.26**: _Similarly, we can define_
1. _(for right DK-STP)_ \[\dot{x}(t)=A\,\mbox{\raisebox{0.86pt}{$\times$}}\,x(t),\quad x(t),x(0)=x_{0} \in\mathbb{R}^{\infty},A\in\mathcal{M};\] (115)
2. _(for left weighted DK-STP)_ \[\dot{x}(t)=A\,\mbox{\raisebox{0.86pt}{$\times$}}\,_{w}x(t),\quad x(t),x(0)=x_{0 }\in\mathbb{R}^{\infty},A\in\mathcal{M};\] (116)
3. _(for right weighted DK-STP)_ \[\dot{x}(t)=A\,\mbox{\raisebox{0.86pt}{$\times$}}\,_{w}x(t),\quad x(t),x(0)=x_{0 }\in\mathbb{R}^{\infty},A\in\mathcal{M};\] (117)
The arguments for DK-STP based dynamic system (109) remain available for (115)-(117).
To save space, hereafter we will not mention such extensions anymore, but they all remain available for the rest of this paper.
## VI STP-based General Linear Algebra
We refer to [24, 40] for the concepts and basic properties of Lie algebra.
**Definition VI.1**: _[_3_]_ _A vector space \(V\) with a binary operator, called Lie bracket, is a Lie algebra, if_
1. _(Bi-linearity)_ \[[ax+by,z]=a[x,z]+b[y,z],\quad x,y,z\in V,\ a,b\in\mathbb{F}.\] (118)
2. _(Skew-symmetry)_ \[[x,y]=-[y,x],\quad x,y\in V.\] (119)
3. _(Jacobi Identity)_ \[[[x,y],z]+[[y,z],x]+[[z,x],y]=0,\quad x,y,z\in V.\] (120)
**Remark VI.2**: _[_3_]_ _Consider \(\mathcal{M}_{n\times n}\), and define_
\[[A,B]=AB-BA,\quad A,B\in\mathcal{M}_{n\times n}. \tag{121}\]
_Then \((\mathcal{M}_{n\times n},[\cdot,\cdot])\) is a Lie algebra, called the general linear algebra, and denoted by \(gl(n,\mathbb{F})\). Its corresponding Lie group is the general linear group \(GL(n,\mathbb{F})\)._
In the following one will see that the DK-STP can extend the Lie algebraic structure from the set of square matrices \(\mathcal{M}_{n\times n}\) to non-square case \(\mathcal{M}_{m\times n}\). Its corresponding Lie group will also be constructed later.
**Definition VI.3**: _Consider \(\mathcal{M}_{m\times n}\). Using \(\,\mbox{\raisebox{0.86pt}{$\times$}}\,\), a Lie bracket over \(\mathcal{M}_{m\times n}\) is defined as_
\[[A,B]\,\mbox{\raisebox{0.86pt}{$\times$}}\,:=A\,\mbox{\raisebox{0.86pt}{$ \times$}}\,B-B\,\mbox{\raisebox{0.86pt}{$\times$}}\,A,\quad A,B\in\mathcal{M}_ {m\times n}. \tag{122}\]
**Remark VI.4**: _If \(m=n\), then (122) is degenerated to (121). Hence, (122) is an extension of (121) to non-square case._
**Proposition VI.5**: \(\mathcal{M}_{m\times n}\) _with Lie bracket defined by (122) is a Lie algebra, called the STP general linear algebra, denoted by \(gl(m\times n,\mathbb{F})\)._
_Proof._ It can be verified by a straightforward computation. \(\Box\)
**Definition VI.6**: _[_24_]_ _Let \(V\) be a vector space with a bracket \([\cdot,\cdot]:V\to V\), and \(\mathcal{L}=(V,[\cdot,\cdot])\) be a Lie algebra. \(H\subset V\) be its subspace._
1. _If_ \(\mathcal{H}=(H,[\cdot,\cdot])\) _is also a Lie algebra,_ \(\mathcal{H}\) _is called a Lie subalgebra of_ \(\mathcal{L}\)_._
2. _If_ \(\mathcal{H}\subset\mathcal{L}\) _is a Lie subalgebra,_ \(\mathcal{H}\) _is called an ideal, if_ \[[x,\mathcal{H}]\subset\mathcal{H},\quad\forall x\in V.\] (123)
**Example VI.7**: _Consider \(gl(m\times n,\mathbb{F})\). Assume \(r=\gcd(m,n)\) and \(s|r\)._
1. _Denote by_ \[D_{s}:=\left\{A_{1}\dot{+}\cdots\dot{+}A_{s}\mid A_{i}\in\mathcal{M}_{m/s \times n/s},\ i\in[1,s]\right\},\] (124) _and define_ \[\mathcal{D}_{s}:=\left(D_{s},[\cdot,\cdot]\,\mbox{\raisebox{0.86pt}{$ \times$}}\,\right)\] (125) Then_ \(\mathcal{D}_{s}\) _is a Lie subalgebra of_ \(gl(m\times n,\mathbb{F})\)_._ _Using Proposition II.6, this claim is obvious._
2. _As a special case of (i), we define_ \[\mathcal{I}_{s}:=I_{s}\otimes\mathcal{M}_{m/s\times n/s}\subset\mathcal{D}_{s}.\] (126) Then it is easy to verify that_ \(\mathcal{I}_{s}\) _is a Lie subalgebra of_ \(\mathcal{D}_{s}\)_._
3. _Let_ \(1=r_{0}<r_{1}<\cdots<r_{t}=r\) _with_ \(r_{i}|r_{i+1}\)_. Then_ \(\mathcal{I}_{r_{t}}\subset\mathcal{I}_{r_{t-1}}\subset\cdots\subset\mathcal{I}_ {r_{0}}\) _is a set of nested Lie subalgebra._
Let \(\mathcal{L}\) be a Lie algebra. Denote its center as
\[Z(\mathcal{L}):=\{z\in\mathcal{L}\mid[z,x]=0,\forall x\in\mathcal{L}\}.\]
It is easy to verify that when \(m\neq n\):
\[Z(gl(m\times n,\mathbb{F}))=\{0\}.\]
This fact implies that \(R(m\times n,\mathbb{F})\) does not have identity when \(m\neq n\), because if there is an identity, \(e\neq 0\), then \(e\in Z(gl(m\times n,\mathbb{F}))\), which leads to a contradiction.
**Definition VI.8**: _Let \(\mathcal{L}_{i}=(V_{i},[\cdot,\cdot]_{i})\), \(i=1,2\) be two Lie algebras._
1. _If there exists a mapping_ \(\varphi:V_{1}\to V_{2}\)_, such that_ (i)__ \[\varphi(x_{1}+_{1}x_{2})=\varphi(x_{1})+_{2}\varphi(x_{2}),\quad x_{1},x_{2}\in V _{1};\] (127) (ii) \[\varphi[x_{1},x_{2}]_{1}=[\varphi(x_{1}),\varphi(x_{2})]_{2},\quad x_{1},x_{2} \in V_{1};\] (128) _Then_ \(\mathcal{L}_{1}\) _is homomorphic to_ \(\mathcal{L}_{2}\)_, and_ \(\varphi\) _is a homomorphism._
2. _If_ \(\varphi:V_{1}\mapsto V_{2}\) _is a one-to one and onto homomorphism, and_ \(\varphi^{-1}:V_{2}\mapsto V_{1}\) _is also a homomorphism, then_ \(\varphi\) _is called an isomorphism._
The following proposition can be verified by definition immediately.
**Proposition VI.9**: _Consider \(R(m\times n,\mathbb{F})\)._
1. _If_ \(H\subset R\) _is a sub-ring of_ \(R(m\times n,\mathbb{F})\)_, then_ \((H,[\cdot,\cdot]_{\times})\) _is a sub-algebra of_ \(gl(m\times n,\mathbb{F})\)_._
2. _If_ \(\pi:R(m\times n,\mathbb{F})\to R(m\times n,\mathbb{F})\) _is a ring isomorphism, then_ \(\pi:gl(m\times n,\mathbb{F})\to gl(m\times n,\mathbb{F})\) _is also a Lie algebra isomorphism._
Since the STP general linear algebra can clearly be expressed in matrix form, many properties can be easily verified via the matrix-form expression. As an example, we consider its Killing form.
**Definition VI.10**: _[_24_]_ _Let \(\mathcal{L}\) be a Lie algebra._
1. _For a fixed_ \(A\in\mathcal{L}\) _the linear mapping_ \(ad_{A}:\mathcal{L}\to\mathcal{L}\)_, defined by_ \[ad_{A}:X\mapsto[A,X],\quad X\in\mathcal{L},\] (129) _is called the adjoint mapping of_ \(A\)_._
2. _A bilinear form on_ \(\mathcal{L}\)_, defined by_ \[(X,Y):=\mathrm{trace}(ad_{X}ad_{Y}),\quad X,Y\in\mathcal{L},\] (130) _is called the Killing form of_ \(\mathcal{L}\)_._
**Proposition VI.11**: _Consider \(gl(m\times n,\mathbb{F})\). Then_
\[\begin{array}{rcl}(X,Y)&=&\mathrm{trace}\left\{\left[I_{n}\otimes(X\Psi_{n\times m})-(X^{T}\Psi_{m\times n})\otimes I_{m}\right]\right.\\ &&\left.\left[I_{n}\otimes(Y\Psi_{n\times m})-(Y^{T}\Psi_{m\times n})\otimes I_{m}\right]\right\},\\ &&X,Y\in gl(m\times n,\mathbb{F}).\end{array} \tag{131}\]
Proof: Assume \(Z\in\mathcal{M}_{m\times n}\), and both pair \((A,Z)\) and pair \((Z,B)\) satisfy dimension matching condition. Using column stacking form, we have[5]
\[\begin{array}{l}V_{c}(AZ)=(I_{n}\otimes A)V_{c}(Z),\\ V_{c}(ZB)=(B^{T}\otimes I_{m})V_{c}(Z).\end{array} \tag{132}\]
Let \(A,B\in\mathcal{M}_{m\times n}\). Using (132), we have
\[\begin{array}{l}V_{c}(A\,\times\,X)=V_{c}(A\Psi_{n\times m}X)\\ =\left[I_{n}\otimes(A\Psi_{n\times m})\right]V_{c}(X).\end{array}\]
and
\[\begin{array}{l}V_{c}(X\,\times\,A)=V_{c}(X\Psi_{n\times m}A)\\ =\left[(A^{T}\Psi_{m\times n})\otimes I_{m}\right]V_{c}(X).\end{array}\]
It follows that
\[\begin{array}{l}V_{c}(ad_{A}X)=\left\{\left[I_{n}\otimes(A\Psi_{n\times m})\right]-\left[(\Psi_{n\times m}A)^{T}\otimes I_{m}\right]\right\}V_{c}(X)\\ =\left\{\left[I_{n}\otimes(A\Psi_{n\times m})\right]-\left[(A^{T}\Psi_{m\times n})\otimes I_{m}\right]\right\}V_{c}(X).\end{array}\]
Hence
\[\begin{array}{l}ad_{A}=\left[I_{n}\otimes(A\Psi_{n\times m})\right]\\ -\left[(A^{T}\Psi_{(m,n)})\otimes I_{m}\right].\end{array}\]
(131) follows immediately.
**Example VI.12**: _Assume_
\[A=\begin{bmatrix}1&0&-1\\ 0&1&1\end{bmatrix};\;B=\begin{bmatrix}0&1&0\\ -1&0&1\end{bmatrix};\;C=\begin{bmatrix}0&0&1\\ 0&1&0\end{bmatrix},\]
_check the following properties:_
(i)__
\[(A,B)=(B,A). \tag{133}\]
\[\begin{array}{l}\Psi_{3\times 2}=\left(I_{3}\otimes\mathbf{1}_{2}^{T} \right)\left(I_{2}\otimes\mathbf{1}_{3}\right)\\ =\begin{bmatrix}2&0\\ 1&1\\ 0&2\end{bmatrix}\end{array}\]
\[\begin{array}{l}ad_{A}=\left[I_{3}\otimes(A\Psi_{3\times 2})\right]-\left[(A^{T}\Psi_{3 \times 2})\otimes I_{2}\right]\\ =\begin{bmatrix}0&-2&-1&0&0&0\\ 1&1&0&-1&0&0\\ 0&0&1&-2&-2&0\\ 0&0&1&-2&0&-2\\ 2&0&0&0&0&-2\\ 0&2&0&0&1&1\end{bmatrix}\end{array}\]
\[\begin{array}{l}ad_{B}=\left[I_{3}\otimes(B\Psi_{3\times 2})\right]-\left[(B^{T} \Psi_{3\times 2})\otimes I_{2}\right]\\ =\begin{bmatrix}1&1&1&0&2&0\\ -2&2&0&1&0&2\\ -2&0&0&1&0&0\\ 0&-2&-2&1&0&0\\ 0&0&-1&0&-1&1\\ 0&0&0&-1&-2&0\end{bmatrix}\]
\[ad_{C}=[I_{3}\otimes(C\Psi_{3\times 2})]-\left[(C^{T}\Psi_{3 \times 2})\otimes I_{2}\right]\] \[=\left[\begin{array}{cccccc}0&2&0&0&0&0\\ 1&1&0&0&0&0\\ 0&0&-1&2&-2&0\\ 0&0&1&0&0&-2\\ -2&0&-1&0&0&2\\ 0&-2&0&-1&1&1\end{array}\right]\]
It is ready to verify that
\[(A,B)=(B,A)=35.\]
(ii)
\[(A+B,C)=(A,C)+(B,C). \tag{134}\]
It is easy to calculate that
\[\begin{array}{l}(A,C)=5,\\ (B,C)=-11,\\ (A+B,C)=-6,\end{array}\]
which verifies (132).
(iii)
\[(ad_{A}(B),C)+(B,ad_{A}(C))=0. \tag{135}\]
A straightforward computation shows
\[\begin{array}{l}(ad_{A}(B),C)=-60,\\ (B,ad_{A}(C))=60.\end{array}\]
The equation (135) is satisfied.
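The adjoint matrices and Killing-form values in this example are easy to reproduce; a short numpy sketch based on (130) and the column-stacking expression of \(ad_{A}\) from the proof of Proposition VI.11 is given below (the printed values can be compared with those listed above).

```python
import numpy as np

Psi = np.array([[2., 0.], [1., 1.], [0., 2.]])       # Psi_{3x2} as computed above

def ad(A):
    """Matrix of ad_A on V_c(X): I_n (x) (A Psi) - (A^T Psi^T) (x) I_m."""
    m, n = A.shape                                    # here m = 2, n = 3
    return np.kron(np.eye(n), A @ Psi) - np.kron(A.T @ Psi.T, np.eye(m))

def killing(X, Y):
    return np.trace(ad(X) @ ad(Y))                    # Killing form, eq. (130)

A = np.array([[1., 0., -1.], [0., 1., 1.]])
B = np.array([[0., 1., 0.], [-1., 0., 1.]])
C = np.array([[0., 0., 1.], [0., 1., 0.]])

print(killing(A, B), killing(B, A))                        # symmetry, eq. (133)
print(killing(A + B, C), killing(A, C) + killing(B, C))    # bilinearity, eq. (134)
print(killing(ad(A) @ 0 + np.array(0), 0) if False else "")# (invariance (135) can be checked the same way)
```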
## VII STP-based General Linear Group
**Definition VII.1**: _[_40_]_ _Consider a Linear algebra \(\mathcal{L}=\{V,[\cdot,\cdot]\}\). Define_
\[C(\mathcal{L}):=\{x\in V\mid[x,y]=0,\;\forall y\in V\}, \tag{136}\]
_which is called the center of \(\mathcal{L}\)._
It is obvious that the center of \(\mathcal{L}\) is an ideal of \(\mathcal{L}\)[40].
**Example VII.2**:
1. _Consider_ \(gl(n,\mathbb{F})\)_. It is clear that_ \[C(gl(n,\mathbb{F}))=\{0\}.\] (137)
2. _Let_ \(M\) _be an_ \(n\)_-dimensional manifold. Consider the set of vector fields on_ \(M\)_, denoted by_ \(V(M)\)_. The Lie bracket of_ \(f(x),g(x)\in V(M)\) _is defined as_ _[_26_]_ \[[f(x),g(x)]:=\frac{\partial g(x)}{\partial x}f(x)-\frac{\partial f(x)}{\partial x}g(x).\] (138)
_Now assume_ \(f(x)\in C(V(x))\)_. Taking_ \(g(x)=\frac{\partial}{\partial x_{i}}\) _and setting_ \([f(x),g(x)]=0\)_, a straightforward computation shows_ \(f(x)\) _is independent on_ \(x_{i}\)_,_ \(i\in[1,n]\)_. Hence_ \(f(x)=[a_{1},a_{2},\cdots,a_{n}]\) _is a constant vector. Next, setting_ \(g(x)=(\underbrace{0,\cdots,0}_{i-1},x_{i},\underbrace{0,\cdots,0}_{n-i})^{T}\) _and calculating_ \([f(x),g(x)]\) _yield_ \(a_{i}=0\)_,_ \(i\in[1,n]\)_. We conclude that_
\[C(V(M))=\{0\}. \tag{139}\]
3. _Consider_ \(gl(m\times n,\mathbb{F})\)_. Without loss of generality we assume_ \(m<n\)_. Assume_ \(A\in C(gl(m\times n,\mathbb{F}))\)_, then we have_ \[[A,X]_{\,\times\,}=A\Psi_{n\times m}X-X\Psi_{n\times m}A=0,\quad\forall X\in\mathcal{M}_{m\times n}.\] _In vector form we have_ \[\begin{array}{l}\left[I_{n}\otimes(A\Psi_{n\times m})-(\Psi_{n\times m}A)^{T}\otimes I_{m}\right]V_{c}(X)=0,\\ \forall X\in\mathcal{M}_{m\times n}.\end{array}\] _We conclude that_ \(A\in C(gl(m\times n,\mathbb{F}))\)_, if and only if,_ \[I_{n}\otimes(A\Psi_{n\times m})-(A^{T}\Psi_{m\times n})\otimes I_{m}=0.\] (140)
_Converting it into a linear system yields_
\[\Gamma_{m\times n}V_{r}(A)=0. \tag{141}\]
_Numerical calculation shows that (141) has no non-zero solution for some small_ \(m\) _and_ \(n\)_. (Please refer to Appendix-2 for its numerical form.) So we conclude that (a rigorous proof is still expected)_
\[C(gl(m\times n,\mathbb{F}))=\{0\}. \tag{142}\]
**Remark VII.3**:
1. _For a Lie algebra_ \(\mathcal{L}\)_, its center_ \(C(\mathcal{L})=\{0\}\) _is a necessary condition for the existence of its corresponding Lie group. Because it is easy to check that each_ \(x\in C\) _produces_ \(\exp(x)\)_, which is an identity of_ \(\exp(\mathcal{L})\)_._ \(|C(\mathcal{L})|>1\) _forces_ \(\exp(\mathcal{L})\) _to be non-group._
2. _What I am confused about is: why do classical textbooks on Lie groups and Lie algebras not mention non-zero centers? Is it because only Lie groups and Lie algebras of types (i) and (ii) of Example VII.2 have been considered in these books? Or is there no non-zero center for finite-dimensional Lie algebras?_
**Definition VII.4**: _Consider \(\mathcal{M}_{m\times n}\). Define a product \(\circ:\mathcal{M}_{m\times n}\times\mathcal{M}_{m\times n}\to\mathcal{M}_{m\times n}\) by_ \[A\circ B:=A+B+A\,\times\,B.\] (143)
**Proposition VII.5**: \(g_{m\times n}:=(\mathcal{M}_{m\times n},\circ)\) _is a monoid._
_Proof._ Define
\[e_{m\times n}:=\mathbf{0}_{m\times n}. \tag{144}\]
A straightforward computation shows that
\[e_{m\times n}\circ A=A\circ e_{m\times n}=A,\quad A\in\mathcal{M}_{m\times n}. \tag{145}\]
Let \(e_{0}\) be another identity, then
\[e\circ e_{0}=e=e_{0}.\]
Hence \(e_{0}=e\) is the unique identity.
To see the associativity, we have
\[(A\circ B)\circ C=(A+B+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}B)\circ C\] \[=(A+B+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}B)+C+(A+B+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}B)\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}C\] \[=A+B+C+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}B+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}C+B\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}C+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}B\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}C+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}B+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}C+B\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}C+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}B\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}C+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}B+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}C+B\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}C+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}B+A\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}C+B\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 
1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}C+A\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}B+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}C+B\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}C+A\mathbin{\mathchoice{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}B+A\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}C+B\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}C+A\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}B+A\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}C+B\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}}C+A\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}B+A\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}C+B\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}{\hbox{\raise 1.0pt\hbox{$ \times$}}}}C+A\mathbin{\mathchoice{\hbox{ \raise 1.0pt\hbox{$\times$}}}{\hbox
Finally, we define a mapping
\[\pi:A\mapsto A+I_{m\times n},\]
denote
\[\pi(g_{m\times n}):=g_{m\times n}^{a}, \tag{158}\]
and set
\[\pi(g_{m\times n}^{0}):=GL(m\times n,\mathbb{F}). \tag{159}\]
The product on \(GL(m\times n,\mathbb{F})\) is defined naturally as
\[(I_{m\times n}+A)\circ(I_{m\times n}+B):=I_{m\times n}+A+B+A\,\times\,B.\]
Then the following result is easily verifiable.
**Proposition VII.13**: _\(\left(GL(m\times n,\mathbb{F}),\circ\right)\) is a group._
Applying Theorem VII.16 to the STP algebra and STP group, we have the following result.
**Corollary VII.17**: _Consider \(gl(m\times n,\mathbb{R})\), assume \(1<k=\gcd(m,n)\). Then \(gl(m/k\times n/k,\mathbb{R})\) is a subalgebra of \(gl(m\times n,\mathbb{R})\). Hence \(GL(m/k\times n/k,\mathbb{R})\) is a Lie subgroup of \(GL(m\times n,\mathbb{R})\)._
It is also true for any \(s>1\) with \(s|k\).
Finally, we consider the relationship of \(GL(m\times n,\mathbb{F})\) and \(GL(m,\mathbb{F})\).
**Theorem VII.18**: _Consider \(gl(m\times n,\mathbb{F})\) and \(gl(m,\mathbb{F})\)._
1. _Define_ \(\varphi:gl(m\times n,\mathbb{F})\to gl(m,\mathbb{F})\)_, defined by_ \[\varphi(A):=\Pi_{A}.\] (165) Then_ \(\varphi\) _is a Lie algebra homomorphism, that is,_ \[gl(m\times n,\mathbb{F})\simeq gl(m,\mathbb{F}).\] (166)
2. _Correspondingly, set_ \[GL(m\times n,\mathbb{F}):=Exp(gl(m\times n,\mathbb{F})),\] (167) \[GL(m,\mathbb{F}):=Exp(gl(m,\mathbb{F})).\] Then \(\varphi:GL(m\times n,\mathbb{F})\to GL(m,\mathbb{F})\) is a Lie group homomorphism, that is,_ \[GL(m\times n,\mathbb{F})\simeq GL(m,\mathbb{F}).\] (168)
_Proof._
1. Define \(\varphi:gl(m\times n,\mathbb{F})\to gl(m,\mathbb{F})\) by \(\varphi:A\mapsto\Pi_{A}\). Let \(A,B\in gl(m\times n,\mathbb{F})\). Then \[\varphi([A,B]_{\,\times\,})=\varphi(A\,\times\,B-B\,\times\,A)\] \[=(A\Psi_{n\times m}B-B\Psi_{n\times m}A)\Psi_{n\times m}\] \[=\varphi(A)\varphi(B)-\varphi(B)\varphi(A)\] \[=[\varphi(A),\varphi(B)].\]
2. The proof mimics the proof of Theorem 3.7 of [22].
\(\Box\)
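Part 1 of the theorem can be verified numerically in a few lines; the sketch below uses the same assumed bridge-matrix construction as in the earlier snippets.

```python
import numpy as np
from math import lcm

def psi(n, m):
    t = lcm(n, m)
    return np.kron(np.eye(n), np.ones((1, t // n))) @ np.kron(np.eye(m), np.ones((t // m, 1)))

rng = np.random.default_rng(2)
m, n = 3, 4
A, B = rng.standard_normal((m, n)), rng.standard_normal((m, n))
P = psi(n, m)
phi = lambda X: X @ P                                  # phi(X) = Pi_X

lhs = phi(A @ P @ B - B @ P @ A)                       # phi applied to [A, B]
rhs = phi(A) @ phi(B) - phi(B) @ phi(A)                # [phi(A), phi(B)]
print(np.max(np.abs(lhs - rhs)))                       # ~1e-15, i.e. the bracket is preserved
```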
## VIII Concluding Remarks
In this paper a new STP, called (left) DK-STP and denoted by \(\,\times\,\), has been proposed. Using it, the corresponding ring, Lie algebra, and Lie group are presented. The algebraic objects concerned in this paper can be described as:
\[R(m\times n,\mathbb{F})\xrightarrow{\,\times\,}gl(m\times n,\mathbb{F})\xrightarrow{\,Exp\,}GL(m\times n,\mathbb{F}).\]
The action of \(G(m\times n,\mathbb{F})\) on the dimension-free vector space \(\mathbb{R}^{\infty}\) is also considered, which leads to discrete-time/continuous-time dynamic systems as
\[G(m\times n,\mathbb{F})\xrightarrow{\,\times\,}\mathbb{R}^{\infty}\rightarrow\,\,\text{S-system}\rightarrow\text{dynamic system}.\]
Meanwhile, by introducing the square restriction \(\Pi_{A}\in\mathcal{M}_{m\times m}\) of \(A\in\mathcal{M}_{m\times n}\), some interesting things have been obtained, including eigenvalues, eigenvectors, determinants, invertibility, etc., for non-square matrices. Particularly, the Cayley-Hamilton theory can also be extended to non-square matrices.
In addition, the right DK-STP, weighted (left) DK-STP, and weighted right DK-STP are also briefly introduced. They have similar properties as (left) DK-STP.
This paper may pave a road for further development of STP of matrices.
Though these new STPs show many interesting properties, they cannot be used to replace existing STPs, because they have quite different properties, which makes their functions different. Unlike existing STPs, applications of these new STPs are still waiting to be explored.
Many related topics remain for further study. The following are some of them.
1. Understanding \(gl(m\times n,\mathbb{F})\) and \(GL(m\times n,\mathbb{F})\). The investigation of the non-square general linear group and general linear algebra is only a beginning. Revealing more of their properties is theoretically important and interesting. Particularly, general "non-square" Lie groups and Lie algebras may be developed. Say, group representation of a nonlinear mapping \(\varphi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), etc.
2. Dimension-varying (Control) System:
Fig. 3: Subalgebra to Subgroup
Fig. 2: Mappings among (Semi-)Groups
Consider a control system: \[\begin{cases}\dot{x}(t)=Ax(t)+Bu(t),\quad x(t)\in\mathbb{R}^{n},\ u(t)\in\mathbb{R}^{m},\\ y(t)=Cx(t),\quad y(t)\in\mathbb{R}^{p}.\end{cases}\] (169) If we allow dimension perturbations in \(u(t)\) and \(x(t)\), that is, \(x(t)\) may be disturbed into \(\mathbb{R}^{n\pm r}\) or so, then (169) can be considered as a nominal model. This happens from time to time for natural or artificial systems. For instance, the Internet changes its size because of the varying number of users; in a gene regulatory network, the number of nodes changes because of the birth or death of cells. If we apply the dimension-keeping STP to the nominal model (169) as \[\begin{cases}\dot{x}(t)=A\,\times\,[x(t)\vec{+}\xi(t)]+B\,\times\,[u(t)\vec{+}\eta(t)],\quad x(t)\in\mathbb{R}^{n},\\ y(t)=C\,\times\,[x(t)\vec{+}\zeta(t)],\quad y(t)\in\mathbb{R}^{p},\quad\xi(t),\eta(t),\zeta(t)\in\mathbb{R}^{\infty},\end{cases}\] (170) where \(\xi(t)\), \(\eta(t)\), \(\zeta(t)\) are disturbances of different dimensions, then the overall model does not need to adjust the dimensions of the nominal model to meet the perturbation. This is another reason to call such an STP a dimension-keeping one. The properties of such systems are worth further investigation.
3. Analytic Functions of Non-square Matrices. Using the DK-STP, the analytic functions of non-square matrices are properly defined. Their properties and applications need to be investigated. Say, for a non-square matrix \(A\), using the Taylor expansion with the DK-STP power \(A^{<n>}\) to replace \(A^{n}\), we can also prove the Euler formula \[e^{iA}=\cos(A)+i\sin(A),\quad A\in\mathcal{M}_{m\times n}.\]
|
2309.09652 | Speeding Up Speech Synthesis In Diffusion Models By Reducing Data
Distribution Recovery Steps Via Content Transfer | Diffusion based vocoders have been criticised for being slow due to the many
steps required during sampling. Moreover, the model's loss function that is
popularly implemented is designed such that the target is the original input
$x_0$ or error $\epsilon_0$. For early time steps of the reverse process, this
results in large prediction errors, which can lead to speech distortions and
increase the learning time. We propose a setup where the targets are the
different outputs of forward process time steps with a goal to reduce the
magnitude of prediction errors and reduce the training time. We use the
different layers of a neural network (NN) to perform denoising by training them
to learn to generate representations similar to the noised outputs in the
forward process of the diffusion. The NN layers learn to progressively denoise
the input in the reverse process until finally the final layer estimates the
clean speech. To avoid 1:1 mapping between layers of the neural network and the
forward process steps, we define a skip parameter $\tau>1$ such that an NN
layer is trained to cumulatively remove the noise injected in the $\tau$ steps
in the forward process. This significantly reduces the number of data
distribution recovery steps and, consequently, the time to generate speech. We
show through extensive evaluation that the proposed technique generates
high-fidelity speech in competitive time that outperforms current
state-of-the-art tools. The proposed technique is also able to generalize well
to unseen speech. | Peter Ochieng | 2023-09-18T10:35:27Z | http://arxiv.org/abs/2309.09652v2 | Speeding Up Speech Synthesis in Diffusion Models by Reducing Data Distribution Recovery Steps via Content Transfer.
###### Abstract
Diffusion based vocoders have been criticised for being slow due to the many steps required during sampling. Moreover, the model's loss function that is popularly implemented is designed such that the target is the original input \(x_{0}\) or error \(\epsilon_{0}\). For early time steps of the reverse process, this results in large prediction errors, which can lead to speech distortions and increase the learning time. We propose a setup where the targets are the different outputs of forward process time steps with a goal to reduce the magnitude of prediction errors and reduce the training time. We use the different layers of a neural network (NN) to perform denoising by training them to learn to generate representations similar to the noised outputs in the forward process of the diffusion. The NN layers learn to progressively denoise the input in the reverse process until finally the final layer estimates the clean speech. To avoid 1:1 mapping between layers of the neural network and the forward process steps, we define a skip parameter \(\tau>1\) such that an NN layer is trained to cumulatively remove the noise injected in the \(\tau\) steps in the forward process. This significantly reduces the number of data distribution recovery steps and, consequently, the time to generate speech. We show through extensive evaluation that the proposed technique generates high-fidelity speech in competitive time that outperforms current state-of-the-art tools. The proposed technique is also able to generalize well to unseen speech.
diffusion vocoder speech synthesis denoise time-domain embeddings
## 1 Introduction
The use of deep generative models is prevalent in speech synthesis [15][3][13][24][14][12]. These models use generative adversarial network (GAN) [5] or likelihood-based techniques. GAN-based models such as [12] and [14] exploit the training objective to make the model generate data that are indistinguishable from the training data. While GAN based models can generate high quality speech, they are difficult to train due to instability during the training process [19]. Likelihood speech synthesis-based techniques are composed of autoregressive models such as [21][9][18][35], flow-based models [25][10][7] and variational auto-encoders (VAE) based models [17]. Autoregressive speech synthesis models generate speech in a sequential nature, where the current sample to be generated is conditioned on the previously generated samples. Due to the sequential nature of speech generation, these models require many computations to generate a sample. This limits their ability to be deployed in application where faster real time generation is required. Flow based model utilises specialised architectures to model a normalised probability model. These architectures require optimisation of many parameters during training, and hence can be computationally expensive. VAE based models on the other hand do not work well with high dimensional data [2]. Another type of likelihood-based generative model that is becoming popular for speech synthesis is the diffusion probability model (DPM) [32]. It has been explored in speech synthesis in [15][3][13]. DPMs are composed of two main processes i.e., the forward and reverse process. The forward process involves sequentially adding Gaussian noise to a given distribution until eventually it becomes identical to white noise, i.e., pure Gaussian noise. The reverse process starts with white noise
and recovers the data distribution by sampling. To learn a given target distribution, DPMs require a significant number of diffusion steps during training, resulting in many reverse steps to recover the data distribution during sampling time. Due to this, speech synthesis tools using diffusion generative model are slow, a property that prohibits their real-world deployment. Recognizing this limitation, speech synthesis tools that employ diffusion generative models employ a number of techniques to reduce sampling steps. WaveGrad [3] uses a grid search algorithm (GS) to reduce the sampling noise schedule. The use of grid search to shorten the noise schedule has been criticised for being computationally prohibitive when many noising steps \(N\) are used [15]. BDDM [15] reduces the sampling schedule by first training a generative model for speech synthesis using \(T\) steps, then uses the optimised score network to train a scheduling network to learn a shorter noise schedule \(N<<T\) to be used during sampling. In this work, we explore the idea of using content transfer to speed up speech synthesis in DPM. Content transfer which was first used in [4] as part of style transfer, involves training layers of a neural network to minimize the distance between representations of a desired style (or content) and a white noise and iteratively transform white noise to the desired style or content. Motivated by this, we also use neural network layers to learn to generate representations of a given audio generated by a given time-step \(t\) of the forward process. Since the forward process can have many steps \(T\), we restrict the layers of the neural network used in the reverse process to \(N=\frac{T}{\tau}\) where \(\tau>1\). Intuitively, we use the layers of the neural network to reduce the noise schedule of the forward process by training a neural network such that its single layer can remove cumulative noise injected in \(\tau\) steps during the forward process. Unlike [15] which optimises two sets of parameters, we train the model to optimise only a single parameter set \(\theta\) therefore hypothesise that the proposed method will significantly reduce sampling time and, consequently, audio generation time.
## 2 Denoising diffusion probabilistic model
Given an observed sample \(x\) of unknown distribution, the diffusion probabilistic model (DPM) aims to model the true distribution of the data \(p(x)\). The modelled distribution \(p(x)\) can then be used to generate new samples at will. DPM defines a forward process as
\[q(x_{1:T}\mid x_{0})=\prod_{i=1}^{T}q(x_{t}\mid x_{t-1}) \tag{1}\]
Here, latent variables and true data are represented as \(x_{t}\) with \(t=0\) being the true data. The encoder \(q(x_{t}\mid x_{t-1})\) seeks to convert the data distribution into a simple tractable distribution after the \(T\) diffusion steps. \(q(x_{t}\mid x_{t-1})\) models the hidden variables as linear Gaussian models with mean and standard deviation modelled as hyperparameters [6] or as learnt variables [20][11]. The Gaussian encoder is parameterized with mean \(u_{t}(x_{t})=\sqrt{\alpha_{t}}x_{t-1}\) and variance \(\Sigma_{t}(x_{t})=(1-\alpha_{t})I\) hence
\[q(x_{t}\mid x_{t-1})=\mathcal{N}(x_{t};\sqrt{\alpha_{t}}x_{t-1},(1-\alpha_{t} )I) \tag{2}\]
\(\alpha_{t}\) evolves with time \(t\) based on a fixed or learnable schedule such that the final distribution \(p(x_{T})\) is a standard Gaussian. The reverse process which seeks to recover the data distribution from the white noise is modelled as
\[p_{\theta}(x_{0:T})=p(x_{T})\prod_{i=1}^{T}p_{\theta}(x_{t-1}\mid x_{t}) \tag{3}\]
where
\[p(x_{T})=\mathcal{N}(x_{T};0,I) \tag{4}\]
The encoder essentially describes a steady noisification of an input over time by adding Gaussian noise until eventually it becomes identical to pure noise. It is completely modelled as a Gaussian with a defined mean and variance parameters at each timestep hence it is not learned. The goal of DPM is therefore to model the reverse process \(p_{\theta}(x_{t-1}\mid x_{t})\) so that it can be exploited to generate new data samples. After the DPM has been optimized, a sampling procedure entails sampling Gaussian noise from \(p(x_{T})\) and iteratively running the denoising transitions \(p_{\theta}(x_{t-1}\mid x_{t})\) for T steps to generate \(x_{0}\). The DPM can be optimised through the evidence lower bound (ELBO).
\[\begin{split}&\log p(x)=E_{q}(x_{1}\mid x_{0})[\log p_{\theta}(x_ {0}\mid x_{1})]-D_{KL}(q(x_{T}\mid x_{0})||p(x_{T}))-\\ &\sum_{t=2}^{T}E_{q}(x_{t}\mid x_{0})[D_{KL}(q(x_{t-1}\mid x_{t}, x_{0})||p_{\theta}(x_{t-1}\mid x_{t}))]\end{split} \tag{5}\]
Using the property of isotropic Gaussians, [6] shows that \(x_{t}\) can be conditioned directly on \(x_{0}\) as:
\[x_{t}=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\epsilon_{0} \tag{6}\]
hence
\[q(x_{t}\mid x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})I) \tag{7}\]
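For concreteness, the one-jump sampling of equation 6 can be coded directly; in the PyTorch sketch below the linear \(\beta\) schedule is purely illustrative and is not the schedule used by any particular vocoder.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # illustrative noise schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)       # \bar{alpha}_t

def q_sample(x0, t, eps=None):
    """Draw x_t ~ q(x_t | x_0) in one jump, equation 6."""
    if eps is None:
        eps = torch.randn_like(x0)
    a = alpha_bar[t].sqrt().view(-1, 1)
    s = (1.0 - alpha_bar[t]).sqrt().view(-1, 1)
    return a * x0 + s * eps, eps

x0 = torch.randn(4, 16000)                     # batch of 1 s, 16 kHz waveforms
x_t, eps = q_sample(x0, torch.tensor([10, 100, 500, 999]))
```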
In equation 5, the third term on the right is the denoising term that seeks to model \(p_{\theta}(x_{t-1}\mid x_{t})\) to match the ground truth \(q(x_{t-1}\mid x_{t},x_{0})\). In [6], \(q(x_{t-1}\mid x_{t},x_{0})\) is derived as:
\[q(x_{t-1}\mid x_{t},x_{0})\propto\mathcal{N}\Big(\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})x_{t}+\sqrt{\bar{\alpha}_{t-1}}(1-\alpha_{t})x_{0}}{1-\bar{\alpha}_{t}},\frac{(1-\alpha_{t})(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}I\Big) \tag{8}\]
In order to match \(p_{\theta}(x_{t-1}\mid x_{t})\) to \(q(x_{t-1}\mid x_{t},x_{0})\) in the reverse process, the mean of \(p_{\theta}(x_{t-1}\mid x_{t})\) should be made to match that of \(q(x_{t-1}\mid x_{t},x_{0})\) hence the mean of \(p_{\theta}(x_{t-1}\mid x_{t})\) is parameterized as
\[u_{\theta}(x_{t},t):=\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})x_{t}+\sqrt{\bar{\alpha}_{t-1}}(1-\alpha_{t})\hat{x}_{\theta}(x_{t},t)}{1-\bar{\alpha}_{t}} \tag{9}\]
Here, the score network \(\hat{x}_{\theta}(x_{t},t)\) is parameterized by a neural network and it seeks to predict \(x_{0}\) from a noisy input \(x_{t}\) and time index \(t\). The optimization problem can therefore be simplified as:
\[L=\mathbb{E}_{t,\epsilon}[||\hat{x}_{\theta}(\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon,t)-x_{0}||_{2}^{2}] \tag{10}\]
The loss function is composed of the neural network \(\hat{x}_{\theta}(x_{t},t)\) that is conditioned on the discrete time \(t\) and noisy input \(x_{t}\) to predict the original ground truth input \(x_{0}\). By expressing
\[x_{0}=\frac{x_{t}-\sqrt{1-\bar{\alpha}_{t}}\epsilon_{0}}{\sqrt{\bar{\alpha}_{t}}} \tag{11}\]
an equivalent optimization of modelling a neural network \(\hat{\epsilon}_{\theta}(x_{t},t)\) to predict the source noise can be derived [6].
\[L=\mathbb{E}_{t,\epsilon}[||\hat{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon,t)-\epsilon_{0}||_{2}^{2}] \tag{12}\]
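A sketch of one training step under objective 12 is shown below; `score_net` is a placeholder for any network \(\hat{\epsilon}_{\theta}(x_{t},t)\) and is not the architecture proposed in this paper, and `q_sample` and `T` are reused from the previous snippet.

```python
import torch
import torch.nn.functional as F

# T, alpha_bar and q_sample are taken from the previous sketch.

def training_step(score_net, x0, optimizer):
    """One gradient step on L = E ||eps_hat(x_t, t) - eps||^2 (equation 12)."""
    t = torch.randint(0, T, (x0.shape[0],))    # a random timestep per example
    x_t, eps = q_sample(x0, t)
    loss = F.mse_loss(score_net(x_t, t), eps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```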
## 3 Related work
Deep neural network generative techniques for speech synthesis (vocoders) are implemented using either likelihood techniques or generative adversarial networks [5]. Likelihood methods are composed of autoregressive, VAE, flow, and diffusion-based vocoders. Autoregressive models such as [21][9][18] and [35] are models that generate speech sequentially. The models learn the joint probability over speech data by factorizing the distribution into a product of conditional probabilities over each sample. Due to their sequential nature of speech generation, autoregressive models require a large number of computations to generate a sample. This limits their ability to be deployed in applications where faster real-time generation is required. However, there are models such as [22], [7] and [18] which propose techniques to speed up speech generation in autoregressive models. Another likelihood-based speech synthesis technique is the flow-based models [28] used in [25][10][7]. These models use a sequence of invertible mappings to transform a given probability density. During sampling, flow-based models generate data from a probability distribution through the inverse of these transforms. Flow-based models implement specialized architectures that are complicated to train [34]. Denoising diffusion probabilistic models (DDPM) have recently been exploited in speech synthesis using tools such as PriorGrad [16], WaveGrad [3], BDDM [15] and DiffWave [13]. These models exploit a neural network that learns to predict the source noise that was used in the noisification process during the forward process. Diffusion-based vocoders can generate speech with very high voice quality but are slow due to the high number of sampling steps. Tools such as BDDM [15] propose techniques to speed up speech generation while using diffusion models. Our proposed work also looks at how to speed up speech synthesis in diffusion models. Finally, GAN-based models such as [12] and [14] exploit the training objective to make the model generate data that is indistinguishable from the training data. While GAN-based models can generate high-quality speech, they are difficult to train due to instability during the training process [19]. A complete review of vocoders can be found in [34].
## 4 Problem definition
To enable faster sampling and potentially avoid sampling with hundreds or thousands of steps, we propose a reduced noise scheduling network which sequentially learns to recover the data distribution in far fewer steps than were used to inject the noise during the forward process. Given the forward process of diffusion parameterized by a noise schedule \(\alpha\in R^{T}\) where \(0<\alpha_{t}<1\) with \(1\leq t\leq T\), we seek to establish a shortened noise schedule \(\hat{\alpha}\in R^{N}\) where \(0<\hat{\alpha}_{t}<1\) with \(1\leq t\leq N\). The reduced noise schedule \(\hat{\alpha}\) is to be established when only the forward diffusion noise schedule \(\alpha\) is given. Equations 10 and 12 are designed such that the targets are the original input \(x_{0}\) and error \(\epsilon_{0}\) respectively. We hypothesise that for early time steps of the reverse process, this will result in large prediction errors which can lead to speech distortions [36] and increase the learning time. We propose a setup where the targets are the different outputs of the forward process time steps with the goal of reducing the magnitude of prediction errors. Through this, we hope to generate high-fidelity speech and achieve faster convergence during training.
## 5 Proposed Method
Our proposed method is based on content transfer between audio in the forward process and audio generated by the layers of the neural network in the reverse process. The number of steps \(N\) used to recover the data distribution is fixed by selecting a skip parameter \(1\leq\tau<T\) such that \(N=\frac{T}{\tau}\). If \(\tau=1\), then each time step \(t\) in the forward process is mapped to a neural network layer in the reverse process. However, the goal is to fast-track the reverse of the diffusion process, and hence we aim to select a \(\tau>1\) that significantly reduces the sampling time. By doing this, a layer that is mapped to a time step \(t\) of the forward process eliminates the noise injected between time steps \(t-\tau\) and \(t\) of the diffusion process.
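To make the mapping concrete, the following minimal sketch illustrates one plausible way of indexing layers to forward-process time steps under the skip parameter \(\tau\); the exact indexing convention is an assumption for illustration.

```python
# Sketch: mapping reverse-process layers to forward-process time steps.
T, tau = 1200, 50
assert T % tau == 0
N = T // tau  # number of reverse steps, i.e., neural network layers (24 here)

# One plausible convention: layer l (1-indexed) targets the forward-process output
# at step T - l * tau, so it removes the noise injected over the preceding tau steps.
layer_to_forward_step = {l: T - l * tau for l in range(1, N + 1)}
print(layer_to_forward_step[1], layer_to_forward_step[N])  # 1150, 0
```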
### Representation generation
The representations discussed in this paper are generated by Wav2Vec 2.0 [1]. Wav2Vec 2.0 is a pre-trained speech model that is composed of two main blocks. The first block, the feature extractor, is made of seven 1D convolution layers and a normalization layer. It accepts a raw audio waveform and generates a representation \(Z=\{z_{1},\cdots,z_{n}\}\) with a 20 ms stride between samples, where each sample has a receptive field of 25 ms. The second block is composed of a transformer with 24 layers that establishes a contextual representation \(C=\{c_{1},\cdots,c_{n}\}\) of a given audio. We do not use the quantization module of Wav2Vec 2.0, which is employed only during self-supervised pre-training. For the forward process of the diffusion process, we use Wav2Vec to generate representations of the audio resulting from time step \(t\) (see figure 1). For the reverse process, the Wav2Vec layer-wise representations are used. We describe the details in the next section.
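As an illustration, layer-wise Wav2Vec 2.0 representations can be obtained with the HuggingFace `transformers` implementation. The particular checkpoint and the 16 kHz input below are assumptions for illustration only, not the exact weights or sampling rate used in our experiments.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "facebook/wav2vec2-large-960h"  # hypothetical checkpoint with 24 transformer layers
extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2Model.from_pretrained(name).eval()

waveform = torch.randn(16000)  # placeholder: 1 s of audio at the model's expected 16 kHz
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

z = out.extract_features        # output of the convolutional feature encoder
layer_reps = out.hidden_states  # tuple: embedding output plus one entry per transformer layer
```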
### Unconditional speech generation
The forward process proceeds similarly to Equation 1. Recall that in equation 5, the third term on the right is the denoising term, whose goal is to learn a transition step \(p_{\theta}(x_{t-1}\mid x_{t})\) that estimates the tractable ground truth \(q(x_{t-1}\mid x_{t},x_{0})\). When the KL divergence between the two denoising distributions is minimized, denoising is achieved. In our case, we reparameterize the third term of equation 5 as:
\[\begin{split}&\sum_{t=2}^{T}E_{q}(x_{n}\mid x_{n-1})[D_{KL}(q(x_{n- 1}\mid x_{n})||p_{\theta}(\hat{x}_{n-1}\mid\hat{x}_{n}))]=\\ &\sum_{t=2}^{T}E_{q}(x_{t+\tau}\mid x_{t})[D_{KL}(q(x_{t}\mid x_ {t+\tau})||p_{\theta}(\hat{x}_{t}\mid\hat{x}_{t+\tau}))]\end{split} \tag{13}\]
Figure 1: An overview of the unconditioned audio generation. An audio file is first processed by the forward process (10 steps in this case, including the input). The audio generated at a selected time step \(t\) is processed by a pre-trained model and its representation is stored. In the reverse process, white noise \(X_{N}\) is accepted by the first layer of the neural network and processed through the layers. For each layer we store its generated representation. A layer is mapped to a given time step \(t\). If a layer \(l\) is mapped to time step \(t\), we minimize the mean squared error between their respective embeddings.
The fundamental change is that we condition \(x_{n}\) on \(x_{n-1}\) rather than \(x_{0}\), and the model \(p_{\theta}(\hat{x}_{n-1}\mid\hat{x}_{n})\) is parameterized by the estimates \(\hat{x}_{n-1}\) and \(\hat{x}_{n}\) rather than the actual \(x_{n-1}\) and \(x_{n}\). Using Bayes' theorem,
\[q(x_{n-1}\mid x_{n})=\frac{q(x_{n}\mid x_{n-1})q(x_{n-1})}{q(x_{n})} \tag{14}\]
Using the Markov property, equation 14 becomes:
\[q(x_{n-1}\mid x_{n})=\frac{q(x_{n}\mid x_{n-1})q(x_{n-1}\mid x_{n-2})}{q(x_{n} \mid x_{n-1})}=q(x_{n-1}\mid x_{n-2}) \tag{15}\]
Where
\[q(x_{n-1}\mid x_{n-2})\propto\mathcal{N}(x_{n-1};\sqrt{\alpha_{n-1}}x_{n-2},( 1-\alpha_{n-1})I) \tag{16}\]
Equation 13 can now be rewritten as:
\[\sum_{t=2}^{T}E_{q}(x_{n}\mid x_{n-1})[D_{KL}(q(x_{n-1}\mid x_{n-2})||p_{ \theta}(\hat{x}_{n-1}\mid\hat{x}_{n}))] \tag{17}\]
The goal is to model \(p_{\theta}(\hat{x}_{n-1}\mid\hat{x}_{n})\) to estimate \(q(x_{n-1}\mid x_{n-2})\) established during the forward process. Due to this, we formulate the loss function as follows:
\[L=\operatorname*{argmin}_{\theta}D_{KL}(q(x_{n-1}\mid x_{n-2})||p_{\theta}( \hat{x}_{n-1}|\hat{x}_{n})) \tag{18}\]
\[L=\operatorname*{argmin}_{\theta}D_{KL}(\mathcal{N}(x_{n-1};\mu_{q},\Sigma_{q }(n))||\mathcal{N}(\hat{x}_{n-1};\hat{\mu}_{\theta},\Sigma_{q}(n))) \tag{19}\]
where \(\mu_{q}=\sqrt{\alpha_{n-1}}x_{n-2}\), and \(\Sigma_{q}(n)=1-\alpha_{n-1}\). \(p_{\theta}(\hat{x}_{n-1}|\hat{x}_{n})\) should be modelled to match the distribution of \(q(x_{n-1}\mid x_{n-2})\) as closely as possible [6]. Hence, the distribution of \(p_{\theta}(\hat{x}_{n-1}|\hat{x}_{n})\) is modelled as a Gaussian with mean \(\hat{\mu}_{\theta}=\sqrt{\alpha_{n-1}}\hat{x}_{\theta}(\hat{x}_{n-1},n-2)\) and variance \(\Sigma_{q}(n)\), where \(\hat{x}_{\theta}(\hat{x}_{n-1},n-2)\) is parameterized by a neural network that seeks to predict \(x_{n-2}\) from the estimate \(\hat{x}_{n-1}\) and the time step \(n-2\). Based on this, equation 19 can be expressed as:
\[L=\operatorname*{argmin}_{\theta}\frac{1}{2\sqrt{\alpha_{n-1}}}[||\sqrt{\alpha _{n-1}}\hat{x}_{\theta}(\hat{x}_{n-1},n-2)-\sqrt{\alpha_{n-1}}x_{n-2}||_{2}^{2}] \tag{20}\]
\[L=\operatorname*{argmin}_{\theta}\frac{1}{2}\sqrt{\alpha_{n-1}}[||\hat{x}_{ \theta}(\hat{x}_{n-1},n-2)-x_{n-2}||_{2}^{2}] \tag{21}\]
Equation 21 can be generalized as
\[L=\operatorname*{argmin}_{\theta}\frac{1}{2}\sqrt{\alpha_{n+1}}[||\hat{x}_{ \theta}(\hat{x}_{n+1},n)-x_{n}||_{2}^{2}] \tag{22}\]
Therefore, optimizing the loss function boils down to learning a neural network \(\hat{x}_{\theta}\) to predict \(x_{n}\) established during the forward process. The neural network should be conditioned on an estimate \(\hat{x}_{n+1}\) and time step \(n\) to predict \(x_{n}\). Unlike the loss in equation 10, where the neural network \(\hat{x}_{\theta}(x_{t},t)\) conditioned on a random noisy input \(x_{t}\) predicts the original noiseless input \(x_{0}\), the loss in equation 22 conditions the neural network on an estimate \(\hat{x}_{n+1}\) of the previous step \(n+1\) of the reverse process to predict the noisy output generated at a time step \(n\) in the forward process. Using the loss function in equation 22, to recover the original input \(x_{0}\), we will need to design a neural network conditioned on the estimate \(\hat{x}_{1}\) and minimize the loss:
\[L=\operatorname*{argmin}_{\theta}\frac{1}{2}\sqrt{\alpha_{n+1}}[||\hat{x}_{ \theta}(\hat{x}_{1},0)-x_{0}||_{2}^{2}] \tag{23}\]
Ideally, to recover the original input \(x_{0}\), we will need to design \(N\) neural networks, one for each time step. To avoid this, we exploit the \(N\) layers \(\hat{x}_{\theta}^{l}(.)\) of a neural network to perform the time-step predictions of the reverse process. Hence, equation 23 is implemented as:
\[L=\operatorname*{argmin}_{\theta}\frac{1}{2}\sqrt{\alpha_{n+1}}[||\hat{x}_{ \theta}^{l}(\hat{x}_{n+1})-x_{n}||_{2}^{2}] \tag{24}\]
where \(\hat{x}_{n+1}\) is the prediction of the previous neural network layer, i.e., \(\hat{x}_{\theta}^{l-1}(\hat{x}_{n+2})\). We no longer condition the neural network layer on the discrete time \(n\) since the layers encode time (i.e., the time steps are encoded by the neural network layers which are implemented sequentially). Intuitively, since \(n=t+\tau\), the network layer \(p_{\theta}(\hat{x}_{n-1}\mid\hat{x}_{n})\) is supposed to remove the cumulative noise injected in multiple steps \(\tau\) during the forward process. Based on this, in the unconditioned implementation, the reverse process starts with white noise \(x_{N}\sim N(0,I)\) and takes \(N=\frac{T}{\tau}\) steps (layers) to recover
the data distribution. During implementation, the white noise \(x_{N}\) is passed through the first layer of Wav2Vec, and each of the representations \(\hat{x}_{n}\in R^{k\times h}\) encoded by the 24 layers of Wav2Vec is stored. Here, \(k\) represents the number of samples generated by the feature extractor of Wav2Vec. We then minimize the loss between the \(L2\)-normalized embeddings \(\hat{x}_{n}\in R^{k\times h}\) and the embeddings \(x_{n}\in R^{k\times h}\) (established by the fine-tuned Wav2Vec) of the audio generated by time step \(n\) during the forward process, using mean squared error.
\[L_{n}=||\hat{x}_{\theta}^{l}(\hat{x}_{n+1})-x_{n}||_{2}^{2}=\sum_{i=1}^{k}2-2 \frac{<\hat{x}_{n_{i}},x_{n_{i}}>}{||\hat{x}_{n_{i}}||_{2}.||x_{n_{i}}||_{2}} \tag{25}\]
The gradients of loss with respect to the layer's activations are then computed using standard error back-propagation.
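A minimal PyTorch sketch of the per-layer loss in equation 25 (MSE between L2-normalized embeddings, which equals \(2-2\) times the cosine similarity per frame) is given below; tensor shapes are placeholders.

```python
import torch
import torch.nn.functional as F

def content_transfer_loss(pred, target):
    """MSE between L2-normalized embeddings of shape (k, h); for unit-norm vectors,
    each row contributes 2 - 2 * cosine_similarity, matching equation 25."""
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target, dim=-1)
    return ((pred - target) ** 2).sum(dim=-1).sum()

loss = content_transfer_loss(torch.randn(100, 1024), torch.randn(100, 1024))
```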
### Conditional speech generation
To enable the model to generate speech that follows given acoustic features, we condition the loss in equation 24 on acoustic features \(y\) as:
\[L=||\hat{x}_{\theta}^{l}(\hat{x}_{n+1},y)-x_{n}||_{2}^{2} \tag{26}\]
Therefore, we design the score network \(\hat{x}_{\theta}^{l}(.,.)\) such that it can process both the noisy estimate \(\hat{x}_{n+1}\) and the acoustic features \(y\). To achieve this, we exploit feature-wise linear modulation (FiLM) [23], which has also been used in [3]. Through FiLM, we adaptively influence neural network layer estimates by applying an affine transformation to layer activations based on the input Mel-spectrogram \(y\) (see Figure 2).
\[FiLM(\hat{x}_{n+1})=\gamma\odot\hat{x}_{n+1}+\beta \tag{27}\]
where \(\gamma\) and \(\beta\in R^{k\times h}\) modulate \(\hat{x}_{n+1}\) based on a given Mel-spectrogram \(y\), and \(\odot\) is the Hadamard product.
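A minimal sketch of the FiLM modulation in equation 27 is shown below; the linear projections stand in for the upsampling blocks of [3] and are an assumption for illustration.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation (equation 27): gamma * x + beta, with gamma and
    beta predicted from (already upsampled) Mel-spectrogram features."""
    def __init__(self, mel_dim: int, hidden_dim: int):
        super().__init__()
        self.to_gamma = nn.Linear(mel_dim, hidden_dim)  # stand-in for the upsampling blocks
        self.to_beta = nn.Linear(mel_dim, hidden_dim)

    def forward(self, x, mel):
        # x: (k, h) layer estimate; mel: (k, mel_dim) conditioning features aligned with x
        return self.to_gamma(mel) * x + self.to_beta(mel)

film = FiLM(mel_dim=128, hidden_dim=1024)
out = film(torch.randn(100, 1024), torch.randn(100, 128))
```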
## 6 Evaluation
This section discusses how we developed and evaluated the proposed technique, which we refer to as **DiCon** (**D**enoising by **Con**tent transfer).
**Dataset**: In keeping with the trend in the speech synthesis domain and to allow for comparison with other existing
Figure 2: An overview of the conditioned audio generation. Compared to the unconditioned audio generation in figure 1, the conditioned audio generation includes upsampling blocks that accept a Mel-spectrogram \(y\) containing the acoustic features of the audio to be generated. The upsampling blocks process the Mel-spectrogram to generate \(\beta=f(y)\) and \(\gamma=f(y)\). Both \(\beta\) and \(\gamma\) are used to modulate the activation of the neural network layer according to equation 27.
tools, we used the most popular datasets: the LJSpeech dataset for single-speaker evaluation and the VCTK dataset for multi-speaker evaluation. The LJSpeech dataset consists of 13,100 audio clips sampled at 22 kHz. The clips are from a single female speaker, vary in length from 1 to 10 seconds, and total about 24 hours. For multi-speaker evaluation, we used the VCTK dataset, which is sampled at 48 kHz and consists of 109 English speakers with various accents; VCTK was downsampled to 22 kHz. Similar to [3], for LJSpeech we used a 12,764-utterance subset (approximately 23 hours) for training the model and evaluated it on a test set of 130 utterances. For multi-speaker evaluation, we used the data split of [15], with 100 speakers for training and 9 for evaluation. From each audio clip we extracted 128-dimensional Mel-spectrogram features. Similar to [3], we used a 50-ms Hanning window, a 12.5-ms frame shift, and a 2048-point FFT with lower and upper frequencies of 20 Hz and 12 kHz, respectively.
**Training parameters**: The model was trained using a single NVIDIA V100 GPU. We used the Adam optimizer and a cyclical learning rate [31] with a minimum learning rate of \(1e-4\) and a maximum of \(1e-1\). We used a batch size of 32 and trained for 1M steps. Similar to [3], for conditioned audio generation we used Mel-spectrograms extracted from ground-truth audio as conditioning features during training, while during testing we used Mel-spectrograms generated by the Tacotron 2 model [30]. To generate the FiLM parameters \(\beta\) and \(\gamma\), we use the upsampling blocks proposed in [3] and use the parameters to modulate the activations of a given layer as described in equation 27.
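A minimal sketch of this optimizer and cyclical learning-rate setup is shown below; the model, batch, and loss are placeholders for illustration only.

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 128)  # stand-in for the actual score network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Cyclical learning rate between 1e-4 and 1e-1; cycle_momentum is disabled because
# Adam does not expose a momentum parameter.
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-4, max_lr=1e-1, cycle_momentum=False)

for step in range(100):  # the paper trains for 1M steps; 100 here for illustration
    batch = torch.randn(32, 128)                 # placeholder batch of size 32
    loss = ((model(batch) - batch) ** 2).mean()  # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```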
**Baseline models**: We compared the proposed method with other state-of-the-art vocoders. We used models with publicly available implementations from which we could generate samples for human evaluation. The baseline models used include WaveNet [21]1, WaveGlow [24]2, MelGAN [14]3, HiFi-GAN [12]4, WaveGrad [3]5, DiffWave [13]6, BDDM [15]7 and FastDiff [8]8.
Footnote 1: [https://github.com/r9y9/wavenet_vocoder](https://github.com/r9y9/wavenet_vocoder)
Footnote 2: [https://github.com/NVIDIA/waveglow](https://github.com/NVIDIA/waveglow)
Footnote 3: [https://github.com/descriptimc/melgan-neurips](https://github.com/descriptimc/melgan-neurips)
Footnote 4: [https://github.com/jk876/hitfi-gan](https://github.com/jk876/hitfi-gan)
Footnote 5: [https://github.com/tencencant-ailab/bddm](https://github.com/tencencant-ailab/bddm)
Footnote 6: [https://github.com/tencencent-ailab/bddm](https://github.com/tencencent-ailab/bddm)
Footnote 7: [https://github.com/tencencent-ailab/bddm](https://github.com/tencencent-ailab/bddm)
Footnote 8: [https://FastDiff.github.io/](https://FastDiff.github.io/)
**Metrics**: For subjective evaluation, we used the Mean Opinion Score (MOS) metric to compare the proposed model against the baseline tools. For each model we collected generated samples, and we also randomly selected samples from the original audio. Each sample was presented to human evaluators one at a time, and they rated the naturalness of the speech on a 5-point MOS scale. The scores used were Bad: 1, Poor: 2, Fair: 3, Good: 4, Excellent: 5, with a rating increment of 0.5. A single evaluator was required to rate 10 samples. Human evaluators were contracted via Amazon Mechanical Turk, where they were required to wear headphones and be English speakers. For objective evaluation, we use short-time objective intelligibility (STOI) [33], the perceptual evaluation of speech quality (PESQ) algorithm [29], and Deep Noise Suppression MOS (DNSMOS), a reference-free metric that evaluates perceptual speech quality [26]; it is a DNN-based model trained on human ratings obtained using an online framework for listening experiments based on ITU-T P.808. We also use SIG, BAK, and OVRL: the non-intrusive speech quality assessment model DNSMOS P.835 [27] is based on a listening experiment according to ITU-T P.835 and evaluates speech using three MOS scores: speech quality (SIG), background noise quality (BAK), and overall quality (OVRL). To evaluate the speed of speech generation we used the real-time factor (RTF).
**Model Configurations**: To train the model, we experimented with different numbers of steps in the forward process while keeping the number of reverse steps (rsteps) constant at 24. We used forward steps (fsteps) of 1200, 960, 720, and 240, which correspond to skip parameters \(\tau=\{50,40,30,10\}\), respectively. The model accepts a 0.3-second audio input. For the forward process, \(\alpha_{i}\) increases linearly from \(\alpha_{1}\) to \(\alpha_{N}\), defined as \(Linear(\alpha_{1},\alpha_{N},N)\), e.g., \(Linear(1\times 10^{-4},0.005,1200)\).
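The following sketch shows the linear schedule \(Linear(1\times 10^{-4},0.005,1200)\) and one plausible way of selecting the 24 forward steps targeted by the layers; the selection rule is an assumption for illustration.

```python
import numpy as np

def linear_schedule(start, end, steps):
    """Linear(start, end, steps): linearly increasing forward-process noise schedule."""
    return np.linspace(start, end, steps)

alpha = linear_schedule(1e-4, 0.005, 1200)  # forward schedule for the fsteps=1200 setting
tau = 50
N = len(alpha) // tau                        # 24 reverse steps (layers)
# One plausible choice of the forward steps whose outputs serve as per-layer targets:
target_steps = list(range(tau, len(alpha) + 1, tau))
assert len(target_steps) == N
```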
### Results
#### 6.1.1 Single speaker
For conditional speech generation on the single-speaker dataset, the subjective MOS and objective STOI, PESQ, DNSMOS, SIG, BAK, and OVRL results are shown in table 1. Table 1 also shows the speech generation speed (RTF). For MOS, the best-performing configuration of the proposed technique, DiCon(1200,24), differs from the best-performing tool, DiffWave(200 steps), by a margin of 0.03. The MOS value of DiCon(1200,24) differs from that of the ground truth by 0.30. DiCon(1200,24) registers the best results in all the objective metrics of STOI, PESQ, DNSMOS, SIG, BAK, and OVRL. With regard to the speed of speech generation, all configurations of the proposed model are competitive, and we observe that as the skip parameter \(\tau\) becomes smaller, the speed of generation increases. We hypothesize that this is because each neural network layer then has a smaller amount of noise to remove. We also note that the more forward steps are used, the better the quality of the generated audio; however, more forward steps make audio generation slower.
#### 6.1.2 Multi-speaker
The performance of the proposed technique on the multi-speaker dataset is shown in table 2. On this dataset, the proposed technique generalizes to unseen speakers, and the DiCon(1200,24) configuration achieves the best MOS score of 4.39, a gap of 0.17 from the ground truth. It also registers the best performance on all the objective metrics.
#### 6.1.3 Unconditional speech generation
Here, the model was trained on the multi-speaker dataset. To generate a speech sample, we sample white noise at random and process it through the trained model without conditioning on any acoustic features. The results for unconditional speech generation are shown in table 3. For short clips of 0.3 s, DiCon(fsteps:1200 rsteps 24) attains an MOS score of 3.08. Listening to the audio clips, we noticed that they begin with coherent-sounding sentences, but the coherence drops with time. We will investigate the reason for this phenomenon in our future work. However, the model generates clean-sounding speech, i.e., speech that is almost free of noise or artifacts. This is captured by the high BAK score, which measures the background noise quality.
## 7 Conclusion
This paper presents DiCon, a technique for speeding up speech generation in diffusion models using neural network layers. We exploit the layers of the neural network to progressively recover the data distribution from white noise. Using content transfer, we demonstrate how a neural network layer can be used to implicitly perform denoising. We use a skip parameter \(\tau\) to guide the mapping of neural network layers to forward-process time steps, and hence reduce the number of
\begin{table}
\begin{tabular}{|l|c c c c c c c c|} \hline \multicolumn{9}{|c|}{**LJSpeech test-dataset**} \\ \hline
**Model** & **MOS(\(\uparrow\))** & **STOI(\(\uparrow\))** & **PESQ (\(\uparrow\))** & **RTF (\(\downarrow\))** & **DNSMOS (\(\uparrow\))** & **SIG (\(\uparrow\))** & **BAK (\(\uparrow\))** & **OVRL (\(\uparrow\))** \\ \hline Ground truth & 4.68\(\pm\)0.15 & - & - & - & 4.77 & 4.67 & 4.82 & 4.70 \\ \hline BDDM(12 steps) & 4.37\(\pm\)0.15 & 0.967 & 3.68 & 0.543 & 4.45 & 4.24 & 4.44 & 4.34 \\ DiffWave(200 steps) & **4.41\(\pm\)0.13** & 0.966 & 3.62 & 5.9 & 4.38 & 4.32 & 4.43 & 4.40 \\ WaveGrad(1000 steps) & 4.34\(\pm\) 0.15 & 0.909 & 3.41 & 38.2 & 4.41 & 4.29 & 4.34 & 4.32 \\ HiFi-GAN & 4.29\(\pm\) 0.14 & 0.957 & 3.27 & 0.0134 & 4.14 & 4.09 & 4.17 & 4.15 \\ MelGAN & 3.52\(\pm\) 0.12 & 0.946 & 2.67 & 0.00396 & 3.62 & 3.39 & 3.78 & 3.33 \\ WaveGlow & 3.11\(\pm\) 0.14 & 0.961 & 3.17 & 0.0198 & 3.67 & 3.34 & 3.01 & 3.06 \\ WaveNet & 3.51\(\pm\) 0.15 & 0.921 & 2.93 & 318.6 & 3.71 & 3.66 & 3.83 & 2.45 \\ \hline DiCon(fsteps:1200 rsteps 24) & 4.38\(\pm\)0.12 & **0.977** & **3.74** & 0.0042 & **4.49** & **4.43** & **4.52** & **4.41** \\ DiCon(fsteps:960 rsteps 24) & 4.31\(\pm\) 0.15 & 0.9573 & 3.70 & 0.00371 & 4.45 & 4.39 & 4.40 & 4.38 \\ DiCon(fsteps:720 rsteps 24) & 4.13\(\pm\) 0.15 & 0.948 & 3.68 & 0.002912 & 4.21 & 4.18 & 4.25 & 4.20 \\ DiCon(fsteps:240 rsteps 24) & 3.77\(\pm\) 0.13 & 0.8932 & 3.13 & **0.00182** & 3.87 & 3.78 & 3.86 & 3.83 \\ \hline \end{tabular}
\end{table}
Table 1: Evaluation results of the conditioned version of the proposed method and how it compares to other state-of-the-art tools on the evaluation metrics when the single-speaker dataset is used.
\begin{table}
\begin{tabular}{|l|c c c c c c c c|} \hline \multicolumn{9}{|c|}{**VCTK test-dataset**} \\ \hline
**Model** & **MOS(\(\uparrow\))** & **STOI(\(\uparrow\))** & **PESQ (\(\uparrow\))** & **RTF (\(\downarrow\))** & **DNSMOS (\(\uparrow\))** & **SIG (\(\uparrow\))** & **BAK (\(\uparrow\))** & **OVRL (\(\uparrow\))** \\ \hline Ground Truth & 4.56\(\pm\)0.05 & - & - & - & 4.78 & 4.73 & 4.80 & 4.77 \\ \hline BDDM(12 steps) & 4.33\(\pm\)0.05 & 0.9610 & 3.61 & 0.438 & 4.47 & 4.38 & 4.43 & 4.32 \\ DiffWave(200 steps) & 4.37\(\pm\) 0.04 & 0.9678 & 3.68 & 5.9 & 4.44 & 4.28 & 4.36 & 4.36 \\ WaveGrad(1000 steps) & 4.31\(\pm\) 0.05 & 0.9630 & 3.43 & 38.2 & 4.38 & 4.21 & 4.44 & 4.26 \\ HiFi-GAN & 4.12\(\pm\) 0.05 & 0.943 & 3.51 & 0.0134 & 4.21 & 4.19 & 4.33 & 4.19 \\ MelGAN & 3.42\(\pm\) 0.05 & 0.8965 & 2.65 & 0.00396 & 3.67 & 3.48 & 3.52 & 3.37 \\ WaveGlow & 3.38\(\pm\) 0.04 & 0.8702 & 2.56 & 0.0198 & 3.79 & 3.61 & 3.88 & 3.48 \\ WaveNet & 3.73\(\pm\) 0.05 & 0.8989 & 2.98 & 318.6 & 3.91 & 3.74 & 3.90 & 3.85 \\ \hline DiCon(fsteps:1200 rsteps 24) & **4.39\(\pm\)0.05** & **0.981** & **3.81** & 0.0042 & **4.53** & **4.41** & **4.56** & **4.47** \\ DiCon(fsteps:960 rsteps 24) & 4.26\(\pm\) 0.05 & 0.9500 & 3.67 & 0.00371 & 4.49 & 4.39 & 4.58 & 4.41 \\ DiCon(fsteps:720 rsteps 24) & 4.09\(\pm\) 0.03 & 0.9430 & 3.61 & 0.002912 & 4.11 & 4.33 & 4.40 & 4.14 \\ DiCon(fsteps:240 rsteps 24) & 3.08\(\pm\) 0.05 & 0.8709 & 2.78 & 0.00182 & 3.35 & 3.38 & 3.65 & 3.19 \\ \hline \end{tabular}
\end{table}
Table 2: Evaluation results of the conditioned version of the proposed method and how it compares to other state-of-the-art tools on the evaluation metrics when the multi-speaker dataset is used.
distribution recovery steps. In conditional speech generation, we use FiLM to infuse the acoustic features of a given speech into the denoising process. Based on evaluation, we demonstrate that the proposed technique generates superior quality speech samples at a competitive speed.
|
2309.09134 | Total Variation Distance Meets Probabilistic Inference | In this paper, we establish a novel connection between total variation (TV)
distance estimation and probabilistic inference. In particular, we present an
efficient, structure-preserving reduction from relative approximation of TV
distance to probabilistic inference over directed graphical models. This
reduction leads to a fully polynomial randomized approximation scheme (FPRAS)
for estimating TV distances between same-structure distributions over any class
of Bayes nets for which there is an efficient probabilistic inference
algorithm. In particular, it leads to an FPRAS for estimating TV distances
between distributions that are defined over a common Bayes net of small
treewidth. Prior to this work, such approximation schemes only existed for
estimating TV distances between product distributions. Our approach employs a
new notion of $partial$ couplings of high-dimensional distributions, which
might be of independent interest. | Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel, Dimitrios Myrisiotis, A. Pavan, N. V. Vinodchandran | 2023-09-17T02:12:36Z | http://arxiv.org/abs/2309.09134v2 | # Total Variation Distance Estimation Is as Easy as Probabilistic Inference+
###### Abstract
In this paper, we establish a novel connection between total variation (TV) distance estimation and probabilistic inference. In particular, we present an efficient, structure-preserving reduction from relative approximation of TV distance to probabilistic inference over directed graphical models. This reduction leads to a fully polynomial randomized approximation scheme (FPRAS) for estimating TV distances between distributions over any class of Bayes nets for which there is an efficient probabilistic inference algorithm. In particular, it leads to an FPRAS for estimating TV distances between distributions that are defined by Bayes nets of bounded treewidth. Prior to this work, such approximation schemes only existed for estimating TV distances between product distributions. Our approach employs a new notion of _partial_ couplings of high-dimensional distributions, which might be of independent interest.
## 1 Introduction
Machine learning and data science heavily rely on probability distributions that are widely used to capture dependencies among a large number of variables. Such _high-dimensional distributions_ naturally appear in various domains including neuroscience [1, 10], bioinformatics [1], text and image processing [14], and causal inference [15]. Substantial research has been devoted to developing models that represent high-dimensional probability distributions succinctly. One prevalent approach is through graphical models. In a graphical model, a graph describes the conditional dependencies among variables and the probability distribution is factorized according to the adjacency relationships in the graph [17]. When the underlying graph is a directed graph, the model is known as a Bayesian network or Bayes net.
Two fundamental computational tasks on distributions are _distance computation_ and _probabilistic inference_. In this work, we establish a novel connection between these two seemingly different computational tasks. Using this connection, we design new relative error approximation algorithms for estimating the statistical distance between Bayes net distributions with bounded treewidth.
Distance computation. The distance computation problem is the following: Given descriptions of two probability distributions \(P\) and \(Q\), compute \(\rho(P,Q)\) for a distance measure \(\rho\). A distance measure of central importance is the _total variation (TV) distance_ (also known as _statistical distance_ or _statistical difference_). Let \(P\) and \(Q\) be distributions over a finite domain \(\mathcal{D}\). The total variation distance between \(P\) and \(Q\), denoted by \(d_{\mathrm{TV}}(P,Q)\), is defined as
\[d_{\mathrm{TV}}(P,Q)=\max_{S\subseteq\mathcal{D}}(P(S)-Q(S))\,.\]
The total variation distance satisfies many basic properties that make it a versatile and fundamental measure for quantifying the dissimilarity between probability distributions. First, it has an explicit probabilistic interpretation: The TV distance between two distributions is the maximum gap between the probabilities assigned to a single event by the two distributions. Second, it satisfies many mathematically desirable properties: It is bounded in \([0,1]\), it is a metric, and it is invariant with respect to bijections. The total variation distance also equals the minimum probability that \(X\neq Y\) over all couplings \((X,Y)\) of \(P\) and \(Q\). For these reasons, the total variation distance is a central distance measure employed in a wide range of areas including probability and statistics, machine learning, information theory, cryptography, data privacy, and pseudorandomness.
Probabilistic inference. There are several related computational tasks that fall under the umbrella of the term probabilistic inference. We use the following: Given (a representation of) random variables \(X_{1},\ldots,X_{n}\) and (a representation of) sets \(S_{1},\ldots,S_{n}\) such that for all \(i\) the set \(S_{i}\) is a subset of the range of \(X_{i}\), compute the probability \(\mathbf{Pr}[X_{1}\in S_{1},\ldots,X_{n}\in S_{n}]\).
Probabilistic inference in graphical models is a fundamental computational task with a wide range of applications that spans disciplines including statistics, machine learning, and artificial intelligence (e.g., [20]). Various algorithms have been proposed for this problem, encompassing both exact approaches like message passing [14], variable elimination [13], and junction-tree propagation [12], as well as approximate techniques such as loopy belief propagation, variational inference-based methods [20], and particle-based algorithms (refer to Chapter 13 of [15] and the references therein). Computational hardness results have also been established in several works [16, 1, 13, 14].
### Our contributions
Our main contribution is a _structure-preserving_ reduction from the TV distance estimation problem to the probabilistic inference problem over Bayes nets. In particular, we exhibit an efficient probabilistic reduction such that for two Bayes nets \(P\) and \(Q\) defined over a directed acyclic graph (DAG) \(G\), the reduction makes probabilistic inference queries to a Bayes net \(\mathcal{L}\) defined over the _same_ DAG \(G\) and returns a relative approximation of \(d_{\mathrm{TV}}(P,Q)\).
**Theorem 1** (Informal).: _There is a polynomial-time randomized algorithm that takes a DAG \(G\), two Bayes nets \(P\) and \(Q\) over \(G\), and parameters \(\varepsilon,\delta\) as inputs and behaves as follows. The algorithm makes probabilistic inference oracle queries to a Bayes net over the same DAG \(G\) and outputs an \((1+\varepsilon)\)-relative approximation of \(d_{\mathrm{TV}}(P,Q)\) with probability at least \(1-\delta\)._
**Remark 2**.: It is known that probabilistic inference computation over Bayes nets is a #P-hard problem and hence exact \(d_{\mathrm{TV}}\) computation reduces to probabilistic inference over Bayes nets [16]. A salient feature of our reduction is that it _preserves the structure_ of the Bayes net. This leads to efficient \(d_{\mathrm{TV}}\) estimation algorithms for any class of Bayes nets that admits efficient probabilistic inference algorithms. Note that exact \(d_{\mathrm{TV}}\) computation is #P-complete even for product distributions for which inference computation is straightforward [1].
As a corollary, we obtain a fully polynomial time randomized approximation scheme (FPRAS) for relatively approximating the TV distance between Bayes nets over any class of DAGs for which an efficient probabilistic inference algorithm exists. The well-known variable elimination algorithm can be used for efficient probabilistic inference for Bayes nets over DAGs with treewidth \(O(\log n)\). This leads to a new FPRAS for TV distance estimation for Bayes nets over DAGs with logarithmic treewidth.
**Corollary 3** (Informal).: _There is an FPRAS for estimating the TV distance between two Bayes nets of treewidth \(O(\log n)\) that are defined over the same DAG of \(n\) nodes._
Prior to our work, such approximation schemes were known only for product distributions, which are Bayes nets over a graph with no edges [10]. In particular, designing an FPRAS for estimating TV distance between Bayes nets over trees (which are graphs with treewidth 1) was an open question. Our result resolves this question. It is known that for Bayes nets over general DAGs we cannot hope to have an FPRAS for relatively approximating TV distance. In particular, [1] shows that it is NP-hard to decide whether \(d_{\mathrm{TV}}(P,Q)\) is zero or not when \(P\) and \(Q\) are arbitrary Bayes nets over DAGs of in-degree 2. In spite of this impossibility result, Corollary 3 shows that it is indeed possible to obtain an FPRAS for a large class of Bayes nets, namely Bayes nets of \(O(\log n)\) treewidth.
Our next set of results focuses on the case when one of the distributions is the uniform distribution. We first prove that the exact computation of the TV distance between a Bayes net distribution and the uniform distribution is \(\#\mathsf{P}\)-complete. To complement this result, we show that there is an FPRAS that estimates the TV distance between the uniform distribution and _any_ Bayes net distribution.
**Theorem 4**.: _It is \(\#\mathsf{P}\)-complete to compute the TV distance between a Bayes net that has bounded in-degree and the uniform distribution._
**Theorem 5** (Informal).: _There is an FPRAS for estimating the TV distance between a Bayes net and the uniform distribution._
### Related work
Koller and Friedman [14] provide a comprehensive overview of probabilistic graphical models. They discuss the general principles and philosophies behind graphical models, including Bayesian networks and Markov networks.
Distance computation. Recently, Bhattacharyya, Gayen, Meel, Myrisiotis, Pavan, and Vinodchandran [1] initiated the study of the computational complexity aspects of TV distance over graphical models. In that work, they proved that exactly computing the TV distance between product distributions is \(\#\mathsf{P}\)-complete, that it is NP-hard to decide whether the TV distance between two Bayes nets of in-degree 2 is equal to 0 or not, and also gave an FPTAS for approximating the TV distance between an arbitrary product distribution and the uniform distribution. In a subsequent work, Feng, Guo, Jerrum, and Wang [10] gave an FPRAS for approximating the TV distance between two arbitrary product distributions.
TV distance estimation was also studied previously from a more complexity-theoretic and cryptographic viewpoint. Sahai and Vadhan [11] established in a seminal work that additively approximating the TV distance between two distributions that are samplable by Boolean circuits is hard for SZK (Statistical Zero Knowledge). Goldreich, Sahai, and Vadhan [1] showed that the problem of deciding whether a distribution samplable by a Boolean circuit is close or
far from the uniform distribution is complete for the complexity class NISZK (Non-Interactive Statistical Zero Knowledge).
Additive approximation of TV distance is much easier. Canonne and Rubinfeld [13] showed how to additively estimate TV distance between distributions that can be efficiently sampled and whose probability mass functions can be efficiently evaluated. Clearly, Bayes nets satisfy both conditions (where "efficient" means as usual polynomial in the number of parameters). Bhattacharyya, Gayen, Meel and Vinodchandran [1] extended this idea to develop polynomial-time algorithms for additively approximating the TV distance between two bounded in-degree Bayes nets using a polynomial number of samples from each.
Probabilistic inference. There is a significant body of work dedicated to exact probabilistic inference. As we mentioned earlier, some algorithmic paradigms that have been developed for the task of probabilistic inference are message passing [14], variable elimination [15], and junction-tree propagation [11]. Recently, Klinkenberg, Blumenthal, Chen, and Katoen [12] presented an exact Bayesian inference method for inferring posterior distributions encoded by probabilistic programs featuring possibly unbounded looping behaviors. Similarly, Klinkenberg, Winkler, Chen, and Katoen [13] explore the theory of generating functions and investigate its usage in the exact quantitative reasoning of probabilistic programs.
With the advent of big data and the increasing complexity of models, traditional exact inference methods may become computationally infeasible. Approximate inference techniques, such as variational inference and sampling methods like Markov Chain Monte Carlo, provide efficient and scalable alternatives to tackle these challenges. Minka [15] introduces the expectation propagation algorithm for approximate Bayesian inference. This method unifies two previous techniques: Assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation. Murphy, Weiss, and Jordan [16] investigate the effectiveness of loopy belief propagation. They present empirical results showing the performance of the algorithm and discuss its limitations and trade-offs. Ranganath, Gerrish, and Blei [12] introduce black box variational inference, a flexible and scalable approach for approximate Bayesian inference. Their paper presents a general framework for approximating posterior distributions and discusses applications in latent variable models. Blei, Kucukelbir, and McAuliffe [1] provide a comprehensive review of variational inference, a family of methods for approximate Bayesian inference. They cover the principles of variational inference, present different algorithmic approaches, and discuss its applications in machine learning.
### Organization
The rest of the paper is organized as follows. We provide a technical overview of our results in Section 2 and some background material in Section 3. We prove the main results as follows: We show Theorem 1 in Section 4; Corollary 3 in Section 4.3; Theorem 4 in Section 5.1; Theorem 5 in Section 5.2. We conclude in Section 6.
## 2 Technical overview
We present in this section some intuition regarding the technical aspects of our results.
### Proof of Theorem 1
Our approach is to carefully define an estimator function \(f\) and a distribution \(\pi\) so that \(\mathbf{E}_{\pi}[f]=d_{\mathrm{TV}}(P,Q)/Z\) where \(Z\) is an efficiently computable normalization constant. The algorithm proceeds by estimating \(\mathbf{E}_{\pi}[f]\), multiplies it by \(Z\), and returns the value. Probabilistic inference queries are used to compute \(Z\) and to sample from the distribution \(\pi\).
There are several challenges in this approach, including setting up the estimator function \(f\) and the distribution \(\pi\) so that: (i) \(\mathbf{E}_{\pi}[f]\) is large enough for an empirical estimate to be a good relative approximation and (ii) probabilistic inference queries can be used to efficiently sample from \(\pi\) and to compute \(Z\).
Our starting point is a well-known connection between the TV distance and _couplings_.
**Definition 6**.: Suppose \(P\) and \(Q\) are two arbitrary distributions on a common finite set \([\ell]\) (where \(\ell>0\)). A _coupling_ of \(P\) and \(Q\) is a distribution on pairs \((X,Y)\) such that \(X\sim P\) and \(Y\sim Q\). An _optimal coupling_ of \(P\) and \(Q\) is a distribution on pairs \((X,Y)\) such that (1) \(X\sim P\), \(Y\sim Q\), and (2) for any \(w\in[\ell]\), \(\mathbf{Pr}[X=Y=w]=\min(P(w)\,,Q(w))\).
Couplings are closely related to TV distance. For any coupling \((X,Y)\) between \(P\) and \(Q\), \(d_{\mathrm{TV}}(P,Q)\leq\mathbf{Pr}[X\neq Y]\). Additionally, for an optimal coupling as defined above, \(\mathbf{Pr}[X\neq Y]\) exactly equals \(d_{\mathrm{TV}}(P,Q)\).
Therefore, a natural strategy to compute \(d_{\mathrm{TV}}(P,Q)\) is to use the probabilistic inference oracle to compute \(\mathbf{Pr}_{\mathcal{O}}[X\neq Y]\) over an optimal coupling \(\mathcal{O}\) of \(P\) and \(Q\). Indeed, assuming for simplicity that the alphabet size \(\ell=2\),
\[\mathbf{Pr}[X\neq Y]=1-\mathbf{Pr}[(X_{i},Y_{i})\in\{(1,1),(2,2)\}\text{ for all }i]\,,\]
and so, if \(\mathcal{O}\) was also a Bayes net over the same DAG as \(P\) and \(Q\), we could use the given probabilistic inference oracle to exactly compute \(\mathbf{Pr}_{\mathcal{O}}[X\neq Y]\).
Perhaps surprisingly at first glance, optimal couplings generally do not have the same structure as the base distributions. In fact, even for \(n\)-variate _product distributions_\(P\) and \(Q\), optimal couplings do not factorize. A natural candidate for an optimal coupling of product distributions is a _local coupling_, a joint distribution \(\mathcal{L}\) on \((X,Y)=(X_{1},\dots,X_{n},Y_{1},\dots,Y_{n})\) where each \((X_{i},Y_{i})\) is independently sampled from an optimal coupling of the \(i\)-th marginals of \(P\) and \(Q\). However, local couplings are generally1 not optimal.
Footnote 1: For example, say \(P=\mathsf{Ber}(2/3)\otimes\mathsf{Ber}(2/3)\), while \(Q=\mathsf{Ber}(1/3)\otimes\mathsf{Ber}(1/3)\). Here, if \(\mathcal{O}\) is optimal, \(\Pr_{\mathcal{O}}[(X_{1},X_{2})\neq(Y_{1},Y_{2})]=d_{\mathrm{TV}}(P,Q)=1/3\), while if \(\mathcal{L}\) is local, \(\Pr_{\mathcal{L}}[(X_{1},X_{2})\neq(Y_{1},Y_{2})]=1-\Pr_{\mathcal{L}}[X_{1}=Y_ {1}]\cdot\Pr_{\mathcal{L}}[X_{2}=Y_{2}]=1-(1-d_{\mathrm{TV}}(\mathsf{Ber}(2/3), \mathsf{Ber}(1/3)))^{2}=5/9\).
In a very recent work, Feng, Guo, Jerrum, and Wang [10] showed that, when \(P\) and \(Q\) are product distributions, approximating \(\mathbf{Pr}_{\mathcal{L}}[X\neq Y]\) nevertheless always leads to a good approximation for \(\mathbf{Pr}_{\mathcal{O}}[X\neq Y]\), where \(\mathcal{L}\) and \(\mathcal{O}\) are local and optimal couplings, respectively, of \(P\) and \(Q\). More precisely, denoting by \(\alpha\) the ratio \(\mathbf{Pr}_{\mathcal{O}}[X\neq Y]/\mathbf{Pr}_{\mathcal{L}}[X\neq Y]\), what [10] showed is that for any two \(n\)-dimensional product distributions \(P\) and \(Q\):
* \(\alpha\) is at least \(\Omega(1/n)\);
* there is an unbiased estimator \(\hat{\alpha}\in[0,1]\) of \(\alpha\) that can be efficiently evaluated.
The Chernoff bound then implies that averaging \(O(n)\) independent copies of \(\hat{\alpha}\) gives a relative approximation of \(\alpha\). Since \(\mathbf{Pr}_{\mathcal{L}}[X\neq Y]=1-\prod_{i=1}^{n}(1-d_{\mathrm{TV}}(P_{i},Q _{i}))\) is easy to compute, an FPRAS for \(d_{\mathrm{TV}}(P,Q)=\alpha\cdot\mathbf{Pr}_{\mathcal{L}}[X\neq Y]\) follows for product distributions.
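As an illustration, the following sketch computes \(\mathbf{Pr}_{\mathcal{L}}[X\neq Y]\) for the local coupling of two product distributions with Bernoulli marginals, and reproduces the example of footnote 1.

```python
def dtv_bernoulli(p: float, q: float) -> float:
    """TV distance between Ber(p) and Ber(q)."""
    return abs(p - q)

def local_coupling_mismatch(ps, qs):
    """Pr_L[X != Y] under the local coupling of two product distributions whose
    i-th marginals are Ber(ps[i]) and Ber(qs[i])."""
    prob_all_equal = 1.0
    for p, q in zip(ps, qs):
        prob_all_equal *= 1.0 - dtv_bernoulli(p, q)
    return 1.0 - prob_all_equal

# Footnote 1's example: P = Ber(2/3) x Ber(2/3), Q = Ber(1/3) x Ber(1/3).
print(local_coupling_mismatch([2/3, 2/3], [1/3, 1/3]))  # 5/9, versus d_TV(P, Q) = 1/3
```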
Generalizing this approach to Bayes nets over general DAGs poses several challenges. The main issues arise even when we consider very simple Bayes nets: directed path graphs. Let \(P\)
and \(Q\) be Bayes nets over a directed path of length \(n\). That is, for both \(P\) and \(Q\), the \(i\)-th and \((i-2)\)-th marginals are conditionally independent given the \((i-1)\)-th marginal. In terms of factorizations, for all \(w\in[\ell]^{n}\):
\[P(w)=\prod_{i=1}^{n}\mathop{\bf Pr}_{P}[w_{i}|w_{i-1}]\qquad\mbox{and}\qquad Q( w)=\prod_{i=1}^{n}\mathop{\bf Pr}_{Q}[w_{i}|w_{i-1}].\]
The inputs to the distance estimation problem are the conditional probability distributions \(P_{i|i-1}(b|c)=\mathop{\bf Pr}_{P}[w_{i}=b|w_{i-1}=c]\) and \(Q_{i|i-1}(b|c)=\mathop{\bf Pr}_{Q}[w_{i}=b|w_{i-1}=c]\) for all \(i\in[n]\) and \(c\in[\ell]\). The goal is to output an \((1+\varepsilon)\)-approximation of \(d_{\rm TV}(P,Q)\) with probability at least \(1-\delta\).
As in the case of product distributions, suppose we seek a coupling \(\mathcal{L}\) of \(P\) and \(Q\) that also forms a Bayes net over the directed path. In other words, we would like a coupling \(\mathcal{L}\) generating \((X_{1},\ldots,X_{n},Y_{1},\ldots,Y_{n})\) such that each \((X_{i},Y_{i})\) is independent of \((X_{i-2},Y_{i-2})\) conditioned on \((X_{i-1},Y_{i-1})\). However, there is an immediate problem: Namely, \(X_{i}\) and \(X_{i-2}\) may be dependent given \(X_{i-1}\) through the path \(X_{i-2}\to Y_{i-1}\to X_{i}\), and similarly \(Y_{i}\) and \(Y_{i-2}\) may be dependent given \(Y_{i-1}\) through the path \(Y_{i-2}\to X_{i-1}\to Y_{i}\). Hence, it may not be possible2 to ensure that \((X_{1},\ldots,X_{n})\) form a copy of \(P\) and \((Y_{1},\ldots,Y_{n})\) form a copy of \(Q\), as is required for a coupling.
Footnote 2: Note that this issue does not arise for product distributions as there are no paths to speak of.
Our main conceptual innovation is that we _drop the requirement that \(\mathcal{L}\) forms a coupling_ and allow \(\mathcal{L}\) to be a _local partial coupling_. A local partial coupling of \(P\) and \(Q\) is a distribution \(\mathcal{L}\) over \((X,Y)\in[\ell]^{n}\times[\ell]^{n}\) satisfying the following three properties:
* \(\mathcal{L}\) is a Bayes net over a directed path of length \(n\) with marginals \((X_{i},Y_{i})\) at node \(i\);
* \(X\sim P\);
* for any \(b,c_{1},c_{2}\in[\ell]\), it is the case that \[\mathop{\bf Pr}[X_{i}=Y_{i}=b|X_{i-1}=c_{1},Y_{i-1}=c_{2}]=\min(P_{i|i-1}(b|c_ {1}),Q_{i|i-1}(b|c_{2})).\]
Note that the above conditions do not place a condition on the distribution of \(Y\). When \(P\) and \(Q\) are arbitrary Bayes net distributions described via DAGs, these conditions can be generalized from the path to general DAGs in a straightforward manner.
Such an \(\mathcal{L}\) can always be constructed by using (iii) to define the conditional probability of \(X_{i}=Y_{i}=b\) and by adjusting the rest of the probability mass to ensure that for all \(b,c_{1},c_{2}\):
\[\mathop{\bf Pr}[X_{i}=b|X_{i-1}=c_{1},Y_{i-1}=c_{2}]=P_{i|i-1}(b|c_{1}).\]
The fact that \(P\) is a Bayes net on a path then implies that \(X\sim P\).
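The following sketch constructs such an \(\mathcal{L}\) for a single node of the path. It spreads the leftover mass uniformly over \(z\neq b\) (one valid choice, since only the marginal of \(X\) is constrained), and assumes \(\ell\geq 2\) and 0-indexed symbols.

```python
def partial_coupling_cpt(P_cond, Q_cond, ell):
    """CPT of the local partial coupling at node i of the path.
    P_cond[c][b] = P_{i|i-1}(b|c) and Q_cond[c][b] = Q_{i|i-1}(b|c) for b, c in {0,...,ell-1}.
    Returns table[(c1, c2)][(b, z)] = Pr[(X_i, Y_i) = (b, z) | (X_{i-1}, Y_{i-1}) = (c1, c2)]."""
    table = {}
    for c1 in range(ell):
        for c2 in range(ell):
            entry = {}
            for b in range(ell):
                diag = min(P_cond[c1][b], Q_cond[c2][b])  # property (iii)
                entry[(b, b)] = diag
                rest = P_cond[c1][b] - diag
                # Any split of the leftover mass over z != b keeps X ~ P;
                # a uniform split (requires ell >= 2) is one valid choice.
                for z in range(ell):
                    if z != b:
                        entry[(b, z)] = rest / (ell - 1)
            table[(c1, c2)] = entry
    return table
```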
Define \(Z=\mathop{\bf Pr}_{\mathcal{L}}[X\neq Y]\) for \(\mathcal{L}\) as above. Since \(\mathcal{L}\) is a path Bayes net, as mentioned earlier, we can use probabilistic inference over the path (by classic variable elimination) to efficiently compute \(Z\) exactly. The main technical result we establish about \(Z\) is that
\[Z\leq 2n\cdot d_{\rm TV}(P,Q).\]
Our proof of this inequality is elementary but crucially uses properties (ii) and (iii) in the definition of \(\mathcal{L}\) above. Now we can follow the same strategy as [10] by defining an unbiased estimator \(\hat{\alpha}\) for \(\alpha=d_{\rm TV}(P,Q)/Z\) and empirically estimating the expectation of \(\hat{\alpha}\).
To define this estimator, we observe that we can define \(d_{\rm TV}(P,Q)=\sum_{w}g^{*}(w)\) while \(Z=\sum_{w}g(w)\), where
\[g^{*}(w)=P(w)-\min(P(w),Q(w)),\qquad g(w)=P(w)-\prod_{i}\min(P_{i|i-1}(w_{i}|w _{i-1}),Q_{i|i-1}(w_{i}|w_{i-1})).\]
Here, \(0\leq g^{*}(w)\leq g(w)\) for all \(w\). We then use a classic importance sampling technique (mentioned, e.g., in [10]) to estimate ratios of sums:
\[\frac{\sum_{w}g^{*}(w)}{\sum_{w}g(w)}=\mathop{\mathbf{E}}_{w\sim\pi}\Bigl{[} \frac{g^{*}(w)}{g(w)}\Bigr{]}\]
where the distribution \(\pi\) has mass function \(\pi(w)=g(w)/\sum_{w}g(w)\). Using the fact that \(g^{*}(w)/g(w)\) lies in the interval \([0,1]\) and has expectation over \(\pi\) equal to \(\alpha\geq 1/2n\), we can conclude our analysis by the Chernoff bound. This approach requires that we are able to sample efficiently from \(\pi\), which we show is possible using calls to a probabilistic inference oracle.
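The generic importance-sampling step is sketched below; in our setting, \(g^{*}\), \(g\), and the sampler for \(\pi\) are realized via probabilistic inference queries, as described in Section 4.

```python
def estimate_ratio(sample_pi, g_star, g, m):
    """Estimate (sum_w g*(w)) / (sum_w g(w)) = E_{w ~ pi}[g*(w) / g(w)], where pi has
    mass proportional to g. sample_pi() is assumed to draw one w ~ pi."""
    total = 0.0
    for _ in range(m):
        w = sample_pi()
        total += g_star(w) / g(w)  # each term lies in [0, 1]
    return total / m
```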
### Proofs of the rest of the results
We outline here the main proof ideas of the rest of our results.
Proof of Corollary 3. The proof of Corollary 3 is an application of Theorem 1. To make use of Theorem 1, we establish that probabilistic inference (i.e., computing \(\mathop{\mathbf{Pr}}[X_{1}\in S_{1},\ldots,X_{n}\in S_{n}]\)) can be efficiently implemented for Bayes nets of constant alphabet size and logarithmic treewidth (Lemma 26). It is known that a tree decomposition of graphs that have logarithmic treewidth can be computed in polynomial time [12]. The variable elimination algorithm of [13] shows that inference can be done in polynomial time given a tree decomposition, provided that the treewidth of the Bayes net is logarithmic.
Proof of Theorem 4. Theorem 4 is proved by showing a reduction from #SAT to computing the TV distance between an appropriately defined Bayes net and the uniform distribution. This is achieved by creating a Bayes net that captures the circuit structure of a Boolean formula \(F\) whose number of satisfying assignments we want to compute. The CPTs of this Bayes net mimic the function of the logical gates (AND, OR, NOT) of \(F\).
Proof of Theorem 5. Theorem 5 is proved by giving an algorithm that exploits the following property of TV distance. Let \(P\) be a Bayes net over \(n\) variables that has maximum in-degree \(d\) and alphabet size \(\ell\). In this case, \(d_{\mathrm{TV}}(P,\mathbb{U})\) is equal to
\[\frac{1}{2}\sum_{x}|P(x)-\mathbb{U}(x)| =\sum_{x}\max(0,P(x)-\mathbb{U}(x))=\sum_{x}\mathbb{U}(x)\max \biggl{(}0,\frac{P(x)}{\mathbb{U}(x)}-1\biggr{)}\] \[=\mathop{\mathbf{E}}_{x\sim\mathbb{U}}\Bigl{[}\max\biggl{(}0, \frac{P(x)}{\mathbb{U}(x)}-1\biggr{)}\Bigr{]}=\mathop{\mathbf{E}}_{x\sim \mathbb{U}}\bigl{[}\max(0,P(x)\,\ell^{n}-1)\bigr{]}\,.\]
This yields a natural estimator for \(d_{\mathrm{TV}}(P,\mathbb{U})\), whereby we draw samples \(x_{1},\ldots,x_{m}\sim\mathbb{U}\) and then compute and output
\[\frac{1}{m}\sum_{i=1}^{m}\max(0,P(x_{i})\,\ell^{n}-1)\,.\]
The crux of our analysis is to show that the quantity \(\max(0,P(x)\,\ell^{n}-1)\) is between \(0\) and \(1+O\Bigl{(}d_{\mathrm{TV}}(P,\mathbb{U})\,\ell^{d+1}n\Bigr{)}\). This enables us to use a value of \(m\) that is in \(O\Bigl{(}\mathrm{poly}\Bigl{(}n\ell^{d},1/\varepsilon,\log(1/\delta)\Bigr{)} \Bigr{)}\), whereby \(\varepsilon\) is the accuracy error and \(\delta\) is the confidence error of the FPRAS. Note that the running time is polynomial in the input length, as any description of the Bayes net \(P\) has size at least \(n+\ell^{d+1}\).
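A direct Monte-Carlo sketch of this estimator is shown below; `P_mass` is assumed to evaluate the Bayes net probability of a point, and 0-indexed symbols are used for convenience.

```python
import random

def estimate_tv_to_uniform(P_mass, n, ell, m):
    """Monte-Carlo estimate of d_TV(P, U) = E_{x ~ U}[max(0, P(x) * ell^n - 1)],
    where P_mass(x) evaluates the Bayes net probability of a tuple x in [ell]^n."""
    total = 0.0
    for _ in range(m):
        x = tuple(random.randrange(ell) for _ in range(n))  # x ~ U over [ell]^n
        total += max(0.0, P_mass(x) * ell ** n - 1.0)
    return total / m
```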
## 3 Preliminaries
We use \([n]\) to denote the set \(\{1,\ldots,n\}\) and \(\log\) to denote \(\log_{2}\). Throughout the paper, we shall assume that all probabilities are represented as rational numbers of the form \(a/b\). We denote the uniform distribution by \(\mathbb{U}\).
The following concentration inequality will be useful in our proofs.
**Lemma 7** (Hoeffding's inequality).: _Let \(X_{1},\ldots,X_{n}\) be independent random variables such that \(a_{i}\leq X_{i}\leq b_{i}\) for all \(1\leq i\leq n\). Then_
\[\mathbf{Pr}\!\left[\left|\sum_{i=1}^{n}X_{i}-\mathbf{E}\!\left[\sum_{i=1}^{n}X _{i}\right]\right|\geq t\right]\leq 2\exp\!\left(-\frac{2t^{2}}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}} \right).\]
We shall use the following notion of an approximation algorithm.
**Definition 8** (Fpras).: A function \(f:\left\{0,1\right\}^{*}\rightarrow\mathbb{R}\) admits a _fully polynomial-time randomized approximation scheme (FPRAS)_ if there is a _randomized_ algorithm \(\mathcal{A}\) such that for all \(n\) and all inputs \(x\in\left\{0,1\right\}^{n}\), \(\varepsilon>0\), and \(\delta>0\), \(\mathcal{A}\) outputs a \((1+\varepsilon)\)-relative approximation of \(f(x)\), i.e., a value \(v\) that lies in the interval \([f(x)/(1+\varepsilon),(1+\varepsilon)f(x)]\), with probability \(1-\delta\). The running time of \(\mathcal{A}\) is polynomial in \(n,1/\varepsilon,1/\delta\).
### Bayes nets
For a directed acyclic graph (DAG) \(G\) and a node \(v\) in \(G\), let \(\Pi(v)\) denote the set of parents of \(v\).
**Definition 9** (Bayes nets).: A _Bayes net_ is specified by a directed acyclic graph (DAG) over a vertex set \([n]\) and a collection of probability distributions over symbols in \([\ell]\), as follows. Each vertex \(i\) is associated with a random variable \(X_{i}\) whose range is \([\ell]\). Let \(\Pi(X_{i})\) denote the set of random variables associated with \(\Pi(i)\) (by overloading the notation \(\Pi\)). Each node \(i\) of \(G\) has a CPT (Conditional Probability Table) that describes the following: For every \(x\in[\ell]\) and every \(y\in[\ell]^{k}\), where \(k\) is the size of \(\Pi(i)\), the CPT has the value of \(\Pr[X_{i}=x|\Pi(X_{i})=y]\) stored. Given such a Bayes net, its associated probability distribution \(P\) is given by the following: For all \(x\in[\ell]^{n}\),
\[P(x)=\underset{P}{\mathbf{Pr}}[X=x]=\prod_{i=1}^{n}\underset{P}{\mathbf{Pr}} \Big{[}X_{i}=x_{i}|X_{\Pi(X_{i})}=x_{\Pi(X_{i})}\Big{]}\,.\]
Here \(X\) is the joint distribution \((X_{1},\ldots,X_{n})\) and \(x_{\Pi(X_{i})}\) is the projection of \(x\) to the indices in \(\Pi(i)\).
Note that \(P(x)\) can be computed in linear time by using the CPTs of \(P\) to retrieve the probabilities \(\mathbf{Pr}_{P}\Big{[}X_{i}=x_{i}|X_{\Pi(X_{i})}=x_{\Pi(X_{i})}\Big{]}\).
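As an illustration, \(P(x)\) can be evaluated by a single pass over the (topologically ordered) nodes, as in the following sketch; 0-indexed symbols are used for convenience.

```python
def bayes_net_prob(x, parents, cpt):
    """P(x) for a Bayes net given as:
       parents[i] -- tuple of the parent indices of node i,
       cpt[i][(x_i, x_pa)] -- Pr[X_i = x_i | X_{Pi(i)} = x_pa].
    Runs in time linear in the number of nodes, as noted above."""
    prob = 1.0
    for i, xi in enumerate(x):
        pa = tuple(x[j] for j in parents[i])
        prob *= cpt[i][(xi, pa)]
    return prob

# A two-node example: X_1 ~ Ber(0.5), and X_2 | X_1 given by a 2x2 CPT.
cpt = [{(0, ()): 0.5, (1, ()): 0.5},
       {(0, (0,)): 0.9, (1, (0,)): 0.1, (0, (1,)): 0.2, (1, (1,)): 0.8}]
print(bayes_net_prob((1, 1), [(), (0,)], cpt))  # 0.5 * 0.8 = 0.4
```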
An important and useful notion is that of the moralization of a Bayes net.
**Definition 10** (Moralization of Bayes nets).: Let \(B\) be a Bayes net over a DAG \(G\). The _moralization of \(B\)_ is the undirected graph that is obtained from \(G\) as follows. For every node \(u\) of \(G\) and any pair \((v,w)\) of its parents \(\Pi(u)\), if \(v\) and \(w\) are not connected by an edge in \(G\), add the edge \((v,w)\). (Note that after this step the parents of every node of \(G\) form a clique.) Finally, make all edges of \(G\) undirected.
We shall require the following simple observation.
**Lemma 11**.: _Given a Bayes net over \(n\) nodes, its moralization can be computed in time \(O(\operatorname{poly}(n))\)._
Proof.: Let \(B\) be a Bayes net over a DAG \(G\) that has \(n\) nodes. Let \(v\) be a node of \(G\) and let \(\Pi(v)\) be the set of the parents of \(v\). We can construct a clique among the nodes of \(\Pi(v)\) in time \(O\big{(}n^{2}\big{)}\), since \(|\Pi(v)|\leq n\). Therefore we can construct all of the required cliques in time \(n\cdot O(n^{2})=O(n^{3})\). Finally, we can make all directed edges of \(G\) undirected in time \(O\big{(}n^{2}\big{)}\). This yields a total running time of \(O\big{(}n^{3}\big{)}\).
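A sketch of this moralization procedure, using `networkx` as a stand-in graph representation, is given below.

```python
import networkx as nx
from itertools import combinations

def moralize(dag: nx.DiGraph) -> nx.Graph:
    """Moralization: connect ("marry") the parents of every node, then drop directions."""
    moral = nx.Graph()
    moral.add_nodes_from(dag.nodes)
    moral.add_edges_from(dag.edges)  # directed edges become undirected
    for node in dag.nodes:
        parents = list(dag.predecessors(node))
        moral.add_edges_from(combinations(parents, 2))  # clique on the parent set
    return moral

dag = nx.DiGraph([(1, 3), (2, 3)])      # a v-structure: 1 -> 3 <- 2
print(sorted(moralize(dag).edges()))    # [(1, 2), (1, 3), (2, 3)]
```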
### Total variation distance
The following notion of distance is central in this work.
**Definition 12** (Total variation distance).: For probability distributions \(P,Q\) over a finite sample space \(\mathcal{D}\), the _total variation distance_ of \(P\) and \(Q\) is
\[d_{\mathrm{TV}}(P,Q)=\max_{S\subseteq\mathcal{D}}(P(S)-Q(S))\,.\]
Note that \(d_{\mathrm{TV}}(P,Q)\) also equals \(\frac{1}{2}\sum_{w\in\mathcal{D}}|P(w)-Q(w)|=\sum_{w\in\mathcal{D}}\max(0,P(w) -Q(w))\).
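As an illustration, the following sketch computes the TV distance of two explicitly given distributions via the half-\(\ell_{1}\) formula above; it is of course only feasible for small domains.

```python
def tv_distance(p, q):
    """d_TV(P, Q) = (1/2) * sum_w |P(w) - Q(w)| for distributions given as dicts."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in support)

print(tv_distance({0: 1/3, 1: 2/3}, {0: 2/3, 1: 1/3}))  # 1/3 (approximately 0.3333)
```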
### Probabilistic inference
The notions of probabilistic inference and probabilistic inference oracle (for Bayes nets) are central in this work.
For us, _probabilistic inference_ is the following computational task: Given (a representation of) random variables \(X_{1},\ldots,X_{n}\) and (a representation of) sets \(S_{1},\ldots,S_{n}\) such that for all \(i\) the set \(S_{i}\) is a subset of the range of \(X_{i}\), compute the probability \(\mathbf{Pr}[X_{1}\in S_{1},\ldots,X_{n}\in S_{n}]\).3
Footnote 3: Note that a notion of probabilistic inference that has previously been considered [1] is the following: Given random variables \(X_{1},\ldots,X_{n}\), a set \(I=\{i_{1},\cdots,i_{k}\}\subseteq[n]\), values \(x_{i_{1}},\ldots,x_{i_{k}}\) that belong to the ranges of \(X_{i_{1}},\ldots,X_{i_{k}}\), respectively, and an event \(E\), compute the probability \(\mathbf{Pr}[(X_{i_{1}},\ldots,X_{i_{k}})=(x_{i_{1}},\ldots,x_{i_{k}})\,|E]\).
Let us now define probabilistic inference (oracle) queries.
**Definition 13** (Probabilistic inference query over Bayes nets).: A _probabilistic inference query_ takes a description of a Bayes net distribution \(P\) over \(n\) nodes and alphabet size \(\ell\) and descriptions of sets \(S_{1},\ldots,S_{n}\), where for all \(1\leq i\leq n\), \(S_{i}\subseteq[\ell]\), and returns in time \(O(1)\) the value of \(\mathbf{Pr}_{P}[X_{1}\in S_{1},\ldots,X_{n}\in S_{n}]\).
### Treewidth and tree decompositions
We require the definition of treewidth.
**Definition 14**.: A _tree decomposition of an undirected graph_\(G=(V,E)\) is a tree \(T\) with nodes \(X_{1},\ldots,X_{n}\), where each \(X_{i}\) is a subset of \(V\), satisfying the following properties (the term node is used to refer to a vertex of \(T\) to avoid confusion with vertices of \(G\)):
1. The union of all sets \(X_{i}\) equals \(V\). That is, each graph vertex is contained in at least one tree node.
2. If \(X_{i}\) and \(X_{j}\) both contain a vertex \(v\), then all nodes \(X_{k}\) of \(T\) in the (unique) path between \(X_{i}\) and \(X_{j}\) contain \(v\) as well. Equivalently, the tree nodes containing vertex \(v\) form a connected subtree of \(T\).
3. For every edge \((v_{1},v_{2})\) in the graph, there is a subset \(X_{i}\) that contains both \(v_{1}\) and \(v_{2}\). That is, vertices are adjacent in the graph only when the corresponding subtrees have a node in common.
The _width of a tree decomposition_ is the size of its largest set \(X_{i}\) minus one. The _treewidth \(\operatorname{tw}(G)\) of a graph \(G\)_ is the minimum width among all possible tree decompositions of \(G\).
We shall also extend the notion of treewidth to Bayes nets, as follows.
**Definition 15**.: The _treewidth of a Bayes net_ is defined to be equal to the treewidth of its moralization.
We require the following two theorems, Theorem 16 and Theorem 17, respectively; Theorem 16 is about a tree decomposition algorithm and Theorem 17 is about the variable elimination algorithm.
**Theorem 16** (Tree decomposition [14]).: _There is a \(O\big{(}w3^{3w}n^{2}\big{)}\)-time algorithm that finds a tree decomposition of width \(4w+1\), if the treewidth of the input graph is at most \(w\)._
We will make use of the variable elimination algorithm to efficiently implement probabilistic inference queries for bounded treewidth Bayes nets.
**Theorem 17** (Variable elimination; following Zhang and Poole [13]).: _There is an algorithm, called the variable elimination algorithm, for the following task: Given a Bayes net \(B\) over variables \(X_{1},\ldots,X_{n}\in[\ell]\), sets \(S_{1},\ldots,S_{n}\subseteq[\ell]\), the moralization \(M_{B}\) of \(B\), and a tree decomposition \(\mathcal{T}\) of width \(w\) of \(M_{B}\), compute the probability \(\mathbf{Pr}_{B}[X_{1}\in S_{1},\ldots,X_{n}\in S_{n}]\). The running time of this algorithm is \(O(n\ell^{w})\)._
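As an illustration, the following sketch implements the special case of a Bayes net over a directed path (treewidth 1), where variable elimination reduces to a single forward pass of messages; the general algorithm instead works over an arbitrary tree decomposition.

```python
def path_inference(prior, cond, sets):
    """Pr[X_1 in sets[0], ..., X_n in sets[n-1]] for a Bayes net over a directed path.
    prior[b] = Pr[X_1 = b]; cond[i-1][c][b] = Pr[X_{i+1} = b | X_i = c]."""
    msg = {b: p for b, p in prior.items() if b in sets[0]}
    for i, table in enumerate(cond, start=1):
        # marginalize out X_i while restricting X_{i+1} to sets[i]
        msg = {b: sum(msg[c] * table[c][b] for c in msg) for b in sets[i]}
    return sum(msg.values())

# Two binary variables: X_1 ~ Ber(0.5), and X_2 flips X_1 with probability 0.1.
prior = {0: 0.5, 1: 0.5}
cond = [{0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}]
print(path_inference(prior, cond, [{0, 1}, {1}]))  # Pr[X_2 = 1] = 0.5
```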
## 4 Structure-preserving reduction from TV distance estimation to probabilistic inference
In this section, we prove Theorem 1 and Corollary 3. In the following, let \(T(G,\ell)\) be the running time of some implementation of a probabilistic inference oracle for a Bayes net over a DAG \(G\) that has alphabet size \(\ell\).
**Theorem 1** (Formal).: _There is a polynomial-time randomized algorithm that takes a DAG \(G\), two Bayes nets \(P\) and \(Q\) over \(G\) (as CPTs) that have alphabet size \(\ell\), and parameters \(\varepsilon,\delta\) as inputs and behaves as follows._
_The algorithm makes probabilistic inference queries for a Bayes net over the same DAG \(G\) that has alphabet size \(\ell^{2}\) and outputs an \((1+\varepsilon)\)-relative approximation of \(d_{\mathrm{TV}}(P,Q)\) with probability at least \(1-\delta\). The running time of this algorithm is \(T(G,\ell^{2})\cdot O\big{(}n^{3}\varepsilon^{-2}\ell\log\delta^{-1}\big{)}\) and the number of its probabilistic inference queries is \(O\big{(}n^{3}\varepsilon^{-2}\ell\log\delta^{-1}\big{)}\)._
The rest of the section is devoted to proving Theorem 1 and is organized as follows. We first introduce the ingredients that are necessary for describing the algorithm. In Section 4.1, we show how the algorithm can be implemented using probabilistic inference queries. Finally, in Section 4.2 we establish its correctness.
Let \(P\) and \(Q\) be two Bayes net distributions defined over a DAG \(G\) with \(n\) nodes and alphabet \([\ell]\). Without loss of generality, assume that the nodes are topologically ordered as in the sequence \(1,2,\ldots,n\).
Our approach is to define an estimator function \(f\) and a distribution \(\pi\) so that \(\mathbf{E}_{\pi}[f]=d_{\mathrm{TV}}(P,Q)/Z\), where \(Z\) is a normalization constant. The algorithm proceeds by estimating \(\mathbf{E}_{\pi}[f]\), multiplying the estimate by \(Z\), and returning the result. It uses probabilistic inference queries to compute \(Z\) and to sample from the distribution \(\pi\).
Let \(w\) be an element of the sample space, i.e., an \(n\)-symbol string over \([\ell]\). Given \(1\leq i\leq n\), let \(\Pi(i)\) denote the set of parents of \(i\) in \(G\) and let \(w_{\Pi(i)}\) denote the projection of \(w\) onto the parents of node \(i\) in \(G\). We first define a function \(h\) over \([\ell]^{n}\times[n]\) as follows:
\[h(w,i):=\min\Bigl{(}P_{i|\Pi(i)}\Bigl{(}w_{i}|w_{\Pi(i)}\Bigr{)}\,,Q_{i|\Pi(i )}\Bigl{(}w_{i}|w_{\Pi(i)}\Bigr{)}\Bigr{)}\,.\]
Descriptions of \(f\), \(Z\), and \(\pi\). The _estimator function_ \(f\) is defined as follows:
\[f(w):=\frac{\max(0,P(w)-Q(w))}{g(w)}\qquad\text{where}\qquad g(w):=P(w)-\prod_{ i=1}^{n}h(w,i)\]
for all \(w\). It is straightforward to show that \(f\) is computable in time \(O(n)\). We define \(Z:=\sum_{w\in[\ell]^{n}}g(w)\) to be a normalization constant. Finally, the distribution \(\pi\) is specified by the probability function \(\pi(w):=g(w)/Z\) for all \(w\).
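The following is a minimal Python sketch (not from the paper) of how \(f\) can be evaluated in time \(O(n)\). The encoding of the CPTs is a hypothetical choice made only for illustration: `parents[i]` is the tuple of parent indices of node \(i\), and `cpt_P[i][pa][b]` equals \(P_{i|\Pi(i)}(b\,|\,pa)\), and similarly for \(Q\).

```python
def evaluate_f(w, parents, cpt_P, cpt_Q):
    """Evaluate f(w) = max(0, P(w) - Q(w)) / g(w) in a single O(n) pass.

    Assumes the hypothetical CPT encoding described above; w is a tuple of
    n symbols from range(ell).
    """
    P_w, Q_w, H_w = 1.0, 1.0, 1.0
    for i, pa_idx in enumerate(parents):
        pa = tuple(w[j] for j in pa_idx)
        p = cpt_P[i][pa][w[i]]
        q = cpt_Q[i][pa][w[i]]
        P_w *= p
        Q_w *= q
        H_w *= min(p, q)            # the factor h(w, i)
    g_w = P_w - H_w                  # g(w) = P(w) - prod_i h(w, i)
    # on the support of pi we have g(w) > 0; the guard below is only a convention
    return max(0.0, P_w - Q_w) / g_w if g_w > 0 else 0.0
```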
Description of \(\mathcal{L}\). We now define a Bayes net distribution \(\mathcal{L}\) over the graph \(G\) which is used to make inference queries by the algorithm. The distribution \(\mathcal{L}\) is over the alphabet \([\ell]^{2}\) and is a joint distribution \((X,Y)\) where \(X\) and \(Y\) take values over \([\ell]^{n}\). We specify a CPT for \((X,Y)\). For this, we need to specify for every \(i\) and \(b,z\in[\ell]\) the probability \(\Pr[(X_{i},Y_{i})=(b,z)]\) conditioned on the values \(\Pi(i)\) take. We will first describe the probability where both \(X_{i}\) and \(Y_{i}\) take the same value \(b\). For every \(c_{1},c_{2}\in[\ell]\),
\[\mathbf{Pr}\Bigl{[}(X_{i},Y_{i})=(b,b)\,|\,\Bigl{(}X_{\Pi(i)},Y_{\Pi(i)}\Bigr{)} =(c_{1},c_{2})\Bigr{]}=\min\Bigl{(}P_{i|\Pi(i)}(b|c_{1})\,,Q_{i|\Pi(i)}(b|c_{2 })\Bigr{)}\,.\]
Define the remaining probabilities to ensure that the marginal \(X\) is distributed according to \(P\). That is, for every \(z\neq b\) assign \(\mathbf{Pr}\Bigl{[}(X_{i},Y_{i})=(b,z)\,|\,\Bigl{(}X_{\Pi(i)},Y_{\Pi(i)}\Bigr{)} =(c_{1},c_{2})\Bigr{]}\) so that the following holds:
\[\sum_{z:z\neq b} \mathbf{Pr}\Bigl{[}(X_{i},Y_{i})=(b,z)\,|\,\Bigl{(}X_{\Pi(i)},Y_{ \Pi(i)}\Bigr{)}=(c_{1},c_{2})\Bigr{]}\] \[=P_{i|\Pi(i)}(b|c_{1})-\min\Bigl{(}P_{i|\Pi(i)}(b|c_{1})\,,Q_{i| \Pi(i)}(b|c_{2})\Bigr{)}\]
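As an illustration of this construction, here is a minimal Python sketch (not the authors' code, and using the same hypothetical CPT encoding as in the previous sketch) that builds the CPTs of \(\mathcal{L}\) from those of \(P\) and \(Q\). The paper only requires that the off-diagonal entries preserve the \(X\)-marginal; the proportional completion below is one arbitrary but valid choice (the one used in a maximal coupling).

```python
from itertools import product

def coupling_cpt(ell, parents, cpt_P, cpt_Q):
    """Build the CPTs of L over pairs (X_i, Y_i); a sketch, not a reference
    implementation.  A parent assignment of node i in L is a tuple of pairs,
    one pair (value on the X side, value on the Y side) per parent."""
    cpt_L = []
    for i, par in enumerate(parents):
        table = {}
        for pa in product(product(range(ell), repeat=2), repeat=len(par)):
            c1 = tuple(pair[0] for pair in pa)   # parent values on the X side
            c2 = tuple(pair[1] for pair in pa)   # parent values on the Y side
            p, q = cpt_P[i][c1], cpt_Q[i][c2]
            # diagonal entries: the minimum of the two conditionals
            dist = {(b, b): min(p[b], q[b]) for b in range(ell)}
            # leftovers of the two conditionals; at each symbol at most one is > 0
            rp = [p[b] - dist[(b, b)] for b in range(ell)]
            rq = [q[z] - dist[(z, z)] for z in range(ell)]
            R = sum(rp)
            for b in range(ell):
                for z in range(ell):
                    if z != b:
                        # arbitrary valid completion: spread rp proportionally to rq,
                        # so the marginal of the first coordinate is exactly p
                        dist[(b, z)] = rp[b] * rq[z] / R if R > 0 else 0.0
            table[pa] = dist
        cpt_L.append(table)
    return cpt_L
```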
Now we are ready to describe the algorithm (see Algorithm 1).
```
Require: Bayes nets \(P,Q\) over DAG \(G\) with \(n\) nodes, parameters \(\varepsilon,\delta\).
Ensure: The output \(\mathtt{Est}\) is a \((1+\varepsilon)\)-approximation of \(d_{\mathrm{TV}}(P,Q)\), with probability at least \(1-\delta\).
1: Construct the Bayes net distribution \(\mathcal{L}\) over \(G\)
2: Compute \(Z\) by making one probabilistic inference query using \(\mathcal{L}\)
3: \(m\gets Cn^{2}\varepsilon^{-2}\log\delta^{-1}\) (for some sufficiently large \(C>0\))
4: \(F\gets 0\)
5: for \(i\gets 1\) to \(m\) do
6:   Sample \(w^{i}\sim\pi\) by making probabilistic inference queries using \(\mathcal{L}\)
7:   \(F\gets F+f(w^{i})\)
8: end for
9: \(\mathtt{Est}\gets ZF/m\)
10: return \(\mathtt{Est}\)
```
**Algorithm 1** FPRAS for \(d_{\mathrm{TV}}\) estimation using a probabilistic inference oracle.
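A compact Python rendering of Algorithm 1 might look as follows. This is a sketch only: `compute_Z` and `sample_from_pi` stand for the inference-query-based subroutines developed in Section 4.1 below (passed in here as zero-argument closures over the oracle), `evaluate_f` is the estimator function sketched earlier (wrapped so that it takes only `w`), and the constant `C` is a stand-in for the unspecified constant in line 3.

```python
import math

def estimate_tv(n, compute_Z, sample_from_pi, evaluate_f, eps, delta, C=4):
    """A sketch of Algorithm 1: multiply the empirical mean of f under pi by Z."""
    Z = compute_Z()                                  # one inference query on L
    m = math.ceil(C * n * n * math.log(1.0 / delta) / (eps * eps))
    F = 0.0
    for _ in range(m):
        w = sample_from_pi()                         # uses inference queries on L
        F += evaluate_f(w)
    return Z * F / m
```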
### Implementing the algorithm with probabilistic inference queries
This subsection is devoted to showing that sampling from the distribution \(\pi\) and computing the normalization constant \(Z\) can be done by making probabilistic inference queries. Recall that \(\mathcal{L}\) is a joint distribution \((X,Y)\). We start with the following crucial observation, which states that the marginal \(X\) (in \(\mathcal{L}\)) is distributed according to the distribution \(P\).
**Observation 18**.: _For every \(b,c_{1},c_{2}\in[\ell]\),_
\[\mathbf{Pr}\Big{[}X_{i}=b|\left(X_{\Pi(i)},Y_{\Pi(i)}\right)=(c_{1},c_{2}) \Big{]}=P_{i|\Pi(i)}(b|c_{1})\,.\]
Proof.: We have
\[\mathbf{Pr}\Big{[}X_{i}=b|\left(X_{\Pi(i)},Y_{\Pi(i)}\right)=(c_{ 1},c_{2})\Big{]} =\sum_{z\in[\ell]}\mathbf{Pr}\Big{[}(X_{i},Y_{i})=(b,z)\,|\left(X _{\Pi(i)},Y_{\Pi(i)}\right)=(c_{1},c_{2})\Big{]}\] \[=\min\Bigl{(}P_{i|\Pi(i)}(b|c_{1})\,,Q_{i|\Pi(i)}(b|c_{2})\Bigr{)}\] \[\qquad+P_{i|\Pi(i)}(b|c_{1})-\min\Bigl{(}P_{i|\Pi(i)}(b|c_{1})\,, Q_{i|\Pi(i)}(b|c_{2})\Bigr{)}\] \[=P_{i|\Pi(i)}(b|c_{1})\,.\qed\]
Therefore, \(X\) factorizes like \(P\) with its conditional probabilities matching that of \(P\) and hence \(X\sim P\). This realizes the notion of a local partial coupling as was earlier discussed in Section 2.1 and satisfies all three properties: (i) \(\mathcal{L}\) is a Bayes net distribution over the same DAG \(G\) (that is used to describe distributions \(P\) and \(Q\)), (ii) \(X\sim P\), and (iii) in the joint distribution \((X,Y)\), the conditional probabilities are equal to the minimum of the two conditional probabilities associated to \(P\) and \(Q\) as it is the case in standard couplings.
In Claim 19 we relate the normalization constant \(Z\) of the distribution \(\pi\) to the marginals \(X\) and \(Y\) of the distribution \(\mathcal{L}\). Moreover, we also relate the generalized normalization constant
\[Z_{b_{1},\ldots,b_{k}}:=\sum_{w:(w_{1},\ldots,w_{k})=(b_{1},\ldots,b_{k})}g(w )\,,\]
for \(b_{1},\ldots,b_{k}\in[\ell]\), to the marginals \(X\) and \(Y\) of the distribution \(\mathcal{L}\). We need this generalized normalization constant to show that sampling from the distribution \(\pi\) (Claim 21) can be efficiently done via probabilistic inference queries.
**Claim 19**.: _It is the case that_
\[Z=\mathbf{Pr}[X\neq Y]\qquad\text{and}\qquad Z_{b_{1},\ldots,b_{k}}=\mathbf{ Pr}[X\neq Y,X_{1}=b_{1},\ldots,X_{k}=b_{k}]\]
_for any \(b_{1},\ldots,b_{k}\in[\ell]\)._
Proof.: Since \(X\sim P\) and for that matter \(P(w)=\mathbf{Pr}[X=w]\), we have
\[g(w) =P(w)-\prod_{i=1}^{n}\min\Bigl{(}P_{i|\Pi(i)}\Bigl{(}w_{i}|w_{\Pi( i)}\Bigr{)}\,,Q_{i|\Pi(i)}\Bigl{(}w_{i}|w_{\Pi(i)}\Bigr{)}\Bigr{)}\] \[=P(w)-\prod_{i=1}^{n}\mathbf{Pr}\Big{[}(X_{i},Y_{i})=(w_{i},w_{i} )\,|\,\Bigl{(}X_{\Pi(i)},Y_{\Pi(i)}\Bigr{)}=\Bigl{(}w_{\Pi(i)},w_{\Pi(i)}\Bigr{)} \Bigr{]}\]
\[=P(w)-\mathbf{Pr}[X=Y=w]\] \[=\mathbf{Pr}[X=w]-\mathbf{Pr}[X=Y=w]\] \[=\mathbf{Pr}[X=w]-\mathbf{Pr}[Y=w|X=w]\cdot\mathbf{Pr}[X=w]\] \[=\mathbf{Pr}[X=w]\left(1-\mathbf{Pr}[Y=w|X=w]\,\right)\] \[=\mathbf{Pr}[X=w]\,\mathbf{Pr}[Y\neq w|X=w]\] \[=\mathbf{Pr}[X=w,Y\neq w]\,.\]
Therefore, we have that
\[Z=\sum_{w}g(w)=\mathbf{Pr}[X\neq Y]\]
and
\[Z_{b_{1},\dots,b_{k}} =\sum_{w:(w_{1},\dots,w_{k})=(b_{1},\dots,b_{k})}g(w)\] \[=\sum_{w:(w_{1},\dots,w_{k})=(b_{1},\dots,b_{k})}\mathbf{Pr}[X=w,Y\neq w]\] \[=\mathbf{Pr}[X\neq Y,X_{1}=b_{1},\dots,X_{k}=b_{k}]\,.\qed\]
The following claim says that \(Z\) and \(Z_{b_{1},\dots,b_{k}}\) can be easily computed given access to a probabilistic inference oracle for \(\mathcal{L}\).
**Claim 20**.: _It is the case that \(Z\) and \(Z_{b_{1},\dots,b_{k}}\) (for any \(b_{1},\dots,b_{k}\in[\ell]\)) can be computed in time \(O(1)\) by making \(O(1)\) probabilistic inference queries to the Bayes net distribution \(\mathcal{L}\)._
Proof.: Note that \(Z=\mathbf{Pr}[X\neq Y]\) is equal to \(1-\mathbf{Pr}[X=Y]\). Therefore it suffices to compute \(\mathbf{Pr}[X=Y]\) by using a probabilistic inference oracle. This can be done by observing that \(\mathbf{Pr}[X=Y]\) is equal to \(\mathbf{Pr}[(X_{1},Y_{1})\in S_{1},\dots,(X_{n},Y_{n})\in S_{n}]\) for \(S_{1}=\dots=S_{n}=\left\{(1,1)\,,\dots,(\ell,\ell)\right\}\).
Now note that \(Z_{b_{1},\dots,b_{k}}=\mathbf{Pr}[X\neq Y,X_{1}=b_{1},\dots,X_{k}=b_{k}]\) is equal to
\[\mathbf{Pr}[X_{1}=b_{1},\dots,X_{k}=b_{k}]-\mathbf{Pr}[X=Y,X_{1}=b_{1},\dots,X _{k}=b_{k}]\,.\]
What is left is to show how to compute these two probabilities by using a probabilistic inference oracle. We have that \(\mathbf{Pr}[X_{1}=b_{1},\dots,X_{k}=b_{k}]\) is equal to \(\mathbf{Pr}[(X_{1},Y_{1})\in S_{1},\dots,(X_{n},Y_{n})\in S_{n}]\) for \(S_{i}=\left\{(b_{i},1)\,,\dots,(b_{i},\ell)\right\}\) for all \(1\leq i\leq k\) and \(S_{k+1}=\dots=S_{n}=[\ell]^{2}\).
Similarly, we have that
\[\mathbf{Pr}[X=Y,X_{1}=b_{1},\dots,X_{k}=b_{k}]=\mathbf{Pr}[(X_{1},Y_{1})\in S _{1},\dots,(X_{n},Y_{n})\in S_{n}]\]
for \(S_{i}=\left\{(b_{i},b_{i})\right\}\) for all \(1\leq i\leq k\) and \(S_{k+1}=\dots=S_{n}=\left\{(1,1)\,,\dots,(\ell,\ell)\right\}\).
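The two queries in the proof of Claim 20 can be written down concretely. In the hypothetical interface below, `oracle(S)` takes a list of \(n\) sets of pairs over \([\ell]\times[\ell]\) and returns \(\mathbf{Pr}_{\mathcal{L}}[(X_{1},Y_{1})\in S_{1},\ldots,(X_{n},Y_{n})\in S_{n}]\); the sketch is an illustration, not part of the paper.

```python
def normalization_constant(oracle, n, ell, prefix=()):
    """Compute Z (prefix = ()) or Z_{b_1,...,b_k} (prefix = (b_1,...,b_k))
    with two probabilistic inference queries to the Bayes net L."""
    diag = {(c, c) for c in range(ell)}                      # event X_i = Y_i
    full = {(b, z) for b in range(ell) for z in range(ell)}  # no constraint
    k = len(prefix)
    # Pr[X_1 = b_1, ..., X_k = b_k]
    S_first = [{(prefix[i], z) for z in range(ell)} if i < k else full
               for i in range(n)]
    # Pr[X = Y, X_1 = b_1, ..., X_k = b_k]
    S_second = [{(prefix[i], prefix[i])} if i < k else diag for i in range(n)]
    return oracle(S_first) - oracle(S_second)
```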
We will now show that probabilistic inference queries allow for efficient sampling from \(\pi\).
**Claim 21**.: _Sampling from the distribution \(\pi\) can be implemented in time \(O(n\ell)\) by making \(O(n\ell)\) probabilistic inference queries._
Proof.: We describe how to sample from \(\pi\) iteratively, symbol by symbol. Assume that we have sampled the first \(k-1\) symbols, that is, assume that we have already sampled \(w_{1},\dots,w_{k-1}\) to be equal to \(b_{1},\dots,b_{k-1}\in[\ell]\). We describe now how to sample \(w_{k}\). For every possible value \(b\in[\ell]\) of \(w_{k}\), we compute the marginal
\[\mu_{b}:=\pi(b_{1},\dots,b_{k-1},b)=\frac{\sum_{w:(w_{1}\cdots w_{k})=(b_{1} \cdots b_{k-1}b)}g(w)}{Z}=\frac{Z_{b_{1}\cdots b_{k-1}b}}{Z}\]
by two invocations of Claim 20. Then, we sample \(w_{k}\) based on the values \(\{\mu_{b}\}_{b=1}^{\ell}\).
Let \(S(n)\) be the number of steps to sample \(n\) symbols from \(\pi\). The above procedure gives the recurrence relation \(S(n)=O(\ell)+S(n-1)\) which yields \(S(n)=O(n\ell)\). Since we perform at most two probabilistic inference queries for every coordinate and every symbol, the total number of probabilistic inference queries is equal to \(S(n)=O(n\ell)\).
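The coordinate-by-coordinate sampler of Claim 21 can then be sketched as follows, reusing `normalization_constant` from the previous sketch; again this is an illustration under the same hypothetical oracle interface.

```python
import random

def sample_from_pi(oracle, n, ell):
    """Draw w ~ pi symbol by symbol: the conditional weight of the next symbol b
    given the prefix (b_1, ..., b_{k-1}) is proportional to Z_{b_1,...,b_{k-1},b}.
    This makes 2*ell inference queries per coordinate, i.e. O(n*ell) in total."""
    w = []
    for _ in range(n):
        weights = [normalization_constant(oracle, n, ell, tuple(w) + (b,))
                   for b in range(ell)]
        w.append(random.choices(range(ell), weights=weights, k=1)[0])
    return tuple(w)
```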
### Analysis of the algorithm
Next, we establish some useful properties of the function \(f\) and the distribution \(\pi\).
**Claim 22**.: _For any \(w\), it is the case that \(0\leq f(w)\leq 1\)._
Proof.: We separately show \(0\leq f(w)\) and \(f(w)\leq 1\). To establish \(0\leq f(w)\), since the numerator is non-negative, it suffices to show that \(g(w)=P(w)-\prod_{i=1}^{n}h(w,i)\geq 0\) or equivalently \(P(w)\geq\prod_{i=1}^{n}h(w,i)\).
We have
\[P(w) =\prod_{i=1}^{n}P_{i|\Pi(i)}\Big{(}w_{i}|w_{\Pi(i)}\Big{)}\] \[\geq\prod_{i=1}^{n}\min\Bigl{(}P_{i|\Pi(i)}\Big{(}w_{i}|w_{\Pi(i) }\Big{)}\,,Q_{i|\Pi(i)}\Big{(}w_{i}|w_{\Pi(i)}\Big{)}\Bigr{)}=\prod_{i=1}^{n} h(w,i)\]
by the definition of \(h\).
For showing \(f(w)\leq 1\), it suffices to show that \(P(w)-Q(w)\leq g(w)\) (since \(0/g(w)=0\leq 1\)). Since \(g(w)=P(w)-\prod_{i=1}^{n}h(w,i)\), it suffices to show that \(Q(w)\geq\prod_{i=1}^{n}h(w,i)\). An argument identical to the above, where we showed that \(P(w)\geq\prod_{i=1}^{n}h(w,i)\), will show this.
We next relate the expected value of the function \(f\) with respect to the distribution \(\pi\) to \(d_{\mathrm{TV}}(P,Q)\).
**Claim 23**.: _It is the case that \(\mathbf{E}_{\pi}[f(w)]=d_{\mathrm{TV}}(P,Q)/Z\)._
Proof.: We have that \(\mathbf{E}_{\pi}[f(w)]\) is equal to
\[\underset{\pi}{\mathbf{E}}\Bigl{[}\frac{\max(0,P(w)-Q(w))}{g(w)}\Bigr{]} =\sum_{w}\pi(w)\,\frac{\max(0,P(w)-Q(w))}{g(w)}\] \[=\sum_{w}\frac{g(w)}{Z}\frac{\max(0,P(w)-Q(w))}{g(w)}\] \[=\frac{1}{Z}\sum_{w}\max(0,P(w)-Q(w))\] \[=\frac{d_{\mathrm{TV}}(P,Q)}{Z}.\qed\]
We need the following lemma, which ensures that the estimand \(\mathbf{E}_{\pi}[f]=d_{\mathrm{TV}}(P,Q)/Z\) is large enough (at least \(1/(2n)\)) to facilitate Monte Carlo sampling.
**Lemma 24**.: _It is the case that \(Z\leq 2n\cdot d_{\mathrm{TV}}(P,Q)\)._
Proof.: By Claim 19, it suffices to show that \(\mathbf{Pr}[X\neq Y]\leq 2n\cdot d_{\mathrm{TV}}(P,Q)\). We split the event \((X\neq Y)\) into \(n\) disjoint events \(\{E_{i}\}_{i=1}^{n}\). Without loss of generality, assume that \(1,2,\ldots,n\) is the
topological ordering of the vertices of \(G\). Event \(E_{i}\) is defined as \((\bigwedge_{1\leq j\leq i-1}X_{j}=Y_{j})\wedge(X_{i}\neq Y_{i})\). Note that the \(E_{i}\)'s are disjoint. Thus \(\mathbf{Pr}\left[X\neq Y\right]=\sum_{i}\mathbf{Pr}\left[E_{i}\right]\). We have that
\[\mathbf{Pr}\left[E_{i}\right]\leq\mathbf{Pr}\left[(X_{i}\neq Y_{i})\wedge(X_{ \Pi(i)}=Y_{\Pi(i)})\right]=\sum_{\sigma}\mathbf{Pr}\left[(X_{i}\neq Y_{i}) \wedge(X_{\Pi(i)},Y_{\Pi(i)})=(\sigma,\sigma)\right]\]
where \(\sigma\) is an assignment for \(\Pi(i)\) (note that the length of \(\sigma\) is equal to the in-degree of \(i\)). Henceforth, for notational brevity, we shall omit the dependence on \(i\). Thus,
\[\mathbf{Pr}[X\neq Y] =\sum_{i}\mathbf{Pr}\left[E_{i}\right]\] \[\leq\sum_{i}\sum_{\sigma}\mathbf{Pr}\Big{[}X_{i}\neq Y_{i}\wedge \Big{(}X_{\Pi(i)},Y_{\Pi(i)}\Big{)}=(\sigma,\sigma)\Big{]}\] \[=\sum_{i}\sum_{\sigma}\mathbf{Pr}\Big{[}X_{i}\neq Y_{i}\Big{|} \left(X_{\Pi(i)},Y_{\Pi(i)}\right)=(\sigma,\sigma)\Big{]}\,\mathbf{Pr}\Big{[} \Big{(}X_{\Pi(i)},Y_{\Pi(i)}\Big{)}=(\sigma,\sigma)\Big{]}\,.\]
We require the following claim.
**Claim 25**.: _For any \(\sigma\), we have_
\[\mathbf{Pr}\Big{[}X_{i}\neq Y_{i}|\left(X_{\Pi(i)},Y_{\Pi(i)}\right)=(\sigma, \sigma)\Big{]}=d_{\mathrm{TV}}\Big{(}P_{i|\Pi(i)}(\cdot|\sigma)\,,Q_{i|\Pi(i)}( \cdot|\sigma)\Big{)}\,.\]
Proof.: We have that \(\mathbf{Pr}\Big{[}X_{i}\neq Y_{i}|\left(X_{\Pi(i)},Y_{\Pi(i)}\right)=(\sigma, \sigma)\Big{]}\) is equal to
\[1-\mathbf{Pr}\Big{[}X_{i}=Y_{i}|\left(X_{\Pi(i)},Y_{\Pi(i)} \right)=(\sigma,\sigma)\Big{]} =1-\sum_{c\in[\ell]}\mathbf{Pr}\Big{[}(X_{i},Y_{i})=(c,c)\,|\left( X_{\Pi(i)},Y_{\Pi(i)}\right)=(\sigma,\sigma)\Big{]}\] \[=1-\sum_{c\in[\ell]}\min\Bigl{(}P_{i|\Pi(i)}(c|\sigma)\,,Q_{i|\Pi (i)}(c|\sigma)\Bigr{)}\] \[=\sum_{c\in[\ell]}P_{i|\Pi(i)}(c|\sigma)-\sum_{c\in[\ell]}\min \Bigl{(}P_{i|\Pi(i)}(c|\sigma)\,,Q_{i|\Pi(i)}(c|\sigma)\Bigr{)}\] \[=\sum_{c\in[\ell]}\max\Bigl{(}0,P_{i|\Pi(i)}(c|\sigma)-Q_{i|\Pi(i) }(c|\sigma)\Bigr{)}\] \[=d_{\mathrm{TV}}\Big{(}P_{i|\Pi(i)}(\cdot|\sigma)\,,Q_{i|\Pi(i)}( \cdot|\sigma)\Big{)}\,.\qed\]
By Claim 25 we have that \(\mathbf{Pr}\left[X\neq Y\right]\) is at most
\[\sum_{i}\sum_{\sigma}\mathbf{Pr}\Big{[}\Big{(}X_{\Pi(i)},Y_{\Pi(i)}\Big{)}=(\sigma,\sigma)\Big{]}\,d_{\mathrm{TV}}\Big{(}P_{i|\Pi(i)}(\cdot|\sigma)\,,Q_{i|\Pi(i)}(\cdot|\sigma)\Big{)}\] \[\qquad\leq\sum_{i}\sum_{\sigma}\mathbf{Pr}\Big{[}X_{\Pi(i)}=\sigma\Big{]}\,\frac{1}{2}\sum_{c}\Big{|}P_{i|\Pi(i)}(c|\sigma)-Q_{i|\Pi(i)}(c|\sigma)\Big{|}\] \[\qquad\leq\sum_{i}\sum_{\sigma}P_{\Pi(i)}(\sigma)\frac{1}{2}\sum_{c}\Big{|}P_{i|\Pi(i)}(c|\sigma)-Q_{i|\Pi(i)}(c|\sigma)\Big{|}\ \ \text{(since $X \sim P$ by Observation 18)}\] \[=\sum_{i}\sum_{\sigma}\frac{1}{2}\sum_{c}\Big{|}P_{\Pi(i)}(\sigma)\,P_{i|\Pi(i)}(c|\sigma)-P_{\Pi(i)}(\sigma)\,Q_{i|\Pi(i)}(c|\sigma)\Big{|}\] \[=\sum_{i}\sum_{\sigma}\frac{1}{2}\sum_{c}\Big{|}P_{\Pi(i)}(\sigma)\,P_{i|\Pi(i)}(c|\sigma)-Q_{\Pi(i)}(\sigma)\,Q_{i|\Pi(i)}(c|\sigma)
\[+Q_{\Pi(i)}(\sigma)\,Q_{i|\Pi(i)}(c|\sigma)-P_{\Pi(i)}(\sigma)\,Q_{i| \Pi(i)}(c|\sigma)\Big{|}\] \[\leq\sum_{i}\sum_{\sigma}\frac{1}{2}\sum_{c}\Big{|}P_{\Pi(i)}( \sigma)\,P_{i|\Pi(i)}(c|\sigma)-Q_{\Pi(i)}(\sigma)\,Q_{i|\Pi(i)}(c|\sigma) \Big{|}\] \[\qquad+\sum_{i}\sum_{\sigma}\frac{1}{2}\sum_{c}\Big{|}Q_{\Pi(i)} (\sigma)\,Q_{i|\Pi(i)}(c|\sigma)-P_{\Pi(i)}(\sigma)\,Q_{i|\Pi(i)}(c|\sigma) \Big{|}\] \[=\sum_{i}\sum_{\sigma}\frac{1}{2}\sum_{c}\Big{|}P_{i,\Pi(i)}(c, \sigma)-Q_{i,\Pi(i)}(c,\sigma)\Big{|}\] \[\qquad+\sum_{i}\sum_{\sigma}\frac{1}{2}\Big{|}Q_{\Pi(i)}(\sigma )-P_{\Pi(i)}(\sigma)\Big{|}\sum_{c}Q_{i|\Pi(i)}(c|\sigma)\] \[=\sum_{i}\frac{1}{2}\sum_{\sigma}\sum_{c}\Big{|}P_{i,\Pi(i)}(c, \sigma)-Q_{i,\Pi(i)}(c,\sigma)\Big{|}\] \[\qquad+\sum_{i}\sum_{\sigma}\frac{1}{2}\Big{|}Q_{\Pi(i)}(\sigma )-P_{\Pi(i)}(\sigma)\Big{|}\] \[=\sum_{i}d_{\mathrm{TV}}\Big{(}P_{i,\Pi(i)},Q_{i,\Pi(i)}\Big{)}+ \sum_{i}d_{\mathrm{TV}}(P_{i},Q_{i})\] \[\leq 2n\cdot d_{\mathrm{TV}}(P,Q).\]
The last inequality follows because the inequalities \(d_{\mathrm{TV}}\Big{(}P_{i,\Pi(i)},Q_{i,\Pi(i)}\Big{)}\leq d_{\mathrm{TV}}(P,Q)\) and \(d_{\mathrm{TV}}(P_{i},Q_{i})\leq d_{\mathrm{TV}}(P,Q)\) hold (the total variation distance between marginals of two distributions is at most the total variation distance between the distributions themselves).
We are now ready to prove the correctness and provide a running time bound for Algorithm 1. We have, from Hoeffding's inequality (Lemma 7), that
\[\mathbf{Pr}[|\mathsf{Est}-d_{\mathrm{TV}}(P,Q)|>\varepsilon d_{ \mathrm{TV}}(P,Q)] =\mathbf{Pr}\Bigg{[}\Bigg{|}\frac{Z}{m}\sum_{i=1}^{m}f\Big{(}w^{i }\Big{)}-Z\,\underset{\pi}{\mathbf{E}}[f(w)]\Bigg{|}>\varepsilon d_{\mathrm{ TV}}(P,Q)\Bigg{]}\] \[=\mathbf{Pr}\Bigg{[}\Bigg{|}\sum_{i=1}^{m}f\Big{(}w^{i}\Big{)}-m \,\underset{\pi}{\mathbf{E}}[f(w)]\Bigg{|}>\frac{m\varepsilon}{Z}d_{\mathrm{ TV}}(P,Q)\Bigg{]}\] \[\leq 2\exp\Bigg{(}-\frac{2m^{2}\varepsilon^{2}d_{\mathrm{TV}}^{2}( P,Q)}{Z^{2}\sum_{i=1}^{m}\left(0-1\right)^{2}}\Bigg{)}\] \[\leq 2\exp\Bigg{(}-\frac{2m^{2}\varepsilon^{2}d_{\mathrm{TV}}^{2}( P,Q)}{4n^{2}d_{\mathrm{TV}}^{2}(P,Q)\,m}\Bigg{)}\] \[=2\exp\Bigg{(}-\frac{m\varepsilon^{2}}{2n^{2}}\Bigg{)}\]
which is at most \(\delta\) whenever \(m=\Omega\big{(}n^{2}\varepsilon^{-2}\log\delta^{-1}\big{)}\). The second inequality follows from Lemma 24.
Thus the running time of Algorithm 1 is \(O(mn\ell)=O\big{(}n^{3}\varepsilon^{-2}\ell\log\delta^{-1}\big{)}\), since we draw \(m\) samples from \(\pi\), we can sample from \(\pi\) in time \(O(n\ell)\), and evaluate \(f\) in time \(O(n)\). Finally, the number of probabilistic inference queries is at most \(O\big{(}n^{3}\varepsilon^{-2}\ell\log\delta^{-1}\big{)}\).
### Application: An FPRAS for estimating the TV distance between Bayes nets of bounded treewidth
In this subsection we prove Corollary 3.
**Corollary 3** (Formal).: _There is an FPRAS for estimating the TV distance between two Bayes nets of treewidth \(w=O(\log n)\) and alphabet size \(\ell=O(1)\), which are defined over the same DAG of \(n\) nodes. In particular, if \(\varepsilon\) and \(\delta\) are the accuracy and confidence errors of the FPRAS, respectively, the FPRAS runs in time \(\operatorname{poly}(n)\cdot O(\varepsilon^{-2}\log\delta^{-1})\)._
The proof of Corollary 3 will follow from the lemma below, Lemma 26, and an application of Theorem 1. We first prove Lemma 26.
**Lemma 26**.: _Probabilistic inference is efficient for all Bayes nets over \(n\) variables which have alphabet size \(\ell=O(1)\) and treewidth \(O(\log n)\)._
Proof.: Let \(B\) be a Bayes net over variables \(X_{1},\ldots,X_{n}\) that has alphabet size \(\ell=O(1)\) and treewidth \(w=O(\log n)\). Let \(S_{1},\ldots,S_{n}\subseteq[\ell]\) be sets. The probabilistic inference task that we want to perform is to compute the probability \(\mathbf{Pr}_{B}[X_{1}\in S_{1},\ldots,X_{n}\in S_{n}]\).
First, we construct the moralization of \(B\) (see Definition 10), namely \(M_{B}\), in time \(O(\operatorname{poly}(n))\) by invoking Lemma 11. Then, we use Theorem 16 to compute a tree decomposition \(\mathcal{T}\) of \(M_{B}\) of width at most \(4w+1\leq 5w\) in time \(O(w3^{3w}n^{2})\). Finally, we use the variable elimination algorithm of Theorem 17 on \(B\), \(S_{1},\ldots,S_{n}\), \(M_{B}\), and \(\mathcal{T}\) to compute \(\mathbf{Pr}_{B}[X_{1}\in S_{1},\ldots,X_{n}\in S_{n}]\) in time \(O(n\ell^{5w})\).
The running time of this procedure is \(O(\operatorname{poly}(n))+O(w3^{3w}n^{2})+O(n\ell^{5w})=O(\operatorname{poly}( n))\), whereby we have used the facts that \(\ell=O(1)\) and \(w=O(\log n)\). This concludes the proof.
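For illustration, the first two steps of this pipeline (moralization and tree decomposition) can be sketched with the `networkx` library; this is an assumption of the sketch and not something used in the paper, and the decomposition routine below is a heuristic approximation rather than the exact algorithm of Theorem 16. The variable elimination step of Theorem 17 would then be run along the returned decomposition and is not implemented here.

```python
import networkx as nx
from networkx.algorithms.moral import moral_graph
from networkx.algorithms.approximation import treewidth_min_degree

def decomposition_for_inference(n, dag_edges):
    """Moralize the DAG of the Bayes net, then compute a tree decomposition.
    Returns the moral graph, the width found, and the decomposition tree,
    whose nodes are bags (frozensets of vertices)."""
    dag = nx.DiGraph()
    dag.add_nodes_from(range(n))
    dag.add_edges_from(dag_edges)
    moral = moral_graph(dag)            # connect co-parents, drop edge directions
    width, decomposition = treewidth_min_degree(moral)
    return moral, width, decomposition
```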
The proof of Corollary 3 now follows by invoking Theorem 1 for \(\ell=O(1)\) and \(T(G,\ell^{2})=O(\operatorname{poly}(n))\).
## 5 TV distance between a Bayes net and the uniform distribution
### \(\#\mathsf{P}\)-completeness
The main result of this subsection is Theorem 4. Recall that a function \(f\) from \(\{0,1\}^{*}\) to non-negative integers is in the class \(\#\mathsf{P}\) if there is a polynomial time non-deterministic Turing machine \(M\) so that for any \(x\), it is the case that \(f(x)\) is equal to the number of accepting paths of \(M(x)\).
We now prove Theorem 4.
Proof of Theorem 4.In what follows, we separately show membership in \(\#\mathsf{P}\) and \(\#\mathsf{P}\)-hardness.
##### Membership in \(\#\mathsf{P}\).
Let \(P\) be a Bayes net distribution over the Boolean domain \(\{0,1\}^{n}\). The goal is to design a nondeterministic machine \(\mathcal{N}\) so that the number of accepting paths of \(\mathcal{N}\) (normalized by an appropriate quantity) equals \(d_{\mathrm{TV}}(P,\mathbb{U})\). We will assume that the probabilities specified in the CPTs of the Bayes net for \(P\) are fractions. Let \(M\) be equal to \(2^{n}\) times the product of the denominators of all the probabilities in the CPTs. The non-deterministic machine \(\mathcal{N}\) first guesses an element \(i\in\{0,1\}^{n}\) in the sample space of \(P\), computes \(|P(i)-1/2^{n}|\) by using the CPTs, then guesses an integer \(0\leq z\leq M\), and finally accepts if and only if \(1\leq z\leq M|P(i)-1/2^{n}|\). (Note that \(M|P(i)-1/2^{n}|=|M\cdot P(i)-M/2^{n}|\) is an integer.) It follows that

\[d_{\mathrm{TV}}(P,\mathbb{U})=\frac{1}{2}\sum_{i\in\{0,1\}^{n}}\Big{|}P(i)-\frac{1}{2^{n}}\Big{|}=\frac{\text{number of accepting paths of }\mathcal{N}}{2M}\]

since the number of accepting paths of \(\mathcal{N}\) is equal to \(\sum_{i\in\{0,1\}^{n}}\left(M\left|P(i)-1/2^{n}\right|\right)\) which is equal to \(M\sum_{i\in\{0,1\}^{n}}\left|P(i)-1/2^{n}\right|\) or \(2Md_{\mathrm{TV}}(P,\mathbb{U})\).
#P-hardness. For the #P-hardness part, the proof gives a Turing reduction from the problem of counting the satisfying assignments of a CNF formula (which is #P-hard to compute) to computing the total variation distance between a Bayes net distribution and the uniform distribution. In what follows, by the graph of a formula \(F\) we mean the DAG that captures the circuit structure of \(F\), whereby the nodes are either AND, OR, NOT, or variable gates, and the edges correspond to wires connecting the gates.
Let \(F\) be a CNF formula viewed as a Boolean circuit. Assume \(F\) has \(n\) input variables \(x_{1},\ldots,x_{n}\) and \(m\) gates \(\Gamma=\{y_{1},\ldots,y_{m}\}\), where \(\Gamma\) is topologically sorted with \(y_{m}\) being the output gate. We will define a Bayes net distribution on some DAG \(G\) which, intuitively, is the graph of \(F\).
The vertex set of \(G\) is split into two sets \(\mathcal{X}\) and \(\mathcal{Y}\), and a node \(Z\). The set \(\mathcal{X}=\{X_{i}\}_{i=1}^{n}\) contains \(n\) nodes with node \(X_{i}\) corresponding to variable \(x_{i}\) and the set \(\mathcal{Y}=\{Y_{i}\}_{i=1}^{m}\) contains \(m\) nodes with each node \(Y_{i}\) corresponding to gate \(y_{i}\). So in total there are \(n+m+1\) nodes. There is a directed edge from node \(V_{i}\) to node \(V_{j}\) if the gate/variable corresponding to \(V_{i}\) is an input to \(V_{j}\).
The distribution \(P\) on \(G\) is given by a CPT defined as follows. Each \(X_{i}\) is a uniformly random bit. For each \(Y_{i}\), its CPT is deterministic: for each setting \(y_{j},y_{k}\) of its parents \(Y_{j},Y_{k}\), the variable \(Y_{i}\) takes the value that the gate \(y_{i}\) outputs on the inputs \(y_{j},y_{k}\). Finally, let \(Z\) be the value of \(Y_{m}\) OR-ed with a random bit.
Note that the formula \(F\) computes a Boolean function on the input variables. Let \(f:\{0,1\}^{n}\rightarrow\{0,1\}\) be this function. We extend \(f\) to \(\{0,1\}^{m}\) (i.e., \(f:\{0,1\}^{n}\rightarrow\{0,1\}^{m}\)) to also include the values of the intermediate gates.
With this notation in mind, for any binary string \(XYZ\) of length \(n+m+1\) it is the case that \(P\) has a probability \(0\) if \(Y\neq f(X)\). Let \(A:=\{x\mid F(x)=1\}\) and \(R:=\{x\mid F(x)=0\}\).
To finish the proof, we will write the number of satisfying assignments of \(F\), namely \(\left|A\right|\), as a polynomial-time computable function of \(d_{\mathrm{TV}}(P,\mathbb{U})\): We have
\[2\cdot d_{\mathrm{TV}}(P,\mathbb{U})=\sum_{X,Y,Z}\left|P-\mathbb{U}\right|=\underbrace{\sum_{\begin{subarray}{c}X,Y,Z\\ Y\neq f(X)\end{subarray}}\left|P-\mathbb{U}\right|}_{(1)}+\underbrace{\sum_{\begin{subarray}{c}X,Y,Z\\ Y=f(X)\end{subarray}}\left|P-\mathbb{U}\right|}_{(2)}.\]

For (1), every string with \(Y\neq f(X)\) has probability \(0\) under \(P\), and there are \(2^{n+m+1}-2^{n+1}\) such strings, so

\[(1)=\frac{2^{n+m+1}-2^{n+1}}{2^{n+m+1}}=1-\frac{1}{2^{m}}.\]

For (2), we split the sum according to whether \(X\in A\) or \(X\in R\), so that \((2)=(3)+(4)\), where \((3)\) is the sum over \(X\in A\) and \((4)\) is the sum over \(X\in R\). For (3) we have
\[\sum_{\begin{subarray}{c}X,f(X),Z\\ X\in A\end{subarray}}|P-\mathbb{U}|=\sum_{\begin{subarray}{c}X,f(X),0\\ X\in A\end{subarray}}|P-\mathbb{U}|+\sum_{\begin{subarray}{c}X,f(X),1\\ X\in A\end{subarray}}|P-\mathbb{U}|\] \[\qquad=\sum_{\begin{subarray}{c}X,f(X),0\\ X\in A\end{subarray}}\left|0-\frac{1}{2^{n+m+1}}\right|+\sum_{\begin{subarray}{ c}X,f(X),1\\ X\in A\end{subarray}}\left|\frac{1}{2^{n}}-\frac{1}{2^{n+m+1}}\right|=\frac{|A|} {2^{n+m+1}}+\frac{|A|\cdot(2^{m+1}-1)}{2^{n+m+1}}=\frac{|A|}{2^{n}}\]
and for (4) we have
\[\sum_{\begin{subarray}{c}X,f(X),Z\\ X\in R\end{subarray}}|P-\mathbb{U}| =\sum_{\begin{subarray}{c}X,f(X),0\\ X\in R\end{subarray}}|P-\mathbb{U}|+\sum_{\begin{subarray}{c}X,f(X),1\\ X\in R\end{subarray}}|P-\mathbb{U}|\] \[=\sum_{\begin{subarray}{c}X,f(X),0\\ X\in R\end{subarray}}\left|\frac{1}{2^{n+1}}-\frac{1}{2^{n+m+1}}\right|+\sum_{ \begin{subarray}{c}X,f(X),1\\ X\in R\end{subarray}}\left|\frac{1}{2^{n+1}}-\frac{1}{2^{n+m+1}}\right|=\frac{| R|\cdot(2^{m}-1)\cdot 2}{2^{n+m+1}}.\]
Thus
\[2\cdot d_{\mathrm{TV}}(P,\mathbb{U})=(1)+(2) =(1)+(3)+(4)\] \[=1-\frac{1}{2^{m}}+\frac{|A|}{2^{n}}+\frac{|R|\cdot(2^{m}-1)\cdot 2 }{2^{n+m+1}}=2\left(1-\frac{1}{2^{m}}+\frac{|A|}{2^{m+n+1}}\right)\]
since \(|A|+|R|=2^{n}\). For that matter, \(d_{\mathrm{TV}}(P,\mathbb{U})=\frac{|A|}{2^{n+m+1}}+\left(1-\frac{1}{2^{m}}\right)\) or
\[|A|=2^{n+m+1}\left(d_{\mathrm{TV}}(P,\mathbb{U})-\left(1-\frac{1}{2^{m}} \right)\right).\]
That concludes the proof.
### Estimation in fully polynomial time
We prove Theorem 5.
**Theorem 5** (Formal).: _There is an FPRAS for estimating the TV distance between a Bayes net \(P\) and the uniform distribution. Let \(n\) be the number of nodes of \(P\), let \(\ell\) be the size of its alphabet, and let \(d\) be its maximum in-degree. Then the running time of this FPRAS is \(O\!\left(n^{3}\ell^{2d+2}\varepsilon^{-2}\log\delta^{-1}\right)\) whereby \(\varepsilon\) is the accuracy error and \(\delta\) is the confidence error of the FPRAS._
**Remark 27**.: Note that the running time of the FPRAS of Theorem 5 is polynomial in the input length, as the description of the Bayes net \(P\) in terms of the CPTs has size at least \(n+\ell^{d+1}\).
We shall now prove Theorem 5. We require the following lemma (which we will prove later).
**Lemma 28**.: _For all \(x\), it is the case that_
\[1-O\!\left(d_{\mathrm{TV}}(P,\mathbb{U})\,\ell^{d+1}n\right)\leq P(x)\,\ell^{ n}\leq 1+O\!\left(d_{\mathrm{TV}}(P,\mathbb{U})\,\ell^{d+1}n\right)\]
_whenever \(d_{\mathrm{TV}}(P,\mathbb{U})\leq\frac{1}{16\ell^{d+1}}\)._
The proof of Theorem 5 now resumes as follows. First, let us assume that \(d_{\mathrm{TV}}(P,\mathbb{U})\leq\frac{1}{16\ell^{d+1}}\) so that Lemma 28 holds. We have that \(d_{\mathrm{TV}}(P,\mathbb{U})\) is equal to
\[\frac{1}{2}\sum_{x}|P(x)-\mathbb{U}(x)|=\sum_{x}\max(0,P(x)-\mathbb{U}(x))\]
\[=\sum_{x}\mathbb{U}(x)\max\!\left(0,\frac{P(x)}{\mathbb{U}(x)}-1\right)\] \[=\underset{x\sim\mathbb{U}}{\mathbf{E}}\!\left[\max\!\left(0,\frac {P(x)}{\mathbb{U}(x)}-1\right)\right]\] \[=\underset{x\sim\mathbb{U}}{\mathbf{E}}\!\left[\max\!\left(0,P(x) \,\ell^{n}-1\right)\right].\]
This yields a natural estimator for \(d_{\mathrm{TV}}(P,\mathbb{U})\), namely \(\mathsf{Est}\), as follows:
1. Sample \(x_{1},\ldots,x_{m}\sim\mathbb{U}\) for some value of \(m\) that we will fix later;
2. compute \(\max(0,P(x_{i})\,\ell^{n}-1)\) for all \(1\leq i\leq m\);
3. output \((1/m)\sum_{i=1}^{m}\max(0,P(x_{i})\,\ell^{n}-1)\).
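A minimal Python sketch of this estimator (not the authors' code) is given below. Here `prob_P(x)` is assumed to return \(P(x)\) for a string \(x\in[\ell]^{n}\), e.g. by multiplying the \(n\) relevant CPT entries in time \(O(n)\); `d` is the maximum in-degree, and `C` stands in for the hidden constant in the choice of \(m\) derived below.

```python
import math
import random

def estimate_tv_to_uniform(prob_P, n, ell, d, eps, delta, C=4):
    """Monte Carlo estimate of d_TV(P, U): average max(0, P(x) * ell^n - 1)
    over m uniform samples x."""
    m = math.ceil(C * n * n * ell ** (2 * d + 2)
                  * math.log(1.0 / delta) / (eps * eps))
    total = 0.0
    for _ in range(m):
        x = tuple(random.randrange(ell) for _ in range(n))   # x ~ U
        total += max(0.0, prob_P(x) * ell ** n - 1.0)
    return total / m
```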
We will now prove the correctness and upper bound the running time of this procedure. We have from Hoeffding's inequality (Lemma 7) and Lemma 28 that
\[\mathbf{Pr}[|\mathsf{Est}-d_{\mathrm{TV}}(P,\mathbb{U})|>\varepsilon d_{\mathrm{TV}}(P,\mathbb{U})]\] \[=\mathbf{Pr}\!\left[\left|\sum_{i=1}^{m}\max(0,P(x_{i})\,\ell^{n}-1)-m\underset{x\sim\mathbb{U}}{\mathbf{E}}\!\left[\max(0,P(x)\,\ell^{n}-1)\right]\right|>m\varepsilon d_{\mathrm{TV}}(P,\mathbb{U})\right]\] \[\leq 2\exp\!\left(-\frac{2m^{2}\varepsilon^{2}d_{\mathrm{TV}}^{2}(P,\mathbb{U})}{\sum_{i=1}^{m}\left(0-O(d_{\mathrm{TV}}(P,\mathbb{U})\,\ell^{d+1}n)\right)^{2}}\right)\] \[=2\exp\!\left(-\frac{2m^{2}\varepsilon^{2}d_{\mathrm{TV}}^{2}(P,\mathbb{U})}{m\cdot O(d_{\mathrm{TV}}^{2}(P,\mathbb{U})\,\ell^{2d+2}n^{2})}\right)\] \[=2\exp\!\left(-\frac{m\varepsilon^{2}}{O(\ell^{2d+2}n^{2})}\right)\]
which is at most \(\delta\) whenever \(m=\Omega\!\left(n^{2}\ell^{2d+2}\varepsilon^{-2}\log\delta^{-1}\right)\).
The running time of this procedure is \(O(mn)=O\!\left(n^{3}\ell^{2d+2}\varepsilon^{-2}\log\delta^{-1}\right)\), since we draw \(m\) samples and \(P\) can be evaluated on any sample in time \(O(n)\).
If \(d_{\mathrm{TV}}\left(P,\mathbb{U}\right)>\frac{1}{16\ell^{d+1}}\), then it suffices to additively approximate \(d_{\mathrm{TV}}\left(P,\mathbb{U}\right)\) up to error \(\varepsilon/\left(16\ell^{d+1}\right)\). This can be done by Monte Carlo sampling using \(m=\Omega\!\left(\ell^{2d+2}\varepsilon^{-2}\log\delta^{-1}\right)\) samples and \(O(mn)=O\!\left(n\ell^{2d+2}\varepsilon^{-2}\log\delta^{-1}\right)\) time.
We now prove Lemma 28.
Proof of Lemma 28. Let us denote the maximum in-degree of \(P\) by \(d\). Let \(X_{0}\) be an arbitrary node with parents \(X_{1},\ldots,X_{d}\).
Let \((Y_{0},\ldots,Y_{d})\) denote the corresponding coordinates under the uniform distribution \(\mathbb{U}\). We have that \(\gamma:=d_{\mathrm{TV}}(P,\mathbb{U})\) is at least
\[d_{\mathrm{TV}} ((X_{0},\ldots,X_{d})\,,(Y_{0},\ldots,Y_{d}))\] \[=\frac{1}{2}\sum_{v_{0}}\cdots\sum_{v_{d}}\left|\mathbf{Pr}[X_{0} =v_{0}|X_{1}=v_{1},\ldots,X_{d}=v_{d}]\,\mathbf{Pr}[X_{1}=v_{1},\ldots,X_{d}=v _{d}]-\frac{1}{\ell^{d+1}}\right|\]
or
\[\frac{1}{2}\sum_{v_{0}}\cdots\sum_{v_{d}}\left|\mathbf{Pr}[X_{0}=v_{0}|X_{1}=v_{1},\ldots,X_{d}=v_{d}]\,\mathbf{Pr}[X_{1}=v_{1},\ldots,X_{d}=v_{d}]-\frac{1}{\ell^{d+1}}\right|\leq\gamma\]
or
\[\left|\mathbf{Pr}[X_{0}=v_{0}|X_{1}=v_{1},\ldots,X_{d}=v_{d}]\,\mathbf{Pr}[X_{1 }=v_{1},\ldots,X_{d}=v_{d}]-\frac{1}{\ell^{d+1}}\right|\leq 2\gamma \tag{1}\]
for any \(v_{0},\ldots,v_{d}\). We observe the following.
**Claim 29**.: _We have that \(1/\ell^{d}-\gamma\leq\mathbf{Pr}[X_{1}=v_{1},\ldots,X_{d}=v_{d}]\leq 1/\ell^{d}+\gamma\)._
Proof.: Since \(d_{\mathrm{TV}}(P,\mathbb{U})=\gamma\) and \(\mathbf{Pr}[Y_{1}=v_{1},\ldots,Y_{d}=v_{d}]=1/\ell^{d}\), the claim is immediate.
By Equation (1) and Claim 29 we have the following.
**Corollary 30**.: _For \(\gamma<1/\left(2\ell^{d}\right)\) we have that_
\[|\mathbf{Pr}[X_{0}=v_{0}|X_{1}=v_{1},\ldots,X_{d}=v_{d}]-1/\ell|\leq 8\gamma \ell^{d}.\]
Proof.: By Equation (1) we have
\[\frac{1}{\ell^{d+1}}-2\gamma\leq\mathbf{Pr}[X_{0}=v_{0}|X_{1}=v_{1},\ldots,X_{ d}=v_{d}]\,\mathbf{Pr}[X_{1}=v_{1},\ldots,X_{d}=v_{d}]\leq\frac{1}{\ell^{d+1}}+2\gamma\]
or
\[\frac{\frac{1}{\ell^{d+1}}-2\gamma}{\mathbf{Pr}[X_{1}=v_{1},\ldots,X_{d}=v_{d }]} \leq\mathbf{Pr}[X_{0}=v_{0}|X_{1}=v_{1},\ldots,X_{d}=v_{d}]\] \[\leq\frac{\frac{1}{\ell^{d+1}}+2\gamma}{\mathbf{Pr}[X_{1}=v_{1}, \ldots,X_{d}=v_{d}]}\]
or, by making use of Claim 29,
\[\frac{\frac{1}{\ell^{d+1}}-2\gamma}{\frac{1}{\ell^{d}}+\gamma}\leq\mathbf{Pr} [X_{0}=v_{0}|X_{1}=v_{1},\ldots,X_{d}=v_{d}]\leq\frac{\frac{1}{\ell^{d+1}}+2 \gamma}{\frac{1}{\ell^{d}}-\gamma}\]
or
\[\frac{\frac{1}{\ell}-2\ell^{d}\gamma}{1+\ell^{d}\gamma}\leq\mathbf{Pr}[X_{0}=v _{0}|X_{1}=v_{1},\ldots,X_{d}=v_{d}]\leq\frac{\frac{1}{\ell}+2\ell^{d}\gamma}{ 1-\ell^{d}\gamma}.\]
We now have
\[\mathbf{Pr}[X_{0}=v_{0}|X_{1}=v_{1},\ldots,X_{d}=v_{d}] \leq\frac{\frac{1}{\ell}+2\ell^{d}\gamma}{1-\ell^{d}\gamma}\] \[\leq\left(\frac{1}{\ell}+2\ell^{d}\gamma\right)\left(1+2\ell^{d} \gamma\right)\] \[=\frac{1}{\ell}+2\ell^{d-1}\gamma+2\ell^{d}\gamma+4\ell^{2d} \gamma^{2}\] \[\leq\frac{1}{\ell}+2\ell^{d}\gamma+2\ell^{d}\gamma+4\ell^{d}\gamma\] \[=\frac{1}{\ell}+8\ell^{d}\gamma\]
since \(1/\left(1-x\right)\leq 1+2x\) for \(x<1/2\) (here \(x=\ell^{d}\gamma<1/2\)), and
\[\mathbf{Pr}[X_{0}=v_{0}|X_{1}=v_{1},\ldots,X_{d}=v_{d}] \geq\frac{\frac{1}{\ell}-2\ell^{d}\gamma}{1+\ell^{d}\gamma}\] \[\geq\left(\frac{1}{\ell}-2\ell^{d}\gamma\right)\left(1-\ell^{d} \gamma\right)\] \[=\frac{1}{\ell}-\ell^{d-1}\gamma-2\ell^{d}\gamma+2\ell^{2d}\gamma ^{2}\] \[\geq\frac{1}{\ell}-\ell^{d}\gamma-2\ell^{d}\gamma\] \[\geq\frac{1}{\ell}-8\gamma\ell^{d}\]
since \(1/\left(1+x\right)\geq 1-x\) for \(x<1/2\) (here \(x=\ell^{d}\gamma<1/2\)).
The result now follows from the observation that
\[\left(1/\ell-8\gamma\ell^{d}\right)^{n}\leq P(x)=\prod_{i=1}^{n}\mathbf{Pr} \Big{[}X_{i}=x_{i}|X_{\Pi(X_{i})}=x_{\Pi(X_{i})}\Big{]}\leq\left(1/\ell+8 \gamma\ell^{d}\right)^{n}\]
or
\[\left(1-8\gamma\ell^{d+1}\right)^{n}\leq P(x)\,\ell^{n}\leq\left(1+8\gamma \ell^{d+1}\right)^{n}\]
or
\[1-16\gamma\ell^{d+1}n\leq P(x)\,\ell^{n}\leq 1+16\gamma\ell^{d+1}n,\]
whereby we used the facts that \((1-\alpha)^{k}\geq(1-2\alpha k)\) and \((1+\alpha)^{k}\leq(1+2\alpha k)\) whenever \(\alpha<1/2\) and \(k>0\), and the fact that \(\gamma<1/\left(16\ell^{d+1}\right)\) or \(8\gamma\ell^{d+1}<1/2\).
Finally, we have
\[1-16d_{\mathrm{TV}}(P,\mathbb{U})\,\ell^{d+1}n\leq P(x)\,\ell^{n}\leq 1+16d_{ \mathrm{TV}}(P,\mathbb{U})\,\ell^{d+1}n\]
as desired.
## 6 Conclusion
We have established a general connection between probabilistic inference and TV distance computation. In particular, we proved that TV distance estimation can be reduced to probabilistic inference. This enables us to prove the existence of a novel FPRAS for estimating the TV distance between Bayes nets of small treewidth.
Moreover, we made some significant progress in understanding the complexity of computing the TV distance between an arbitrary Bayes net and the uniform distribution: We showed that the exact computation is #P-hard, while there is an FPRAS for the same task.
We outline the following open problems: Can we prove similar results for TV distance estimation between undirected graphical models? Another problem of interest is to study other notions of distance, such as Wasserstein metrics.
## Acknowledgements
The work of AB was supported in part by National Research Foundation Singapore under its NRF Fellowship Programme (NRF-NRFFAI-2019-0002) and an Amazon Faculty Research Award. The work of SG was supported by an initiation grant from IIT Kanpur and a SERB
award CRG/2022/007985. Pavan's work is partly supported by NSF award 2130536. Vinodchandran's work is partly supported by NSF award 2130608. This work was supported in part by National Research Foundation Singapore under its NRF Fellowship Programme [NRF-NRFFAI1-2019-0004] and an Amazon Research Award. Part of the work was done during Meel, Pavan, and Vinodchandran's visit to Simons Institute for the Theory of Computing.
|
2302.14571 | Additivity of multiplicative (generalized) maps over rings | In this paper, we prove that a bijective map $\varphi$ over a ring $R$ with a
non-trivial idempotent satisfying $\varphi(ab)=\varphi(a)\varphi(b),$ for all
$a,b\in R$, is additive. Also, we prove that a map $D$ on $R$ satisfying
$D(ab)=D(a)b+\varphi(a) D(b),$ for all $a,b\in R$ where $\varphi$ is the map
just mentioned above, is additive. Moreover, we establish that if a map $g$
over $R$ satisfies $g(ab)=g(a)b+\varphi(a)D(b),$ for all $a,b\in R$ and above
mentioned maps $\varphi$ and $D$, then $g$ is additive. | Sk Aziz, Arindam Ghosh, Om Prakash | 2023-02-28T13:50:24Z | http://arxiv.org/abs/2302.14571v1 | # Additivity of multiplicative (generalized) maps over rings
###### Abstract.
In this paper, we prove that a bijective map \(\varphi\) over a ring \(R\) with a non-trivial idempotent satisfying \(\varphi(ab)=\varphi(a)\varphi(b)\), for all \(a,b\in R\), is additive. Also, we prove that a map \(D\) on \(R\) satisfying \(D(ab)=D(a)b+\varphi(a)D(b)\), for all \(a,b\in R\) where \(\varphi\) is the map just mentioned above, is additive. Moreover, we establish that if a map \(g\) over \(R\) satisfies \(g(ab)=g(a)b+\varphi(a)D(b)\), for all \(a,b\in R\) and above mentioned maps \(\varphi\) and \(D\), then \(g\) is additive.
Key words and phrases: Automorphisms, Endomorphisms, Derivations, Skew structure, Idempotents. 2020 Mathematics Subject Classification: 16W10, 16W25, 16W55, 17C27. *Corresponding author
## 1. Introduction
Recall that a ring \(R\) is said to be a prime ring if \(aRb=0\) for some \(a,b\in R\) implies either \(a=0\) or \(b=0\), and a semi-prime ring if \(aRa=0\) for some \(a\in R\) implies that \(a=0\). Let \(n\) be a positive integer. A ring \(R\) is called \(n\)-torsion free if \(na=0\) for some \(a\in R\) implies that \(a=0\). Let \(R=(R,+,\cdot)\) be a ring. Then the opposite ring of \(R\), denoted by \(R^{\circ}=(R,+,\star)\), is the same as \(R\) except that the multiplication \(\star\) is defined by \(a\star b=b\cdot a\), for all \(a,b\in R\). Throughout this paper, let \(R\) be a ring with the identity element \(1\) and a non-trivial idempotent element \(e\). Let \(e_{1}=e\) and \(e_{2}=1-e_{1}\). Then \(R=R_{11}\oplus R_{12}\oplus R_{21}\oplus R_{22}\) where \(R_{ij}=e_{i}Re_{j}\), and \(1\leq i,j\leq 2\). A map \(f:R\to R\) is said to be additive if
\[f(a+b)=f(a)+f(b),\]
for all \(a,b\in R\). An additive bijective map \(\varphi:R\to R\) is said to be an automorphism if
\[\varphi(ab)=\varphi(a)\varphi(b),\]
for all \(a,b\in R\); and an anti-automorphism if
\[\varphi(ab)=\varphi(b)\varphi(a),\]
for all \(a,b\in R\). Note that without assuming the additivity condition of \(\varphi\), the map is known as a multiplicative automorphism and multiplicative anti-automorphism, respectively. An additive map \(d:R\to R\) is said to be a derivation if
\[d(ab)=d(a)b+ad(b),\]
for all \(a,b\in R\). Let \(\varphi\) be an automorphism of \(R\). In 1957 [4], Herstein proved that any Jordan derivation (a generalization of ordinary derivation) over a prime ring of characteristic different from \(2\) is a derivation. In what follows, a map \(D:R\to R\) satisfying \(D(ab)=D(a)b+\varphi(a)D(b)\) for all \(a,b\in R\), where \(\varphi\) is a multiplicative automorphism of \(R\), is called a multiplicative skew derivation with associated multiplicative automorphism \(\varphi\); and a map \(g:R\to R\) satisfying \(g(ab)=g(a)b+\varphi(a)D(b)\) for all \(a,b\in R\) is called a multiplicative generalized skew derivation associated with \(D\) and \(\varphi\). Throughout, we also assume that \(R\) satisfies the following condition:

(A) if \(a_{ij}x_{jk}=0\) for some \(a_{ij}\in R_{ij}\) and for all \(x_{jk}\in R_{jk}\), where \(i,j,k\in\{1,2\}\), then \(a_{ij}=0\).
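To illustrate the Peirce decomposition above, take \(R=M_{2}(\mathbb{R})\) and \(e=E_{11}\), the matrix unit with \(1\) in the \((1,1)\) entry and \(0\) elsewhere. Then \(e_{1}=E_{11}\), \(e_{2}=E_{22}\), \(R_{ij}=e_{i}Re_{j}=\mathbb{R}E_{ij}\), and the decomposition \(R=R_{11}\oplus R_{12}\oplus R_{21}\oplus R_{22}\) is simply the splitting of a \(2\times 2\) matrix into its four entries. Since \(M_{2}(\mathbb{R})\) is a prime ring, it also satisfies condition (A) (see the proof of Corollary 2.9 below).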
## 2. Multiplicative Automorphism
**Theorem 2.1**.: _If \(\varphi\) is a multiplicative automorphism on \(R\), then \(\varphi\) is additive. Moreover, \(\varphi\) is an automorphism on \(R\)._
Before proving Theorem 2.1, we have several lemmas.
**Lemma 2.2**.: \(\varphi(0)=0\)_._
Proof.: Since \(\varphi\) is onto, there exists \(x\in R\) such that \(\varphi(x)=0\). Now, \(\varphi(0)=\varphi(0\cdot x)=\varphi(0)\varphi(x)=0\).
**Lemma 2.3**.: \(\varphi(1)=1\)_._
Proof.: Let \(\varphi(1)=p\) and \(q\in R\) be an arbitrary element. Since \(\varphi\) is onto, there exists \(x\in R\) such that \(\varphi(x)=q.\) Now \(pq=\varphi(1)\varphi(x)=\varphi(1\cdot x)=\varphi(x)=q\). Also, \(qp=\varphi(x)\varphi(1)=\varphi(x\cdot 1)=\varphi(x)=q\). Therefore
\[pq=qp=q\implies p=1\implies\varphi(1)=1.\]
**Lemma 2.4**.: \(\varphi(e_{1})+\varphi(e_{2})=1\)_._
Proof.: Since \(\varphi\) is onto, there exists \(z\in R\) such that \(\varphi(z)=\varphi(e_{1})+\varphi(e_{2})\). Since \(\varphi\) is one-one, we have
\[\varphi(ze_{1}) =\varphi(z)\varphi(e_{1})=[\varphi(e_{1})+\varphi(e_{2})]\varphi(e_{1})=\varphi(e_{1})\varphi(e_{1})+\varphi(e_{2})\varphi(e_{1})\] \[=\varphi(e_{1}e_{1})+\varphi(e_{2}e_{1})=\varphi(e_{1})+\varphi(0)=\varphi(e_{1})\ (\text{By Lemma 2.2})\] \[\implies ze_{1}=e_{1}\]
Also, \(\varphi(ze_{2})=\varphi(z)\varphi(e_{2})=[\varphi(e_{1})+\varphi(e_{2})] \varphi(e_{2})=\varphi(e_{1})\varphi(e_{2})+\varphi(e_{2})\varphi(e_{2})\)
\[=\varphi(e_{1}e_{2})+\varphi(e_{2}e_{2})=\varphi(0)+\varphi(e_{2})=\varphi(e_{2})\ (\text{By Lemma 2.2})\]
\[\implies ze_{2}=e_{2}\]
Adding both, we get
\[ze_{1}+ze_{2}=e_{1}+e_{2}\implies z(e_{1}+e_{2})=e_{1}+e_{2}\implies z=1\]
\[\implies\varphi(z)=\varphi(1)\implies\varphi(e_{1})+\varphi(e_{2})=1\ (\text{By Lemma 2.3}).\]
Hence, the result follows.
**Lemma 2.5**.: _Let \(a_{ij},b_{ij}\in R_{ij}\). Then_
\[(i)\ \varphi(a_{11}+b_{12}) =\varphi(a_{11})+\varphi(b_{12}),\] \[(ii)\ \varphi(a_{22}+b_{21}) =\varphi(a_{22})+\varphi(b_{21}),\] \[(iii)\ \varphi(a_{11}+b_{21}) =\varphi(a_{11})+\varphi(b_{21}),\] \[(iv)\ \varphi(a_{22}+b_{12}) =\varphi(a_{22})+\varphi(b_{12}).\]
Proof.: Let \(x_{22}\in R_{22}\). Since \(\varphi\) is onto, there exists \(z\in R\) such that \(\varphi(z)=\varphi(a_{11})+\varphi(b_{12})\). Since \(\varphi\) is one-one,
\[\varphi(zx_{22}) =\varphi(z)\varphi(x_{22})=[\varphi(a_{11})+\varphi(b_{12})] \varphi(x_{22})=\varphi(a_{11})\varphi(x_{22})+\varphi(b_{12})\varphi(x_{22})\] \[=\varphi(a_{11}x_{22})+\varphi(b_{12}x_{22})=\varphi(0)+\varphi( b_{12}x_{22})=\varphi(a_{11}x_{22}+b_{12}x_{22})\] \[\implies zx_{22}=(a_{11}+b_{12})x_{22}\implies[z-(a_{11}+b_{12})] x_{22}=0\] \[\implies[z-(a_{11}+b_{12})]_{22}=0\ \&\ [z-(a_{11}+b_{12})]_{12}=0\] \[\qquad\text{(By the assumption (A) on R)}. \tag{2.1}\]
Let \(x_{12}\in R_{12}\). Then
\[\begin{split}\varphi(zx_{12})&=[\varphi(a_{11})+ \varphi(b_{12})]\varphi(x_{12})=\varphi(a_{11})\varphi(x_{12})+\varphi(b_{12}) \varphi(x_{12})\\ &=\varphi(a_{11}x_{12})+\varphi(b_{12}x_{12})=\varphi(a_{11}x_{1 2})+\varphi(0)=\phi(a_{11}x_{12}+b_{12}x_{12})\\ &\implies zx_{12}=(a_{11}+b_{12})x_{12}\implies[z-(a_{11}+b_{12}) ]x_{12}=0\\ &\implies[z-(a_{11}+b_{12})]_{11}=0\text{ and }[z-(a_{11}+b_{12})]_{21}=0.\end{split} \tag{2.2}\]
By (2.1) and (2.2),
\[z=a_{11}+b_{12}\implies\varphi(a_{11}+b_{12})=\varphi(z)=\varphi(a_{11})+ \varphi(b_{12}).\]
Hence, we get \((i)\). Similarly, we can prove \((ii)\). Now,
\[\varphi(e_{1})\varphi(a_{11}+b_{21}) =\varphi(e_{1}(a_{11}+b_{21}))=\varphi(e_{1}a_{11}+0)=\phi(e_{1}a _{11})+\phi(e_{1}b_{21})\] \[=\varphi(e_{1})\varphi(a_{11})+\varphi(e_{1})\varphi(b_{21}).\]
Therefore,
\[\varphi(e_{1})[\varphi(a_{11}+b_{21})-\varphi(a_{11})-\varphi(b_{21})]=0.\]
Similarly, we have
\[\varphi(e_{2})[\varphi(a_{11}+b_{21})-\varphi(a_{11})-\varphi(b_{21})]=0.\]
Adding both the above,
\[[\varphi(e_{1})+\varphi(e_{2})][\varphi(a_{11}+b_{21})-\varphi(a_{11})-\varphi(b_{21})]=0\] \[\implies\varphi(a_{11}+b_{21})=\varphi(a_{11})+\varphi(b_{21})\text{ (By Lemma 2.4).}\]
Hence, we have \((iii)\). Similarly, we can prove \((iv)\).
**Lemma 2.6**.: _Let \(a_{ij},b_{ij}\in R_{ij}\). Then_
\[(i)\ \varphi(a_{12}+b_{12})=\varphi(a_{12})+\varphi(b_{12})\]
\[\text{and}(ii)\ \varphi(a_{21}+b_{21})=\varphi(a_{21})+\varphi(b_{21})\]
Proof.: Let \(x_{22}\in R_{22}\). Since \(\varphi\) is onto, there exists \(z\in R\) such that \(\varphi(z)=\varphi(a_{12})+\varphi(b_{12})\). Now,
\[\varphi(zx_{22}) =\varphi(z)\varphi(x_{22})=[\varphi(a_{12})+\varphi(b_{12})]\varphi(x_{22})\] \[=\varphi(a_{12})\varphi(x_{22})+\varphi(b_{12})\varphi(x_{22})=\varphi(a_{12}x_{22})+\varphi(b_{12}x_{22})\] \[=\varphi(e_{1}x_{22})+\varphi(e_{1}a_{12}x_{22})+\varphi(b_{12}x_{22})+\varphi(b_{12}a_{12}x_{22})\] \[=[\varphi(e_{1})+\varphi(b_{12})][\varphi(x_{22})+\varphi(a_{12}x_{22})]\] \[=\varphi(e_{1}+b_{12})\varphi(x_{22}+a_{12}x_{22})\text{ (By Lemma 2.5)}\] \[=\varphi((e_{1}+b_{12})(x_{22}+a_{12}x_{22}))\] \[=\varphi(a_{12}x_{22}+b_{12}x_{22}).\]
Since \(\varphi\) is one-one,
\[zx_{22}=a_{12}x_{22}+b_{12}x_{22}\implies[z-(a_{12}+b_{12})]x_{22}=0.\]
By the assumption (A) on \(R\),
\[[z-(a_{12}+b_{12})]_{12}=0\text{ and }[z-(a_{12}+b_{12})]_{22}=0. \tag{2.3}\]
Let \(x_{11}\in R_{11}\). Similarly, we have
\[\varphi(zx_{11}) =\varphi(z)\varphi(x_{11})=[\varphi(a_{12})+\varphi(b_{12})]\varphi( x_{11})=\varphi(a_{12})\varphi(x_{11})+\varphi(b_{12})\varphi(x_{11})\] \[=\varphi(a_{12}x_{11})+\varphi(b_{12}x_{11})=\varphi(0)+\varphi(0) =\varphi(a_{12}x_{11}+b_{12}x_{11})\] \[\implies zx_{11}=a_{12}x_{11}+b_{12}x_{11}\] \[\implies[z-(a_{12}+b_{12})]x_{11}=0\] \[\implies[z-(a_{12}+b_{12})]_{11}=0\text{ and}[z-(a_{12}+b_{12})]_{ 21}=0. \tag{2.4}\]
Therefore, by (2.3) and (2.4),
\[z=a_{12}+b_{12}\implies\varphi(a_{12}+b_{12})=\varphi(z)=\varphi(a_{12})+ \varphi(b_{12}).\]
Hence, we have \((i)\). Similarly, we can prove \((ii)\).
**Lemma 2.7**.: _Let \(a_{ij},b_{ij}\in R_{ij}\). Then_
\[(i) \ \varphi(a_{11}+b_{11})=\varphi(a_{11})+\varphi(b_{11})\] \[(ii) \ \varphi(a_{22}+b_{22})=\varphi(a_{22})+\varphi(b_{22}).\]
Proof.: Since \(\varphi\) is onto, there exists \(z\in R\) such that \(\varphi(z)=\varphi(a_{11})+\varphi(b_{11})\). Let \(x_{12}\in R_{12}\). Then, we have
\[\varphi(zx_{12}) =\varphi(z)\varphi(x_{12})=[\varphi(a_{11})+\varphi(b_{11})]\varphi(x_{12})=\varphi(a_{11})\varphi(x_{12})+\varphi(b_{11})\varphi(x_{12})\] \[=\varphi(a_{11}x_{12})+\varphi(b_{11}x_{12})=\varphi(a_{11}x_{12}+b_{11}x_{12})\text{ (By Lemma 2.6)}\] \[\implies zx_{12}=(a_{11}+b_{11})x_{12}\text{ (Since $\varphi$ is one-one)}\] \[\implies[z-(a_{11}+b_{11})]x_{12}=0\] \[\implies[z-(a_{11}+b_{11})]_{11}=0\text{ and }[z-(a_{11}+b_{11})]_{21}=0. \tag{2.5}\]
Similarly, by taking \(x_{21}\in R_{21}\) we obtain
\[[z-(a_{11}+b_{11})]_{12}=0\text{ and }[z-(a_{11}+b_{11})]_{22}=0. \tag{2.6}\]
Therefore, by (2.5) and (2.6), we have
\[\varphi(a_{11}+b_{11})=\varphi(a_{11})+\varphi(b_{11}).\]
Similarly, we can prove that
\[\varphi(a_{22}+b_{22})=\varphi(a_{22})+\varphi(b_{22}).\]
**Lemma 2.8**.: _Let \(a_{ij},b_{ij},c_{ij},d_{ij}\in R_{ij}\). Then_
\[\varphi(a_{11}+b_{12}+c_{21}+d_{22})=\varphi(a_{11})+\varphi(b_{12})+\varphi( c_{21})+\varphi(d_{22}).\]
Proof.: Since \(\varphi\) is onto, there exists \(z\in R\) such that
\[\varphi(z)=\varphi(a_{11})+\varphi(b_{12})+\varphi(c_{21})+\varphi(d_{22}).\]
Let \(x_{11}\in R_{11}\). Then
\[\varphi(zx_{11}) =\varphi(z)\varphi(x_{11})=(\varphi(a_{11})+\varphi(b_{12})+\varphi(c_{21})+\varphi(d_{22}))\varphi(x_{11})\] \[=\varphi(a_{11})\varphi(x_{11})+\varphi(b_{12})\varphi(x_{11})+\varphi(c_{21})\varphi(x_{11})+\varphi(d_{22})\varphi(x_{11})\] \[=\varphi(a_{11}x_{11})+\varphi(b_{12}x_{11})+\varphi(c_{21}x_{11})+\varphi(d_{22}x_{11})\] \[=\varphi(a_{11}x_{11})+\varphi(0)+\varphi(c_{21}x_{11})+\varphi(0)\] \[=\varphi(a_{11}x_{11}+c_{21}x_{11})\text{ (By Lemma 2.2 and Lemma 2.5)}\] \[=\varphi(a_{11}x_{11}+b_{12}x_{11}+c_{21}x_{11}+d_{22}x_{11})\] \[\Longrightarrow zx_{11}=(a_{11}+b_{12}+c_{21}+d_{22})x_{11}\text{ (Since $\varphi$ is one-one).}\]
By the condition (A) on \(R\), we obtain
\[[z-(a_{11}+b_{12}+c_{21}+d_{22})]x_{11}=0\] \[\Longrightarrow[z-(a_{11}+b_{12}+c_{21}+d_{22})]_{11}=0\text{and }[z-(a_{11}+b_{12}+c_{21}+d_{22})]_{21}=0. \tag{2.7}\]
Similarly, by taking \(x_{22}\in R_{22}\), we get
\[[z-(a_{11}+b_{12}+c_{21}+d_{22})]_{12}=0\text{ and }[z-(a_{11}+b_{12}+c_{21}+d_{22})]_{22}=0. \tag{2.8}\]
Therefore, by (2.7) and (2.8)
\[\varphi(a_{11}+b_{12}+c_{21}+d_{22})=\varphi(a_{11})+\varphi(b_{12})+\varphi(c_{21})+\varphi(d_{22}).\]
Proof of Theorem 2.1.: Let \(a,b\in R\). Then
\[a=a_{11}+a_{12}+a_{21}+a_{22}\text{ and }b=b_{11}+b_{12}+b_{21}+b_{22}\text{ for some }a_{ij},b_{ij}\in R_{ij}.\]
Then
\[\varphi(a+b) =\varphi(a_{11}+a_{12}+a_{21}+a_{22}+b_{11}+b_{12}+b_{21}+b_{22})\] \[=\varphi((a_{11}+b_{11})+(a_{12}+b_{12})+(a_{21}+b_{21})+(a_{22}+b_{22}))\] \[=\varphi(a_{11}+b_{11})+\varphi(a_{12}+b_{12})+\varphi(a_{21}+b_{21})+\varphi(a_{22}+b_{22})\] \[\text{ (By Lemma 2.8)}\] \[=\varphi(a_{11})+\varphi(b_{11})+\varphi(a_{12})+\varphi(b_{12})+\varphi(a_{21})+\varphi(b_{21})+\varphi(a_{22})+\varphi(b_{22})\] \[\text{ (By Lemma 2.6 and Lemma 2.7)}\] \[=\varphi(a_{11}+a_{12}+a_{21}+a_{22})+\varphi(b_{11}+b_{12}+b_{21}+b_{22})\] \[\text{ (By Lemma 2.8)}\] \[=\varphi(a)+\varphi(b).\]
Hence, \(\varphi\) is additive.
With motivation of the Corollaries in [7], we have the following Corollaries.
**corollary 2.9**.: _If \(\varphi\) is a multiplicative automorphism on a prime ring \(R\) with a non-trivial idempotent \(e\), then \(\varphi\) is additive. Moreover, \(\varphi\) is an automorphism on \(R\)._
Proof.: Let \(e_{1}=e\) and \(e_{2}=1-e_{1}\). Then \(R_{ij}=e_{i}Re_{j}\). Then
\[a_{ij}x_{jk}=0,\text{ for some }a_{ij}\in R_{ij}\text{ and for all }x_{jk}\in R_{jk}\] \[\implies e_{i}ae_{j}e_{j}xe_{k}=0,\text{ for some }a\in R\text{ and for all }x\in R\] \[\implies(e_{i}ae_{j})x(e_{k})=0,\text{ and for all }x\in R\] \[\implies e_{i}ae_{j}=0\text{ (Since R is a prime ring)}\] \[\implies a_{ij}=0.\]
Thus, \(R\) satisfies condition (A). Hence, we get the desired result by Theorem 2.1.
**corollary 2.10**.: _If \(\varphi\) is a multiplicative anti-automorphism on a prime ring \(R\) with a non-trivial idempotent \(e\), then \(\varphi\) is additive. Moreover, \(\varphi\) is an anti-automorphism on \(R\)._
Proof.: Let \(\tau:R\to R^{\circ}\) be the map defined by
\[\tau(a)=a,\]
for all \(a\in R\). Then \(\tau\) is an anti-isomorphism. Let \(\sigma=\tau\circ\varphi\). Then \(\sigma:R\to R^{\circ}\) is a multiplicative isomorphism. Then \(\sigma\) is additive by a result in [7]. Therefore, \(\varphi\) is additive (Since \(\tau\) is additive and one-one).
## 3. Multiplicative Skew Derivation
**Theorem 3.1**.: _If \(D\) is a multiplicative skew derivation on \(R\), then \(D\) is additive. Moreover, \(D\) is a skew derivation on \(R\)._
Let \(\varphi\) be the associated multiplicative automorphism on \(R\). Hence, by Theorem 2.1, \(\varphi\) is additive. Before proving Theorem 3.1, we have several lemmas.
**Lemma 3.2**.: \[D(0)=0.\]
Proof.: \[D(0)=D(0\cdot 0)=D(0)0+\varphi(0)D(0)=0\text{ (By Lemma 2.2).}\]
**Lemma 3.3**.: _Let \(a_{ij},b_{ij}\in R_{ij}\). Then_
\[(i)\ D(a_{11}+b_{12}) =D(a_{11})+D(b_{12})\] \[(ii)\ D(a_{22}+b_{21}) =D(a_{22})+D(b_{21})\] \[(iii)\ D(a_{11}+b_{21}) =D(a_{11})+D(b_{21})\] \[(iv)\ D(a_{22}+b_{12}) =D(a_{22})+D(b_{12}).\]
Proof.: Let \(x_{22}\in R_{22}\). Then
\[D((a_{11}+b_{12})x_{22})=D(a_{11}+b_{12})x_{22}+\varphi(a_{11}+b_{12})D(x_{22}). \tag{3.1}\]
On the other hand,
\[\begin{split}&\quad D((a_{11}+b_{12})x_{22})\\ =&\quad D(b_{12}x_{22})\\ =&\quad D(a_{11}x_{22})+D(b_{12}x_{22})\\ =&\quad D(a_{11})x_{22}+\varphi(a_{11})D(x_{22})+D(b _{12})x_{22}+\varphi(b_{12})D(x_{22}).\end{split} \tag{3.2}\]
Comparing (3.1) and (3.2), we get
\[[D(a_{11}+b_{12})-D(a_{11})-D(b_{12})]x_{22}=0\text{ (Since $\varphi$ is additive)}\] \[\Longrightarrow[D(a_{11}+b_{12})-D(a_{11})-D(b_{12})]_{12}x_{22}=0\] \[\text{and }[D(a_{11}+b_{12})-D(a_{11})-D(b_{12})]_{22}x_{22}=0.\]
By the assumption (A) on \(R\),
\[[D(a_{11}+b_{12})-D(a_{11})-D(b_{12})]_{12}=0\] \[\text{and }[D(a_{11}+b_{12})-D(a_{11})-D(b_{12})]_{22}=0.\]
Let \(x_{12}\in R_{12}\). Then
\[D((a_{11}+b_{12})x_{12})=D(a_{11}+b_{12})x_{12}+\varphi(a_{11}+b_{12})D(x_{12}). \tag{3.3}\]
Also,
\[D((a_{11}+b_{12})x_{12}) =D(a_{11}x_{12})\] \[=D(a_{11}x_{12})+D(b_{12}x_{12})\] \[=D(a_{11})x_{12}+\varphi(a_{11})D(x_{12})+D(b_{12})x_{12}+\varphi( b_{12})D(x_{12}). \tag{3.4}\]
Comparing (3.3) and (3.4),
\[[D(a_{11}+b_{12})-D(a_{11})-D(b_{12})]x_{12}=0.\]
Again, by the condition (A) on \(R\), we have
\[[D(a_{11}+b_{12})-D(a_{11})-D(b_{12})]_{11} =0\] \[\text{and }[D(a_{11}+b_{12})-D(a_{11})-D(b_{12})]_{21} =0.\]
Hence,
\[D(a_{11}+b_{12})=D(a_{11})+D(b_{12}).\]
Similarly, we can prove
\[D(a_{22}+b_{21})=D(a_{22})+D(b_{21}).\]
Now,
\[D(e_{1}(a_{11}+b_{21}))=D(e_{1})(a_{11}+b_{21})+\varphi(e_{1})D(a_{11}+b_{21}). \tag{3.5}\]
Also,
\[D(e_{1}(a_{11}+b_{21})) =D(e_{1}a_{11})\] \[=D(e_{1}a_{11})+D(e_{1}b_{21})\] \[=D(e_{1})a_{11}+\varphi(e_{1})D(a_{11})+D(e_{1})b_{21}+\varphi(e _{1})D(b_{21}). \tag{3.6}\]
Comparing (3.5) and (3.6),
\[\varphi(e_{1})(D(a_{11}+b_{21})-D(a_{11})-D(b_{21}))=0. \tag{3.7}\]
Similarly,
\[\varphi(e_{2})(D(a_{11}+b_{21})-D(a_{11})-D(b_{21}))=0. \tag{3.8}\]
Adding (3.7) and (3.8), we have
\[(\varphi(e_{1})+\varphi(e_{2}))(D(a_{11}+b_{21})-D(a_{11})-D(b_{21}))=0\] \[\implies D(a_{11}+b_{21})=D(a_{11})+D(b_{21})\text{ (By Lemma 2.4).}\]
Similarly, we can prove
\[D(a_{22}+b_{12})=D(a_{22})+D(b_{12}).\]
**Lemma 3.4**.: _Let \(a_{ij}\), \(b_{ij},c_{ij}\in R_{ij}\). Then_
\[(i)\ D(a_{12}+b_{12}c_{22}) =D(a_{12})+D(b_{12}c_{22})\] \[(ii)\ D(a_{21}+b_{22}c_{21}) =D(a_{21})+D(b_{22}c_{21}).\]
Proof.: Note that
\[a_{12}+b_{12}c_{22}=(e_{1}+b_{12})(a_{12}+c_{22}).\]
Therefore,
\[D(a_{12}+b_{12}c_{22}) =D((e_{1}+b_{12})(a_{12}+c_{22}))\] \[=D(e_{1}+b_{12})(a_{12}+c_{22})+\varphi(e_{1}+b_{12})D(a_{12}+c_{22})\] \[=[D(e_{1})+D(b_{12})](a_{12}+c_{22})+[\varphi(e_{1})+\varphi(b_{12})][D(a_{12})+D(c_{22})]\] \[\text{ (By Lemma 3.3 and Theorem 2.1)}\] \[=D(e_{1}a_{12})+D(e_{1}c_{22})+D(b_{12}c_{22})+D(b_{12}a_{12})\] \[=D(a_{12})+D(b_{12}c_{22})\text{ (By Lemma 3.2).}\]
Similarly, we can prove that
\[D(a_{21}+b_{22}c_{21})=D(a_{21})+D(b_{22}c_{21}).\]
**Lemma 3.5**.: _Let \(a_{12},b_{12}\in R_{12}\) and \(a_{21},b_{21}\in R_{21}\). Then_
\[(i)\ D(a_{12}+b_{12})=D(a_{12})+D(b_{12}).\] \[(ii)\ D(a_{21}+b_{21})=D(a_{21})+D(b_{21}).\]
Proof.: Let \(x_{22}\in R_{22}\). Then
\[D(a_{12}+b_{12})x_{22}+\varphi(a_{12}+b_{12})D(x_{22})\] \[=D[(a_{12}+b_{12})x_{22}]\] \[=D(a_{12}x_{22}+b_{12}x_{22})\] \[=D(a_{12}x_{22})+D(b_{12}x_{22})\text{ (By Lemma 3.4)}\] \[=D(a_{12})x_{22}+\varphi(a_{12})D(x_{22})+D(b_{12})x_{22}+\varphi(b_{12})D(x_{22}).\]
Consequently, we obtain
\[[D(a_{12}+b_{12})-D(a_{12})-D(b_{12})]x_{22}=0 \tag{3.9}\] \[\implies[D(a_{12}+b_{12})-D(a_{12})-D(b_{12})]_{12}=0\] \[\text{and}[D(a_{12}+b_{12})-D(a_{12})-D(b_{12})]_{22}=0.\]
Let \(x_{12}\in R_{12}\). Then
\[D(a_{12}+b_{12})x_{12}+\varphi(a_{12}+b_{12})D(x_{12})\] \[=D[(a_{12}+b_{12})x_{12}]\] \[=D(0)=0\] \[=D(a_{12}x_{12})+D(b_{12}x_{12})\] \[=D(a_{12})x_{12}+\varphi(a_{12})D(x_{12})+D(b_{12})x_{12}+ \varphi(b_{12})D(x_{12}),\]
which gives
\[\begin{split}&[D(a_{12}+b_{12})-D(a_{12})-D(b_{12})]x_{12}=0\\ &\implies[D(a_{12}+b_{12})-D(a_{12})-D(b_{12})]_{11}=0\\ &\text{and}\ [D(a_{12}+b_{12})-D(a_{12})-D(b_{12})]_{21}=0.\end{split} \tag{3.10}\]
Hence, by (3.9) and (3.10),
\[D(a_{12}+b_{12})=D(a_{12})+D(b_{12}).\]
Similarly, we can get
\[D(a_{21}+b_{21})=D(a_{21})+D(b_{21}).\]
**Lemma 3.6**.: _Let \(a_{ii},b_{ii}\in R_{ii}\), where \(i\in\{1,2\}\). Then_
\[(i)\ D(a_{11}+b_{11})=D(a_{11})+D(b_{11})\] \[(ii)\ D(a_{22}+b_{22})=D(a_{22})+D(b_{22}).\]
Proof.: Let \(x_{22}\in R_{22}\). Then
\[D(a_{11}+b_{11})x_{22}+\varphi(a_{11}+b_{11})D(x_{22})\] \[= D[(a_{11}+b_{11})x_{22}]\] \[= D(0)=0\] \[= D(a_{11}x_{22})+D(b_{11}x_{22})\] \[= D(a_{11})x_{22}+\varphi(a_{11})D(x_{22})+D(b_{11})x_{22}+\varphi( b_{11})D(x_{22})\]
Therefore,
\[[D(a_{11}+b_{11})-D(a_{11})-D(b_{11})]x_{22}=0,\]
which implies
\[\begin{split}&[D(a_{11}+b_{11})-D(a_{11})-D(b_{11})]_{12}=0\\ &\text{and}\ [D(a_{11}+b_{11})-D(a_{11})-D(b_{11})]_{22}=0\end{split} \tag{3.11}\]
Let \(x_{12}\in R_{12}\). Then
\[D(a_{11}+b_{11})x_{12}+\varphi(a_{11}+b_{11})D(x_{12})\] \[=D[(a_{11}+b_{11})x_{12}]\] \[=D(a_{11}x_{12}+b_{11}x_{12})\] \[=D(a_{11}x_{12})+D(b_{11}x_{12})\] \[=D(a_{11})x_{12}+\varphi(a_{11})D(x_{12})+D(b_{11})x_{12}+\varphi( b_{11})D(x_{12})\]
Therefore,
\[[D(a_{11}+b_{11})-D(a_{11})-D(b_{11})]x_{12}=0,\]
which implies
\[\begin{split}&[D(a_{11}+b_{11})-D(a_{11})-D(b_{11})]_{11}=0\\ \text{and}\ &[D(a_{11}+b_{11})-D(a_{11})-D(b_{11})]_{21}=0.\end{split} \tag{3.12}\]
Hence, by (3.11) and (3.12),
\[D(a_{11}+b_{11})=D(a_{11})+D(b_{11}).\]
Similarly, we can prove that
\[D(a_{22}+b_{22})=D(a_{22})+D(b_{22}).\]
**Lemma 3.7**.: _Let \(a_{11}\in R_{11},\ b_{12}\in R_{12},\ c_{21}\in R_{21},\ d_{22}\in R_{22}\). Then_
\[D(a_{11}+b_{12}+c_{21}+d_{22})=D(a_{11})+D(b_{12})+D(c_{21})+D(d_{22}).\]
Proof.: Let \(x_{11}\in R_{11}\). Then
\[D(a_{11}+b_{12}+c_{21}+d_{22})x_{11}+\varphi(a_{11}+b_{12}+c_{21}+d_{22})D(x_{11})\] \[= D[(a_{11}+b_{12}+c_{21}+d_{22})x_{11}]\] \[= D(a_{11}x_{11}+c_{21}x_{11})\] \[= D(a_{11}x_{11})+D(b_{12}x_{11})+D(c_{21}x_{11})+D(d_{22}x_{11})\] \[\quad\text{(By Lemma 3.2 and Lemma 3.3)}\] \[= D(a_{11})x_{11}+\varphi(a_{11})D(x_{11})+D(b_{12})x_{11}+\varphi(b_{12})D(x_{11})\] \[+D(c_{21})x_{11}+\varphi(c_{21})D(x_{11})+D(d_{22})x_{11}+\varphi(d_{22})D(x_{11}).\]
Comparing both sides and using the additivity of \(\varphi\), we have
\[[D(a_{11}+b_{12}+c_{21}+d_{22})-D(a_{11})-D(b_{12})-D(c_{21})-D(d_ {22})]x_{11}=0\] \[\Longrightarrow [D(a_{11}+b_{12}+c_{21}+d_{22})-D(a_{11})-D(b_{12})-D(c_{21})-D( d_{22})]_{11}=0\] \[\text{and} [D(a_{11}+b_{12}+c_{21}+d_{22})-D(a_{11})-D(b_{12})-D(c_{21})-D( d_{22})]_{21}=0. \tag{3.13}\]
Similarly, by taking \(x_{22}\) from \(R_{22}\), we can get
\[[D(a_{11}+b_{12}+c_{21}+d_{22})-D(a_{11})-D(b_{12})-D(c_{21})-D( d_{22})]_{12}=0\] \[\&\ [D(a_{11}+b_{12}+c_{21}+d_{22})-D(a_{11})-D(b_{12})-D( c_{21})-D(d_{22})]_{22}=0. \tag{3.14}\]
Hence, by (3.13) and (3.14),
\[D(a_{11}+b_{12}+c_{21}+d_{22})=D(a_{11})+D(b_{12})+D(c_{21})+D(d_{22}).\]
Proof of Theorem 3.1.: Let \(a,b\in R\). Then
\[a=a_{11}+a_{12}+a_{21}+a_{22},\] \[b=b_{11}+b_{12}+b_{21}+b_{22},\]
for some \(a_{ij},b_{ij}\in R_{ij}\). Now,
\[D(a+b)= D(a_{11}+a_{12}+a_{21}+a_{22}+b_{11}+b_{12}+b_{21}+b_{22})\] \[= D[(a_{11}+b_{11})+(a_{12}+b_{12})+(a_{21}+b_{21})+(a_{22}+b_{22})]\] \[= D(a_{11}+b_{11})+D(a_{12}+b_{12})+D(a_{21}+b_{21})+D(a_{22}+b_{22})\] \[\text{ (By Lemma \ref{lemma:2})}\] \[= D(a_{11})+D(b_{11})+D(a_{12})+D(b_{12})\] \[+D(a_{21})+D(b_{21})+D(a_{22})+D(b_{22})\text{ (By Lemma \ref{lemma:2} and \ref{lemma:2})}\] \[= D(a_{11}+a_{12}+a_{21}+a_{22})+D(b_{11}+b_{12}+b_{21}+b_{22})\] \[\text{ (By Lemma \ref{lemma:2})}\] \[= D(a)+D(b).\]
Hence, \(D\) is additive.
**Corollary 3.8**.: _Let \(R\) be a \(2\)-torsion free semi-prime ring with a non-trivial idempotent \(e\) satisfying the condition_
(B) \[a_{ii}x_{ij}=0,\text{ for some }\ a_{ii}\in R_{ii}\text{ and for all }x_{ij}\in R_{ij}\ (i\neq j),\text{ then }a_{ii}=0.\]
_Then every multiplicative skew derivation \(D\) over \(R\) is additive. Moreover, \(D\) is a skew derivation on \(R\)._
Proof.: Since \(R\) satisfies (B), by Lemma 1.5 in [5], \(R\) also satisfies the condition (A). Hence, we have the desired result by Theorem 3.1.
## 4. Multiplicative Generalized Skew Derivation
**Theorem 4.1**.: _If \(g\) is a multiplicative generalized skew derivation on \(R\), then \(g\) is additive. Moreover, \(g\) is a generalized skew derivation on \(R\)._
Let \(\varphi\) and \(D\) be the associated multiplicative automorphism and associated multiplicative skew derivation on \(R\), respectively. Hence, by Theorem 2.1 and 3.1, \(\varphi\) and \(D\) are additive. Before proving Theorem 4.1, we have several lemmas.
**Lemma 4.2**.: \[g(0)=0.\]
Proof.: \[g(0)=g(0\cdot 0)=g(0)0+\varphi(0)D(0)=0\text{ (By Lemma \ref{lemma:2} and \ref{lemma:2}).}\]
**Lemma 4.3**.: \[D(1)=0.\]
Proof.: \[g(1)=g(1\cdot 1)=g(1)1+\varphi(1)D(1)=g(1)+D(1)\] \[\implies D(1)=0.\]
**Lemma 4.4**.: \[g(e_{1}+e_{2})=g(e_{1})+g(e_{2}).\]
Proof.: \[g(e_{1}) =g(1\cdot e_{1})=g(1)e_{1}+\varphi(1)D(e_{1})\] \[\text{and }g(e_{2}) =g(1\cdot e_{2})=g(1)e_{2}+\varphi(1)D(e_{2}).\]
Adding these,
\[g(e_{1})+g(e_{2}) =g(1)(e_{1}+e_{2})+\varphi(1)(D(e_{1})+D(e_{2}))\] \[=g(1)1+\varphi(1)D(e_{1}+e_{2})\] \[=g(1)+D(1)=g(1)\text{ (By Lemma \ref{lem:g1})}\] \[=g(e_{1}+e_{2}).\]
**Lemma 4.5**.: _Let \(a_{ij}\in R_{ij}\) and \(b_{ij}\in R_{ij}\). Then_
\[(i)\ g(a_{11}+b_{12}) =g(a_{11})+g(b_{12})\] \[(ii)\ g(a_{22}+b_{21}) =g(a_{22})+g(b_{21})\] \[(iii)\ g(a_{11}+b_{21}) =g(a_{11})+g(b_{21})\] \[(iv)\ g(a_{22}+b_{12}) =g(a_{22})+g(b_{12}).\]
Proof.: Let \(x_{22}\in R_{22}\). Then
\[g(a_{11}+b_{12})x_{22}+\varphi(a_{11}+b_{12})D(x_{22})\] \[=g[(a_{11}+b_{12})x_{22}]\] \[=g(b_{12}x_{22})\] \[=g(a_{11}x_{22})+g(b_{12}x_{22})\text{ (By Lemma \ref{lem:g1})}\] \[=g(a_{11})x_{22}+\varphi(a_{11})D(x_{22})+g(b_{12})x_{22}+\varphi( b_{12})D(x_{22}).\]
Comparing both sides,
\[[g(a_{11}+b_{12})-g(a_{11})-g(b_{12})]x_{22}=0\text{ (By Theorem \ref{lem:g1}).}\]
Using the assumption (A) on \(R\), we get
\[[g(a_{11}+b_{12})-g(a_{11})-g(b_{12})]_{12}=0\] \[\text{and }[g(a_{11}+b_{12})-g(a_{11})-g(b_{12})]_{22}=0. \tag{4.1}\]
Similarly, for \(x_{12}\in R_{12}\), we have
\[g(a_{11}+b_{12})x_{12}+\varphi(a_{11}+b_{12})D(x_{12})\] \[=g[(a_{11}+b_{12})x_{12}]\] \[=g(a_{11}x_{12})\] \[=g(a_{11}x_{12})+g(b_{12}x_{12})\] \[=g(a_{11})x_{12}+\varphi(a_{11})D(x_{12})+g(b_{12})x_{12}+\varphi (b_{12})D(x_{12}).\]
This yields that,
\[[g(a_{11}+b_{12})-g(a_{11})-g(b_{12})]x_{12}=0\] \[\implies[g(a_{11}+b_{12})-g(a_{11})-g(b_{12})]_{11}=0\] \[\text{and }[g(a_{11}+b_{12})-g(a_{11})-g(b_{12})]_{21}=0. \tag{4.2}\]
Hence, by (4.1) and (4.2),
\[g(a_{11}+b_{12})=g(a_{11})+g(b_{12}).\]
Similarly, we can prove that
\[g(a_{22}+b_{21})=g(a_{22})+g(b_{21}).\]
Now,
\[g(a_{11}) =g(e_{1}(a_{11}+b_{21}))=g(e_{1})(a_{11}+b_{21})+\varphi(e_{1})D(a_ {11}+b_{21})\] \[\text{and }g(b_{21}) =g(e_{2}(a_{11}+b_{21}))=g(e_{2})(a_{11}+b_{21})+\varphi(e_{2})D(a _{11}+b_{21}).\]
Adding these,
\[g(a_{11})+g(b_{21}) =(g(e_{1})+g(e_{2}))(a_{11}+b_{21})+(\varphi(e_{1})+\varphi(e_{2} ))D(a_{11}+b_{21})\] \[=g(1)(a_{11}+b_{21})+\varphi(1)D(a_{11}+b_{21})\ (\text{By Lemma \ref{lem:g_11})}\] \[=g(1(a_{11}+b_{21}))=g(a_{11}+b_{21}).\]
Similarly, we can prove that
\[g(a_{22}+b_{12})=g(a_{22})+g(b_{12}).\]
**Lemma 4.6**.: _Let \(a_{ij},b_{ij},c_{ij}\in R_{ij}\). Then_
\[(i)\ g(a_{12}+b_{12}c_{22})=g(a_{12})+g(b_{12}c_{22})\] \[(ii)\ g(a_{21}+b_{22}c_{21})=g(a_{21})+g(b_{22}c_{21}).\]
Proof.: Note that,
\[a_{12}+b_{12}c_{22}=(e_{1}+b_{12})(a_{12}+c_{22}).\]
Therefore,
\[g(a_{12}+b_{12}c_{22})= g((e_{1}+b_{12})(a_{12}+c_{22}))\] \[= g(e_{1}+b_{12})(a_{12}+c_{22})+\varphi(e_{1}+b_{12})D(a_{12}+c_{ 22})\] \[= (g(e_{1})+g(b_{12}))(a_{12}+c_{22})+(\varphi(e_{1})+\varphi(b_{12 }))(D(a_{12})+D(c_{22}))\] \[(\text{By Lemma \ref{lem:g_11})}\] \[= g(e_{1})a_{12}+\varphi(e_{1})D(a_{12})+g(e_{1})c_{22}+\varphi(e_{ 1})D(c_{22})\] \[+g(b_{12})a_{12}+\varphi(b_{12})D(a_{12})+g(b_{12})c_{22}+\varphi( b_{12})D(c_{22})\] \[= g(e_{1}a_{12})+g(e_{1}c_{22})+g(b_{12}a_{12})+g(b_{12}c_{22})\] \[= g(a_{12})+g(b_{12}c_{22})\ (\text{By Lemma \ref{lem:g_11})}.\]
Similarly, we can prove that
\[g(a_{21}+b_{22}c_{21})=g(a_{21})+g(b_{22}c_{21}).\]
**Lemma 4.7**.: _Let \(a_{ij},b_{ij}\in R_{ij}\). Then_
\[(i)\ g(a_{12}+b_{12}) =g(a_{12})+g(b_{12})\] \[(ii)\ g(a_{21}+b_{21}) =g(a_{21})+g(b_{21}).\]
Proof.: Let \(x_{22}\in R_{22}\). Then
\[g(a_{12}+b_{12})x_{22}+\varphi(a_{12}+b_{12})D(x_{22})\] \[=g[(a_{12}+b_{12})x_{22}]\] \[=g(a_{12}x_{22}+b_{12}x_{22})\] \[=g(a_{12}x_{22})+g(b_{12}x_{22})\] (By Lemma 4.6) \[=g(a_{12})x_{22}+\varphi(a_{12})D(x_{22})+g(b_{12})x_{22}+\varphi(b_{12}) D(x_{22}).\]
Comparing both sides, we get
\[\begin{split}&[g(a_{12}+b_{12})-g(a_{12})-g(b_{12})]x_{22}=0\\ &\implies[g(a_{12}+b_{12})-g(a_{12})-g(b_{12})]_{12}=0\\ &\text{and }[g(a_{12}+b_{12})-g(a_{12})-g(b_{12})]_{22}=0.\end{split} \tag{4.3}\]
Let \(x_{12}\in R_{12}\). Then
\[g(a_{12}+b_{12})x_{12}+\varphi(a_{12}+b_{12})D(x_{12})\] \[=g[(a_{12}+b_{12})x_{12}]\] \[=g(0)=0\] \[=g(a_{12}x_{12})+g(b_{12}x_{12})\] \[=g(a_{12})x_{12}+\varphi(a_{12})D(x_{12})+g(b_{12})x_{12}+\varphi( b_{12})D(x_{12}),\]
which yields,
\[\begin{split}&[g(a_{12}+b_{12})-g(a_{12})-g(b_{12})]x_{12}=0\\ &\implies[g(a_{12}+b_{12})-g(a_{12})-g(b_{12})]_{11}=0\\ &\text{and }[g(a_{12}+b_{12})-g(a_{12})-g(b_{12})]_{21}=0.\end{split} \tag{4.4}\]
Hence, by (4.3) and (4.4),
\[g(a_{12}+b_{12})=g(a_{12})+g(b_{12}).\]
Similarly, we can prove that
\[g(a_{21}+b_{21})=g(a_{21})+g(b_{21}).\]
**Lemma 4.8**.: _Let \(a_{ii},b_{ii}\in R_{ii}\). Then_
\[(i)\ g(a_{11}+b_{11})=g(a_{11})+g(b_{11})\] \[(ii)\ g(a_{22}+b_{22})=g(a_{22})+g(b_{22}).\]
Proof.: Let \(x_{22}\in R_{22}\). Then
\[g(a_{11}+b_{11})x_{22}+\varphi(a_{11}+b_{11})D(x_{22})\] \[=g[(a_{11}+b_{11})x_{22}]\] \[=g(0)=0\] \[=g(a_{11}x_{22})+g(b_{11}x_{22})\] \[=g(a_{11})x_{22}+\varphi(a_{11})D(x_{22})+g(b_{11})x_{22}+\varphi(b_{11})D(x_{22}).\]
Comparing both sides,
\[[g(a_{11}+b_{11})-g(a_{11})-g(b_{11})]x_{22}=0\] \[\Longrightarrow [g(a_{11}+b_{11})-g(a_{11})-g(b_{11})]_{12}=0\] \[\text{and }[g(a_{11}+b_{11})-g(a_{11})-g(b_{11})]_{22}=0 \tag{4.5}\]
Similarly, by taking \(x_{12}\in R_{12}\),
\[[g(a_{11}+b_{11})-g(a_{11})-g(b_{11})]_{11}=0\] \[\text{and }[g(a_{11}+b_{11})-g(a_{11})-g(b_{11})]_{21}=0. \tag{4.6}\]
Hence, by (4.5) and (4.6),
\[g(a_{11}+b_{11})=g(a_{11})+g(b_{11}).\]
Similarly, we can prove that
\[g(a_{22}+b_{22})=g(a_{22})+g(b_{22}).\]
**Lemma 4.9**.: _Let \(a_{11}\in R_{11}\), \(b_{12}\in R_{12}\), \(c_{21}\in R_{21}\) and \(d_{22}\in R_{22}\). Then_
\[g(a_{11}+b_{12}+c_{21}+d_{22})=g(a_{11})+g(b_{12})+g(c_{21})+g(d_{22}).\]
Proof.: Let \(x_{11}\in R_{11}\). Then
\[g(a_{11}+b_{12}+c_{21}+d_{22})x_{11}+\varphi(a_{11}+b_{12}+c_{21 }+d_{22})D(x_{11})\] \[=g[(a_{11}+b_{12}+c_{21}+d_{22})x_{11}]\] \[=g(a_{11}x_{11}+c_{21}x_{11})\] \[=g(a_{11}x_{11})+g(b_{12}x_{11})+g(c_{21}x_{11})+g(d_{22}x_{11})\] (By Lemma 4.2 and 4.5) \[=g(a_{11})x_{11}+\varphi(a_{11})D(x_{11})+g(b_{12})x_{11}+\varphi( b_{12})D(x_{11})\] \[+g(c_{21})x_{11}+\varphi(c_{21})D(x_{11})+g(d_{22})x_{11}+ \varphi(d_{22})D(x_{11}).\]
Comparing both sides, we get
\[[g(a_{11}+b_{12}+c_{21}+d_{22})-g(a_{11})-g(b_{12})-g(c_{21})-g( d_{22})]x_{11}=0.\] \[\implies [g(a_{11}+b_{12}+c_{21}+d_{22})-g(a_{11})-g(b_{12})-g(c_{21})-g( d_{22})]_{11}=0\] \[\text{and }[g(a_{11}+b_{12}+c_{21}+d_{22})-g(a_{11})-g(b_{12})-g( c_{21})-g(d_{22})]_{21}=0. \tag{4.7}\]
Similarly, by taking \(x_{22}\in R_{22}\),
\[[g(a_{11}+b_{12}+c_{21}+d_{22})-g(a_{11})-g(b_{12})-g(c_{21})-g( d_{22})]_{12}=0\] \[\text{and }[g(a_{11}+b_{12}+c_{21}+d_{22})-g(a_{11})-g(b_{12})-g( c_{21})-g(d_{22})]_{22}=0. \tag{4.8}\]
Hence, by (4.7) and (4.8),
\[g(a_{11}+b_{12}+c_{21}+d_{22})=g(a_{11})+g(b_{12})+g(c_{21})+g(d_{22}).\]
Proof of Theorem 4.1.: Let \(a,b\in R\). Then
\[a=a_{11}+a_{12}+a_{21}+a_{22},\] \[b=b_{11}+b_{12}+b_{21}+b_{22},\text{ for some }a_{ij},\ b_{ij}\in R_{ij}.\]
Now,
\[g(a+b)=g(a_{11}+a_{12}+a_{21}+a_{22}+b_{11}+b_{12}+b_{21}+b_{22})\] \[=g[(a_{11}+b_{11})+(a_{12}+b_{12})+(a_{21}+b_{21})+(a_{22}+b_{22})]\] \[=g(a_{11}+b_{11})+g(a_{12}+b_{12})+g(a_{21}+b_{21})+g(a_{22}+b_{22})\] (By Lemma 4.9) \[=g(a_{11})+g(b_{11})+g(a_{12})+g(b_{12})+g(a_{21})+g(b_{21})+g(a_{22} )+g(b_{22})\] (By Lemma 4.7 and 4.8) \[=g(a_{11}+a_{12}+a_{21}+a_{22})+g(b_{11}+b_{12}+b_{21}+b_{22})\] (By Lemma 4.9) \[=g(a)+g(b).\]
Hence, \(g\) is additive.
**Corollary 4.10**.: _Let \(R\) be a \(2\)-torsion free semi-prime ring with a non-trivial idempotent \(e\) satisfying the condition_ (B) _given in Corollary 3.8. Then every multiplicative generalized skew derivation \(g\) over \(R\) is additive. Moreover, \(g\) is a generalized skew derivation on \(R\)._
Proof.: We have the desired result by Lemma 1.5 in [5] and Theorem 4.1.
## Acknowledgement
The first author is thankful to the Indian Institute of Technology Patna for providing the research facilities.
|
2309.16014 | Graph-level Representation Learning with Joint-Embedding Predictive
Architectures | Joint-Embedding Predictive Architectures (JEPAs) have recently emerged as a
novel and powerful technique for self-supervised representation learning. They
aim to learn an energy-based model by predicting the latent representation of a
target signal y from the latent representation of a context signal x. JEPAs
bypass the need for negative and positive samples, traditionally required by
contrastive learning while avoiding the overfitting issues associated with
generative pretraining. In this paper, we show that graph-level representations
can be effectively modeled using this paradigm by proposing a Graph
Joint-Embedding Predictive Architecture (Graph-JEPA). In particular, we employ
masked modeling and focus on predicting the latent representations of masked
subgraphs starting from the latent representation of a context subgraph. To
endow the representations with the implicit hierarchy that is often present in
graph-level concepts, we devise an alternative prediction objective that
consists of predicting the coordinates of the encoded subgraphs on the unit
hyperbola in the 2D plane. Through multiple experimental evaluations, we show
that Graph-JEPA can learn highly semantic and expressive representations, as
shown by the downstream performance in graph classification, regression, and
distinguishing non-isomorphic graphs. The code will be made available upon
acceptance. | Geri Skenderi, Hang Li, Jiliang Tang, Marco Cristani | 2023-09-27T20:42:02Z | http://arxiv.org/abs/2309.16014v2 | # Graph-level Representation Learning with Joint-Embedding Predictive Architectures
###### Abstract
Joint-Embedding Predictive Architectures (JEPAs) have recently emerged as a novel and powerful technique for self-supervised representation learning. They aim to learn an energy-based model by predicting the latent representation of a target signal \(y\) from a context signal \(x\). JEPAs bypass the need for data augmentation and negative samples, which are typically required by contrastive learning, while avoiding the overfitting issues associated with generative-based pretraining. In this paper, we show that graph-level representations can be effectively modeled using this paradigm and propose Graph-JEPA, the first JEPA for the graph domain. In particular, we employ masked modeling to learn embeddings for different subgraphs of the input graph. To endow the representations with the implicit hierarchy that is often present in graph-level concepts, we devise an alternative training objective that consists of predicting the coordinates of the encoded subgraphs on the unit hyperbola in the 2D plane. Extensive validation shows that Graph-JEPA can learn representations that are expressive and competitive in both graph classification and regression problems.
## 1 Introduction
Graph data is ubiquitous in the real world due to its ability to universally abstract various concepts and entities (Ma & Tang, 2021; Velickovic, 2023). To deal with this peculiar data structure, Graph Neural Networks (GNNs) (Scarselli et al., 2008; Kipf & Welling, 2016a; Gilmer et al., 2017; Velickovic et al., 2017) have established themselves as a stable solution. Nevertheless, most applications of GNNs usually rely on ground-truth labels for training. The growing amount of graph data in fields such as bioinformatics, chemoinformatics, and social networks has made manual labeling laborious, sparking significant interest in unsupervised graph representation learning.
A particularly emergent area in this line of research is self-supervised learning (SSL). In SSL, alternative forms of supervision are created stemming from the input signal. This process is then typically followed by invariance-based or generative-based pretraining (Liu et al., 2023; Assran et al., 2023). Invariance-based approaches optimize the model to produce comparable embeddings for different views of the input signal. A common term associated with this procedure is contrastive learning (Tian et al., 2020). Typically, these alternative views are created by a data augmentation procedure. The views are then passed through their respective encoder networks (which may share weights), as shown in Figure 1(a). Finally, an energy function, usually framed as a distance, acts on the latent embeddings. In the graph domain, several works have applied contrastive learning by designing graph-specific augmentations (You et al., 2020), using multi-view learning (Hassani & Khasahmadi, 2020) and even adversarial learning (Suresh et al., 2021). This pretraining strategy is effective but comes with several drawbacks, i.e., the necessity to augment the data and process negative samples. In order to learn embeddings that are useful for downstream tasks, the augmentations must also be non-trivial.
Generative methods on the other hand typically remove or corrupt portions of the input and predict them using an autoencoding procedure (Vincent et al., 2010; He et al., 2022), or rely on autoregressive modeling (Brown et al., 2020; Hu et al., 2020). Figure 1(b) depicts the typical instantiation of these methods: The input signal \(x\) is fed into an encoder network that constructs the latent representation and a decoder generates \(\hat{y}\), the data corresponding to the target signal \(y\). The energy function is then applied in data space, often as a reconstruction term (Bengio et al., 2013). Generative models generally display strong overfitting tendencies since they have to estimate the data distribution (implicitly or explicitly), so the latent representations must be directly descriptive of the whole data space. This can be very problematic for graphs given that they live in a non-Euclidean and inhomogeneous data space. Nevertheless, masked autoencoding has recently shown promising results in the graph domain with appropriately designed models (Hou et al., 2022; Tan et al., 2023).
Inspired by the innovative Joint-Embedding Predictive Architecture (JEPA) (LeCun, 2022; Assran et al., 2023), we propose Graph-JEPA to learn graph-level representations by bridging contrastive and generative models. As illustrated in Figure 1(c), a JEPA has two encoder networks that receive the input signals and produce the corresponding representations. Notably, the two encoders can be different models and don't need to share weights. A predictor module outputs a prediction of the latent representation of one signal based on the other, possibly conditioned on another variable. Graph-JEPA does not require any negative samples or data augmentation and by operating in the latent space it avoids the pitfalls associated with learning high-level details needed to fit the data distribution. However, the graph domain presents us with additional challenges, namely: context and target extraction; designing a latent prediction task that is optimal for graph-level concepts; and learning expressive representations. In response to these questions, we design Graph-JEPA with a specific masked modeling objective. The input graph is first divided into several subgraphs, and then the latent representation of randomly chosen target subgraphs is predicted given a context subgraph. The subgraph representations are consequently pooled to create a graph-level representation.
The nature of graph-level concepts is often assumed to be hierarchical (Ying et al., 2018), thus we conjecture that the typical latent reconstruction objective used in current JEPA formulations is not enough to provide optimal downstream performance. To this end, we design a prediction objective that starts by expressing the target subgraph encoding as a high-dimensional description of the hyperbolic angle. The predictor module is then tasked with predicting the location of the target in the 2D unit hyperbola. This prediction is compared with the target coordinates, obtained by using the aforementioned hyperbolic angle. Graph-JEPA outperforms popular contrastive and generative graph-level SSL methods on different datasets, while maintaining efficiency and ease of training. Notably, we observe from our experiments that Graph-JEPA can run up to 1.45x faster than Graph-MAE (Hou et al., 2022) and 8x faster than MVGRL (Hassani and Khasahmadi, 2020). Finally, we empirically demonstrate Graph-JEPA's ability to learn highly expressive graph representations, showing it almost perfectly distinguishes pairs of non-isomorphic graphs that the 1-WL test cannot differentiate.
Figure 1: Illustration of the three main SSL approaches: (a) Joint-Embedding Architectures learn to create similar embeddings for inputs x and y that are compatible with each other and dissimilar embeddings for inputs that are not compatible. This compatibility is implemented in practice by creating different views of the input data. (b) Generative Architectures reconstruct a signal \(y\) from a compatible signal \(x\) by conditioning the decoder network on additional (potentially latent) variables \(z\). (c) Joint-Embedding Predictive Architectures act as a bridge: They utilize a predictor network that processes the context \(x\) and is conditioned on additional (potentially latent) factors, to predict the embedding of the target \(y\) in latent space.
## 2 Related work
### Self-Supervised Graph Representation Learning
Graph Neural Networks (GNNs) (Scarselli et al., 2008; Kipf and Welling, 2016a; Velickovic et al., 2017; Hamilton et al., 2017; Xu et al., 2018; Wu et al., 2019) are now established solutions to different graph machine learning problems such as node classification, link prediction, and graph classification. Nevertheless, the cost of labeling graph data is quite high given the immense variability of graph types and the information they can represent. To alleviate this problem, SSL on graphs has become a research frontier. SSL methods on graphs can be divided into two major groups (Liu et al., 2023):
**Contrastive Methods.** Contrastive learning methods usually consist of minimizing an energy function (Hinton, 2002; Gutmann and Hyvarinen, 2010) between different views of the same data. InfoGraph (Sun et al., 2019) maximizes the mutual information between the graph-level representation and the representations of substructures at different scales. GraphCL (You et al., 2020) works similarly to distance-based contrastive methods in the imaging domain. The authors first propose four types of graph augmentations and then perform contrastive learning based on them. The work of Hassani and Khasahmadi (2020) goes one step further by contrasting structural views of graphs. They also show that a large number of views or multiscale training does not seem to be beneficial, contrary to the image domain. Another popular research direction for contrastive methods is learning graph augmentations (Suresh et al., 2021) and AutoSSL (Jin et al., 2021), where the goal is to learn how to automatically leverage multiple pretext tasks effectively. Contrastive learning methods typically require a lot of memory due to data augmentation and negative samples. Graph-JEPA is much more efficient than typical formulations of these architectures given that it does not require any augmentations or negative samples. Another major difference is that the prediction in latent space in JEPAs is done through a separate predictor network, rather than using the common Siamese structure (Bromley et al., 1993) (Figure 1(a) vs 1(c)).
**Generative Methods.** The goal of generative models is to recover the data distribution, an objective that is typically implemented through a reconstruction process. In the graph domain, most generative architectures that are also used for SSL are extensions of Auto-Encoder (AE) (Hinton and Zemel, 1993) and Variational Auto-Encoder (VAE) (Kingma and Welling, 2013) architectures. These models learn an embedding from the input data and then use a reconstruction objective with (optional) regularization in order to maximize the data evidence. Kipf and Welling (2016b) extended the framework of AEs and VAEs to graphs by using a GNN as an encoder and the reconstruction of the adjacency matrix as a training objective. However, the results on node and graph classification benchmarks with these embeddings are often unsatisfactory compared with contrastive learning methods, a tendency also observed in other domains (Liu et al., 2023). A recent and promising direction is masked autoencoding (MAE) (He et al., 2022), which has proved to be a very successful framework for the image and text domains. GraphMAE (Hou et al., 2022) is an instantiation of MAEs in the graph domain, where the node attributes are perturbed and then reconstructed, providing a paradigm shift from the structure learning objective of GAEs. S2GAE (Tan et al., 2023) is one of the latest GAEs, which focuses on reconstructing the topological structure but adds several auxiliary objectives and additional designs. Our architecture differs from generative models in that it learns to predict directly in the latent space, thereby bypassing the necessity of remembering and overfitting high-level details that help maximize the data evidence (Figure 1(b) vs 1(c)).
### Joint-Embedding Predictive Architectures
Joint-Embedding Predictive Architectures (LeCun, 2022) are a recently proposed design for SSL. The idea is similar to both generative and contrastive approaches, yet JEPAs are non-generative since they cannot directly predict \(y\) from \(x\), as shown in Figure 1(c). The energy of a JEPA is given by the prediction error in the embedding space, not the input space. These models can therefore be understood as a way to capture abstract dependencies between \(x\) and \(y\), potentially given another latent variable \(z\). It is worth noting that the different models comprising the architecture may differ in terms of structure and parameters. An in-depth explanation of Joint-Embedding Predictive Architectures and their connections to human representation learning is provided by LeCun (2022). The most well-known works to use particular instantiations of JEPAs before the term was officially coined are BYOL (Grill et al., 2020) and SimSiam (Chen and He, 2021). There have recently been two
works that implement JEPAs in practice, both in the vision domain: I-JEPA (Assran et al., 2023) and MC-JEPA (Bardes et al., 2023). I-JEPA provides an instantiation in the context of images, where the goal is to predict the latent embeddings of different image patches given a context patch. MC-JEPA instantiates the architecture in the context of learning optical flow and content features within a shared encoder. In this work, we propose the first JEPA for the graph domain and use it to learn graph-level representations in a self-supervised manner.
## 3 Method
**Preliminary.** Let a graph \(G\) be defined as \(G=(V,E)\) where \(V=\{v_{1}\ldots v_{N}\}\) is the set of nodes, with a cardinality \(|V|=N\), and \(E=\{e_{1}\ldots e_{M}\}\) is the set of edges, with a cardinality \(|E|=M\). We consider symmetric, unweighted graphs, although our method can be generalized to directed or weighted graphs. In this setting, \(G\) can be represented by an adjacency matrix \(A\in\{0,1\}^{N\times N}\), with \(A_{ij}=1\) if nodes \(v_{i}\) and \(v_{j}\) are connected and \(A_{ij}=0\) otherwise.
**A general overview.** Figure 2 provides an overview of the proposed architecture. The high-level idea of Graph-JEPA is to first divide the input graph into subgraphs (patches) (He et al., 2023) and then predict the representation of a randomly chosen target subgraph from a context subgraph (Assran et al., 2023). We would like to stress that this masked modeling objective is realized in latent space, without the need for augmentations or negative samples. The subgraph representations are then pooled to create a vectorized graph-level representation. Therefore Graph-JEPA can be described through a sequence of operations, namely: 1. Spatial Partitioning; 2. Subgraph Embedding; 3. Context and Target Encoding; 4. Latent Target Prediction.
### Spatial Partitioning
We base the initial part of the Graph-JEPA architecture on the recent work of He et al. (2023), but similar ideas consisting of graph partitioning have been proposed before for Graph SSL (Jin et al., 2020). This step consists of creating different subgraphs (patches) of a graph, similar to how Vision Transformers (ViT) (Dosovitskiy et al., 2020) operate on images. We rely on the METIS
Figure 2: An overview of Graph-JEPA. We first extract non-overlapping subgraphs (patches) (a.), perform a 1-hop neighborhood expansion (b.), and encode the subgraphs with a GNN (c.). After the subgraph encoding, one is randomly picked as the context and \(m\) others as the targets (d.) and they are fed into their respective encoders (e.). The embeddings generated from the target encoder are used to produce the target subgraphs’ coordinates \(\psi_{y}\). Finally, the predictor network is tasked with directly predicting the coordinates \(\hat{\psi}_{y}\) for each target subgraph based on the context embedding and the positional embedding of each target subgraph (f.). A regression loss acts as the energy function \(D\) between the predicted and target coordinates.
(Karypis & Kumar, 1998) graph clustering algorithm, which partitions a graph into a pre-defined, non-overlapping number of clusters \(p\) (Figure 2(a)), such that the number of within-cluster links is much higher than between-cluster links. Note that having non-overlapping subgraphs can be problematic since edges can be lost in this procedure and it is possible to end up with empty "subgraphs". We would also like to preserve the notion of locality in each subgraph while relating it to others that are close in terms of shared nodes. To create this overlap and avoid completely empty subgraphs, a one-hop neighborhood expansion of all the nodes belonging to a subgraph is performed (Figure 2(b)).
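To make the partition-and-expand step concrete, the short Python sketch below (not the authors' released code) grows non-overlapping clusters into overlapping patches via the one-hop expansion described above; the toy graph, the function name `expand_patches`, and the hard-coded `membership` list standing in for a METIS output are illustrative assumptions.

```python
# Minimal sketch: one-hop expansion of non-overlapping clusters into overlapping patches.
# `membership` is assumed to come from a graph partitioner such as METIS; here it is hard-coded.
from collections import defaultdict

def expand_patches(adj, membership):
    """adj[i] = set of neighbours of node i; membership[i] = cluster id of node i."""
    patches = defaultdict(set)
    for node, part in enumerate(membership):
        patches[part].add(node)
    expanded = {}
    for part, nodes in patches.items():
        halo = set(nodes)
        for v in nodes:                      # add every 1-hop neighbour of the patch
            halo |= adj[v]
        expanded[part] = sorted(halo)
    return expanded

# toy graph: two triangles joined by one edge (nodes 0-2 and 3-5)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
membership = [0, 0, 0, 1, 1, 1]              # stand-in for a METIS partition into p = 2 clusters
print(expand_patches(adj, membership))       # {0: [0, 1, 2, 3], 1: [2, 3, 4, 5]}
```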
### Subgraph Embedding
After partitioning the graph, we learn a representation for each subgraph through a GNN (Figure 2(c)). The specific choice of GNN architecture is arbitrary and depends on what properties we wish to induce in the representation. The learned node embeddings are mean pooled to create a vector representation for each subgraph: \(\{h_{1}...h_{p}\},h\in\mathbb{R}^{d}\). Given that these embeddings will be used as context or target variables, it is important to provide additional information regarding the subgraphs to help guide the predictor network. Thus, we propose to use a positional embedding for each subgraph, which is implemented as the maximum Random Walk Structural Embedding (RWSE) of all the nodes in that subgraph. In this way, the position is characterized in a global and consistent manner for each patch. Formally, a RWSE (Dwivedi et al., 2021) for a node \(v\) can be defined as:
\[P_{v}=(M_{ii},M_{ii}^{2},\dots,M_{ii}^{k}),P_{v}\in\mathbb{R}^{k},\quad M^{k}= (D^{-1}A)^{k} \tag{1}\]
where \(M^{k}\) is the random-walk diffusion matrix of order \(k\) and \(i\) is the index of node \(v\) in the adjacency matrix. Therefore, \(M_{ii}^{k}\) encodes the probability of node \(v\) returning to itself after a \(k\)-step random walk. Given a subgraph \(l\), let \(V_{l}\) denote the set of nodes in \(l\). The subgraph RWSE is then defined as:
\[P_{l}=\max_{v\in V_{l}}P_{v} \tag{2}\]
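A small numpy sketch of Eqs. (1) and (2) is given below, assuming a dense adjacency matrix and a toy graph; the function names are illustrative and not taken from the paper's implementation.

```python
# Minimal numpy sketch of the node-level RWSE (Eq. 1) and the patch-level max-pooling (Eq. 2).
import numpy as np

def rwse(A, k):
    """Return P of shape (N, k) with P[v, s-1] = (M^s)_{vv}, where M = D^{-1} A."""
    deg = A.sum(axis=1)
    M = A / deg[:, None]                     # random-walk transition matrix D^{-1} A
    P = np.zeros((A.shape[0], k))
    Ms = np.eye(A.shape[0])
    for s in range(k):
        Ms = Ms @ M                          # M^(s+1)
        P[:, s] = np.diag(Ms)                # return probability after s+1 steps
    return P

def patch_rwse(P, patch_nodes):
    """Subgraph RWSE of Eq. (2): element-wise maximum over the nodes of the patch."""
    return P[list(patch_nodes)].max(axis=0)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)    # toy 4-node graph
P = rwse(A, k=3)
print(patch_rwse(P, {0, 1, 2}))
```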
### Context and Target Encoding
Given the subgraph representations and their respective positional embeddings, we frame the Graph-JEPA prediction task in a similar manner to I-JEPA (Assran et al., 2023). The goal of the network is to predict the latent embeddings of randomly chosen target subgraphs, given one random context subgraph. The prediction is conditioned on positional information regarding each subgraph. At each training step, we choose one random subgraph as the context \(x\) and \(m\) others as targets \(Y=\{y_{1},\dots,y_{m}\}\) (Figure 2(d)). These subgraphs are processed by the context and target encoders (Figure 2(e)) which are parametrized by Transformer encoder blocks (without self-attention for the context) where normalization is applied at first (Xiong et al., 2020). The target encoder uses Hadamard self-attention (He et al., 2023), but other choices, such as the standard self-attention mechanism (Vaswani et al., 2017) are perfectly viable. We can summarize this step as:
\[z^{x}=E_{context}(x),z^{x}\in\mathbb{R}^{d},\ \ Z^{y}=E_{target}(Y),Z^{y}\in \mathbb{R}^{m\times d} \tag{3}\]
At this stage, we could use the predictor network to directly predict \(Z^{y}\) from \(z^{x}\). This is the typical formulation of JEPAs, also followed by Assran et al. (2023). We argue that learning how to organize concepts for abstract objects such as graphs or networks directly in Euclidean space is suboptimal. In the following section, we propose a simple trick to bypass this problem using the encoding and prediction mechanisms in Graph-JEPA. The following subsections and the ablation studies in Section 4.3 will provide additional insights.
### Latent Target Prediction
Different works in Neuroscience have shown the importance of learning hierarchically consistent concepts (Deco et al., 2021), especially during infancy and young age (Rosenberg & Feigenson, 2013). Networks in the real world often conform to some concept of hierarchy (Moustians et al., 2021) and this assumption is frequently used when learning graph-level representations (Ying et al., 2018). Thus, we conjecture that Graph-JEPA should operate in a hyperbolic space, where learned embeddings implicitly organize hierarchical concepts (Nickel & Kiela, 2017; Zhao et al., 2023). This gives rise to another issue: Hyperbolic (Poincare) embeddings are known to have several tradeoffs
related to dimensionality (Sala et al., 2018; Guo et al., 2022), which severely limits the expressive ability of the model. Given that graphs can describe very abstract concepts, high expressivity in terms of model parameters is preferred. In summary, we would ideally like to have a high-dimensional latent code that has a concept of hyperbolicity built into it.
To achieve this, we think of the target embedding as a high-dimensional representation of the hyperbolic angle, which allows us to describe each target patch through its position in the 2D unit hyperbola. Formally, given a target patch \(l\), its embedding \(Z_{l}^{y}\) and positional encoding \(P_{l}\), we first express the latent target as:
\[\alpha_{l}^{y}=\frac{1}{d}\sum_{n=1}^{d}Z_{l\;n}^{y},\;\;\;\psi_{l}^{y}= \left(\begin{array}{c}\cosh(\alpha_{l}^{y})\\ \sinh(\alpha_{l}^{y})\end{array}\right) \tag{4}\]
where \(cosh(\cdot)\) and \(sinh(\cdot)\) are the hyperbolic cosine and sine functions respectively. The predictor module is then tasked with directly locating the target in the unit hyperbola given the context embedding and the target patch's positional encoding.
\[\hat{\psi}_{l}^{y}=MLP(LayerNorm(z^{x}+P_{l})),\hat{\psi}_{l}^{y}\in\mathbb{R }^{2} \tag{5}\]
This allows us to frame the learning procedure as a simple regression problem and the whole network can be trained end-to-end (Figure 2f). In practice, we use the smooth L1 loss as the distance function, as it is less sensitive to outliers compared to the L2 loss (Girshick, 2015):
\[L(y,\hat{y})=\frac{1}{N}\sum_{n=1}^{N}S_{n},\;\;\;S_{n}=\begin{cases}0.5(y_{n} -\hat{y}_{n})^{2}/\beta,&\text{if }|y-\hat{y}|<\beta\\ |y-\hat{y}|-0.5\beta,&\text{otherwise}\end{cases} \tag{6}\]
Thus, we are effectively measuring how far away the context and target patches are in the unit hyperbola of the plane, but the targets are actually described through a high dimensional latent code (Eq. 4). We explicitly show the differences between this choice and using the Euclidean or Hyperbolic distances as energy functions (in the latent space) in Section 4.3. Our proposed pretraining objective forces the context encoder to understand the differences in the hyperbolic angle between the target patches, which can be thought of as establishing a hierarchy.
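The latent prediction task of Eqs. (4)-(6) can be sketched in a few lines of PyTorch; the embedding width, the two-layer predictor, the smooth-L1 \(\beta\), and the use of the mean over the embedding dimensions as the hyperbolic angle are illustrative assumptions rather than the released implementation.

```python
# Illustrative sketch of the hyperbolic-angle targets (Eq. 4), the conditioned predictor (Eq. 5)
# and the smooth-L1 energy (Eq. 6). Shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 64
z_x = torch.randn(d)        # context embedding from the context encoder
Z_y = torch.randn(5, d)     # m = 5 target-patch embeddings from the target encoder
P_y = torch.randn(5, d)     # positional (RWSE-based) encodings of the target patches

# Eq. (4): one hyperbolic angle per target and its coordinates on the unit hyperbola
alpha = Z_y.mean(dim=1)                                              # (m,)
psi = torch.stack([torch.cosh(alpha), torch.sinh(alpha)], dim=1)     # (m, 2)

# Eq. (5): the predictor sees the context embedding conditioned on each target position
predictor = nn.Sequential(nn.LayerNorm(d), nn.Linear(d, d), nn.GELU(), nn.Linear(d, 2))
psi_hat = predictor(z_x.unsqueeze(0) + P_y)                          # (m, 2)

# Eq. (6): smooth-L1 energy; targets are detached (stop-gradient on the target branch)
loss = F.smooth_l1_loss(psi_hat, psi.detach(), beta=1.0)
print(loss.item())
```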
An asymmetric design of the predictor and target networks (in terms of parameters) is used since it is reported to prevent representation collapse in self-supervised contrastive techniques (Chen et al., 2020; Baevski et al., 2022). We also utilize stop-gradient for the target encoder and update its weights using an Exponential Moving Average of the context encoder, as done in other works that utilize JEPAs (Assran et al., 2023; Grill et al., 2020; Chen and He, 2021). In the next section, we provide insights into these choices and why they are important. Finally, to produce the unique graph-level representation when fine-tuning, we simply feed all the subgraphs through the trained target encoder and then use mean pooling, obtaining a single feature vector \(z_{G}\in\mathbb{R}^{d}\) that represents the whole graph.
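A minimal sketch of this exponential-moving-average update is shown below; the momentum value and the toy modules standing in for the two encoders are assumptions, not the paper's settings.

```python
# Illustrative EMA update: the target encoder is a slowly moving, gradient-free copy
# of the context encoder.
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(context_encoder, target_encoder, momentum=0.996):
    for p_c, p_t in zip(context_encoder.parameters(), target_encoder.parameters()):
        p_t.data.mul_(momentum).add_(p_c.data, alpha=1.0 - momentum)

# toy usage with two identically shaped modules standing in for the encoders
context_enc, target_enc = nn.Linear(8, 8), nn.Linear(8, 8)
target_enc.load_state_dict(context_enc.state_dict())    # start from identical weights
for p in target_enc.parameters():
    p.requires_grad_(False)                              # stop-gradient on the target side
ema_update(context_enc, target_enc)
```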
### Why does Graph-JEPA work?
In this subsection, we aim to provide a discussion regarding the design choices for Graph-JEPA and also why our formulation makes sense in the graph domain. We make the following assumptions for our analysis: i) The predictor network is linear; ii) We consider the encoded context features \(z^{x}\) and target coordinates \(\psi_{l}^{y}\). Without loss of generality, we assume that there is a one-to-one correspondence between context and target patches. (This holds in practice due to Eq. 5); iii) The problem is treated as a least-squares problem in finite-dimensional vector spaces over the field of reals \(\mathbb{R}\). Based on our assumptions, we can rewrite the context features as \(X\in\mathbb{R}^{n\times d}\), the target coordinates as \(Y\in\mathbb{R}^{n\times 2}\), and the weights of the linear model as \(W\in\mathbb{R}^{d\times 2}\). The objective of the predictor is:
\[\operatorname*{arg\,min}_{W}\left\|XW-Y\right\|^{2} \tag{7}\]
where \(\left\|.\right\|\) indicates the Frobenius norm. The solution to this system can be given by the (multivariate) least squares estimator:
\[W=(X^{T}X)^{-1}X^{T}Y \tag{8}\]
By plugging Eq. 8 into Eq. 7 and rewriting the difference by factorizing \(Y\) we get:
\[\operatorname*{arg\,min}_{W}\left\|(X(X^{T}X)^{-1}X^{T}-I_{n})Y\right\|^{2} \tag{9}\]
Thus, to find an optimal (linear) predictor we must find the orthogonal projection of \(Y\) onto the orthogonal complement of a subspace of \(Col(X)\). This simply translates into finding the linear combination that is closest (in terms of \(\left\|\cdot\right\|^{2}\)) to \(Y\) in a particular subspace. Similarly to what was shown in Richemond et al. (2023), we argue that this provides a key intuition behind the JEPA design. The target encoder which produces \(Y\) must not share weights or be optimized with the context encoder. If that were the case, the easiest solution to Eq. 9 would be using a vector that is orthogonal to itself, i.e., the \(\mathbf{0}\) vector, leading to representation collapse. Note that in practice, even though the predictor network doesn't have to be linear, it would still fail to perfectly learn a linear relationship (as shown above). This would render the architecture suboptimal.
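This argument can be checked numerically; the short numpy sketch below assumes a full-column-rank \(X\) so that \(X^{T}X\) is invertible.

```python
# Tiny check of Eqs. (7)-(9): the optimal linear predictor is the least-squares estimator,
# and its residual XW - Y equals (X(X^T X)^{-1} X^T - I) Y.
import numpy as np

rng = np.random.default_rng(0)
n, d = 32, 8
X = rng.normal(size=(n, d))     # context features
Y = rng.normal(size=(n, 2))     # target coordinates

W = np.linalg.solve(X.T @ X, X.T @ Y)            # Eq. (8)
P = X @ np.linalg.solve(X.T @ X, X.T)            # orthogonal projector onto Col(X)
residual = (P - np.eye(n)) @ Y                   # Eq. (9)
assert np.allclose(X @ W - Y, residual)          # the two expressions coincide
print(np.linalg.norm(residual) ** 2)             # minimal value of the objective in Eq. (7)
```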
We now argue about the existence of hyperbolicity due to our latent prediction task and the necessity of the asymmetric predictor. The predictor network is linearly approximating the behavior of the unit hyperbola such that it best matches with the generated target coordinates (Eq 6). Thus, the network is actually trying to estimate a space that can be considered a particular section of the hyperboloid model (Reynolds, 1993). In the hyperboloid model, hyperbolas appear as hyperbolic geodesics. We are therefore evaluating our energy in a (very) restricted part of hyperbolic space. As mentioned before, we find this task to offer great flexibility as it is straightforward to implement, works well in practice (Section 4.2), and is computationally efficient compared to the hyperbolic distance used to typically learn hyperbolic embeddings (Nickel and Kiela, 2017). In our current setting, this approximation cannot perfectly capture the dynamics of the unit hyperbola, given the linearity of the predictor. In practice, with a non-linear predictor network, we might immediately run into a degenerate solution where the target encoder and predictor output collapse representations by immediately minimizing the training objective. It is therefore important to parametrize the predictor using a simpler network that is less expressive but can capture the correct dynamics over the training process.
## 4 Experiments
The experimental section introduces the empirical evaluation of the Graph-JEPA model in terms of downstream performance on different graph datasets and tasks, as well as various ablation studies that provide more insights into our design choices.
### Experimental setting
We use the TUD datasets (Morris et al., 2020) as commonly done for graph-level SSL (Suresh et al., 2021; Tan et al., 2023). We utilize seven different graph-classification datasets: PROTEINS, MUTAG, DD, REDDIT-BINARY, REDDIT-MULTI-5K, IMDB-BINARY and IMDB-MULTI. For all classification experiments, we report the accuracy over five runs (with different seeds) of ten-fold cross validation. It is worth noting that we retrain the Graph-JEPA model for each fold, without ever having access to the testing partition both in the pretraining and fine-tuning stage. For graph regression, we use the ZINC dataset and report the Mean Squared Error over ten runs (with different seeds), given that the testing partition is already provided. More details regarding the training procedure are available in Appendix A.1.
### Downstream performance
For the experiments on downstream performance, we follow Suresh et al. (2021) and also report the results of a fully supervised Graph Isomorphism Network (GIN) (Xu et al., 2018), denoted F-GIN. As can be seen in Table 1, Graph-JEPA achieves competitive results on all datasets, setting the state-of-the-art as a pretrained backbone on five different datasets and coming second on one. Overall, our proposed framework learns semantic embeddings that work well on different types of graphs, showing that Graph-JEPA can indeed be utilized as a general pretraining method for graph-level SSL. Notably, Graph-JEPA works well for both classification and regression and performs better than F-GIN on all classification datasets.
We further explore the performance of our model on the synthetic EXP dataset (Abboud et al., 2020). The goal behind this experiment is to empirically verify if Graph-JEPA can learn highly expressive graph representations, in terms of graph isomorphisms. The results in Table 2 show that our model is able to perform much better than commonly used GNNs. Given its local and global exchange of information, this result is to be expected. Most importantly, Graph-JEPA closely rivals the flawless performance of Graph-MLP-Mixer (He et al., 2023), which is trained in a supervised manner.
### Exploring the Graph-JEPA latent space
As discussed in Section 3.4, the choice of energy function has a big impact on the learned representations. We argued against the use of simple Euclidean or Poincare embeddings previously and now provide empirical evidence for this conjecture in Table 3. Our results reveal that learning the distance between patches in the 2D unit hyperbola provides a simple way to get the advantages of both embedding types. Hyperbolic embeddings must be learned in lower dimensions due to stability issues (Yu & De Sa, 2021), while Euclidean ones do not properly reflect the dependencies between subgraphs. Our results suggest that the hyperbolic distance seems to be a better choice than the Euclidean distance in lower dimensions, but it is very unstable and expensive computationally in high dimensions. The proposed approach provides the best overall results, with the Euclidean distance performing negligibly better in a single case. We provide a qualitative example that describes how the embedding space is altered when using our proposed latent objective in Appendix A.3.
### Ablation studies
[Table 1 appears here in the original: downstream performance of Graph-JEPA against the fully supervised F-GIN and the contrastive and generative baselines discussed above, reported as accuracy (± standard deviation) on the seven classification datasets and MAE on ZINC. The extracted table content is too garbled to reproduce, so its entries are omitted.]

In the following, we present various ablation studies to get a better understanding of the different design choices of Graph-JEPA. For all ablations, we consider 4 out of the 8 datasets initially presented in Table 1, making sure to choose different graph types for a fair comparison. All results are summarized in Figure 3, but we also provide the results in tabular format in Appendix A.2 for additional clarity.
**Positional embedding.** Following He et al. (2023), it is possible to use the RWSE of the patches as conditioning information. Formally, let \(B\in\{0,1\}^{p\times N}\) be the patch assignment matrix, such that \(B_{ij}=1\) if \(v_{j}\in p_{i}\). We can calculate a coarse patch adjacency matrix \(A^{\prime}=BB^{T}\in\mathbb{R}^{p\times p}\), where each \(A^{\prime}_{ij}\) contains the node overlap between \(p_{i}\) and \(p_{j}\). The RWSE can be calculated as described in Eq. 1 (considering \(A^{\prime}\)). We test Graph-JEPA with these relative positional embeddings and find that they still provide good performance, but consistently fall behind the node-level (global) RWSE that we employ in our formulation (Figure 3a). Indeed, a great issue for the relative embeddings is that the number of shared neighbors obscures the local peculiarities of each patch and this adds more variance to the results.
**Using different Self-Attention mechanisms.** As stated in Section 3.3, Graph-JEPA uses Hadamard self-attention (He et al., 2023), which provides a strong inductive bias for graphs. It is possible to make no assumptions about the data and render the prediction task more challenging, by using the standard self-attention mechanism (Vaswani et al., 2017). We show the results of this change in Figure 3b. The results reveal that the performance is slightly better but more unstable, i.e., additional variance in the results. This is to be expected since, as mentioned previously, using the standard self-attention mechanism emphasizes a lack of inductive bias regarding the input data.
**Random subgraphs.** A natural question that arises in our framework is how to design the spatial partitioning procedure. Using a structured approach like METIS is intuitive and leads to favorable results. Another option would be to simply extract random, non-empty subgraphs as context and target. As can be seen in Figure 3c, the random patches provide strong performance as well but do not show any improvement. Even though our results show that it might not be necessary to use a structured way to extract the patches, not all graphs are equally easy to sample randomly from. Thus, we advocate extracting subgraphs with METIS as it is a safer option in terms of generalizability across different graphs and what inductive biases it provides.
## 5 Conclusion
In this work, we introduce the first Joint Embedding Predictive Architecture (JEPA) for graph-level Self-Supervised Learning (SSL). A proper design of the model both in terms of data preparation and pretraining objective reveals that it is possible for a neural network model to self-organize the semantic knowledge embedded in a graph, demonstrating competitive performance in both graph classification and regression. Future research directions include extending the method to node and edge-level learning, exploring the expressiveness of Graph-JEPA, and gaining insights into the optimal geometry of the embedding space for graph SSL.
Figure 3: Ablation studies: a. Performance when using absolute (node-level) vs relative (patch-level) RWSEs. b. Results when using the standard self-attention mechanism vs a graph-specific one. c. Performance when using METIS subgraphs vs using random subgraphs. We remind the reader that the MAE is reported on ZINC and Accuracy on the other datasets. The error bars represent the average standard deviation values over the runs in order to report results similarly to the tables in previous sections. Best viewed in color. |
2309.07119 | Metastable Kitaev Spin Liquids in Isotropic Quantum Heisenberg Magnets | Metastable states with surprising properties abound in Hilbert space. We
study unfrustrated isotropic spin-1/2 Heisenberg models in honeycomb lattice
and find emergence of \textit{metastable Kitaev spin liquids having a 2-spin
nematic long range order}, via spontaneous symmetry breaking. Decomposition of
isotropic Heisenberg Hamiltonian $H^H$ into an exact sum of 3 noncommuting
(permuted) Kitaev Hamiltonians, $H^H$ = $H^{K}_{\rm xyz}+H^{K}_{\rm
yzx}+H^{K}_{\rm zxy},$ helps us build a degenerate \textit{manifold of flux
free metastable Kitaev spin liquid vacua} and vector Fermionic (Goldstone like)
collective modes. We introduce a method, \textit{symmetric decomposition of
Hamiltonians}, which might help craft \textit{designer metastable phases}. It
is likely that small Kitaev interactions present in Jackeli-Khaliullin-Kitaev
materials, with dominant Heisenberg couplings, bring in metastable Kitaev spin
liquid features in real experiments. Present work opens possibilities of
performing quantum computation and other tasks, using exotic quasiparticles and
exotic metastable states, present in nonexotic real systems. | Ganapathy Baskaran | 2023-09-13T17:56:26Z | http://arxiv.org/abs/2309.07119v1 | # Metastable Kitaev Spin Liquids in Isotropic Quantum Heisenberg Magnets
###### Abstract
Metastable states with surprising properties abound in Hilbert space. We study unfrustrated isotropic spin-\(\frac{1}{2}\) Heisenberg models in honeycomb lattice and find emergence of _metastable Kitaev spin liquids having a 2-spin nematic long range order_, via spontaneous symmetry breaking. Decomposition of isotropic Heisenberg Hamiltonian \(H^{H}\) into an exact sum of 3 noncommuting (permuted) Kitaev Hamiltonians, \(H^{H}=H^{K}_{\rm xyz}+H^{K}_{\rm yzx}+H^{K}_{\rm zxy}\), helps us build a degenerate _manifold of flux free metastable Kitaev spin liquid vacua_ and vector Fermionic (Goldstone like) collective modes. We introduce a method, _symmetric decomposition of Hamiltonians_, which might help craft _designer metastable phases_. It is likely that small Kitaev interactions present in Jackeli-Khaliullin-Kitaev materials, with dominant Heisenberg couplings, bring in metastable Kitaev spin liquid features in real experiments. Present work opens possibilities of performing quantum computation and other tasks, using exotic quasiparticles and exotic metastable states, present in nonexotic real systems.
## I. Introduction
In the incomprehensibly vast many body Hilbert space of model Hamiltonians, novel metastable states, many-body localization, quantum scars, Hilbert space fragmentation, time crystals, quantum orders and phenomena, beyond the familiar ground realities exist [1, 2, 3, 4]. Model Hamiltonians, designed to understand certain low energy physics and experiments, are space ships that could take us to new worlds. Such explorations are currently in vogue, thanks to exciting developments in non-equilibrium quantum manybody physics and quantum information science.
Equilibrium stable phases with long range order are well known: scores of crystalline phases of ice, ferro/antiferromagnets, ferro/antiferroelectrics, superfluids, superconductors etc. Metastable phases, which break translational invariance, are built using topological defects in underlying Landau type of order parameters: intertwined network of magnetic domain walls, Skyrmion glasses in magnets, vortex glasses in type-II superconductors, structural glasses, defect networks in crystals etc. Degree of metastability, time scale of relaxation are decided by, among other things, thermal, quantum fluctuations, interaction between defects etc. There are also defect free metastable phases, like supercooled water and possibly supersolid He\({}^{4}\).
There is growing interest in going beyond Landau's order parameter paradigm in quantum matter. For example, quantum spin liquids, present in certain frustrated Heisenberg spin models [5], anisotropic Kugel-Khomskii-Kitaev compass models [7, 8] etc., harbor exotic neutral Fermion, anyon excitations, emergent gauge fields, quantum orders etc [9, 10, 11, 12]. Spin liquids are sources of rich physics and connect us to worlds beyond condensed matter and rich mathematics [13, 14].
Metastable quantum phases with quantum orders or symmetry protected topological orders remain vastly unexplored. We suggest _Symmetric Decomposition of Hamiltonians_ (SDH), a route to explore or sculpt degeneracy metastable vacua, having quantum or topological orders. It is illustrated using unfrustrated isotropic Heisenberg spin models. Isotropic spin-\(\frac{1}{2}\) Heisenberg Hamiltonians with nearest neighbor FM/AFM coupling in honeycomb lattice are known to have spontaneous long range FM/AFM ordered ground states. We build, in these unfrustrated isotropic Heisenberg systems, _metastable Kitaev spin liquid vacua_, exhibiting spontaneous two spin nematic order, quantum order, Goldstone like Fermionic collective modes etc.
Decay of the false Kitaev spin liquid vacuum of an isotropic Heisenberg magnet is a difficult question to analyse; it is beyond the scope of the present article. However, we hypothesize a three stage decay, with a superimposed transient and _time crystal_ like local oscillations.
In _Jackeli-Khaliullin-Kitaev materials_[15], with a dominant Heisenberg coupling, metastable Kitaev spin liquid phenomena are likely to intervene in real experiments, because of an enhanced Kitaev spin liquid susceptibility exhibited by isotropic Heisenberg systems. This could explain some of the anomalies and experimental puzzles.
Our work opens possibilities of performing quantum computation and other tasks, using exotic quasiparticles and exotic metastable states hiding in real nonexotic systems.
## II Symmetries of Quantum Spin Liquids
Anderson proposed spin singlet short range RVB (quantum spin liquid) ground state for spin-\(\frac{1}{2}\) isotropic Heisenberg antiferromagnet [5], on a frustrated triangular lattice. This paramagnetic quantum spin liquid state has no long range \(120^{\circ}\) type single spin order. Further, phase relations among bond singlet configurations can be varied suitably to define family of quantum spin liquids, beyond the simple short range RVB [6], containing rich variety of quantum orders, topological orders etc.
It also is possible to have stable paramagnetic quantum spin liquid ground states, which have spontaneous long range two spin nematic orders [16; 17; 18; 19; 20], showing divergence of corresponding 2 spin susceptibilities, obtained via 4 spin correlation functions. Or three spin scalar chiral orders [21], exhibiting divergence of corresponding 3 spin susceptibilities, obtained via suitable 6 spin correlation functions. These generalised RVB states have resonating singlets and triplets appearing together in a variety of ways and define a plethora of quantum spin liquids.
Kitaev spin liquid ground state (zero flux sector) is anisotropic in spin space. It has lesser symmetry, from the point of view of global spin symmetry present in isotropic Heisenberg Hamiltonians. However, Kitaev spin liquid has the full symmetry of the Kitaev spin Hamiltonian that has manifest bond anisotropy. In the present article, we show emergence of such metastable phases and corresponding Goldstone modes, even in the absence of frustration in spin-\(\frac{1}{2}\) isotropic Heisenberg magnets. (We also find [22] that we could stabilize such a spontaneous symmetry broken Kitaev spin liquid like ground state having nematic order, by incorporating non neighbor Heisenberg couplings in spin-\(\frac{1}{2}\), isotropic Heisenberg Hamiltonian in honeycomb lattice).
## FM or AFM order in isotropic Heisenberg Hamiltonians
To study an AFM ordered ground state arising via spontaneous symmetry breaking, we traditionally decompose isotropic spin Hamiltonians into solvable pieces. We then start with the ground state solution of one of the solvable pieces and include the effects of the remaining pieces using certain approximations. For example, we decompose the non-frustrated nearest neighbor S=\(\frac{1}{2}\) Heisenberg Hamiltonian \(H^{H}\) on a bipartite lattice into three noncommuting Ising pieces,
\[H^{H}=J\sum_{\langle ij\rangle}\vec{\sigma}_{i}\cdot\vec{\sigma}_{j}=J\sum_{ \langle ij\rangle}(\sigma_{i}^{x}\sigma_{j}^{x}+\sigma_{i}^{y}\sigma_{j}^{y}+ \sigma_{i}^{z}\sigma_{j}^{z}) \tag{1}\]
and perform a spin wave (stability) analysis about the classical (direct product) ground state of one of the three pieces, say the z-axis Ising piece. The Holstein-Primakoff transformation and a large-S approximation are often used to include the remaining XY piece of the Hamiltonian, study the effects of residual spin interactions (transverse quantum fluctuations), and obtain Goldstone modes.
The presence of positive-frequency (quadratic/linear) Goldstone modes indicates the emergence of long range order via spontaneous symmetry breaking. In the case of unfrustrated AFMs in dimensions 2 and above, transverse zero point spin fluctuations lead to a (finite, non-divergent) zero point reduction in the sublattice magnetization. Quantum fluctuations also modify the Ising AFM (product) state into quantum entangled many body spin states.
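To make the Goldstone-mode statement concrete, the following minimal Python sketch evaluates the textbook linear spin-wave dispersion \(\omega_{\mathbf{k}}=zJS\sqrt{1-|\gamma_{\mathbf{k}}|^{2}}\) of a nearest-neighbor bipartite antiferromagnet on the honeycomb lattice (z = 3) and checks that the mode frequency vanishes linearly as \(\mathbf{k}\to 0\) while remaining non-negative elsewhere. The lattice vectors and overall prefactor convention are illustrative assumptions, not results of this article.

```python
import numpy as np

# Nearest-neighbor vectors of the honeycomb lattice (bond length 1); a common
# convention, assumed here purely for illustration.
deltas = np.array([[0.0, 1.0],
                   [np.sqrt(3) / 2, -0.5],
                   [-np.sqrt(3) / 2, -0.5]])

def gamma(k):
    """Structure factor gamma_k = (1/3) sum_delta exp(i k.delta)."""
    return np.exp(1j * deltas @ k).mean()

def omega(k, J=1.0, S=0.5, z=3):
    """Linear spin-wave dispersion of the bipartite (Neel) antiferromagnet."""
    return z * J * S * np.sqrt(max(1.0 - abs(gamma(k)) ** 2, 0.0))

# Goldstone mode: omega vanishes (linearly) as k -> 0
for kx in (1e-1, 1e-2, 1e-3):
    print(f"|k| = {kx:.0e}   omega = {omega(np.array([kx, 0.0])):.6f}")

# All spin-wave frequencies are non-negative across the Brillouin zone
ks = np.random.uniform(-np.pi, np.pi, size=(1000, 2))
assert all(omega(k) >= 0 for k in ks)
```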
## Looking for metastable Kitaev spin liquids in isotropic Heisenberg Hamiltonians
We observe that the isotropic SU(2) symmetric Heisenberg Hamiltonian \(H^{H}\), with nearest neighbor \(\langle ij\rangle\) coupling on a honeycomb lattice, can be symmetrically and exactly decomposed into three noncommuting Kitaev Hamiltonian pieces,
\[H^{H}=J\sum_{\langle ij\rangle}\vec{\sigma}_{i}\cdot\vec{\sigma}_{j}\ \equiv\ H^{K}_{xyz}+H^{K}_{yzx}+H^{K}_{zxy} \tag{2}\]
where the reference Kitaev piece
\[H^{K}_{xyz}\equiv J\sum_{\langle ij\rangle_{x}}\sigma_{i}^{x}\sigma_{j}^{x}+J \sum_{\langle ij\rangle_{y}}\sigma_{i}^{y}\sigma_{j}^{y}+J\sum_{\langle ij \rangle_{z}}\sigma_{i}^{z}\sigma_{j}^{z} \tag{3}\]
The three noncommuting Kitaev pieces are related by cyclic permutations of spin components along 3 nearest neighbor bonds, denoted as \(\left\langle ij\right\rangle_{x},\left\langle ij\right\rangle_{y}\) and \(\left\langle ij\right\rangle_{z}\) of the reference piece \(H^{K}_{xyz}\) (equation 3).
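Because the decomposition in equation 2 holds bond by bond, it can be verified directly on a small cluster. The sketch below builds the Heisenberg Hamiltonian and the three cyclically permuted Kitaev pieces on a single hexagon with numpy and checks both the identity and the noncommutativity of the pieces; the particular bond labeling is an arbitrary illustrative choice.

```python
import numpy as np
from functools import reduce

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = {'x': sx, 'y': sy, 'z': sz}
I2 = np.eye(2, dtype=complex)

N = 6  # one hexagon of the honeycomb lattice

def op(site, a):
    """sigma^a acting on `site` of an N-site spin-1/2 cluster."""
    mats = [I2] * N
    mats[site] = pauli[a]
    return reduce(np.kron, mats)

# Bond types around the hexagon (one possible labeling; the identity is bond-local).
bonds = [(0, 1, 'x'), (1, 2, 'y'), (2, 3, 'z'),
         (3, 4, 'x'), (4, 5, 'y'), (5, 0, 'z')]

def kitaev(perm):
    """Kitaev piece with spin components permuted by `perm` on x/y/z bonds."""
    H = np.zeros((2 ** N, 2 ** N), dtype=complex)
    for i, j, b in bonds:
        a = perm[b]                      # spin component carried by this bond type
        H += op(i, a) @ op(j, a)
    return H

H_heis = sum(op(i, a) @ op(j, a) for i, j, _ in bonds for a in 'xyz')

# Reference Kitaev piece and its two cyclic permutations (equation 2)
H_xyz = kitaev({'x': 'x', 'y': 'y', 'z': 'z'})
H_yzx = kitaev({'x': 'y', 'y': 'z', 'z': 'x'})
H_zxy = kitaev({'x': 'z', 'y': 'x', 'z': 'y'})

assert np.allclose(H_heis, H_xyz + H_yzx + H_zxy)
print("H_Heisenberg = H^K_xyz + H^K_yzx + H^K_zxy verified on a hexagon")
# The three pieces do not commute, as stated in the text
print("||[H_xyz, H_yzx]|| =", np.linalg.norm(H_xyz @ H_yzx - H_yzx @ H_xyz))
```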
In the supplementary section we summarise the exact solution of the manybody spectrum [8] of the reference Kitaev Hamiltonian. Kitaev writes the Pauli spin operators as \(\sigma_{i}^{x}=ic_{i}^{x}c_{i}\), \(\sigma_{i}^{y}=ic_{i}^{y}c_{i}\) and \(\sigma_{i}^{z}=ic_{i}^{z}c_{i}\), using four Majorana Fermion operators \((c_{i}^{x},c_{i}^{y},c_{i}^{z},c_{i})\equiv(\vec{c}_{i},c_{i})\) (a vector and a scalar) per site, of Hilbert space dimension 4. This representation preserves the commutation relations between components of the Pauli spin operators. The condition \(\sigma_{i}^{x}\sigma_{i}^{y}\sigma_{i}^{z}=i\) for each site imposes a local constraint \(c_{i}^{x}c_{i}^{y}c_{i}^{z}c_{i}=1\), which reduces the 4-dimensional Hilbert space of Majorana Fermion modes to the 2-dimensional physical Hilbert space of the Pauli spin operators.
In what follows we pick a reference Kitaev Hamiltonian piece \(H^{K}_{xyz}(0)\), in the _zero flux sector_ - it is the Hamiltonian of free emergent Majorana Fermions hopping on the honeycomb lattice (equation 16, supplementary section), in the absence of flux (B\({}_{p}=1\)) in every plaquette. Then we study, using certain approximations, how this reference metastable Kitaev liquid vacuum and its low energy excitation spectra are modified by interactions among Majorana Fermions, arising from interaction terms of remaining two Kitaev pieces.
As discussed in the next section, in the renormalized vacuum sector, Majorana Fermion hopping parameter gets modified and new vector Majorana Fermion collective modes emerge. All excitations have frequencies greater than or equal to zero. This qualifies the reference zero flux Kitaev spin liquid state as a metastable or locally stable phase, within our approximations.
## Interaction-modified metastable Kitaev spin liquids in isotropic Heisenberg magnets
In the Majorana Fermion representation for spin-\(\frac{1}{2}\) operators, the Heisenberg Hamiltonian has manifest rotational invariance with respect to a global SO(3) rotation of the vectors \(\vec{c}_{i}\):
\[H=J\sum_{\langle ij\rangle}\vec{\sigma}_{i}\cdot\vec{\sigma}_{j}=J\sum_{\langle ij \rangle}c_{i}c_{j}(\vec{c}_{i}\cdot\vec{c}_{j}) \tag{4}\]
Scalar and vector Majorana Fermions get delocalized over the honeycomb lattice, via isotropic 2-body interactions. However, this 2-body interaction prevents exact solvability of SU(2) symmetric Heisenberg Hamiltonian, even in the enlarged Hilbert space.
With a focus on building metastable vacua, we proceed to find the effect of the two remaining Kitaev pieces on the ground state Kitaev spin liquid sector of the reference Kitaev piece, using a Hartree-type approximate analysis. We introduce the Heisenberg Hamiltonian H\({}^{H}\)(0;xyz) for the ground state (zero flux) sector and take care of quantum fluctuations about the zero flux sector of the reference Kitaev spin liquid (described by \(H^{K}_{xyz}(0)\)), arising from interactions among Majorana Fermions present in the remaining two Kitaev pieces \(H^{K}_{yzx}+H^{K}_{zxy}\).
The effective Heisenberg Hamiltonian H\({}^{H}\)(0;xyz), corresponding to the reference zero flux sector, acquires a specific anisotropic two-body Majorana Fermion interaction form:
\[H^{H}(0;xyz)=H^{K}_{xyz}(0)+H^{K}_{yzx}+H^{K}_{zxy}, \tag{5}\]
where
\[H^{K}_{xyz}(0)\equiv J\sum_{\langle ij\rangle}ic_{i}c_{j}\quad{\rm and} \tag{6}\]
\[H^{K}_{yzx}+H^{K}_{zxy}\equiv J\sum_{\langle ij\rangle_{x}}c_{i}c_{j}(c^{y}_{i}c^{y}_{j}+c^{z}_{i}c^{z}_{j})+J\sum_{\langle ij\rangle_{y}}c_{i}c_{j}(c^{z}_{i}c^{z}_{j}+c^{x}_{i}c^{x}_{j})+J\sum_{\langle ij\rangle_{z}}c_{i}c_{j}(c^{x}_{i}c^{x}_{j}+c^{y}_{i}c^{y}_{j}) \tag{7}\]
In H\({}^{H}\)(0;xyz), corresponding to the zero flux sector, the scalar Fermion has an independent dynamics (first term, from the reference Kitaev piece), while the vector Fermions have no independent dynamics. The scalar Majorana Fermion has bond dependent 2-body interactions with the vector Fermions (arising from the remaining two Kitaev pieces, equation 7), which help delocalize the vector Fermions in an anisotropic fashion. This is to be contrasted with the full isotropic Heisenberg Hamiltonian (equation 4), where the scalar and vector Fermions get delocalized in an isotropic fashion via the 2-body interactions.
Now we perform a uniform Hartree factorization of the interaction terms (equation 7) and get the Hartree approximated Heisenberg Hamiltonian \(H^{H}_{H}(0;xyz)\) in the zero flux sector:
\[H^{H}_{H}(0;xyz)=(J+\alpha_{v})\sum_{\langle ij\rangle}ic_{i}c_{j}\;+ \tag{8}\]
\[+(J+\alpha_{s})\Big\{\sum_{\langle ij\rangle_{x}}i(c^{y}_{i}c^{y}_{j}+c^{z}_{i}c^{z}_{j})+\sum_{\langle ij\rangle_{y}}i(c^{z}_{i}c^{z}_{j}+c^{x}_{i}c^{x}_{j})+\sum_{\langle ij\rangle_{z}}i(c^{x}_{i}c^{x}_{j}+c^{y}_{i}c^{y}_{j})\Big\}+{\rm constant}, \tag{9}\]
where the Hartree averages, \(\alpha_{v}\equiv\frac{2}{3}\langle\vec{c}_{i}\cdot\vec{c}_{j}\rangle\) and \(\alpha_{s}\equiv\langle c_{i}c_{j}\rangle\), are determined self-consistently. We define renormalized hopping parameters, \(J_{s}\equiv(J+\alpha_{v})\) for the scalar and \(J_{v}\equiv(J+\alpha_{s})\) for the vector Fermions. After regrouping, the above Hamiltonian takes a more transparent form:
\[H^{H}_{H}(0;xyz)=J_{s}\sum_{\langle ij\rangle}ic_{i}c_{j}+J_{v}\Big\{\sum_{\langle ij\rangle_{yz}}ic^{x}_{i}c^{x}_{j}+\sum_{\langle ij\rangle_{zx}}ic^{y}_{i}c^{y}_{j}+\sum_{\langle ij\rangle_{xy}}ic^{z}_{i}c^{z}_{j}\Big\}, \tag{10}\]
where the new bond symbol \(\langle ij\rangle_{xy}\) stands for nearest neighbor hopping of the z-component of the vector Majorana Fermions along a system of parallel xy zig-zag chains, made of alternating x- and y-type bonds. Similarly, the x and y components of the vector spins hop along the yz and zx zig-zag chain systems, respectively. While the scalar Fermions continue to hop isotropically on the honeycomb lattice, the vector Fermions acquire anisotropic 1D chain dynamics. That is, the (x,y,z) components of the vector Majorana Fermions become one dimensional and hop independently only along the (yz, zx, xy) systems of zig-zag chains, respectively.
Using translational invariance and periodic boundary conditions, we get Bloch states for the scalar Fermion with a 2D (graphene-like) Kitaev spectrum, and a 1D spectrum for each component of the vector Fermions, which hop along the yz, zx and xy zig-zag chains. All energy eigenvalues are \(\geq 0\). This ensures that the Kitaev spin liquid state is not completely destabilized by _transverse quantum fluctuations_ (present in the isotropic Heisenberg Hamiltonian) and survives as a metastable one. The scalar Fermion spectrum is renormalized. While the scalar Majorana Fermions have Dirac nodes at the K and K' points of the BZ, the three components of the vector Fermions have three independent sets of zero energy lines in the BZ. We have Majorana _Fermi surfaces_ in the 2D BZ, somewhat similar to the Kitaev model on the Fisher lattice [25].
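A rough numerical illustration of these two kinds of spectra is sketched below: a graphene-like dispersion with Dirac nodes for the scalar Majorana Fermion, and a 1D Majorana-chain dispersion with zero-energy points (lines in the 2D BZ) for one vector component. The Bravais vectors, the \(2J|\cdot|\) normalization, and the \(|\sin q|\) chain form are assumed conventions for illustration only.

```python
import numpy as np

# Honeycomb Bravais vectors (bond length 1); an assumed convention.
a1 = np.array([np.sqrt(3) / 2, 1.5])
a2 = np.array([-np.sqrt(3) / 2, 1.5])

def scalar_dispersion(k, Js=1.0):
    """Graphene-like dispersion of the scalar Majorana fermion (zero-flux sector)."""
    f = 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)
    return 2 * Js * abs(f)

def chain_dispersion(q, Jv=1.0):
    """Dispersion of one vector-Majorana component hopping along a 1D zig-zag chain."""
    return 2 * Jv * np.abs(np.sin(q))

# Dirac node of the scalar branch at the K point of the Brillouin zone
K = np.array([4 * np.pi / (3 * np.sqrt(3)), 0.0])
print("scalar dispersion at K:", scalar_dispersion(K))        # ~ 0

# The chain branch vanishes at q = 0, pi -> zero-energy lines in the 2D BZ
for q in (0.0, np.pi / 3, np.pi / 2, np.pi):
    print(f"chain dispersion at q = {q:.3f}: {chain_dispersion(q):.4f}")

# All excitation energies are >= 0, consistent with a (meta)stable vacuum
qs = np.random.uniform(-np.pi, np.pi, 2000)
assert (chain_dispersion(qs) >= 0).all()
```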
## Spontaneous 2-spin nematic metastable order in isotropic Heisenberg magnets
The metastable Kitaev spin liquid phase is anisotropic, as evidenced by the 1D nature of the three components of the vector Majorana Fermions, delocalized independently along three different sets of zig-zag chains. There is a nematic spin liquid order arising via symmetry breaking (from the choice of the reference Kitaev piece) in an SU(2) symmetric Heisenberg spin system. The vector Fermions, which were organized as a static Z\({}_{2}\) gauge field in the Kitaev model, organize themselves as positive frequency vector Fermion collective modes in our isotropic Heisenberg Hamiltonian.
We obtain a manifold of Kitaev spin liquid metastable vacua generated by SO(3) rotation. This is achieved by a global rotation of the spin axes of the reference Kitaev piece. The vector Fermionic modes we have obtained are collective modes, which represent generalized Goldstone modes. Interestingly, the Kitaev spin liquid metastable vacua exist for both ferro- and antiferromagnetic isotropic Heisenberg systems (for J positive and negative).
The Kitaev Hamiltonian is known to have highly anisotropic and ultra short range two-spin correlation functions [23], consistent with the anisotropic bond dependent Ising spin-spin interactions in the Kitaev spin Hamiltonian. In particular,
\[\langle\sigma_{i}^{\alpha}\sigma_{j}^{\beta}\rangle=\begin{cases}g_{\langle ij\rangle_{\alpha}}\,\delta_{\alpha\beta}&\text{$i,j$ nearest neighbors}\\ 0&\text{otherwise}\end{cases} \tag{11}\]
In our approximation we find that the bond dependent anisotropy gets modified only quantitatively by the quantum fluctuations contained in equation 7. An accurate quantitative estimate of the correction to the spin-spin correlation function (unlike the case of the pure Kitaev model [23]) involves projection of the Majorana Fermion wave function onto the physical spin Hilbert space, which we do not perform in the present article.
From the point of view of the full rotational spin symmetry of the isotropic Heisenberg Hamiltonian, the above anisotropic spin-spin correlations represent an emergent two-spin nematic order present in the metastable Kitaev spin liquid we have constructed. It is a three-sublattice (three types of nearest neighbor bonds) nematic order, represented by three _directors_ (headless vectors) lying parallel to the x, y and z directions in spin space.
It is also important to remember that even though we know the approximate manybody spectrum, the manybody wave functions, including the ground state, are not known in the physical Hilbert space, in terms of physical spin variables.
## Crafting by Symmetric Decomposition of Hamiltonians
Often our focus is on the low energy manybody spectrum and low energy physics of model Hamiltonians. However, these Hamiltonians also organize a rich spectrum of excited manybody states and metastable phases in the Hilbert space. We can get information about metastable phases when models are completely integrable, or by using exact diagonalization studies of small systems, quantum Monte Carlo simulations, time dependent studies, etc. One could also use renormalization group studies in the multi-parameter space of Hamiltonians to explore metastable phases through stable and unstable fixed points and the flow in parameter space.
We saw that the symmetric splitting of the unfrustrated isotropic Heisenberg Hamiltonian on the honeycomb lattice into three noncommuting, bond anisotropic (frustrated) Kitaev spin Hamiltonians (the frustration arising from bond dependent anisotropic spin-spin interactions) helped us find Kitaev spin liquid-like metastable phases, an associated spontaneous nematic order, etc. This suggests an alternative approach: _symmetric decomposition of Hamiltonians, using symmetries of subgroups of the full lattice translational and spin rotational symmetries_.
Our hypothesis is that residual symmetries in the decomposed Hamiltonian protect the corresponding metastable phases. Further, if the decomposed reference Hamiltonian piece is frustrated enough in spin space (as in the Kitaev Hamiltonian), it also ensures spin liquid metastable states.
There are multiple ways to decompose a Hamiltonian such that the pieces have symmetries of subgroups of the lattice translation and spin rotation groups. To illustrate this we introduce the complementary Kitaev model \(H^{cK}_{xyz}\equiv H^{K}_{yzx}+H^{K}_{zxy}\), so that the isotropic Heisenberg Hamiltonian is a sum of the Kitaev and complementary Kitaev Hamiltonians: \(H^{H}\equiv H^{K}_{xyz}+H^{cK}_{xyz}\). We wish to find whether metastable vacua (a complementary Kitaev spin liquid) corresponding to the complementary Kitaev model exist. Since the complementary Kitaev model is not exactly solvable, answering this question becomes difficult.
Let us consider the simpler situation of splitting the unfrustrated isotropic spin-\(\frac{1}{2}\) 2D antiferromagnetic Heisenberg Hamiltonian on a triangular lattice into Ising and XY pieces. We ask: is there a manifold of metastable states with spontaneous Kosterlitz-Thouless XY order and corresponding Goldstone modes?
Similarly, the spin-\(\frac{1}{2}\) XY model Hamiltonian on a square lattice, H\({}^{XY}\), can be symmetrically decomposed into a noncommuting sum of two Kugel-Khomskii [7] pieces \(H^{KK}_{xy}\) and \(H^{KK}_{yx}\):
\[H^{XY}=J\sum_{\langle ij\rangle}(\sigma_{i}^{x}\sigma_{j}^{x}+\sigma_{i}^{y} \sigma_{j}^{y})\ \equiv\ H^{KK}_{xy}+H^{KK}_{yx} \tag{12}\]
An approximate analysis can be performed to find whether the square lattice XY FM or AFM contains metastable anisotropic spin liquid vacua corresponding to the Kugel-Khomskii compass Hamiltonians.
There is hope that designer metastable phases could be found using our Symmetric Decomposition of Hamiltonians.
Using a similar approach, by adding and subtracting SU(2) symmetric, isotropic, spin liquid stabilizing frustrating terms in the frustration-free isotropic Hamiltonians, we could explore RVB-type SU(2) symmetric quantum spin liquids, which have a rich variety of quantum orders and chirality orders.
FM/AFM Hamiltonians on other lattices, where the Kitaev model can be defined and exactly solved, are being investigated by us [24] in the search for metastable Kitaev spin liquids: the Fisher lattice (having Majorana Fermi surfaces [25]), the decorated honeycomb lattice exhibiting spontaneous chiral symmetry breaking [26], non-Archimedean lattices [27], the classical spin Kitaev model [28], and the 3D Kitaev model [29]. There are indications that, because these lattices are more complex than the honeycomb lattice, the Kitaev spin liquids have a deeper metastable minimum.
## IV Preparation of False Vacuum and Watching Its Evolution
_Fate of the false vacuum_ [30] is an important issue, from cosmology to our present problem of quantum tunneling and decay of the metastable Kitaev spin liquid vacuum of an isotropic Heisenberg spin system. Since we have Jackeli-Khaliullin-Kitaev-Heisenberg [15] materials with dominant isotropic Ferro/AFM interactions, it will be wonderful to perform real experiments, and also to try to understand existing experimental results and anomalies [31; 32] in the light of the intervention of metastable Kitaev spin liquid phases and quasiparticles.
As a number of quantum processors have emerged, and experimental simulations [33] and computer simulations [34] are being performed on certain manybody lattice Hamiltonians, we suggest the following protocol to address our problem: i) prepare the best possible Kitaev spin liquid ground state for a finite number of qubits, using a reference (FM/AFM) Kitaev spin Hamiltonian, and ii) evolve it with the full isotropic spin-half (FM/AFM) Heisenberg Hamiltonian and study the time evolution of the prepared Kitaev spin liquid with its nematic order.
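As a toy caricature of this protocol, the sketch below prepares the exact ground state of the reference Kitaev piece on a single hexagon, evolves it with the full isotropic Heisenberg Hamiltonian, and tracks the survival probability of the initial state. A six-site cluster is of course far too small to exhibit the hypothesized three-stage decay; the cluster, coupling, and time step are illustrative assumptions.

```python
import numpy as np
from functools import reduce
from scipy.linalg import eigh, expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = {'x': sx, 'y': sy, 'z': sz}
N = 6
bonds = [(0, 1, 'x'), (1, 2, 'y'), (2, 3, 'z'),
         (3, 4, 'x'), (4, 5, 'y'), (5, 0, 'z')]

def op(site, a):
    mats = [np.eye(2, dtype=complex)] * N
    mats[site] = pauli[a]
    return reduce(np.kron, mats)

def two_spin(i, j, a):
    return op(i, a) @ op(j, a)

J = 1.0
H_kitaev = J * sum(two_spin(i, j, b) for i, j, b in bonds)            # reference piece
H_heis = J * sum(two_spin(i, j, a) for i, j, _ in bonds for a in 'xyz')

# Step (i): "prepare" the ground state of the reference Kitaev piece
evals, evecs = eigh(H_kitaev)
psi0 = evecs[:, 0]

# Step (ii): evolve it with the full isotropic Heisenberg Hamiltonian
dt, steps = 0.1, 80
U = expm(-1j * dt * H_heis)
psi, survival = psi0.copy(), []
for _ in range(steps):
    psi = U @ psi
    survival.append(abs(np.vdot(psi0, psi)) ** 2)

print("survival probability |<psi0|psi(t)>|^2 at t = 1, 4, 8:")
print(survival[9], survival[39], survival[79])
```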
What do we expect? Our hypothesis is the presence of three stages in the time evolution. Since there is a finite energy gap for Z\({}_{2}\) flux excitations, we expect, first, a renormalization of the Kitaev spin liquid state (the analogue of the growth of zero point spin fluctuations in an Ising AFM vacuum in 3 or 2D). After a time interval, the system will nucleate Z\({}_{2}\) fluxes and create a small density of Majorana Fermions, as well as topological defects in the Kitaev nematic order, via quantum tunneling. Finally there will be a crossover to the nucleation and growth of real (FM/AFM) ground state bubbles.
In a model study we find [35] a remarkable time crystal-like local transient behaviour, a periodic oscillation of the state vector. In short, there may be many surprises in store in real experiments and theoretical studies.
## V Discussion
In the present article we have explored the possibility of metastable spin liquid states in isotropic spin-\(\frac{1}{2}\) Heisenberg systems on a honeycomb lattice. In addition, we have suggested general ways to explore the presence of metastable spin liquid states. While some of the metastable states may turn out to be locally stable in all directions, others may not be.
Our study also raises some interesting questions. If we get a Kitaev spin liquid and spin nematics in isotropic systems, this allows for topological singularities in the spin nematic phase, in addition to plaquette flux excitations. It is likely that these topological defects trap non-Abelian anyons as zero modes.
Taking advantage of the known exact spectrum of the Kitaev honeycomb lattice Hamiltonian, we have restricted our search to quantum spin liquids that do not have manifest global SU(2) symmetry. We also find, in our approximate study of isotropic Heisenberg magnets, the presence of SU(2) symmetric metastable spin liquids, a la RVB states. Under certain conditions, _time crystal_-like transient and local metastable phases emerge.
It is indeed exciting that, beyond the comfort zone of equilibrium quantum statistical mechanics, rich new worlds await even in energy eigenstates and near-eigen (manybody wave packet) states of familiar and well studied Hamiltonians. From the point of view of performing quantum computation and related tasks, the present work opens new avenues to _explore and use exotic quasiparticles and exotic metastable states, which are hiding in nonexotic real systems_.
## VI Acknowledgement
I thank Francesco Ferrari, R. Shankar and S. Mandal for discussions; and thank Tony Leggett for his encouraging remarks. Hospitality and stimulating visiting positions, at The Institute of Mathematical Sciences, Chennai, Indian Institute of Technology, Chennai and Perimeter Institute for Theoretical Physics (PI), Waterloo, Canada are acknowledged. PI is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation.
## VII Supplementary Section
## VIII Exact Solution of Kitaev Honeycomb Lattice Model - a Summary
Since we are looking for metastable Kitaev spin liquid phases in isotropic Heisenberg Hamiltonians, we review Kitaev's method of finding the exact manybody spectrum of the anisotropic Kitaev spin Hamiltonian on the honeycomb lattice:
\[H^{K}_{xyz}\equiv J\sum_{\langle ij\rangle_{x}}\sigma^{x}_{i}\sigma^{x}_{j}+J \sum_{\langle ij\rangle_{y}}\sigma^{y}_{i}\sigma^{y}_{j}+J\sum_{\langle ij \rangle_{z}}\sigma^{z}_{i}\sigma^{z}_{j} \tag{13}\]
Here the three types of nearest neighbor bonds, on which only the two-spin Ising interaction term xx, yy, or zz is present, are denoted respectively by \(\langle ij\rangle_{x},\langle ij\rangle_{y}\) and \(\langle ij\rangle_{z}\).
We assume periodic boundary conditions for the hexagonal lattice, with N unit cells and 2N sites. Each site carries a spin-\(\frac{1}{2}\) moment. Kitaev found that the plaquette spin operators (sites 1 to 6 denote the nearest neighbor sites of a given plaquette):
\[B_{p}\equiv\sigma_{1}^{z}\sigma_{2}^{x}\sigma_{3}^{y}\sigma_{4}^{z}\sigma_{5}^{x }\sigma_{6}^{y}\ \ \ \, \tag{14}\]
defined for each elementary plaquette, commute among themselves and with \(H_{xyz}^{K}\). Since B\({}_{p}^{2}=1\), the eigenvalues of B\({}_{p}\) are \(\pm 1\). Consequently, the energy eigenstates of the Kitaev Hamiltonian get classified in terms of the eigenvalues of the set \(\{B_{p}\}\).
To get the full manybody spectrum, Kitaev introduces a Majorana Fermion representation for the Pauli spin operators in an enlarged Hilbert space. Each site has four Majorana Fermion operators, \((c_{i}^{x},c_{i}^{y},c_{i}^{z},c_{i})\equiv(\vec{c}_{i},c_{i})\) (a vector and a scalar), acting on a local Hilbert space of dimension 4. The dimension of the Majorana Fermion Hilbert space is \(4^{2N}\), while the dimension of the physical spin Hilbert space is \(2^{2N}\).
The representation \(\sigma_{i}^{x}=ic_{i}^{x}c_{i}\), \(\sigma_{i}^{y}=ic_{i}^{y}c_{i}\) and \(\sigma_{i}^{z}=ic_{i}^{z}c_{i}\) preserves the commutation relations between components of the Pauli spin operators. The condition \(\sigma_{i}^{x}\sigma_{i}^{y}\sigma_{i}^{z}=i\) for each site imposes a local constraint \(c_{i}^{x}c_{i}^{y}c_{i}^{z}c_{i}=1\), which reduces the 4-dimensional local Hilbert space of the Majorana Fermion modes to the 2-dimensional physical Hilbert space of the Pauli spin operators.
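The representation and the constraint can be checked numerically in the 4-dimensional local Hilbert space. In the sketch below the four Majorana operators are realized (one possible, assumed construction) from two auxiliary complex fermions; the code verifies the Clifford algebra, the identity \(\sigma^{x}\sigma^{y}\sigma^{z}=iD\) with \(D=c^{x}c^{y}c^{z}c\), and that the Pauli algebra is recovered after projection onto the \(D=+1\) physical subspace.

```python
import numpy as np

# Two auxiliary complex fermions (Jordan-Wigner on a 4-dimensional Fock space);
# this particular realization of the four Majorana modes is an assumption made
# purely to check the algebra numerically.
s = np.array([[0, 1], [0, 0]], dtype=complex)   # fermion annihilation operator
Z = np.diag([1.0, -1.0]).astype(complex)
I = np.eye(2, dtype=complex)
a = np.kron(s, I)
b = np.kron(Z, s)

cx = a + a.conj().T
cy = -1j * (a - a.conj().T)
cz = b + b.conj().T
c = -1j * (b - b.conj().T)
majoranas = [cx, cy, cz, c]

# Clifford algebra {c_i, c_j} = 2 delta_ij
for i, ci in enumerate(majoranas):
    for j, cj in enumerate(majoranas):
        assert np.allclose(ci @ cj + cj @ ci, 2 * np.eye(4) * (i == j))

# Kitaev's representation of the Pauli operators
sig = {al: 1j * cal @ c for al, cal in zip('xyz', (cx, cy, cz))}

# sigma^x sigma^y sigma^z = i D, with D = cx cy cz c the constraint operator
D = cx @ cy @ cz @ c
assert np.allclose(sig['x'] @ sig['y'] @ sig['z'], 1j * D)

# Projecting onto the physical subspace D = +1 enforces sigma^x sigma^y sigma^z = i
P = (np.eye(4) + D) / 2
assert np.allclose(P @ sig['x'] @ sig['y'] @ sig['z'] @ P, 1j * P)
# ... and there the spin commutation relation [sig^x, sig^y] = 2i sig^z holds
comm = sig['x'] @ sig['y'] - sig['y'] @ sig['x']
assert np.allclose(P @ comm @ P, P @ (2j * sig['z']) @ P)
print("Majorana representation and the local constraint verified")
```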
The specially designed anisotropic structure of the Kitaev Hamiltonian H\({}_{xyz}^{K}\) makes it exactly solvable. In a remarkable fashion, certain pairs of vector Fermion components appearing in H\({}_{xyz}^{K}\) organize themselves into _static and commuting Hartree vector potentials_, which are emergent static Z\({}_{2}\) gauge fields. One obtains free (noninteracting) scalar Fermions which hop and delocalize in the presence of the emergent static Z\({}_{2}\) gauge fields:
\[H_{xyz}^{K}=J\sum_{\langle ij\rangle_{x}}c_{i}c_{j}c_{i}^{x}c_{j}^{x}+J\sum_{\langle ij\rangle_{y}}c_{i}c_{j}c_{i}^{y}c_{j}^{y}+J\sum_{\langle ij\rangle_{z}}c_{i}c_{j}c_{i}^{z}c_{j}^{z}\equiv J\sum_{\langle ij\rangle_{x}}ic_{i}c_{j}\hat{u}_{\langle ij\rangle_{x}}^{x}+J\sum_{\langle ij\rangle_{y}}ic_{i}c_{j}\hat{u}_{\langle ij\rangle_{y}}^{y}+J\sum_{\langle ij\rangle_{z}}ic_{i}c_{j}\hat{u}_{\langle ij\rangle_{z}}^{z} \tag{15}\]
The operators \(\hat{u}_{\langle ij\rangle_{\alpha}}^{\alpha}\equiv-ic_{i}^{\alpha}c_{j}^{\alpha}\) (\(\alpha=x,y,z\)) appearing above commute among themselves and with H\({}_{xyz}^{K}\); consequently they can be treated as c-numbers in solving the manybody Hamiltonian. Further, \((\hat{u}_{\langle ij\rangle_{\alpha}}^{\alpha})^{2}=1\) implies that \(\hat{u}_{\langle ij\rangle_{\alpha}}^{\alpha}=\pm 1\).
The transformation \(\hat{u}_{\langle ij\rangle_{\alpha}}^{\alpha}\rightarrow\tau_{i}\hat{u}_{\langle ij\rangle_{\alpha}}^{\alpha}\tau_{j}\) and \(c_{i}\rightarrow\tau_{i}c_{i}\), with \(\tau_{i}=\pm 1\), leaves the Kitaev Hamiltonian invariant. We thus have an emergent local Z\({}_{2}\) gauge invariance, and the u's become emergent Z\({}_{2}\) gauge fields. Consequently, the enlarged Hilbert space of dimension \(4^{2N}\) breaks into \(2^{2N}\) gauge copies, each copy having an identical energy spectrum of \(2^{2N}\) manybody energy eigenstates.
Formally, the exact manybody spectrum becomes that of emergent non-interacting Majorana Fermions hopping on the lattice in the presence of static \(\hat{u}_{\langle ij\rangle_{\alpha}}^{\alpha}\) fields. The number of distinct Z\({}_{2}\) gauge flux configurations is \(2^{N}\), where N is the number of elementary hexagons. Wilson loops, the gauge invariant operators on hexagons that measure the gauge fluxes (0 or \(\pi\)), get identified with the B\({}_{p}\) operators (equation 14); that is, B\({}_{p}\) is the product \(\prod_{\langle ij\rangle\in p}\hat{u}_{\langle ij\rangle_{\alpha}}^{\alpha}\) of the six bond operators around the plaquette. The eigenvalues \(\pm 1\) of B\({}_{p}\) correspond to 0 and \(\pi\) fluxes of the emergent Z\({}_{2}\) gauge fields.
It follows from Lieb's theorem that the ground state of the Kitaev Hamiltonian is the lowest energy state of the zero Z\({}_{2}\) flux, \(\{B_{p}=1\}\), sector. The Hamiltonian H\({}_{xyz}^{K}(0)\) of the zero flux sector is obtained by putting all \(\hat{u}_{\langle ij\rangle_{\alpha}}^{\alpha}=1\) in equation (15):
\[H_{xyz}^{K}(0)=J\sum_{\langle ij\rangle}ic_{i}c_{j} \tag{16}\]
That is, in the zero flux sector, free scalar Majorana fermions hop isotropically on the honeycomb lattice. In this sector the vector Majorana Fermion degrees of freedom essentially disappear from the problem.
|
2309.04410 | DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields | In this paper, we address the challenging problem of 3D toonification, which
involves transferring the style of an artistic domain onto a target 3D face
with stylized geometry and texture. Although fine-tuning a pre-trained 3D GAN
on the artistic domain can produce reasonable performance, this strategy has
limitations in the 3D domain. In particular, fine-tuning can deteriorate the
original GAN latent space, which affects subsequent semantic editing, and
requires independent optimization and storage for each new style, limiting
flexibility and efficient deployment. To overcome these challenges, we propose
DeformToon3D, an effective toonification framework tailored for hierarchical 3D
GAN. Our approach decomposes 3D toonification into subproblems of geometry and
texture stylization to better preserve the original latent space. Specifically,
we devise a novel StyleField that predicts conditional 3D deformation to align
a real-space NeRF to the style space for geometry stylization. Thanks to the
StyleField formulation, which already handles geometry stylization well,
texture stylization can be achieved conveniently via adaptive style mixing that
injects information of the artistic domain into the decoder of the pre-trained
3D GAN. Due to the unique design, our method enables flexible style degree
control and shape-texture-specific style swap. Furthermore, we achieve
efficient training without any real-world 2D-3D training pairs but proxy
samples synthesized from off-the-shelf 2D toonification models. | Junzhe Zhang, Yushi Lan, Shuai Yang, Fangzhou Hong, Quan Wang, Chai Kiat Yeo, Ziwei Liu, Chen Change Loy | 2023-09-08T16:17:45Z | http://arxiv.org/abs/2309.04410v1 | # DeformToon3D: Deformable Neural Radiance Fields for 3D Toonification
###### Abstract
In this paper, we address the challenging problem of 3D toonification, which involves transferring the style of an artistic domain onto a target 3D face with stylized geometry and texture. Although fine-tuning a pre-trained 3D GAN on the artistic domain can produce reasonable performance, this strategy has limitations in the 3D domain. In particular, fine-tuning can deteriorate the original GAN latent space, which affects subsequent semantic editing, and requires independent optimization and storage for each new style, limiting flexibility and efficient deployment. To overcome these challenges, we propose DeformToon3D, an effective toonification framework tailored for hierarchical 3D GAN. Our approach decomposes 3D toonification into subproblems of geometry and texture stylization to better preserve the original latent space. Specifically, we devise a novel StyleField that predicts conditional 3D deformation to align a real-space NeRF to the style space for geometry stylization. Thanks to the StyleField formulation, which already handles geometry stylization well, texture stylization can be achieved conveniently via adaptive style mixing that injects information of the artistic domain into the decoder of the pre-trained 3D GAN. Due to the unique design, our method enables flexible style degree control and shape-texture-specific style swap. Furthermore, we achieve efficient training without any real-world 2D-3D training pairs but proxy samples synthesized from off-the-shelf 2D toonification models. Code is released at [https://github.com/junzhehang/DeformToon3D](https://github.com/junzhehang/DeformToon3D).
## 1 Introduction
Artistic portraits are prevalent in various applications such as comics, animation, virtual reality, and augmented reality. In this work, our main objective is to propose an effective approach for 3D-aware artistic toonification, a critical problem that involves transferring the style of an artistic domain onto a target 3D face with stylized geometry and texture. The task opens up potential applications for quick high-quality 3D avatar creation based on a photograph with the style of a designated artwork, which would typically require highly professional handcraft skills.
Substantial progress has been made in automatic portrait style transfer over 2D images. Starting with image style transfer [15, 55, 35, 29] and image-to-image translation [28, 34, 10, 6, 58], recent advancements in StyleGAN-based generators [24, 25] have shown their potential in high-quality toonification via efficient transfer learning [49]. Specifically, a pre-trained StyleGAN generator on face images is fine-tuned to transfer to the artistic portrait domain. With the progress of 3D-aware GANs [8, 41], researchers have extended this pipeline to 3D with well-designed domain adaptation frameworks [22, 32, 27, 1], enabling remarkable 3D portrait toonification.
Although fine-tuning the pre-trained StyleGAN-based model for toonification achieves superior quality, it has several limitations. **First**, fine-tuning the pre-trained generator shifts its generative space from the real face domain to the artistic portrait domain at the cost of deteriorating the original GAN latent space. With numerous off-the-shelf tools [56, 64] trained for the original GAN space, altering the well-learned style space would affect the performance of downstream applications over the toonified portrait, e.g., semantic editing. **Second**, although fine-tuning-based domain adaptation has been thoroughly investigated for 2D GANs [50, 68], applying this technique to 3D GANs fails to leverage the full potential of the architecture of 3D generator models [8, 41] for characterizing view-consistent shape and high-frequency textures in the artistic domain. **Third**, a heavy generator inevitably has to be fine-tuned for each new style, which requires hours of training time and additional storage. This limitation affects scalability when deploying dozens of fine-tuned generators for real-time user interactions. Therefore, 3D toonification remains a challenging task that requires further exploration.
To better preserve the pre-trained GAN latent space and to better exploit the 3D GAN generator, we propose **DeformToon3D** that decomposes geometry and texture stylization into more manageable subproblems. In particular, unlike conventional 3D toonification approaches that fine-tune the whole 3D GAN generator following existing 2D fine-tuning schemes, we carefully consider the characteristics of 3D GANs to decompose the stylization of geometry and texture domains. To achieve geometry stylization, we introduce a novel **StyleField** on top of a pre-trained 3D generator to deform each point in the style space to the pre-trained real space guided by an instance code. This allows for easy extension to multiple styles with a single stylization field by introducing a style code to guide the deformation. Since StyleField already handles geometry stylization well, texture stylization can be easily achieved through adaptive style mixing which injects artistic domain information into the network for effective texture toonification. Notably, our unique design enables training of the method at minimal cost using synthetic paired data with realistic faces generated by a pre-trained 3D GAN and corresponding paired stylized data generated by an off-the-shelf 2D toonification model [70].
The proposed DeformToon3D achieves high-quality geometry and texture toonification over a vast variety of styles, as demonstrated in Fig.1. Additionally, our approach preserves the original GAN latent space, enabling compatibility with existing tools built on the real face space GAN, including inversion [32], editing [56], and animation [64]. Furthermore, our design significantly reduces the storage footprint by requiring only a small stylization field with a set of AdaIN parameters for artistic domain stylization. In summary, our work makes the following contributions:
* We propose a novel StyleField that separates geometry toonification from texture, providing a more efficient method for modeling 3D shapes than fine-tuning and enabling flexible style control.
* We present an approach to achieve multi-style toonification with a single model, facilitating cross-style manipulation and reducing storage footprint.
* We introduce a full synthetic data-driven training pipeline that offers an efficient and cost-effective solution to training the model without requiring real-world 2D-3D training pairs.
## 2 Related Work
**3D Generative Models.** Inspired by the success of Generative Adversarial Networks (GAN)[16] in generating photo-realistic images[24, 5, 26], researchers have been making efforts towards 3D-aware generation [38, 19, 42]. Starting with explicit intermediate shape representations, such as voxels [38, 19] and meshes [42], which lack photo-realism and are memory-inefficient, researchers have recently shifted towards using implicit functions [44, 36, 9] along with physical rendering processes [60, 37] as intrinsic 3D inductive biases. Among these approaches, 3D generative models [7, 54] extended from neural radiance fields (NeRF) [37] have demonstrated impressive view-consistency in synthesized results. While the original NeRF
is limited to modeling static scenes, recent research has introduced deformation fields to enable NeRF to model dynamic volumes [45, 46, 65, 31]. To increase the resolution of generated images, recent studies [8, 20] have resorted to voxel-based representations or adopted a hybrid design [39, 41, 8, 17]. This hybrid design involves a cascade model \(G=G_{\mathbf{1}}\circ G_{\mathbf{0}}\) pairing a 3D generator \(G_{\mathbf{0}}\) with a 2D super-resolution decoder \(G_{\mathbf{1}}\). Both \(G_{0}\) and \(G_{1}\) follow the style-based architecture [24, 25] to accept a latent code \(\mathbf{w}\) to control the style of the generated object. By super-resolving the intermediate low-resolution 2D features produced by the \(G_{\mathbf{0}}\) with the \(G_{\mathbf{1}}\), the hybrid design achieves view-consistent synthesis at high resolution, \(1024^{2}\).
**Domain Adaptation for StyleGAN Toonification.** Researchers have typically employed domain adaptation in 2D space to achieve toonification with StyleGANs [24, 26]. Typically, a portrait StyleGAN pre-trained on real images [33, 23] is fine-tuned on an artistic domain dataset to generate toonified faces. Building upon this straightforward framework [50], a series of in-depth research has been conducted to further improve style control [70, 69], choices of latent code [61], few-shot training [40], and text-guided adaptation [14].
Pre-trained 3D GANs offer high-quality generation and thus have the potential to facilitate downstream applications such as portrait stylization through 3D GAN inversion [32, 8]. To extend 2D StyleGAN domain adaptation to 3D, CIPS-3D [75] proposed fine-tuning only the super-resolution decoder module for view-consistent texture toonification. However, it is limited to texture toonification since the 3D generator is left unchanged. Dr.3D [22], E3DGE [32], and 3DAvatarGAN [1] fine-tune the entire 3D generator [8, 41] for both geometry and texture toonification, while DATID-3D [27] leverages a Stable Diffusion [53] generated corpus for text-guided domain adaptation. While these methods yield impressive results, they come with limitations, as they require costly fine-tuning and independent model storage for each new style. Furthermore, previous fine-tuning-based toonification methods suffer from limited generality due to the incompatibility with abundant editing techniques developed for the original StyleGAN latent space [56, 57, 47]. In contrast, DeformToon3D fully preserves the original 3D GAN latent space, which makes it intrinsically compatible with the editing methods trained for the original StyleGAN space. It achieves comparable visual quality while being \(10\) times more storage-efficient than previous methods.
## 3 DeformToon3D
We present the framework of DeformToon3D in Fig. 2. Our approach begins with a typical hybrid 3D-aware design [41, 8] that generates real-domain faces, and reformulates it to a 3D toonification framework. The approach starts with a cascade model \(G=G_{\mathbf{1}}\circ G_{\mathbf{0}}\), which pairs a 3D generator \(G_{\mathbf{0}}\) with a 2D super-resolution decoder \(G_{\mathbf{1}}\). The generator \(G_{\mathbf{0}}\) captures the underlying geometry with the instance code \(\mathbf{w}\) and camera pose \(\boldsymbol{\xi}\), and produces an intermediate feature map \(\mathbf{F}\) with volume rendering [37]. Then, \(G_{\mathbf{1}}\) upsamples \(\mathbf{F}\) to obtain a high-resolution image \(\mathbf{I}\) with high-frequency details added. To adapt \(G\) from the real domain to the artistic or cartoon domain, existing methods [22, 1, 75, 32] view \(G_{\mathbf{1}}\) and \(G_{\mathbf{0}}\) as a whole and simply fine-tune the pre-trained \(G\), failing to take advantage of the decomposed characteristics of the hybrid framework design. By comparison, DeformToon3D fully exploits this cascaded synthesis process by using a novel StyleField module for geometry stylization, which in turn also benefits appearance stylization, allowing it to adopt a simple adaptive style mixing strategy. In Sec. 3.1, we elaborate the proposed StyleField along with the pre-trained \(G_{\mathbf{0}}\) to handle geometry stylization. In Sec. 3.2, we explain how the adaptive style mixing injects the style of the target domain into \(G_{\mathbf{1}}\) to achieve texture stylization. Lastly, we present our training pipeline in Section 3.3.
### Geometry Toonification with StyleField
To train a 3D generator \(\tilde{G}_{\mathbf{0}}\) capable of synthesizing artistic domain geometry, previous methods [22, 27, 32, 1, 62] fine-tune the pre-trained \(G_{\mathbf{0}}\) with a target-domain dataset, which can be computationally expensive and could potentially deteriorate the original GAN latent space. To address this issue, we propose to establish a correspondence between the stylized NeRF \(\mathcal{N}_{S}\) and the real-space NeRF \(\mathcal{N}_{R}\). More specifically, we use a stylization field (the StyleField), \(H_{D}\), to bridge the correspondence, such that \(\tilde{G}_{\mathbf{0}}(\boldsymbol{x}_{S})=(G_{\mathbf{0}}\circ H_{D})( \boldsymbol{x}_{S})\). As shown in Fig. 2, given a stylized NeRF \(\mathcal{N}_{S}:\mathbb{R}^{3}\mapsto\mathbb{R}^{4}\) of the target style domain, our goal is to estimate a 3D deformation residual, \(H_{D}:\mathbb{R}^{3}\mapsto\mathbb{R}^{3}\), which maps \(\mathcal{N}_{S}\) back to \(\mathcal{N}_{R}\) via:
\[\mathcal{N}_{S}\rightarrow\mathcal{N}_{R}:\boldsymbol{x}_{R}=(\boldsymbol{x}_{ S}+H_{D}(\boldsymbol{x}_{S})),\forall\boldsymbol{x}_{S}\in\mathcal{N}_{S}, \tag{1}\]
where \(H_{D}\) represents the residual 3D deformation \(H_{D}(\boldsymbol{x}_{S})=\Delta\boldsymbol{x}_{S}\) in the 3D space of the Stylized NeRF \(\mathcal{N}_{S}\) and maps each 3D point \(\boldsymbol{x}_{S}\in\mathcal{N}_{S}\) in the stylized space to its corresponding position in the real space \(\mathcal{N}_{R}\).
To improve expressiveness, we extend \(H_{D}\) as a conditional neural deformation field that outputs the offsets under the conditions of style and identity:
\[\mathcal{N}_{S}\rightarrow\mathcal{N}_{R}:\boldsymbol{x}_{R}=\boldsymbol{x}_{ S}+H_{D}(\boldsymbol{x}_{S},\mathbf{w}_{S},\mathbf{w}_{R}) \tag{2}\]
where \(\mathbf{w}_{S}\) is the style code that specifies the artistic domain, \(\mathbf{w}_{R}\) is the instance code corresponding to \(\mathcal{N}_{R}\) that represents the identity of the 3D face in the source domain. Both \(\mathbf{w}_{S}\) and \(\mathbf{w}_{R}\) serve as the holistic geometry indicators to guide the deformation.
We build \(H_{D}\) as an MLP consisting of four siren [59] layers due to their superior high-frequency modeling capacity. After all the points \(\mathbf{x}_{S}\in\mathcal{N}_{S}\) are deformed to the real space, we generate the new feature map \(\tilde{\mathbf{F}}=\tilde{G}_{\mathbf{0}}(\mathbf{w}_{R},\mathbf{w}_{S},\mathbf{\xi})\) as the input of \(G_{\mathbf{1}}\) and synthesize the high-resolution image with the geometry of the artistic domain. By introducing the 3D deformation module \(H_{D}\), we no longer need to fine-tune the pre-trained \(G_{\mathbf{0}}\) to achieve geometry deformation of the target domain. This reduces the number of parameters to optimize by 50% and fully preserves the original GAN latent space. Moreover, with \(\mathbf{w}_{S}\) serving as the style condition, a single \(H_{D}\) can support multiple styles, which further saves storage by 98.5% compared to fine-tuning the whole model per style for \(10\) styles.
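For concreteness, a minimal PyTorch sketch of a StyleField of this kind is given below. The layer widths, the \(\omega_{0}\) frequency, and the simple concatenation of the conditioning codes with the point coordinates are illustrative assumptions, not the exact released implementation.

```python
import torch
import torch.nn as nn

class SirenLayer(nn.Module):
    """Linear layer followed by a sine activation (SIREN)."""
    def __init__(self, in_dim, out_dim, w0=30.0, is_first=False):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.w0 = w0
        # SIREN-style uniform initialization
        bound = 1.0 / in_dim if is_first else (6.0 / in_dim) ** 0.5 / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

class StyleField(nn.Module):
    """Predicts a residual 3D offset conditioned on style and instance codes."""
    def __init__(self, style_dim=256, instance_dim=256, hidden=128):
        super().__init__()
        in_dim = 3 + style_dim + instance_dim
        self.net = nn.Sequential(
            SirenLayer(in_dim, hidden, is_first=True),
            SirenLayer(hidden, hidden),
            SirenLayer(hidden, hidden),
            nn.Linear(hidden, 3),          # final layer: raw 3D offset
        )

    def forward(self, x_s, w_s, w_r):
        # x_s: (B, N, 3) points in style space; w_s, w_r: (B, D) codes
        n = x_s.shape[1]
        cond = torch.cat([w_s, w_r], dim=-1).unsqueeze(1).expand(-1, n, -1)
        offset = self.net(torch.cat([x_s, cond], dim=-1))
        return x_s + offset                # deformed points in the real space

# usage: deform sampled points before querying the frozen 3D generator G0
field = StyleField()
x_real = field(torch.rand(2, 1024, 3), torch.randn(2, 256), torch.randn(2, 256))
print(x_real.shape)  # torch.Size([2, 1024, 3])
```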
### Texture Transfer with Adaptive Style Mixing
Thanks to the StyleField \(H_{D}\) that handles the geometry toonification within the cascade 3D GAN model, we only need to inject the artistic domain texture information into \(G_{\mathbf{1}}\) for texture stylization. Here, \(G_{\mathbf{1}}\) is a 2D style-based architecture, where image styles are effectively adjusted by AdaIN [21]. Inspired by style mixing [24, 26, 70] that controls the AdaIN parameters, we inject the texture information of the target style \(\mathbf{w}_{S}\) by mixing the style parameters of \(G_{\mathbf{1}}\), as shown in Fig. 2. To bridge the domain gap between the real space and the target domain, we further add a lightweight MLP, \(T\), for each layer of \(G_{\mathbf{1}}\) to adjust the style code \(\mathbf{w}_{S}\). The adapted \(T(\mathbf{w}_{S})\) and the instance code \(\mathbf{w}_{R}\) are fused with a weight \(w\) by weighted average, and sent to the affine transformation block of \(G_{\mathbf{1}}\) to obtain the final style parameters for AdaIN. This mechanism allows us to model and control multi-domain textures with \(T\) and \(w\), without fine-tuning the original decoder \(G_{\mathbf{1}}\).
Let \(\tilde{G}_{\mathbf{1}}\) denote \(G_{\mathbf{1}}\) with the adaptive style mixing. The image generation process with domain adaptation given \(w\) and \(\mathbf{w}_{S}\) can be formulated as \(\hat{\mathbf{I}}=\tilde{G}_{\mathbf{1}}(\hat{\mathbf{F}},\mathbf{w}_{R}, \mathbf{w}_{S},w)\).
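A minimal sketch of the adaptive style mixing is shown below, assuming one lightweight mapper \(T\) per decoder layer and interpreting the weighted average of \(T(\mathbf{w}_{S})\) and \(\mathbf{w}_{R}\) as a convex combination with weight \(w\); the layer count and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class AdaptiveStyleMixing(nn.Module):
    """Per-layer mapper T adapts the style code, then mixes it with the instance code."""
    def __init__(self, w_dim=512, n_layers=8):
        super().__init__()
        # one lightweight MLP per layer of the frozen super-resolution decoder G1
        self.mappers = nn.ModuleList(
            nn.Sequential(nn.Linear(w_dim, w_dim), nn.LeakyReLU(0.2),
                          nn.Linear(w_dim, w_dim))
            for _ in range(n_layers)
        )

    def forward(self, w_r, w_s, weight=1.0):
        # Returns one mixed code per G1 layer; weight in [0, 1] controls the texture
        # stylization degree (0 keeps the real-space appearance).
        mixed = []
        for T in self.mappers:
            w_adapted = T(w_s)
            mixed.append((1.0 - weight) * w_r + weight * w_adapted)
        return torch.stack(mixed, dim=1)       # (B, n_layers, w_dim)

mixer = AdaptiveStyleMixing()
codes = mixer(torch.randn(4, 512), torch.randn(4, 512), weight=0.7)
print(codes.shape)  # torch.Size([4, 8, 512])
# The per-layer codes are then fed to the affine blocks of the frozen G1 to
# produce the final AdaIN parameters.
```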
### Training
**Data Preparation.** We follow Sim2Real [67, 74, 32] to generate paired data for training. Specifically, to generate the training corpus for each iteration of the training process, we pre-calculate a set of real space NeRFs \(\mathcal{N}_{R}\) with corresponding latent codes \(\mathbf{w}_{R}\) and rendered images \(\mathbf{I}_{R}\). To generate pair-wise stylized ground truths, we stylize the rendered image with existing 2D toonification models to obtain the target ground truth \(\mathbf{I}_{S}\). Here, to validate our method's performance on multi-style domain adaptation, we adopt the exemplar-based DualStyleGAN [70] as our 2D toonification model, since it supports hundreds of diverse styles, such as Cartoon, Pixar, and Caricature.
Finally, we define \(\mathcal{X}=\{\mathbf{I}_{R},\mathbf{w}_{R},\mathbf{w}_{S},\mathbf{I}_{S}\}\) as a training set for DeformToon3D with \(\{\mathbf{w}_{R}\), \(\mathbf{I}_{R}\), \(\mathbf{w}_{S}\}\) to serve as the training inputs and \(\{\mathbf{I}_{S}\}\) is the set of training ground truth. \(\mathbf{I}_{R}\in\mathcal{X}\) is drawn i.i.d from distribution \(P(G(\mathbf{z},\mathbf{\xi}))\) where \(\mathbf{z}\sim\mathcal{N}(0,1)\) and \(\mathbf{\xi}\) is the camera pose distribution of pre-trained 3D GAN \(G\). Some samples are shown in Fig. 3.
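A pseudocode-style sketch of this data synthesis loop is given below. The wrappers `g3d`, `toon2d`, and `sample_pose` stand for the pre-trained 3D generator, the 2D toonification model, and the camera-pose prior; their interfaces are hypothetical placeholders for illustration, not actual APIs of the released models.

```python
import torch

@torch.no_grad()
def synthesize_training_set(g3d, toon2d, sample_pose, n_samples, style_ids, device="cuda"):
    """Build X = {I_R, w_R, w_S, I_S} from a frozen 3D GAN and a 2D toonification model.

    `g3d`, `toon2d`, and `sample_pose` are hypothetical wrappers around the
    pre-trained 3D generator, the 2D toonification model, and the pose prior.
    """
    dataset = []
    for _ in range(n_samples):
        z = torch.randn(1, 512, device=device)
        w_r = g3d.mapping(z)                 # instance code in W space
        xi = sample_pose()                   # camera pose from the GAN's pose prior
        img_r = g3d.synthesis(w_r, xi)       # real-space rendering I_R
        for style_id in style_ids:
            w_s, img_s = toon2d.stylize(img_r, style_id)   # paired toonified target I_S
            dataset.append({"I_R": img_r.cpu(), "w_R": w_r.cpu(),
                            "w_S": w_s.cpu(), "I_S": img_s.cpu()})
    return dataset
```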
**Reconstruction Loss.** Here we use the \(\mathrm{LPIPS}\) loss [73] to evaluate stylized image quality:
\[\mathcal{L}_{\text{Rec}}\left(\mathcal{X}\right)=\mathbb{E}_{\mathcal{X}} \Bigg{[}||P(\hat{\mathbf{I}})-P(\mathbf{I}_{S})||_{2}\Bigg{]}, \tag{3}\]
where \(P(\cdot)\) denotes the perceptual feature extractor.
**Smoothness Regularization.** To encourage the smoothness of stylization field deformation offsets and reduce spatial distortion, a smoothness regularization is included to regularize \(H_{D}\). Here we penalize the norm of the Jacobian matrix \(\mathbb{J}_{H_{D}}=\nabla H_{D}\) of the deformation field [45] to ensure the learned deformations are physically smooth:
\[\mathcal{L}_{\text{Elastic}}=\mathrm{ReLU}(\|\nabla H_{D}(\mathbf{x}_{S}, \mathbf{w}_{S},\mathbf{w}_{R})\|_{2}^{2}-\epsilon), \tag{4}\]
Figure 2: **DeformToon3D framework. Given sampled instance code \(\mathbf{w}_{R}\) and style code \(\mathbf{w}_{S}\) as conditions, DeformToon3D first deforms a point from the style space \(\mathcal{N}_{S}\) to the real space \(\mathcal{N}_{R}\), which achieves geometry toonification without modifying the pre-trained \(G_{\mathbf{0}}\). Afterwards, we leverage adaptive style mixing with weight \(w\) to inject the texture information of the target domain into the pre-trained \(G_{\mathbf{1}}\) for texture toonification. Both pre-trained generators \(G_{\mathbf{0}}\) and \(G_{\mathbf{1}}\) are kept frozen.**
where \(\epsilon\) is the slack parameter for the smoothness regularization. We set \(\epsilon=0.1\) for all the experiments.
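Below is a sketch of how this penalty could be computed with automatic differentiation, assuming a StyleField with the interface sketched in Sec. 3.1 (points of shape (N, 3) and codes of shape (1, D)); the per-dimension gradient loop is one simple way to assemble the Jacobian and is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def elastic_loss(style_field, x_s, w_s, w_r, eps=0.1):
    """ReLU(||J_HD||_F^2 - eps): penalize non-smooth deformation offsets.

    x_s: (N, 3) sampled points; w_s, w_r: (1, D) style and instance codes.
    """
    x = x_s.detach().requires_grad_(True)
    # the field returns x + H_D(x); subtracting x leaves only the offset H_D(x)
    offset = style_field(x.unsqueeze(0), w_s, w_r).squeeze(0) - x
    rows = []
    for d in range(3):                      # one gradient pass per output dimension
        grad_d = torch.autograd.grad(offset[:, d].sum(), x, create_graph=True)[0]
        rows.append(grad_d)                 # (N, 3) row of the Jacobian
    jac = torch.stack(rows, dim=1)          # (N, 3, 3) per-point Jacobian of H_D
    frob_sq = (jac ** 2).sum(dim=(1, 2))
    return F.relu(frob_sq - eps).mean()
```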
**Adversarial Training.** Additionally, we apply a non-saturating adversarial loss \(\mathcal{L}_{\mathrm{Adv}}\)[24] to bridge the domain gap of toonified results.
In summary, the overall loss function is defined as
\[\mathcal{L}=\mathcal{L}_{\mathrm{Rec}}+\lambda_{\mathrm{Elastic}}\mathcal{L}_{ \mathrm{Elastic}}+\lambda_{\mathrm{Adv}}\mathcal{L}_{\mathrm{Adv}},\]
where we set \(\lambda_{\mathrm{Adv}}=0.05\) and \(\lambda_{\mathrm{Elastic}}=0.01\) in all the experiments.
## 4 Experiments
**Datasets.** We mainly focus on the human face domain and use synthesized data for the whole training. We follow the data synthesis procedure mentioned in Sec. 3.3 for generating the training corpus and adopt DualStyleGAN [70] as the 2D stylization model to infer paired toonified images. We generate images with the following 10 styles: Pixar, Comic, Slam Dunk, The Croods, Fiona (Shrek), Rapunzel (Disney Princess), Hiccup Horrendous Haddock III (How To Train Your Dragon), and three different caricature styles. We use StyleSDF [41] pre-trained on FFHQ [24] as our 3D generator. For evaluating toonification on real-world images, we evaluate on CelebA-HQ [23].
**Implementation Details.** In all the experiments, we set the learning rate to \(5\times 10^{-4}\). We adopt Adam [30] optimizer to train the toonification models. We train the DeformToon3D for 100 epochs. To expedite the training process, we disable the adversarial loss for the initial 50 epochs. The training takes approximately 24 hours using 8 Tesla V100 GPUs with a batch size set to 16. For adaptive style mixing, \(w=1\) is fixed during training and could be manually selected from \([0,1]\) to achieve texture interpolation during inference. More implementation details and experiment results are included in the supplementary material.
**Baselines.** Here we design three baselines for extensive evaluations. The first is CIPS-3D [75], which naively fine-tunes the super-resolution \(G_{\mathbf{1}}\) for view-consistent toonification. The second is E3DGE [32], which fine-tunes both \(G_{\mathbf{0}}\) and \(G_{\mathbf{1}}\) independently for true-3D toonification. Another prominent method is to leverage directional CLIP loss [14, 2, 27] for adapting a pre-trained style-based generator. Here we extend StyleGAN-NADA [14] to 3D GAN and employ image CLIP directional loss for the evaluation. The baseline models are trained on the same dataset as DeformToon3D, with each fine-tuning process applied to a single style. In contrast, DeformToon3D is capable of accommodating all styles within a single model.
### Comparisons with Baselines
**Qualitative Results.** We show the qualitative comparisons against the baselines in Fig. 4. CIPS-3D only provides texture-wise toonification and ignores geometry deformation. E3DGE suffers from mode collapse and tends to lose identity. StyleGAN-NADA fails to capture the characteristics of the target domain. In contrast, our method fully captures the characteristics of the target domain and produces high-quality toonification with consistent identity preservation.
**Quantitative Results.** To evaluate the fidelity and quality of toonification, we compare identity preservation (IP) [11] and \(\mathrm{FID}\), respectively, in Tab. 1. In terms of \(\mathrm{IP}\), DeformToon3D outperforms all baseline methods across the 10 styles provided, which underscores the benefits of retaining the 3D generator. In terms of \(\mathrm{FID}\), CIPS-3D [75] only fine-tunes \(G_{\mathbf{1}}\) and achieves worse performance compared to E3DGE [32], which fine-tunes the whole generator. StyleGAN-NADA achieves the worst \(\mathrm{FID}\) performance, which we attribute to the challenges of directly adopting 2D CLIP-based supervision on 3D GANs. A detailed breakdown by individual styles is available in the supplementary material.
**User Study.** For the user preference study, we collected 2400 votes to select the preferred rendering results in terms of shape stylization, appearance stylization, identity preservation, and overall performance. As shown in Tab. 2, the proposed method gives the most preferable results despite the fact that the baseline methods are trained on a single style.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline & CIPS-3D & E3DGE & NADA & Ours \\ \hline \(\mathrm{IP}\uparrow\) & 0.681 & 0.707 & 0.535 & **0.781** \\ \hline \(\mathrm{FID}\downarrow\) & 50.6 & 34.0 & 59.3 & **27.6** \\ \hline \end{tabular}
\end{table}
Table 1: **Quantitative evaluation over 10 styles. DeformToon3D achieves the best identity preservation (IP) and \(\mathrm{FID}\).**
Figure 3: **Training Samples. We show the source image (row \(1\)) and the stylized images of the target domain (row \(2\)) for training supervisions.**
**Storage Cost Comparison.** We detail the storage costs in Table 3. Thanks to our unique design that disentangles geometry and texture, our method requires no fine-tuning and fewer parameters for training. In a single-style scenario, our method reduces storage needs by 85%. This advantage becomes even more pronounced with the multi-style version of \(H_{D}\), achieving a storage saving of 98.5% in a 10-style scenario. This makes our approach particularly feasible for potential mobile applications.
### Applications
To demonstrate the generality of full GAN latent space preservation, we show that DeformToon3D can be easily extended to a series of downstream applications proposed for the original pre-trained GAN, including inversion, editing, and animation. To further validate DeformToon3D's unique geometry-texture decomposed toonification, we show results of flexible toonification style control.
**Inversion and Editing.** With the fully preserved 3D generative prior, DeformToon3D can directly adopt a pre-trained 3D GAN inversion framework and latent editing directions for semantic-aware editing over the toonified portrait. Here we adopt E3DGE [32] for 3D GAN inversion and show the inversion results of real images in Fig. 6. With the fully preserved GAN latent space, DeformToon3D can be applied to real images over multiple styles with high quality. We also include the editing results of diverse styles in Fig. 5. As can be seen, our method produces consistent editing results and fully preserves the identity and the style of the target domain, regarding both geometry and texture.
**Animatable Toonification.** Inspired by the success of the 2D method [64] that obtains rig-like control over StyleGAN-generated 2D faces, we propose a straightforward pipeline that aligns 3DMM [4] parameters with the pre-trained 3D GAN latent space. Specifically, we train two MLPs \(\mathcal{F}_{\mathbf{w}}:\mathbb{R}^{|\mathcal{W}|}\rightarrow\mathbb{R}^{|\mathcal{M}|}\) and \(\mathcal{F}_{\mathbf{m}}:\mathbb{R}^{|\mathcal{M}|}\rightarrow\mathbb{R}^{|\mathcal{W}|}\) to learn the bi-directional mapping between the 3DMM parameter space \(\mathcal{M}\) and the 3D GAN
Figure 4: **Qualitative comparisons with baseline methods. DeformToon3D produces better performance against all baselines regarding toonification fidelity, diversity and identity preservation.**
Figure 5: **Editing of toonified results. We show two attribute editing results over six styles. In row \(1\), we add bangs to the male identities, and in row \(2\), we add “Smile” to female identities. The edited results fully preserve the identity and abide by the style of the target domain.**
\begin{table}
\begin{tabular}{l|c|c} \hline Methods & Trainable Params\(\downarrow\) & Model Storage\(\downarrow\) \\ \hline CIPS-3D [75] & 5.81 & 59.93 \\ E3DGE [32] & 7.64 & 76.4 \\ StyleGAN-NADA [14] & 7.64 & 76.4 \\ \hline Ours (single-style) & **3.82** & **11.46** \\ \hline \end{tabular}
\end{table}
Table 3: **Storage cost comparison. Values are averaged across \(10\) styles and shown in \(\mathrm{MB}\). DeformToon3D achieves a considerably more efficient storage footprint against the baselines.**
\(\mathcal{W}\) space. To impose 3DMM-based control, given a latent code \(\mathbf{w}\), we first infer its 3DMM parameters \(\tilde{\mathbf{m}}=\mathcal{F}_{\mathbf{w}}(\mathbf{w})\) and the reconstructed \(\mathcal{W}\) code \(\tilde{\mathbf{w}}=\mathcal{F}_{\mathbf{m}}(\tilde{\mathbf{m}})\). After imposing the 3DMM-based editing \(\tilde{\mathbf{m}}_{\Delta}=\tilde{\mathbf{m}}+\mathbf{m}_{\Delta}\), we infer the corresponding edited \(\mathcal{W}\) code \(\tilde{\mathbf{w}}_{\Delta}=\mathcal{F}_{\mathbf{m}}(\tilde{\mathbf{m}}_{\Delta})\). The final result is synthesized from \(\hat{\mathbf{w}}=\mathbf{w}+(\tilde{\mathbf{w}}_{\Delta}-\tilde{\mathbf{w}})\). The whole framework is trained in a self-supervised manner with cycle consistency regularizations. Please refer to the supplementary material for more technical details and animation results.
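A minimal sketch of the two mappers and the editing rule described above is given below; the MLP depths, the 257-dimensional 3DMM parameterization, and the form of the cycle-consistency losses are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LatentMapper(nn.Module):
    """Simple MLP mapping between the 3DMM parameter space M and the GAN W space."""
    def __init__(self, in_dim, out_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

w_dim, m_dim = 512, 257                    # assumed dimensionalities
F_w = LatentMapper(w_dim, m_dim)           # W  -> 3DMM
F_m = LatentMapper(m_dim, w_dim)           # 3DMM -> W

def reenact(w, m_delta):
    """Inject a 3DMM edit (e.g. an expression from a driving frame) into a W code."""
    m_hat = F_w(w)                          # inferred 3DMM parameters of w
    w_rec = F_m(m_hat)                      # cycle-reconstructed W code
    w_edit = F_m(m_hat + m_delta)           # W code after the 3DMM-space edit
    return w + (w_edit - w_rec)             # apply only the residual direction

def cycle_losses(w, m):
    """Self-supervised cycle-consistency losses used to train the two mappers."""
    loss_w = nn.functional.mse_loss(F_m(F_w(w)), w)
    loss_m = nn.functional.mse_loss(F_w(F_m(m)), m)
    return loss_w + loss_m

w = torch.randn(8, w_dim)
delta = torch.zeros(8, m_dim)
delta[:, :64] = 0.1                         # e.g. perturb the expression dimensions
print(reenact(w, delta).shape)              # torch.Size([8, 512])
```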
We show the animated toonification results in Fig. 7. Here, we extract the 3DMM parameters from a driving video [18] using a pre-trained predictor [12]. The expression dimensions of the extracted parameters are injected into the sampled \(\mathbf{w}_{R}\) using the procedure described above. Four styles of animated toonified results are included. As can be seen, our designed animation pipeline can accurately drive the identity both in the real space (row \(2\)) and in the style spaces (rows \(3-6\)). The reenacted expressions are natural and abide by the driving input, which validates the generality of our method.
**Toonification Style Control.** Due to the unique geometry-texture decomposition design, DeformToon3D offers flexible style control. **First**, we achieve style degree control and show the results in Fig. 9. Geometry-wise, since \(H_{D}\) outputs the 3D deformation offsets \(\Delta\mathbf{x}_{S}\), we simply interpolate the offsets with \(\tau*\Delta\mathbf{x}_{S}\), where \(\tau=0\) represents the identity mapping of the real space. Texture-wise, we rescale the style mixing weight \(w\) of \(G_{\mathbf{1}}\), where \(w=0\) preserves the color of the real space images. **Second**, due to the geometry-texture disentangled property, Deform-
Figure 8: **Style swap.** Besides style interpolation, DeformToon3D supports style swap of geometry and texture only across two styles. As shown here, given the real space input (col \(1\)) and the toonification results of two spaces (col \(2-3\)), DeformToon3D could swap the geometry \(G\) and texture \(T\) of two styles independently (col \(4-5\)), which cannot be achieved by previous methods.
Figure 6: **DeformToon3D on the real images.** Our method enables multiple styles toonification with a single model, where both the texture and the geometry matches the target domain.
Figure 7: **Animation of stylized results.** We drive the real space images (row 2) with an input video (row 1). Since DeformToon3D fully preserves the pre-trained GAN latent space, the driving direction of the real space can be directly applied to the style space (rows \(3-6\)). The expressions of the animated toonified identities fully abide by the driving frames without affecting the toonification performance of the target domain.
Toon3D naturally supports toonification of shape or texture only. As shown in Fig. 8, we achieve the geometry-texture swap between multiple styles, where the geometry of one style can be combined with the texture of another style. This could not be achieved by any previous method and opens up broader potential downstream applications.
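As an illustration of the style-degree control and geometry-texture swap described above, a minimal sketch follows; the `style_field`, `G0`, and `G1` callables and their signatures are assumptions for illustration, not the released implementation:

```python
def toonify(x, w_R, geo_style, tex_style, tau=1.0, mix=1.0,
            style_field=None, G0=None, G1=None):
    """Toonification with independent geometry / texture style control.

    style_field, G0, G1 : frozen networks (signatures assumed for illustration)
    tau : geometry style degree; tau = 0 keeps the real-space geometry
    mix : texture style-mixing weight for G1; mix = 0 keeps the real-space color
    Passing different style indices for geometry and texture realizes the
    geometry-texture swap of Fig. 8.
    """
    dx = style_field(x, w_R, geo_style)   # deformation offsets Delta x_S
    x_deformed = x + tau * dx             # interpolate the deformation
    feat = G0(x_deformed, w_R)            # query the frozen real-space G0
    return G1(feat, w_R, tex_style, mixing_weight=mix)
```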
### Ablation Study
We ablate the effectiveness of our design choices in Tab. 4. **1) Instance code condition:** By removing the instance code \(\mathbf{w}_{R}\), StyleField tends to learn similar offsets for different identities, whereas adopting \(\mathbf{w}_{R}\) as the instance deformation condition facilitates better toonification. **2) StyleField architecture:** We replace the SIREN architecture with an MLP network as defined by D-NeRF [51] and observe a significant performance drop. This demonstrates the representation power of the SIREN deformation field. **3) Adversarial training:** We also validate the effectiveness of the adversarial loss, which brings a noticeable improvement in both \(\mathrm{FID}\) and identity similarity.
### Limitations
As shown in Fig. 10 for the styles "Comic" and "Slam Dunk", though DeformToon3D produces reasonable texture-wise stylization, the corresponding geometry still has noticeable artifacts. The proposed StyleField implicitly learns the correspondence between the paired data from the real space and the style space. Such correspondence is easier to learn with information cues such as illumination from 3D-like styles such as Pixar or noticeable keypoints from caricature styles, but harder for styles with limited information cues like Comic.
## 5 Conclusion and Future Work
In this paper, we propose a novel 3D toonification framework, DeformToon3D, for fine-tuning-free, geometry-texture decomposed 3D face toonification. We fully exploit the hierarchical characteristics of the 3D GAN and introduce a StyleField to handle the 3D geometry toonification of \(G_{\mathbf{0}}\), along with adaptive style mixing that injects texture information into \(G_{\mathbf{1}}\). Our method achieves high-quality toonification of both geometry and texture, outperforming existing methods. Thanks to the preservation of the 3D generative prior, DeformToon3D facilitates a range of downstream applications.
As a pioneering effort in this field, we believe this work will inspire future works on fine-tuning-free 3D toonification. First, to mitigate the geometry-texture ambiguity present in certain styles, introducing re-lighting during training could serve as a potential solution [43]. Second, a more flexible training paradigm could directly guide the 3D toonification process with a pre-trained vision-language model [53]. Third, future research could focus on integrating a comprehensive 3D animation pipeline [3, 63] into the toonification process. Moreover, the potential applicability of DeformToon3D to other 3D GANs [8] and shapes beyond human faces, such as full-body settings, is worth investigating.
**Acknowledgement.** This study is supported under the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). It is also supported by Singapore MOE AcRF Tier 2 (MOE-T2EP20221-0011, MOE-T2EP20221-0012) and NTU NAP Grant.
\begin{table}
\begin{tabular}{l|c|c} \hline Method & \(\mathrm{IP}\uparrow\) & \(\mathrm{FID}\downarrow\) \\ \hline w/o \(\mathbf{w}_{R}\) condition & 0.735 & 33.4 \\ MLP as \(H_{D}\) & 0.746 & 31.8 \\ w/o \(\mathcal{L}_{\mathrm{Adv}}\) & 0.769 & 29.9 \\ \hline Ours & **0.781** & **27.6** \\ \hline \end{tabular}
\end{table}
Table 4: Ablation study.
Figure 10: **Failure cases.**
Figure 9: **Style degree control. Thanks to the geometry-texture toonification decomposition, DeformToon3D supports separate control over the geometry style, the texture style, and the full style.**
## Appendix A Background
Since recent 3D-aware image generative models are all based on neural implicit representations, especially NeRF [37], here we briefly introduce the NeRF-based 3D representation and more StyleSDF details for clarification.
**NeRF-based 3D Representation.** NeRF [37] proposed an implicit 3D representation for novel view synthesis. Specifically, NeRF defines a scene as \(\{\mathbf{c},\sigma\}=F_{\mathbf{\phi}}(\mathbf{x},\mathbf{v})\), where \(\mathbf{x}\) is the query point, \(\mathbf{v}\) is the viewing direction from the camera origin to \(\mathbf{x}\), \(\mathbf{c}\) is the emitted radiance (RGB value), and \(\sigma\) is the volume density. To compute the RGB value \(C(\mathbf{r})\) of a ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{v}\) shot from the origin \(\mathbf{o}\), we have the volume rendering formulation,
\[C(\mathbf{r})=\int_{t_{n}}^{t_{f}}T(t)\sigma(\mathbf{r}(t))\mathbf{c}(\mathbf{r}(t),\mathbf{v})dt, \tag{5}\]
where \(T(t)=\text{exp}(-\int_{t_{n}}^{t}\sigma(\mathbf{r}(s))ds)\) is the accumulated transmittance along the ray \(\mathbf{r}\) from \(t_{n}\) to \(t\). \(t_{n}\) and \(t_{f}\) denote the near and far bounds.
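For reference, a minimal discretization of Eq. (5) (the standard NeRF quadrature; a sketch, not the exact StyleSDF sampler) can be written as:

```python
import torch

def render_ray(sigma, color, t_vals):
    """Quadrature of Eq. (5) along one ray (standard NeRF discretization).

    sigma  : (N,)   densities at the sampled points
    color  : (N, C) emitted radiance c (or features f for hybrid rendering)
    t_vals : (N,)   sample depths between the near and far bounds t_n, t_f
    """
    deltas = t_vals[1:] - t_vals[:-1]
    deltas = torch.cat([deltas, deltas[-1:]], dim=0)           # pad last segment
    alpha = 1.0 - torch.exp(-sigma * deltas)                   # per-segment opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)          # accumulated T(t)
    trans = torch.cat([torch.ones_like(trans[:1]), trans[:-1]], dim=0)
    weights = trans * alpha
    return (weights[:, None] * color).sum(dim=0)               # C(r) or F(r)
```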
**Hybrid 3D Generation.** In hybrid 3D generation [41, 8, 17], the intermediate feature map is calculated by replacing the color \(\mathbf{c}\) with feature \(\mathbf{f}\), namely \(\mathbf{F}(\mathbf{r})=\int_{t_{n}}^{t_{f}}T(t)\sigma(\mathbf{r}(t))\mathbf{f} (\mathbf{r}(t),\mathbf{v})dt\). Then, a StyleGAN [24, 25]-based decoder upsamples \(\mathbf{F}\) into high-resolution images with high-frequency details.
**SDF and Radiance-based Geometry Representation.** The intermediate geometry representation of \(G_{\mathbf{0}}\) distinguishes the characteristics of different 3D GANs. Specifically, StyleSDF [41] uses \(G_{\mathbf{0}}\) to predict the signed distance \(d(\mathbf{x})=G_{\mathbf{0}}(\mathbf{w},\mathbf{x})\) between the query point \(\mathbf{x}\) and the shape surface, where the density function \(\sigma(\mathbf{x})\) can be obtained from \(d(\mathbf{x})\) [41, 66, 71] for volume rendering [37]. The incorporation of the SDF leads to higher-quality geometry in terms of expressiveness, view consistency, and a clear definition of the surface.
In this paper, we mainly adopt StyleSDF [41] due to its high-quality geometry surface and high-fidelity texture. In StyleSDF, the density is obtained from the signed distance as \(\sigma(\mathbf{x})=K_{\alpha}\left(d(\mathbf{x})\right)=\text{Sigmoid}\left(-d(\mathbf{x})/\alpha\right)/\alpha\), where \(\alpha\) is a learned parameter that controls the tightness of the density around the surface boundary.
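In code, this SDF-to-density conversion is a one-liner; a minimal sketch:

```python
import torch

def sdf_to_density(d, alpha):
    """sigma(x) = Sigmoid(-d(x) / alpha) / alpha, with a learned alpha > 0."""
    return torch.sigmoid(-d / alpha) / alpha

# e.g. sigma = sdf_to_density(d, alpha) can then be fed into the volume
# rendering quadrature sketched above.
```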
## Appendix B Implementation Details
**CIPS-3D Baseline.** Following CIPS-3D [75], we fine-tune \(G_{\mathbf{1}}\) of StyleSDF on the toonified images with the optimization parameters of the official implementation. Fine-tuning one style takes about \(10\) minutes on a single V100 GPU.
**E3DGE Baseline.** Following E3DGE [32], we first fine-tune \(G_{\mathbf{0}}\) for \(400\) iterations with batch size \(24\), and further fine-tune \(G_{\mathbf{1}}\) for \(400\) iterations with batch size \(8\). All hyper-parameters are left unchanged from the official StyleSDF [41] implementation. The overall fine-tuning for one style takes around 30 minutes on a single V100 GPU.
**StyleGAN-NADA Baseline.** We reproduce StyleGAN-NADA [14] on StyleSDF with the following modifications. For \(G_{\mathbf{0}}\) optimization, we fix the pre-trained mapping network, affine code transformations, view-direction MLP, color-prediction MLP, and density-prediction MLP. For \(G_{\mathbf{1}}\) optimization, we follow the original implementation and fine-tune all weights except the toRGB layers, affine code transformations, and mapping network. The \(k\) layers to optimize are also selected adaptively using the StyleCLIP global loss. Other hyper-parameters and training procedures are left unchanged. The whole optimization takes around \(5\) minutes on a single V100 GPU.
### Additional Method Details
**StyleField.** Given the instance code \(\mathbf{w}\) and the style code \(\mathbf{z}_{S}\), we concatenate them along the channel dimension and send them into a \(4\)-layer mapping network [24]. The mapping network first maps \(\mathbf{w}\oplus\mathbf{z}_{S}\) to a set of modulation signals \(\{\mathbf{\beta},\mathbf{\gamma}\}\), where \(\mathbf{\beta}=\{\beta_{i}\},\mathbf{\gamma}=\{\gamma_{i}\}\). To associate the given codes with the corresponding deformation, the modulation signals are injected into the MLP network, serving as FiLM conditions [48, 13, 59] that modulate its features at different layers as \(\mathbf{f}_{i+1}=\sin(\gamma_{i}\cdot(\mathbf{W}_{i}\mathbf{f}_{i}+\mathbf{b}_{i})+\beta_{i})\). To support multiple style codes, we associate each style index with a learnable embedding. During inference, we pass in the corresponding style index to retrieve the style embedding for conditional deformation.
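A minimal sketch of such a FiLM-modulated SIREN StyleField follows; the layer widths, code dimensions, and number of styles are illustrative assumptions, not the exact configuration:

```python
import torch
import torch.nn as nn

class FiLMSiren(nn.Module):
    """One FiLM-modulated sine layer: f_out = sin(gamma * (W f + b) + beta).
    (The usual SIREN frequency scaling is omitted for brevity.)"""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_out)

    def forward(self, f, gamma, beta):
        return torch.sin(gamma * self.linear(f) + beta)


class StyleFieldSketch(nn.Module):
    """Point x + (instance code w, style index) -> deformation offset Delta x_S."""
    def __init__(self, dim_w=256, dim_z=64, hidden=128, n_layers=4, n_styles=10):
        super().__init__()
        self.style_emb = nn.Embedding(n_styles, dim_z)      # learnable style codes
        self.mapping = nn.Sequential(                        # 4-layer mapping network
            nn.Linear(dim_w + dim_z, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * hidden * n_layers))        # -> {gamma_i, beta_i}
        self.layers = nn.ModuleList(
            [FiLMSiren(3 if i == 0 else hidden, hidden) for i in range(n_layers)])
        self.to_offset = nn.Linear(hidden, 3)

    def forward(self, x, w, style_idx):
        # x: (B, 3) query points, w: (B, dim_w), style_idx: (B,) long tensor
        z_s = self.style_emb(style_idx)                      # retrieve style embedding
        film = self.mapping(torch.cat([w, z_s], dim=-1))
        film = film.view(x.shape[0], len(self.layers), 2, -1)
        f = x
        for i, layer in enumerate(self.layers):
            f = layer(f, film[:, i, 0, :], film[:, i, 1, :])  # FiLM per layer
        return self.to_offset(f)                              # Delta x_S
```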
**Animatable Stylized Portrait Training Details.** We show the overall 3DMM animation inference pipeline in Fig. 11. Specifically, we train the whole framework in a self-supervised manner. In each iteration, we synthesize a batch of images \(\mathbf{I}=G(\mathbf{w})\), where \(G=G_{\mathbf{1}}\circ G_{\mathbf{0}}\). For 3DMM supervision, we leverage the state-of-the-art 3DMM predictor [12] to infer the pseudo ground-truth 3DMM parameters \(\mathbf{m}_{\text{GT}}\). With the synthesized training corpus, we reconstruct the input codes \(\hat{\mathbf{m}}=\mathcal{F}_{\mathbf{w}}(\mathbf{w})\) and \(\hat{\mathbf{w}}=\mathcal{F}_{\mathbf{m}}(\mathbf{m}_{\text{GT}})\) and impose an MSE reconstruction loss. We further render the reconstructed codes \(\tilde{\mathbf{I}}=G(\hat{\mathbf{w}})\) and \(\tilde{\mathbf{I}}_{\mathcal{M}}=\mathrm{DFR}(\hat{\mathbf{m}})\), where \(\mathrm{DFR}\) is a differentiable renderer [52] that renders the reconstructed 3DMM mesh to an image. The rendered images are supervised with the corresponding losses [12], which yields
Figure 11: Inference pipeline of the proposed animation approach.
better performance in our observations.
This training objective should guarantee plausible 3DMM editing using the procedure stated in the main text. However, in practice, we find the training is unstable and the predicted codes are not well disentangled during inference. We make the following modifications to the overall training pipeline to improve the editing performance:
First, we observe that the 3DMM head pose parameters, including the head rotation \(\mathbf{R}\in SO(3)\) and translation \(\mathbf{t}\in\mathbb{R}^{3}\), conflict with the pose control of the 3D GAN. This deteriorates the disentanglement of the learned codes and destabilizes the training, since the predicted 3DMM codes \(\tilde{\mathbf{m}}\) must contain an accurate head pose to minimize the reconstruction loss with \(\mathrm{DFR}(\mathbf{m})\). To address this issue, we mask out the head rotation \(\mathbf{R}\) and translation \(\mathbf{t}\in\mathbb{R}^{3}\) dimensions in all 3DMM parameters with a binary mask. This enforces all the 3DMM images \(\mathbf{I}_{\mathcal{M}}\) to be rendered from a frontal pose and encourages the networks to focus on facial expression \(\boldsymbol{\delta}\in\mathbb{R}^{64}\) alignment between \(\mathcal{W}\) and \(\mathcal{M}\).
Second, to further impose identity preservation and a bijective mapping between the two spaces, we introduce cycle training [76], which regularizes \(\mathbf{w}\approx\mathcal{F}_{\mathbf{w}}(\tilde{\mathbf{m}})\) and \(\mathbf{m}\approx\mathcal{F}_{\mathbf{m}}(\tilde{\mathbf{w}})\). The cycle loss is also imposed on the image space.
Third, to imitate the inference pipeline, in each training iteration we randomly shuffle the expression dimensions of all the 3DMM codes \(\mathbf{m}_{\mathrm{GT}}\) within a batch and generate a new set of codes \(\tilde{\mathbf{m}}_{\mathrm{GT}}\). The rendered image from \(\mathcal{F}_{\mathbf{m}}(\tilde{\mathbf{m}}_{\mathrm{GT}})\) shall maintain the same identity as \(\mathbf{I}\) with the identical pose of \(\mathrm{DFR}(\tilde{\mathbf{m}}_{\mathrm{GT}})\). We impose the identity-preservation loss [11] and the landmark loss over the rendered 3DMM image [12] as supervision. This strategy reduces the domain gap between training and inference and further improves the final editing performance.
Fourth, we further leverage the style-based hierarchical structure within StyleSDF and reduce the attribute entanglement. Specifically, rather than using the edited \(\mathcal{W}\) code \(\hat{\mathbf{w}}_{\Delta}\) for all the style layers in \(G\), we conduct a layer-wise editing-effect analysis and find that only the first \(2\) layers of \(G_{\mathbf{0}}\) handle the expression-relevant information of the synthesized image. Using the edited code for later layers results in editing of other attributes, _e.g._, adding glasses or changing the hair structure. Therefore, we leave the remaining \(7\) layers of \(G_{\mathbf{0}}\) and all \(10\) layers in \(G_{\mathbf{1}}\) unchanged and only use the edited code for the first \(2\) \(G_{\mathbf{0}}\) layers. This yields better disentanglement during the 3DMM-controlled style editing.
Training-wise, we adopt the MLP architecture of PixelNeRF [72] to implement both \(\mathcal{F}_{*}\) networks and use a batch size of \(4\) with a learning rate of \(5\times 10^{-4}\) during the optimization. The networks are trained for \(50,000\) iterations, which takes around 2 days on a single V100 GPU. Please refer to the released code for more details.
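As an illustration of the third modification above (shuffling expressions within a batch), a minimal sketch; the 3DMM parameter layout assumed here (identity dimensions first, followed by the 64 expression dimensions) is an assumption and should be adapted to the predictor used:

```python
import torch

def shuffle_expressions(m_gt, expr_slice=slice(80, 144)):
    """Shuffle the expression coefficients of a batch of 3DMM codes m_GT.

    expr_slice marks the 64 expression dimensions delta; the offset 80 is a
    hypothetical layout (identity dims first) and must match the predictor.
    """
    perm = torch.randperm(m_gt.shape[0], device=m_gt.device)
    m_new = m_gt.clone()
    m_new[:, expr_slice] = m_gt[perm, expr_slice]
    return m_new
```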
### Additional Ablation Study
The robustness with respect to the number of style codes is ablated in Table 5. Experiments are conducted with 1, 2, and 5 styles per model, so that under each setting the 10 styles can be evenly divided into different runs for the sake of comparison.
The effectiveness of the elastic loss is also ablated qualitatively. As shown in the visualized mesh in Fig. 12, the elastic loss is effective in preventing discontinuous deformation and leads to smoother geometry in the styled space.
### Additional Quantitative and Qualitative Results
The detailed breakdown of toonification fidelity and quality is shown in Tab. 6 and Tab. 7, respectively. We include more qualitative experimental results here. In Fig. 13 we include more comparisons with the baseline methods,
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline Domains & CIPS-3D & E3DGE & NADA & Ours \\ \hline Pixar & 0.765 & 0.748 & 0.564 & **0.812** \\ Comic & 0.643 & 0.614 & 0.496 & **0.729** \\ Slam Dunk & 0.672 & 0.765 & 0.552 & **0.780** \\ Caricature I & 0.648 & 0.592 & 0.455 & **0.708** \\ Caricature II & 0.655 & 0.698 & 0.538 & **0.785** \\ Caricature III & 0.637 & 0.644 & 0.495 & **0.725** \\ Croods & 0.796 & 0.831 & 0.626 & **0.860** \\ Shrek & 0.708 & 0.794 & 0.599 & **0.835** \\ Rapunzel & 0.603 & 0.696 & 0.564 & **0.782** \\ Hiccup & 0.684 & 0.688 & 0.464 & **0.796** \\ \hline Average & 0.681 & 0.707 & 0.535 & **0.781** \\ \hline \end{tabular}
\end{table}
Table 6: **Quantitative evaluation in terms of identity similarity\(\uparrow\). DeformToon3D achieves the best identity consistency over all the \(10\) styles.**
Figure 12: **Ablation on elastic loss. Without _v.s._ with elastic loss for a) female and b) male, respectively.**
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \# styles per model & 1 & 2 & 5 & 10 \\ \hline Identity similarity\(\uparrow\) & 0.795 & 0.776 & 0.784 & 0.781 \\ \hline FID\(\downarrow\) & 27.5 & 28.1 & 27.9 & 27.6 \\ \hline \end{tabular}
\end{table}
Table 5: **Ablation on the number of styles.**
which demonstrates that DeformToon3D produces better quality than existing methods. In Fig. 14 we show more toonification results on real images. The proposed method yields plausible results with consistent identity preservation. We further include the stylized texture and shape pairs in Fig. 15 and validate that our method produces high-quality stylization of both texture and shape.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline Domains & CIPS-3D & E3DGE & NADA & Ours \\ \hline Pixar & 33.6 & 36.8 & 39.9 & **21.5** \\ Comic & 61.9 & 44.8 & 70.9 & **33.3** \\ Slam Dunk & 78.1 & 41.8 & 75.9 & **37.3** \\ Caricature I & 28.7 & 30.1 & 52.9 & **16.0** \\ Caricature II & 76.7 & 58.1 & 102.6 & **56.4** \\ Caricature III & 47.1 & **25.8** & 54.8 & 27.2 \\ Croods & 36.9 & 30.9 & 58.5 & **22.5** \\ Shrek & 36.0 & 32.1 & 47.2 & **20.3** \\ Rapunzel & 65.0 & 30.5 & 44.1 & **17.2** \\ Hiccup & 42.3 & 32.8 & 46.6 & **24.6** \\ \hline Average & 50.6 & 34.0 & 59.3 & **27.6** \\ \hline \end{tabular}
\end{table}
Table 7: **Quantitative evaluation in terms of FID\(\downarrow\). DeformToon3D achieves the best FID over \(9\) of the \(10\) styles.**
Figure 13: **Additional qualitative comparisons with baseline methods.** DeformToon3D produces better performance against all baselines regarding toonification fidelity, diversity, and identity preservation. Best viewed zoomed in.
Figure 14: **Additional results of DeformToon3D on the real images.** Our method enables multi-style toonification with a single model, where both the texture and the geometry match the target domain.
Figure 15: **Additional results of DeformToon3D with stylized texture and shape.** |
2301.00054 | Fault-tolerant error correction for a universal non-Abelian topological
quantum computer at finite temperature | We study fault-tolerant error correction in a quantum memory constructed as a
two-dimensional model of Fibonacci anyons on a torus, in the presence of
thermal noise represented by pair-creation processes and measurement errors.
The correction procedure is based on the cellular automaton decoders
originating in the works of G\'acs and Harrington. Through numerical
simulations, we observe that this code behaves fault-tolerantly and that
threshold behavior is likely present. Hence, we provide strong evidence for the
existence of a fault-tolerant universal non-Abelian topological quantum
computer. | Alexis Schotte, Lander Burgelman, Guanyu Zhu | 2022-12-30T21:05:31Z | http://arxiv.org/abs/2301.00054v1 | Fault-tolerant error correction for a universal non-Abelian topological quantum computer at finite temperature
###### Abstract
We study fault-tolerant error correction in a quantum memory constructed as a two-dimensional model of Fibonacci anyons on a torus, in the presence of thermal noise represented by pair-creation processes and measurement errors. The correction procedure is based on the cellular automaton decoders originating in the works of Gacs [1] and Harrington [2]. Through numerical simulations, we observe that this code behaves fault-tolerantly and that threshold behavior is likely present. Hence, we provide strong evidence for the existence of a fault-tolerant universal non-Abelian topological quantum computer.
## I Introduction
Anyons are emergent quasi-particles that exist in two-dimensional condensed matter systems and whose exchange statistics generalize that of Bosons and Fermions. These particles have spurred much interest due to their potential applications for quantum computation. In particular, it was found that with certain types of non-Abelian anyons, a universal quantum computation can be performed by braiding and fusing these particles [3; 4; 5]. An intriguing benefit of this paradigm is that, due to their topological nature, computations are intrinsically robust to perturbations at zero temperature. At non-zero temperature, however, thermal anyonic excitations can corrupt the computation by performing non-trivial braids with the computational anyons. Since systems exhibiting anyonic excitations have a spectral gap \(\Delta\), this source of errors can be suppressed to some extent at temperatures \(T\ll\Delta/k_{B}\) as the density of thermal anyons scales as \(e^{-\Delta/k_{B}T}\). Alas, this passive protection does not suffice, because the presence of thermal anyons is unavoidable at non-zero temperatures when scaling up the size of computations. Therefore, proficient active error correction schemes for non-Abelian models are paramount for the realization of topological quantum computers.
Besides their envisaged use for topological quantum computation, topologically ordered systems (i.e., those that support anyonic excitations on top of their ground space) are also of much interest for quantum error correction. In particular, one of the characteristics of such systems is a robust ground space degeneracy, which allows one to use their ground space as the code space of an error correcting code. This realization led to the discovery of topological quantum error correcting codes, which encode logical quantum states in topologically ordered states of a system of qudits (typically arranged on a two-dimensional lattice). Since their discovery in the 90s, most research has focused exclusively on Abelian topological codes such as the surface code and the color code [5; 6; 7; 8; 9; 10; 11; 12; 13; 14], which admit an elegant characterization in terms of the stabilizer formalism [15]. Due to their geometrical locality and high error thresholds, these codes are considered to be promising candidates for protecting quantum information from noise in error-corrected quantum computers. One of the drawbacks of Abelian topological codes, however, is that they do not allow one to execute a universal set of logical gates in a protected fashion in two dimensions. Hence, they must be supplemented with additional protocols such as magic state distillation [16] or code switching to higher-dimensional codes [17; 18], which introduce a large space-time overhead [19]. Alternatively, there exist non-Abelian topological codes which do not suffer from this inherent limitation, and are able to perform a universal gate set natively within their code space in two dimensions [3]. The trade-off is that such codes go beyond the stabilizer formalism and are therefore very hard to simulate classically.
While active error correction in Abelian anyon models and Abelian topological codes has been studied extensively, quantum error correction based on non-Abelian anyon models has not enjoyed the same focus. Nevertheless, important progress has been made over the last decade, including both analytical proofs and numerical demonstrations of threshold behavior for various non-Abelian topological error correcting codes [20; 21; 22; 23; 24]. Moreover, syndrome extraction circuits for such non-Abelian string-net codes have been devel |
2307.16590 | Numerical analysis of a baryon and its dilatation modes in holographic
QCD | We investigate a baryon and its dilatation modes in holographic QCD based on
the Sakai-Sugimoto model, which is expressed as a 1+4 dimensional U($N_f$)
gauge theory in the flavor space. For spatially rotational symmetric systems,
we apply a generalized version of the Witten Ansatz, and reduce 1+4 dimensional
holographic QCD into a 1+2 dimensional Abelian Higgs theory in a curved space.
In the reduced theory, the holographic baryon is described as a two-dimensional
topological object of an Abrikosov vortex. We numerically calculate the baryon
solution of holographic QCD using a fine and large lattice with spacing of 0.04
fm and size of 10 fm. Using the relation between the baryon size and the
zero-point location of the Higgs field in the description with the Witten
Ansatz, we investigate a various-size baryon through this vortex description.
As time-dependent size-oscillation modes (dilatation modes) of a baryon, we
numerically obtain the lowest excitation energy of 577 MeV and deduce the
dilatational excitation of a nucleon to be the Roper resonance N$^*$(1440). | Keiichiro Hori, Hideo Suganuma, Hiroki Kanda | 2023-07-31T11:44:52Z | http://arxiv.org/abs/2307.16590v3 | # Numerical analysis of a baryon and its dilatation modes in holographic QCD
###### Abstract
We investigate a baryon and its dilatation modes in holographic QCD based on the Sakai-Sugimoto model, which is expressed as a 1+4 dimensional U(\(N_{f}\)) gauge theory in the flavor space. For spatially rotational symmetric systems, we apply a generalized version of the Witten Ansatz, and reduce 1+4 dimensional holographic QCD into a 1+2 dimensional Abelian Higgs theory in a curved space. In the reduced theory, the holographic baryon is described as a two-dimensional topological object of an Abrikosov vortex. We numerically calculate the baryon solution of holographic QCD using a fine and large lattice with spacing of 0.04 fm and size of 10 fm. Using the relation between the baryon size and the zero-point location of the Higgs field in the description with the Witten Ansatz, we investigate a various-size baryon through this vortex description. As time-dependent size-oscillation modes (dilatation modes) of a baryon, we numerically obtain the lowest excitation energy of 577 MeV and deduce the dilatational excitation of a nucleon to be the Roper resonance N\({}^{*}\)(1440).
## I Introduction
Quantum chromodynamics (QCD) is established as the fundamental theory of the strong interaction and is characterized by SU(\(N_{c}\)) gauge symmetry and global SU(\(N_{f}\))\({}_{L}\times\) SU(\(N_{f}\))\({}_{R}\) chiral symmetry. Owing to the asymptotic freedom of QCD, high-energy hadron reactions can be analyzed using perturbative QCD. In low-energy regions, however, the QCD coupling becomes strong, and the perturbative method is no longer applicable. Therefore, for theoretical analyses of hadrons based on QCD, nonperturbative methods such as lattice QCD are necessary. Based on gauge/gravity duality [1] for D branes [2] in superstring theory, holographic QCD is an interesting new tool to analyze the nonperturbative properties of QCD [3; 4; 5].
Around 1970, string theory was originally proposed by Nambu, Goto, and Polyakov for the description of hadrons [6; 7; 8]. After the establishment of QCD, string theory was no longer the main framework for hadron research. Instead, it was reformulated as superstring theory in the 1980s [9] and has since been studied as a plausible candidate for a grand unified theory including quantum gravity, with many studies conducted to date.
Superstring theory is formulated in 10-dimensional space-time and includes D branes, on which open-string endpoints exist [2]. As a remarkable discovery by Polchinski, U(\(N\)) gauge symmetry emerges on the surface of \(N\) overlapping D branes, and the \(N\) D-brane system leads to a U(\(N\)) gauge theory [2]. In fact, on the \(N\) D-branes, gauge fields \(A_{ab}\) appear from the open strings linking two D-branes labeled with \(a\) and \(b\) (\(a,b=1,2,...,N\)). On the other hand, a higher-dimensional supergravity theory is formed around the D brane, since the brane has mass, and multiple branes can be gravitational sources. Thus, there are two different theories related to the D-brane, and the gauge theory _on_ the D-brane and the higher-dimensional gravity theory _around_ the D-brane are conjectured to be equivalent [1]. This remarkable equivalence between gauge and gravity theories was first proposed by Maldacena from a detailed analysis of four-dimensional \(\mathcal{N}=4\) supersymmetric (SUSY) SU(\(N\)) Yang-Mills theory and five-dimensional Anti-de Sitter (AdS) supergravity [1], using a large-\(N\) argument. This equivalence was first called the AdS/CFT correspondence and is generalized as gauge/gravity duality in a broader sense. In this correspondence, a strong-coupling gauge theory can be described with a weak-coupling gravity theory [1; 3; 4; 5].
With this correspondence, much research has been performed, and one of the main directions is to analyze strong-coupling QCD with a higher-dimensional classical gravity theory, which is called holographic QCD [3; 4; 5; 10; 11]. In 1998, Witten formulated a non-SUSY version of holographic QCD for a four-dimensional pure-gluon Yang-Mills theory using \(S^{1}\)-compactified \(N_{c}\) D4 branes, where periodic and antiperiodic boundary conditions are imposed on boson and fermion fields, respectively [3]. In 2005, Sakai and Sugimoto proposed holographic QCD for four-dimensional full QCD, including massless chiral quarks, by adding \(N_{f}\) pairs of D8 and \(\overline{\rm D8}\) branes to the \(S^{1}\)-compactified D4 branes [5]. In fact, the D4/D8/\(\overline{\rm D8}\) multi-D-brane system has SU(\(N_{c}\)) gauge symmetry and SU(\(N_{f}\))\({}_{L}\times\)SU(\(N_{f}\))\({}_{R}\) chiral symmetry and leads to massless QCD at the infrared scale. Here, quarks and gluons appear from the massless modes of 4-8 and 4-4 strings, respectively. In the large \(N_{c}\) limit, the \(N_{c}\) D4 branes are dominant as a gravity source and are converted into a gravitational field via the AdS/CFT correspondence
or the gauge/gravity duality. The 't Hooft coupling \(\lambda\equiv N_{c}g^{2}\) is a control parameter on the gauge side, and strong-coupling QCD with a large \(\lambda\) corresponds to a weakly interacting gravitational theory. Then, this system is described by the D8 branes in the presence of a background gravitational field originating from the D4 branes. Nonperturbative properties of QCD can thus be analyzed with a classical gravity theory.
At the leading order of the \(1/N_{c}\) expansion, the effective theory of the D8 brane in the D4-brane-induced background gravity is expressed with the Dirac-Born-Infeld (DBI) action and the Chern-Simons (CS) term. The leading order of the \(1/\lambda\) expansion is the DBI action, and the next-to-leading order is the CS term. Expanding the DBI action in \(1/\lambda\), the leading order becomes a five-dimensional Yang-Mills theory in the flavor space in a curved space, where the background gravity appears only in the fifth dimension. The CS term is a topological term responsible for anomalies in QCD. The gauge field is decomposed into meson fields. Although the meson-sector results are successful for masses, coupling constants, and phenomenological laws, baryons do not appear explicitly. This situation is similar to the Skyrme model [12].
In the 1980s, Witten revived the Skyrmion description of baryons [13] from a large-\(N_{c}\) viewpoint. The large-\(N_{c}\) effective theory includes only mesons because QCD reduces to a weak-coupling theory of mesons and glueballs in the large-\(N_{c}\) limit [14]. Regarding baryonic degrees of freedom, the baryon does not appear as an explicit field since its mass increases as \(\mathcal{O}(N_{c})\), and it is described as a soliton of mesons such as the Skyrmion [13; 14].
Also in holographic QCD, which is derived at large \(N_{c}\) and is described with only meson fields, the baryon is described as a soliton of mesons, that is, a topological object like a brane-induced Skyrmion [15; 16] or an instanton-like object [17; 18]. In fact, the baryon appears as a spatially extended object in holographic QCD.
In this study, we investigate a single baryon and its dilatation modes in holographic QCD, adopting the Sakai-Sugimoto model formulated as a 1+4 dimensional U(\(N_{f}\)) gauge theory in the flavor space. For spatially rotationally symmetric systems, we apply a generalized Witten Ansatz and reduce 1+4 dimensional holographic QCD into a 1+2 dimensional Abelian Higgs theory, where the holographic baryon is expressed as an Abrikosov vortex. We numerically calculate the baryon solution of holographic QCD using a fine and large lattice, keeping the background gravity from the \(N_{c}\) D4-branes.
In addition, we investigate baryons of various sizes and the dilatation mode (time-dependent size oscillation) of a single baryon, using the relation between the baryon size and the zero-point location of the Higgs field in the reduced Abelian Higgs theory.
This size oscillation is physically considered a collective motion and is difficult to describe in the quark model, which is based on the single-particle picture. Instead, the size-oscillation mode has been studied in Skyrmion research, and its lowest excitation is identified as the Roper resonance N\({}^{*}\)(1440) [19; 20; 21; 22; 23].
The dilatation mode of a baryon can also be described in holographic QCD. In fact, the holographic baryon appears as a soliton, i.e., a spatially extended object, and therefore it has a dilatation mode. Note, however, that the holographic dilatation is a four-dimensional spatial oscillation including the extra spatial dimension, rather than an ordinary three-dimensional one.
The organization of this paper is as follows. In Section 2, we briefly review the Sakai-Sugimoto model as a typical model of holographic QCD. In Section 3, we apply the Witten Ansatz to holographic QCD. Owing to the Witten Ansatz, the 1+4 dimensional Yang-Mills theory reduces to a 1+2 dimensional Abelian Higgs theory. In Section 4, using the Witten Ansatz, we present the vortex description of baryons in holographic QCD, and we numerically obtain the ground-state solution of the holographic baryon using a fine and large-volume lattice. In Section 5, we numerically analyze the size dependence of the holographic baryon. In Section 6, we investigate time-dependent dilatational modes of a single baryon in holographic QCD. Section 7 is devoted to the summary and conclusion.
## II Holographic QCD action in the Sakai-Sugimoto model
In this section, as a starting point, we introduce the Sakai-Sugimoto model, one of the most successful models of holographic QCD [5]. In the Sakai-Sugimoto model, four-dimensional massless QCD is constructed using the D4/D8/\(\overline{\rm D8}\) multi-D-brane system, which comprises spatially \(S^{1}\)-compactified \(N_{c}\) D4 branes attached with \(N_{f}\) D8-\(\overline{\rm D8}\) pairs. Here, \(N_{c}\) means the color number, and \(N_{f}\) the light-flavor number. This compactification breaks SUSY due to the (anti)periodic conditions for bosons (fermions), as was demonstrated for a D4 brane system by Witten [3]. The compactification radius is \(M_{\rm KK}^{-1}\), and this model parameter physically corresponds to a UV cutoff in holographic QCD. This system is infrared equivalent to massless QCD, where chiral symmetry exists [5]. Using the AdS/CFT correspondence (gauge/gravity duality), the \(N_{c}\) D4 branes are transformed into a gravitational source, and the system becomes \(N_{f}\) D8 branes in the D4 gravity background, which leads to the DBI and CS actions at the leading order of \(1/N_{c}\) within the probe approximation. In terms of the \(1/\lambda\) expansion, the DBI action contains the leading order, and the CS action is sub-leading.
From the multi-D-brane system, which is infrared equivalent to massless QCD, the DBI action becomes a 1+4 dimensional Yang-Mills theory in the flavor space of U(\(N_{f}\)) \(\simeq\) SU(\(N_{f}\))\(\times\) U(1) at the leading order of the \(1/\lambda\)
expansion [5]:
\[S_{\rm SYM}=S_{\rm SYM}^{\rm SU(N_{f})}+S_{\rm SYM}^{\rm U(1)} \tag{1}\] \[= -\kappa\int d^{4}xdw\;{\rm tr}\left[\frac{1}{2}h(w)F_{\mu\nu}F^{\mu \nu}+k(w)F_{\mu w}F^{\mu w}\right]\] \[-\frac{\kappa}{2}\int d^{4}xdw\;\left(\frac{1}{2}h(w)\hat{F}_{\mu \nu}\hat{F}^{\mu\nu}+k(w)\hat{F}_{\mu w}\hat{F}^{\mu w}\right).\]
In this paper, we use \(w\) for the extra fifth coordinate in holographic QCD; \(\hat{A}\) denotes the U(1) gauge field and \(\hat{F}\) the U(1) field strength [29].
For \(M,N=t,x,y,z,w\), the field strengths are given by
\[F_{MN} \equiv \partial_{M}A_{N}-\partial_{N}A_{M}+i[A_{M},A_{N}],\] \[\hat{F}_{MN} \equiv \partial_{M}\hat{A}_{N}-\partial_{N}\hat{A}_{M}, \tag{2}\]
with the five-dimensional SU(\(N_{f}\)) gauge field \(A^{M}(x^{\mu},w)\) and U(1) gauge field \(\hat{A}^{M}(x^{\mu},w)\), respectively. Throughout this paper, we take the \(M_{\rm KK}=1\) unit together with the natural unit, and \(\kappa\) is written as \(\kappa=\frac{\lambda N_{c}}{216\pi^{3}}\) in this unit. Note that, as a relic of \(N_{c}\) D4-branes, there appear background gravity \(k(w)\) and \(h(w)\) depending on the extra fifth-coordinate \(w\),
\[k(w)\equiv 1+w^{2},\;\;h(w)\equiv k(w)^{-1/3}, \tag{3}\]
in the \(M_{\rm KK}=1\) unit. In Eq. (1) at the leading order of \(1/\lambda\) expansion, SU(\(N_{f}\)) variables \(A\) and U(1) variables \(\hat{A}\) are completely separated, and hence we have divided \(S_{\rm SYM}\) into the SU(\(N_{f}\)) sector \(S_{\rm SYM}^{\rm SU(N_{f})}\) and the U(1) sector \(S_{\rm SYM}^{\rm U(1)}\).
The \(1/N_{c}\)-leading holographic QCD also has the CS term [5; 18] as the next leading order of \(1/\lambda\). The CS term is a topological term responsible for anomalies in QCD, and its explicit form is
\[S_{\rm CS} = \frac{N_{c}}{24\pi^{2}}\int\omega_{5}(\mathcal{A}) \tag{4}\] \[= \frac{N_{c}}{24\pi^{2}}\int{\rm tr}\left(\mathcal{A}\mathcal{F}^ {2}-\frac{i}{2}\mathcal{A}^{3}\mathcal{F}-\frac{1}{10}\mathcal{A}^{5}\right),\]
where \(\mathcal{A}=A+\frac{1}{N_{f}}\hat{A}\) denotes the U(\(N_{f}\)) gauge field [5]. Note that SU(\(N_{f}\)) variables \(A\) and U(1) variables \(\hat{A}\) are dynamically mixed in the CS term \(S_{\rm CS}\) in Eq. (4).
In this paper, to analyze baryons in holographic QCD, we consider both Yang-Mills and CS parts for the case of \(N_{f}=2\).
## III Witten Ansatz in holographic QCD
Holographic QCD is formulated as a 1+4 dimensional U(\(N_{f}\)) non-Abelian gauge theory with the gravitational background \(h(w)\) and \(k(w)\), which is fairly difficult to analyze. To avoid this difficulty and to proceed with analytic calculations, most previous works [18; 24; 25] were forced to take a flat background \(h(w)=k(w)=1\) and to use the simple 't Hooft instanton solution [26; 27] in flat space, although \(h(w)\) and \(k(w)\) are a trace of the D4-branes and should be relevant ingredients.
To deal with holographic QCD for \(N_{f}=2\) without reducing the gravitational backgrounds \(h(w)\) and \(k(w)\), we adopt the Witten Ansatz [28] in this paper. The Witten Ansatz is generally applicable to spatially rotationally symmetric systems in the SU(2) Yang-Mills theory. Applying it to holographic QCD, the 1+4 dimensional non-Abelian theory transforms into a 1+2 dimensional Abelian Higgs theory. Accordingly, the relevant topological objects change from instantons to vortices, as will be shown in Sect. IV. In this section, we generalize the Witten Ansatz to be applicable to holographic QCD.
### Witten Ansatz for Euclidean Yang-Mills theory
In this subsection, we briefly review the original Witten Ansatz [28] applied to the Euclidean four-dimensional SU(2) Yang-Mills theory, which is formulated on three spatial coordinates \((x,y,z)\) and Euclidean time \(t\). For spatially rotationally symmetric systems, the Witten Ansatz can be applied as
\[A_{i}^{a}(x,y,z,t) = \frac{\phi_{2}(r,t)+1}{r}\epsilon_{iak}\hat{x}_{k}+\frac{\phi_{1} (r,t)}{r}\hat{\delta}_{ia} \tag{5}\] \[+a_{r}(r,t)\hat{x}_{i}\hat{x}_{a},\] \[A_{t}^{a}(x,y,z,t) = a_{t}(r,t)\hat{x}^{a} \tag{6}\]
with \(r\equiv(x_{i}x_{i})^{1/2}\), \(\hat{x}_{i}\equiv x_{i}/r\) and \(\hat{\delta}_{ij}\equiv\delta_{ij}-\hat{x}_{i}\hat{x}_{j}\).
Using the Witten Ansatz, the four-dimensional SU(2) Yang-Mills theory is reduced into a two-dimensional Abelian Higgs theory as
\[S_{\rm YM}^{\rm SU(2)} = \int dt\ d^{3}x\ \frac{1}{2}{\rm tr}F_{\mu\nu}F^{\mu\nu} \tag{7}\] \[= 4\pi\int_{-\infty}^{\infty}dt\int_{0}^{\infty}dr\bigg{[}|D_{0}\phi |^{2}+|D_{1}\phi|^{2}\] \[+\frac{1}{2r^{2}}(1-|\phi|^{2})^{2}+\frac{r^{2}}{2}f_{01}^{2} \bigg{]},\]
where the complex Higgs field \(\phi(t,r)\), Abelian gauge field \(a_{\mu}(t,r)\), its covariant derivative \(D_{\mu}\), and field strength \(f_{\mu\nu}\) in the Abelian Higgs theory are
\[\phi \equiv \phi_{1}+i\phi_{2}\in{\bf C},\ a_{\mu}\equiv(a_{0},a_{1}),\] \[D_{\mu} \equiv \partial_{\mu}-ia_{\mu},\ f_{01}\equiv\partial_{0}a_{1}-\partial_{1}a_{0}. \tag{8}\]
Here, we have used \((0,1)=(t,r)\) for the index of two dimensional coordinates.
### Generalized Witten Ansatz for SU(2)\({}_{f}\) sector in holographic QCD
In this subsection, we generalize the Witten Ansatz to be applicable to holographic QCD.
The SU(2)\({}_{f}\) sector in holographic QCD is expressed as a 1+4 dimensional Yang-Mills theory with the gravitational backgrounds \(h(w)\) and \(k(w)\). Holographic QCD already includes four spatial Euclidean coordinates (\(x\), \(y\), \(z\), \(w\)) including the extra fifth coordinate \(w\), so instantons can be naturally introduced in holographic QCD without the necessity of Euclideanization or the Wick rotation.
Describing the SU(2)\({}_{f}\) gauge field \(A\) with the Pauli matrices \(\tau^{a}\) as \(A=A^{a}\frac{\tau^{a}}{2}\in\) su(2)\({}_{f}\) in holographic QCD, we take a generalized version of the Witten Ansatz for \((x,y,z)\)-rotationally symmetric systems [29; 30; 31],
\[A^{a}_{0}(t,x,y,z,w) = a_{0}(t,r,w)\hat{x}^{a}, \tag{9}\] \[A^{a}_{i}(t,x,y,z,w) = \frac{\phi_{2}(t,r,w)+1}{r}\epsilon_{iak}\hat{x}^{k}\] (10) \[+ \frac{\phi_{1}(t,r,w)}{r}\hat{\delta}_{ia}+a_{r}(t,r,w)\hat{x}_{ i}\hat{x}_{a},\] \[A^{a}_{w}(t,x,y,z,w) = a_{w}(t,r,w)\hat{x}^{a}, \tag{11}\]
with \(r\equiv(x_{i}x_{i})^{1/2}\), \(\hat{x}_{i}\equiv x_{i}/r\) and \(\hat{\delta}_{ij}\equiv\delta_{ij}-\hat{x}_{i}\hat{x}_{j}\). For 1+4 dimensional holographic QCD, we have extended the Witten Ansatz to the \(A^{a}_{0}\) component, considering the \((x,y,z)\)-rotational symmetry. Note that this Ansatz is the general form when \((x,y,z)\)-rotational symmetry is imposed on the gauge fields.
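For a numerical check of this Ansatz, the five-dimensional gauge potential can be rebuilt from the reduced profile functions; a minimal sketch (the profile functions are passed as callables of \((r,w)\), an assumed interface for illustration):

```python
import numpy as np

def su2_gauge_field(x3, w, phi1, phi2, a0, ar, aw):
    """Rebuild A^a_0, A^a_i, A^a_w of Eqs. (9)-(11) at a spatial point x3 = (x, y, z)
    from the reduced profile functions phi1, phi2, a0, ar, aw (callables of (r, w)).
    Returns arrays with the isospin index a last."""
    x3 = np.asarray(x3, dtype=float)
    r = np.linalg.norm(x3)
    xh = x3 / r                                # unit vector x-hat
    delta_hat = np.eye(3) - np.outer(xh, xh)   # delta-hat_{ia}
    eps = np.zeros((3, 3, 3))                  # Levi-Civita tensor
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

    A0 = a0(r, w) * xh                                             # Eq. (9)
    Ai = ((phi2(r, w) + 1.0) / r * np.einsum('iak,k->ia', eps, xh)
          + phi1(r, w) / r * delta_hat
          + ar(r, w) * np.outer(xh, xh))                           # Eq. (10)
    Aw = aw(r, w) * xh                                             # Eq. (11)
    return A0, Ai, Aw
```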
In the Witten Ansatz, the holographic field strengths \(F_{ij}\), \(F_{0i}\), \(F_{wi}\), and \(F_{0w}\) are expressed as
\[\frac{1}{2}\epsilon_{ijk}F^{a}_{jk} = (\partial_{1}\phi_{1}-a_{1}\phi_{2})\frac{\hat{\delta}_{ai}}{r}+(1-\phi_{1}^{2}-\phi_{2}^{2})\frac{\hat{x}_{a}\hat{x}_{i}}{r^{2}}+(\partial_{1}\phi_{1}+a_{1}\phi_{2})\frac{1}{r}\epsilon_{aik}\hat{x}_{k}, \tag{12}\] \[F^{a}_{0i} = (\partial_{0}\phi_{1}+a_{0}\phi_{2})\frac{\hat{\delta}_{ai}}{r}+(\partial_{0}a_{1}-\partial_{1}a_{0})\hat{x}_{a}\hat{x}_{i}-(\partial_{0}\phi_{2}-a_{0}\phi_{1})\frac{1}{r}\epsilon_{aik}\hat{x}_{k}, \tag{13}\] \[F^{a}_{wi} = (\partial_{2}\phi_{1}+a_{2}\phi_{2})\frac{\hat{\delta}_{ai}}{r}+(\partial_{2}a_{1}-\partial_{1}a_{2})\hat{x}_{a}\hat{x}_{i}-(\partial_{2}\phi_{2}-a_{2}\phi_{1})\frac{1}{r}\epsilon_{aik}\hat{x}_{k}, \tag{14}\] \[F^{a}_{0w} = (\partial_{0}a_{2}-\partial_{2}a_{0})\hat{x}^{a}. \tag{15}\]
With the Witten Ansatz, 1+4 dimensional SU(2)\({}_{f}\) Yang-Mills sector of holographic QCD is reduced into a 1+2 dimensional Abelian Higgs theory on a curved space. In fact, \(S^{\rm SU(2)}_{\rm SYM}\) is rewritten as
\[S^{\rm SU(2)}_{\rm SYM} = -\kappa\int d^{4}x\ dw\ {\rm tr}\left[\frac{1}{2}h(w)F_{\mu\nu}F^{\mu\nu}+k(w)F_{\mu w}F^{\mu w}\right] \tag{16}\] \[= 4\pi\kappa\int_{-\infty}^{\infty}dt\int_{0}^{\infty}dr\int_{-\infty}^{\infty}dw\bigg{[}h(w)(|D_{0}\phi|^{2}-|D_{1}\phi|^{2})-k(w)|D_{2}\phi|^{2}\] \[-\frac{h(w)}{2r^{2}}(1-|\phi|^{2})^{2}+\frac{r^{2}}{2}\{h(w)f_{01}^{2}+k(w)f_{02}^{2}-k(w)f_{12}^{2}\}\bigg{]},\]
where the complex Higgs field \(\phi(t,r,w)\in{\bf C}\), Abelian gauge field \(a_{\mu}(t,r,w)\), its covariant derivative \(D_{\mu}\), and field strength \(f_{\mu\nu}\) in the Abelian Higgs theory are
\[\phi \equiv \phi_{1}+i\phi_{2}\in{\bf C},\ a_{\mu}\equiv(a_{0},a_{r},a_{w}),\] \[D_{\mu} \equiv \partial_{\mu}-ia_{\mu},\ f_{\mu\nu}\equiv\partial_{\mu}a_{\nu}- \partial_{\nu}a_{\mu}. \tag{17}\]
Here, we have used \((0,1,2)=(t,r,w)\) for the index of 1+2 dimensional coordinates. Note that the factor \(k(w)\) appears in \(D_{2}\) and \(F_{12}\) associated with the index 2 (\(w\)), and otherwise the factor \(h(w)\) appears.
From this action, the static energy of the Yang-Mills part is obtained as
\[E^{\rm SU(2)_{f}}_{\rm SYM}[\phi(r,w),\vec{a}(r,w)] =4\pi\kappa\int_{0}^{\infty}dr\int_{-\infty}^{\infty}dw\bigg{[}h(w)|D_{1}\phi|^{2}+k(w)|D_{2}\phi|^{2}\] \[\qquad\qquad+\frac{h(w)}{2r^{2}}\{1-|\phi|^{2}\}^{2}+\frac{r^{2}}{2}k(w)f_{12}^{2}\bigg{]}\] \[=4\pi\int_{0}^{\infty}dr\,r^{2}\int_{-\infty}^{\infty}dw\ {\cal E}^{\rm SU(2)_{f}}(r,w), \tag{18}\]
with \(\vec{a}=(a_{1},a_{2})=(a_{r},a_{w})\). This is similar to the 1+2 dimensional Ginzburg-Landau theory. The difference, however, is that a nontrivial metric factor \(r^{2}\) [28] appears in addition to the holographic background gravity \(h(w)\) and \(k(w)\) on the \((r,w)\) half-plane. To include all the gravitational effects exactly, we construct the lattice formalism, as shown in Appendix A, and perform the numerical calculation for holographic QCD.
Finally in this subsection, we investigate the topological density \(\rho_{B}\) in the Witten Ansatz. For the SU(2)\({}_{f}\) gauge configuration in the Witten Ansatz, the topological density \(\rho_{B}\) in \((x,y,z,w)\)-space is found to be [29]
\[\rho_{B} \equiv \frac{1}{16\pi^{2}}{\rm tr}\big{(}F_{MN}\tilde{F}_{MN}\big{)}= \frac{1}{32\pi^{2}}\epsilon_{MNPQ}{\rm tr}\big{(}F_{MN}F_{PQ}\big{)} \tag{19}\] \[= \frac{1}{8\pi^{2}r^{2}}\{-i\epsilon_{ij}(D_{i}\phi)^{*}D_{j}\phi+ \epsilon_{ij}\partial_{i}a_{j}(1-|\phi|^{2})\}\] \[= \frac{1}{8\pi^{2}r^{2}}\epsilon_{ij}\partial_{i}\{\frac{1}{2i}( \phi^{*}\partial_{j}\phi-\phi\partial_{j}\phi^{*})+a_{j}(1-|\phi|^{2})\}\] \[= \frac{1}{8\pi^{2}r^{2}}\epsilon_{ij}\partial_{i}\{a_{j}(1-|\phi|^{ 2})+\partial_{j}\theta\cdot|\phi|^{2}\},\]
where \(\theta\equiv{\rm arg}\,\phi\) and the lowercase Roman indices \(i,j\) run over \((1,2)=(r,w)\). The topological density \(\rho_{B}\) is expressed as a total derivative. Since we consider an \((x,y,z)\)-rotationally symmetric system, \(\rho_{B}\) is independent of the spatial direction \((\hat{x},\hat{y},\hat{z})\). In fact, \(\rho_{B}\) takes the SO(3) rotationally symmetric form \(\rho_{B}(r,w)\) in the second line of Eq. (19).
### U(1) sector in holographic QCD
In this subsection, we consider the U(1) sector in holographic QCD. Hereafter, the capital-letter index denotes the Euclidean spatial index as \(M=x,y,z,w\). Also for
the U(1) gauge field \(\hat{A}\), we respect the spatial SO(3) rotational symmetry [30; 31] as in the Witten Ansatz, and impose
\[\hat{A}_{i}(t,x,y,z,w) = \hat{a}_{r}(t,r,w)\hat{x}_{i}, \tag{20}\]
while \(\hat{A}_{0}\) and \(\hat{A}_{w}\) are treated to be arbitrary. In this case, one finds \(\hat{F}_{ij}=0\) and can take the \(\hat{a}_{r}=0\) gauge, which simplifies \(\hat{A}_{i}=0\).
Then, the \(1/\lambda\)-leading term \(S^{\rm U(1)}_{\rm SYM}\) for the U(1) gauge field is written as [29]
\[S^{\rm U(1)}_{\rm SYM} = \frac{\kappa}{2}\int d^{4}xdw\{h(w)\hat{F}_{0i}^{2}+k(w)\hat{F}_{0w}^{2}-k(w)\hat{F}_{iw}^{2}\} \tag{21}\] \[= \int d^{4}xdw\bigg{[}\frac{1}{2}\hat{A}_{0}K\hat{A}_{0}-\frac{\kappa}{2}k(w)(\partial_{i}\hat{A}_{w})^{2}+(\mbox{time-derivative terms})\bigg{]},\]
using the SO(3)-symmetric non-negative hermite kernel
\[K \equiv -\kappa\{h(w)\partial_{i}^{2}+\partial_{w}k(w)\partial_{w}\} \tag{22}\] \[= -\kappa\{h(w)\frac{1}{r^{2}}\partial_{r}r^{2}\partial_{r}+ \partial_{w}k(w)\partial_{w}\}.\]
In this calculation, we take \(\hat{A}_{i}=0\) using the gauge and rotational symmetry.
We consider the CS term \(S_{\rm CS}\) as the next leading order of the \(1/\lambda\) expansion. For the static SO(3)-rotationally symmetric configuration in the \(A_{0}=0\) gauge, the CS term \(S_{\rm CS}\) in Eq. (4) is transformed as [18; 29; 30; 31]
\[S_{\rm CS} = \frac{N_{c}}{24\pi^{2}}\epsilon_{MNPQ}\int d^{4}xdw\left[\frac{3 }{8}\hat{A}_{0}{\rm tr}(F_{MN}F_{PQ})\right. \tag{23}\] \[-\frac{3}{2}\hat{A}_{M}{\rm tr}(\partial_{0}A_{N}F_{PQ})+\frac{3 }{4}\hat{F}_{MN}{\rm tr}(A_{0}F_{PQ})\] \[+\frac{1}{16}\hat{A}_{0}\hat{F}_{MN}\hat{F}_{PQ}-\frac{1}{4}\hat {A}_{M}\hat{F}_{0N}\hat{F}_{PQ}\bigg{]}\] \[= \frac{N_{c}}{2}\int d^{4}xdw~{}\rho_{B}\hat{A}_{0},\]
up to a total derivative. This is a Coulomb-type interaction between the U(1) gauge potential \(\hat{A}_{0}\) and the topological density \(\rho_{B}\equiv\frac{1}{16\pi^{2}}{\rm tr}(F_{MN}\tilde{F}_{MN})\).
Then, the total U(1) action depending on the U(1) gauge field \(\hat{A}\) is written as
\[S^{\rm U(1)}\equiv S^{\rm U(1)}_{\rm SYM}+S_{\rm CS} \tag{24}\] \[= \int d^{4}xdw\bigg{[}\frac{1}{2}\hat{A}_{0}K\hat{A}_{0}+\frac{N_{c}}{2}\rho_{B}\hat{A}_{0}-\frac{\kappa}{2}k(w)(\partial_{i}\hat{A}_{w})^{2}\bigg{]},\]
which leads to the field equations,
\[K\hat{A}_{0}+\frac{N_{c}}{2}\rho_{B}=0,~{}~{}\partial_{i}^{2} \hat{A}_{w}=0. \tag{25}\]
For the static configuration, the additional energy \(E^{\rm U(1)}\) from U(1) sector \(S^{\rm U(1)}\) is simply given by
\[E^{\rm U(1)}=-S^{\rm U(1)}/\int dt \tag{26}\] \[= \int d^{3}xdw\bigg{[}-\frac{1}{2}\hat{A}_{0}K\hat{A}_{0}-\frac{N _{c}}{2}\rho_{B}\hat{A}_{0}+\frac{\kappa}{2}k(w)(\partial_{i}\hat{A}_{w})^{2} \bigg{]}.\]
For the static case, \(\hat{A}_{w}\) is dynamically isolated in the field equation (25) and the last term is non-negative in the energy (26), and therefore we set \(\hat{A}_{w}=0\), which satisfies the local energy-minimum condition. Then, one finds
\[E^{\rm U(1)}=-\int d^{3}xdw\bigg{[}\frac{1}{2}\hat{A}_{0}K\hat{A}_{0}+\frac{N _{c}}{2}\rho_{B}\hat{A}_{0}\bigg{]}. \tag{27}\]
For the SO(3) rotationally symmetric solution, we eventually obtain the static energy of the U(1) part:
\[E^{\rm U(1)}[\rho_{B}(r,w),\hat{A}_{0}(r,w)]=4\pi\int_{0}^{\infty }drr^{2}\int_{-\infty}^{\infty}dw \tag{28}\] \[\bigg{[}\frac{1}{2}\hat{A}_{0}(r,w)K\hat{A}_{0}(r,w)+\frac{N_{c} }{2}\rho_{B}(r,w)\hat{A}_{0}(r,w)\bigg{]}\] \[=\int_{0}^{\infty}drr\int_{-\infty}^{\infty}dw\] \[\bigg{[}\frac{1}{2}\hat{A}_{0}(r,w)\tilde{K}\hat{A}_{0}(r,w)+2\pi N _{c}\tilde{\rho}_{B}(r,w)\hat{A}_{0}(r,w)\bigg{]}\] \[=4\pi\int_{0}^{\infty}drr^{2}\int_{-\infty}^{\infty}dw~{}\mathcal{ E}^{\rm U(1)}(r,w),\]
using \(\tilde{\rho}_{B}(r,w)\equiv r^{2}\rho_{B}(r,w)\) and the hermite kernel \(\tilde{K}\) in \((r,w)\)-space,
\[\tilde{K}\equiv 4\pi r^{2}K=-4\pi\kappa\{h(w)\partial_{r}r^{2}\partial_{r}+r^{2} \partial_{w}k(w)\partial_{w}\}. \tag{29}\]
For the numerical calculation of the U(1) sector, we mainly use this energy functional. (For another expression of \(E^{\rm U(1)}\) after \(\hat{A}^{0}\) path-integration, see Appendix B.)
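As a concrete illustration of how the field equation (25) for \(\hat{A}_{0}\) can be solved numerically on such a grid, the following is a minimal Gauss-Seidel relaxation sketch with the kernel of Eq. (22) discretized by naive finite differences; the staggered grid and Dirichlet outer boundaries are assumptions of this sketch, not the lattice formalism of Appendix A (the pure-Python loops are slow and only meant to show the update rule):

```python
import numpy as np

def solve_A0(rho_B, dr=0.2, dw=0.2, kappa=7.46e-3, Nc=3, n_sweep=500):
    """Gauss-Seidel relaxation of  K A0 + (Nc/2) rho_B = 0,
       K = -kappa [ h(w) r^-2 d_r r^2 d_r + d_w k(w) d_w ],
    with rho_B[i, j] sampled at r_i = (i + 0.5) dr and w_j centered on w = 0,
    and A0 = 0 imposed on the outer boundaries."""
    nr, nw = rho_B.shape
    r = (np.arange(nr) + 0.5) * dr
    w = (np.arange(nw) - (nw - 1) / 2.0) * dw
    k = lambda x: 1.0 + x**2
    h = lambda x: k(x) ** (-1.0 / 3.0)
    A0 = np.zeros((nr, nw))
    for _ in range(n_sweep):
        for i in range(nr - 1):
            rp2, rm2 = (r[i] + 0.5 * dr) ** 2, (r[i] - 0.5 * dr) ** 2
            for j in range(1, nw - 1):
                kp, km = k(w[j] + 0.5 * dw), k(w[j] - 0.5 * dw)
                AE = A0[i + 1, j]
                AW = A0[i - 1, j] if i > 0 else A0[i, j]   # reflecting axis at r = 0
                AN, AS = A0[i, j + 1], A0[i, j - 1]
                lap = (h(w[j]) / r[i] ** 2 * (rp2 * AE + rm2 * AW) / dr ** 2
                       + (kp * AN + km * AS) / dw ** 2)
                coef = (h(w[j]) / r[i] ** 2 * (rp2 + rm2) / dr ** 2
                        + (kp + km) / dw ** 2)
                A0[i, j] = (lap - 0.5 * Nc * rho_B[i, j] / kappa) / coef
    return A0
```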
Note again that, at the leading order of \(1/\lambda\), the U(1) sector (\(\hat{A}\)) completely decouples from the SU(2)\({}_{f}\) sector (\(\vec{a}\), \(\phi\)) because the leading term is only the Yang-Mills action (1). However, at the next-to-leading order of \(1/\lambda\), the U(1) part affects the SU(2)\({}_{f}\) part through the CS term, as in Eq. (23).
To summarize, the total energy \(E\) comprises two parts,
\[E[\phi(r,w),\vec{a}(r,w),\hat{A}_{0}(r,w)] \tag{30}\] \[= E_{\rm SYM}^{\rm SU(2)}[\phi,\vec{a}]+E^{\rm U(1)}[\rho_{B}, \hat{A}_{0}],\]
and the total energy density \(\mathcal{E}(r,w)\) is written as
\[\mathcal{E}(r,w)=\mathcal{E}^{\rm SU(2)_{f}}(r,w)+\mathcal{E}^{\rm U(1)}(r,w), \tag{31}\]
and we have to deal with coupled field equations of the SU(2)\({}_{f}\) and U(1) sectors and will perform the numerical calculation in a consistent manner.
## IV Vortex description of baryons
In this section, we introduce the vortex description of baryons in holographic QCD by applying the generalized Witten Ansatz. For a single baryon, which
is \((x,y,z)\)-rotationally symmetric, we apply the Witten Ansatz and reduce the theory into a 1+2 dimensional Abelian Higgs theory in a curved space. In the reduced theory, the holographic baryon is expressed as a two-dimensional topological object, an Abrikosov vortex. We perform the numerical calculation of the \(B=1\) solution of holographic QCD as the single ground-state baryon, using a fine and large lattice with a spacing of 0.04 fm and a size of 10 fm.
### Holographic baryons in Witten Ansatz
Large-\(N_{c}\) analyses of QCD indicate that the explicit degrees of freedom are only mesons and glueballs, and baryons appear as solitons (topological objects) constructed from meson fields [14]. Also, holographic QCD based on large \(N_{c}\) becomes an effective theory of mesons, and baryons appear as chiral solitons composed of meson fields in this framework [5; 15]. Holographic QCD has a four-dimensional space \((x,y,z,w)\), and instantons naturally appear as the relevant topological objects in this four-dimensional space. These topological objects are physically identified as baryons in holographic QCD [18] and are called holographic baryons.
Remarkably, the Witten Ansatz generally converts the topological description from a four-dimensional instanton into a two-dimensional vortex [28]. Accordingly, the vortex number is interpreted as the baryon number in holographic QCD with the Witten Ansatz [29].
In fact, the baryon number \(B\) or the Pontryagin index is written by a contour integral in the \((r,w)\)-plane
\[B = \int d^{3}xdw\ \rho_{B} \tag{32}\] \[= \frac{1}{2\pi}\int_{0}^{\infty}dr\int_{-\infty}^{\infty}dw\epsilon _{ij}\partial_{i}\{a_{j}(1-|\phi|^{2})+\partial_{j}\theta\cdot|\phi|^{2}\}\] \[= \oint_{r\geq 0}d{\bf s}\cdot\{{\bf a}(1-|\phi|^{2})+\nabla\theta \cdot|\phi|^{2}\}\] \[= \oint_{r\geq 0}d{\bf s}\cdot\nabla\theta,\]
where \(\oint_{r\geq 0}\) denotes the contour integral around the whole half-plane of \((r,w)\) with \(r\geq 0\). To keep the energy (18) finite, we have imposed the following boundary conditions:
\[| \phi(r=0,^{\forall}\!w)|=1, \tag{33}\] \[| \phi(^{\forall}\!r,w=\pm\infty)|=|\phi(r=\infty,^{\forall}\!w)|=1, \tag{34}\]
at the edge of the \((r,w)\) half-plane. Thus, the baryon number \(B\) is converted into the vortex number in this formalism [29].
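On a lattice configuration, the winding number in Eq. (32) can be evaluated directly from the phase of \(\phi\) along the boundary of the \((r,w)\) half-plane; a minimal sketch (assuming \(|\phi|\simeq 1\) on the boundary and a resolved phase, i.e. phase steps smaller than \(\pi\)):

```python
import numpy as np

def baryon_number(phi, eps=1e-12):
    """Winding number of the complex Higgs field phi[i, j] (at (r_i, w_j))
    around the boundary of the (r, w) half-plane, cf. Eq. (32)."""
    nr, nw = phi.shape
    # boundary contour: w = w_min (r up), r = r_max (w up),
    #                   w = w_max (r down), r = r_min (w down)
    path = ([(i, 0) for i in range(nr)]
            + [(nr - 1, j) for j in range(1, nw)]
            + [(i, nw - 1) for i in range(nr - 2, -1, -1)]
            + [(0, j) for j in range(nw - 2, -1, -1)])
    total = 0.0
    for (i1, j1), (i2, j2) in zip(path, path[1:] + path[:1]):
        total += np.angle(phi[i2, j2] / (phi[i1, j1] + eps))  # phase increment
    return total / (2.0 * np.pi)
```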
### Abrikosov vortex solution for a baryon in holographic QCD
In this subsection, we numerically calculate a ground-state baryon of holographic QCD through the Abrikosov vortex description in the 1+2 dimensional U(1) Abelian Higgs theory [29].
Imposing the global condition of \(B=1\), we numerically minimize the total energy \(E[\phi,\vec{a},\hat{A}_{0}]\) in Eq. (30), which is equivalent to solving the equation of motion (EOM) of holographic QCD for the single ground-state baryon.
Regarding the two parameters \(M_{\rm KK}\) and \(\kappa\) in holographic QCD, we take \(M_{\rm KK}\simeq 948\) MeV, and \(\kappa=7.46\times 10^{-3}\) to reproduce \(f_{\pi}\simeq 92.4\) MeV and \(m_{\rho}\simeq 776\) MeV [5; 15]. In this study, we have used the Kaluza-Klein unit of \(M_{\rm KK}=1\).
For the numerical calculation on the \((r,w)\) plane, we adopt a fine and large lattice with a spacing of \(0.2\ M_{\rm KK}^{-1}\simeq 0.04\) fm and an extension of \(0\leq r\leq 250\) and \(-125\leq w\leq 125\) in lattice units; that is, the system size is a \(250\times 250\) grid corresponding to \((50\ M_{\rm KK}^{-1})^{2}\simeq(10\ {\rm fm})^{2}\) in physical units. (In this numerical calculation, subtle cancellations appear, and the use of a coarse lattice might lead to an inaccurate result. Also, the lattice size should be increased until the volume dependence of physical quantities disappears.)
On this lattice, starting from the 't Hooft solution [26; 27] as a \(B=1\) topological configuration, we numerically minimize the total energy \(E[\phi,\vec{a},\hat{A}_{0}]\) while keeping the topological charge, by iterative improvement, that is, repeated local updates of the field variables \(\phi\), \(\vec{a}\), and \(\hat{A}_{0}\). (See Appendix A for details.) During the updates, the Higgs field \(\phi\) always has a zero point to ensure \(B=1\), and the Higgs zero point generally moves so as to realize the ground state. In this way, we eventually obtain the holographic fields \(\phi(r,w)\), \(\vec{a}(r,w)\), and \(\hat{A}_{0}(r,w)\) for the ground-state baryon as the true solution of the EOM in holographic QCD.
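The following is a minimal, self-contained sketch of such an energy-minimization step for the SU(2)\({}_{f}\) part of Eq. (18) only, using naive (non-gauge-covariant) forward differences and gradient descent instead of the local-update scheme of Appendix A; the U(1)/CS contribution is omitted, and all quantities are in \(M_{\rm KK}=1\) units:

```python
import torch

def su2_energy(phi_re, phi_im, a_r, a_w, dr=0.2, dw=0.2, kappa=7.46e-3):
    """Discretized SU(2)_f static energy of Eq. (18) (U(1)/CS part omitted).
    Fields are (nr, nw) tensors at r_i = (i + 0.5) dr and w_j centered on w = 0;
    naive forward differences -- a sketch, not the lattice formalism of Appendix A."""
    nr, nw = phi_re.shape
    r = (torch.arange(nr, dtype=phi_re.dtype) + 0.5) * dr
    w = (torch.arange(nw, dtype=phi_re.dtype) - (nw - 1) / 2.0) * dw
    kw = 1.0 + w**2
    hw = kw ** (-1.0 / 3.0)
    R, H, K = r[:-1, None], hw[None, :-1], kw[None, :-1]
    p1, p2 = phi_re[:-1, :-1], phi_im[:-1, :-1]
    ar, aw = a_r[:-1, :-1], a_w[:-1, :-1]
    # covariant derivatives D_mu phi = (d_mu - i a_mu) phi, split into Re/Im parts
    d1p1 = (phi_re[1:, :-1] - p1) / dr + ar * p2
    d1p2 = (phi_im[1:, :-1] - p2) / dr - ar * p1
    d2p1 = (phi_re[:-1, 1:] - p1) / dw + aw * p2
    d2p2 = (phi_im[:-1, 1:] - p2) / dw - aw * p1
    f12 = (a_w[1:, :-1] - aw) / dr - (a_r[:-1, 1:] - ar) / dw
    dens = (H * (d1p1**2 + d1p2**2) + K * (d2p1**2 + d2p2**2)
            + H / (2 * R**2) * (1 - p1**2 - p2**2) ** 2
            + 0.5 * R**2 * K * f12**2)
    return 4 * torch.pi * kappa * dens.sum() * dr * dw


def relax(phi_re, phi_im, a_r, a_w, n_steps=5000, lr=1e-2):
    """Gradient relaxation toward the ground-state vortex.  The initial fields
    are assumed to carry B = 1 (e.g. a 't Hooft-like seed); keeping the boundary
    values of phi fixed preserves the winding during the descent."""
    fields = [t.clone().requires_grad_(True) for t in (phi_re, phi_im, a_r, a_w)]
    frozen = [phi_re.clone(), phi_im.clone()]            # boundary data with B = 1
    edge = torch.zeros_like(phi_re, dtype=torch.bool)
    edge[0, :] = edge[-1, :] = edge[:, 0] = edge[:, -1] = True
    opt = torch.optim.Adam(fields, lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        su2_energy(*fields).backward()
        opt.step()
        with torch.no_grad():                            # re-impose |phi| = 1 edge data
            fields[0][edge] = frozen[0][edge]
            fields[1][edge] = frozen[1][edge]
    return [t.detach() for t in fields], float(su2_energy(*fields))
```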
To confirm the numerical calculations, we also consider an alternative method, as shown in Appendix B. In this alternative method, we integrate out the U(1) gauge field \(\hat{A}_{0}\) and update only \(\phi(r,w)\) and \(\vec{a}(r,w)\) on the lattice based on Eq. (14). We have confirmed that both methods give the same numerical results for holographic baryons.
To visualize the gauge and Higgs fields composing the Abrikosov vortex, we take the Landau gauge for the U(1) gauge degrees of freedom, \(\partial_{i}a_{i}(r,w)=0\). Of course, the main results, including the total energy, are gauge invariant and do not depend on the gauge choice.
For the single ground-state baryon, we show the field configurations \(\phi(r,w)\) and \(\vec{a}(r,w)\) in Fig. 1, and the U(1) gauge field \(\hat{A}_{0}(r,w)\) in Fig. 2. The Higgs field \(\phi(r,w)\) shows a clear topological structure characterizing the Abrikosov vortex, which is mapped into an instanton [28] via the Witten Ansatz and physically represents a baryon in holographic QCD [29; 30]. In accordance with the field
equation (25), \(\hat{A}_{0}\) is localized around the non-vanishing topological density \(\rho_{B}\), which will be shown in Fig. 3.
Quantitatively, unlike the initial 't Hooft solution, Fig. 1 no longer shows the symmetry between \(r\) and \(w\) for the ground-state baryon solution in holographic QCD. For the ground-state baryon, both profiles of \(\phi\) and \(\vec{a}\) are slightly shrunk in the \(w\) direction compared with the four-dimensional spherical 't Hooft solution, as was also found in the previous numerical studies [29; 30; 31].
### Properties of the ground-state baryon in holographic QCD
Now, we show the properties of the ground-state baryon in holographic QCD, which is numerically calculated by minimizing the total energy \(E[\phi,\vec{a},\hat{A}_{0}]\) in Eq. (30). In Fig. 3, we show the topological density \(\rho_{B}(r,w)\) and total energy density \(\mathcal{E}(r,w)\) in the \((r,w)\)-plane for the Abrikosov vortex solution in the 1+2 dimensional Abelian Higgs theory. Both densities have a peak around \((r,w)=(0,0)\) and are extended in both the \(r\) and \(w\) directions. We show in Fig. 4 the densities multiplied by the integral measure factor \(r^{2}\), i.e., \(4\pi r^{2}\rho_{B}(r,w)\) and \(4\pi r^{2}\mathcal{E}(r,w)\), for the ground-state baryon, since \(\tilde{\rho}_{B}(r,w)\equiv r^{2}\rho_{B}(r,w)\) is a primary variable in this numerical calculation, as shown in Eq. (29). The non-zero size of the baryon is due to the repulsive force from the CS term [18].
In general, the integrated topological density is the baryon number \(B\), and the baryon mass \(M_{B}\) is given by integration of the total energy density \(\mathcal{E}(r,w)\):
\[B = \int d^{3}xdw\;\rho_{B}(r,w) \tag{35}\] \[= 4\pi\int_{0}^{\infty}drr^{2}\int_{-\infty}^{\infty}dw\;\rho_{B}( r,w),\] \[M_{B} = \int d^{3}xdw\;\mathcal{E}(r,w)\] (36) \[= 4\pi\int_{0}^{\infty}drr^{2}\int_{-\infty}^{\infty}dw\;\mathcal{ E}(r,w).\]
By integration over the extra coordinate \(w\), we obtain
Figure 1: Vortex description for the single ground-state baryon: the Higgs field \(\phi(r,w)=(\text{Re}[\phi],\text{Im}[\phi])\) (upper) and the Abelian gauge field \(a(r,w)=(a_{1},a_{2})\) (lower) in the Landau gauge. The Kaluza-Klein unit of \(M_{\text{KK}}=1\) is used. The Higgs field has a zero point at \((r,w)\simeq(2.4,0)\), and its winding number around the zero point is equal to the baryon number, that is, \(B=1\).
Figure 2: Temporal component of the U(1) gauge field \(\hat{A}_{0}(r,w)\) in the ground-state holographic baryon in the \(M_{\text{KK}}=1\) unit. This U(1) gauge field \(\hat{A}_{0}\) directly interacts with the topological density \(\rho_{B}\) and leads to a repulsive force between topological densities, like the Coulomb interaction in QED.
the ordinary densities in a three-dimensional space,
\[\rho_{B}(r) \equiv \int_{-\infty}^{\infty}dw~{}\rho_{B}(r,w), \tag{37}\] \[{\cal E}(r) \equiv \int_{-\infty}^{\infty}dw~{}{\cal E}(r,w). \tag{38}\]
The mass \(M_{B}\) and size for the ground-state baryon are estimated as
\[M_{B} = E_{\rm SYM}^{\rm SU(2)_{f}}+E^{\rm U(1)}=\int d^{3}x~{}{\cal E}(r) \tag{39}\] \[\simeq 1.25~{}M_{\rm KK}\simeq 1.19~{}{\rm GeV},\]
\[\sqrt{\left\langle r^{2}\right\rangle}_{\rho_{B}} \equiv \sqrt{\frac{\int d^{3}x~{}\rho_{B}(r)r^{2}}{\int d^{3}x~{}\rho_{B}(r)}}\] \[\simeq 2.58{M_{\rm KK}}^{-1}\simeq 0.54~{}{\rm fm}, \tag{40}\]
\[\sqrt{\left\langle r^{2}\right\rangle}_{\cal E} \equiv \sqrt{\frac{\int d^{3}x~{}{\cal E}(r)r^{2}}{\int d^{3}x~{}{\cal E}(r)}} \tag{41}\] \[\simeq 2.93~{}{M_{\rm KK}}^{-1}\simeq 0.61~{}{\rm fm}.\]
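In practice, the integrals (35)-(36) and the weighted radius (40) are evaluated by direct quadrature on the \((r,w)\) grid. A minimal Python sketch of this step is given below; the density arrays are simple stand-ins for the densities of the minimized fields, so only the quadrature recipe itself should be taken literally.

```python
import numpy as np

# Quadrature recipe for Eqs. (35)-(36) and (40) on the (r, w) grid.  The
# density arrays below are stand-ins; in the actual calculation they are the
# topological and energy densities of the minimized holographic fields.
r = np.linspace(0.0, 50.0, 251)                  # 0 <= r <= 50 in M_KK^-1
w = np.linspace(-25.0, 25.0, 251)                # -25 <= w <= 25 in M_KK^-1
dr, dw = r[1] - r[0], w[1] - w[0]
R, W = np.meshgrid(r, w, indexing="ij")

def integrate(density):
    """4*pi * int dr r^2 int dw density(r, w), as in Eqs. (35)-(36)."""
    return 4.0 * np.pi * np.sum(density * R**2) * dr * dw

rho_B = np.exp(-(R**2 + W**2) / 4.0)             # stand-in topological density
rho_B /= integrate(rho_B)                        # normalize so that B = 1
eps = 1.25 * rho_B                               # stand-in energy density (M_B = 1.25 M_KK)

B     = integrate(rho_B)                         # baryon number, Eq. (35)
M_B   = integrate(eps)                           # baryon mass,   Eq. (36)
r_rms = np.sqrt(integrate(rho_B * R**2) / B)     # rms radius,    Eq. (40)
print(B, M_B, r_rms)
```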
A few cautions are in order. When the self-dual BPS-saturated 't Hooft solution [26; 27] is simply used, as was done in Ref. [18], the holographic baryon has an overestimated mass \(M_{B}^{\rm BPS}\simeq 1.35~M_{\rm KK}\simeq 1.28~{\rm GeV}\) and a smaller radius \(\sqrt{\left\langle r^{2}\right\rangle_{\rho_{B}}^{\rm BPS}}\simeq 2.2~M_{\rm KK}^{-1}\simeq 0.46~{\rm fm}\), as shown in Appendix C. (Because of such an overestimation, a small value of \(M_{\rm KK}=500~{\rm MeV}\) was adopted to adjust baryon masses in Ref. [18], which significantly differs from \(M_{\rm KK}\simeq 1~{\rm GeV}\) for the meson sector.) Another
Figure 3: Topological density \(\rho_{B}\left(r,w\right)\) (upper) and total energy density \({\cal E}(r,w)\) (lower) for the ground-state solution of a single holographic baryon in the \(M_{\rm KK}=1\) unit. Both densities have a peak around \((r,w)=(0,0)\).
Figure 4: \(r^{2}\)-multiplied topological and energy densities: \(4\pi r^{2}\rho_{B}(r,w)\) (upper) and \(4\pi r^{2}{\cal E}(r,w)\) (lower) for the ground-state solution of a single holographic baryon in the \(M_{\rm KK}=1\) unit. Both figures have a peak around \((r,w)\simeq(2,0)\) near the vortex center, which roughly controls the baryon size.
caution is the numerical accuracy: fine and large lattices should be used for the numerical calculation. Owing to a relatively coarse and small-size lattice, the numerical results in the previous paper [29] include about 20% errors in the baryon mass and size.
Now, we investigate the spatial distribution of the baryon-number and energy densities for the ground-state baryon in holographic QCD. Figure 5 shows the baryon-number density \(\rho_{B}(r)\) (i.e. the topological density) and the total energy density \(\mathcal{E}(r)\), together with their \(r^{2}\)-multiplied values, \(4\pi r^{2}\rho_{B}(r)\) and \(4\pi r^{2}\mathcal{E}(r)\). One finds a significant difference between the shapes of \(\rho_{B}(r)\) and \(\mathcal{E}(r)\) in the small-\(r\) region.
Next, we investigate the energy contribution from the \(\mathrm{SU}(2)_{f}\) and \(\mathrm{U}(1)\) parts, respectively. For the mass (total static energy) \(M_{B}=E_{\mathrm{SYM}}^{\mathrm{SU}(2)_{f}}+E^{\mathrm{U}(1)}\) of the ground-state holographic baryon, we obtain
\[E_{\mathrm{SYM}}^{\mathrm{SU}(2)_{f}} \simeq 1.00M_{\mathrm{KK}}\simeq 0.95\ \mathrm{GeV}, \tag{42}\] \[E^{\mathrm{U}(1)} \simeq 0.25M_{\mathrm{KK}}\simeq 0.24\ \mathrm{GeV}, \tag{43}\]
and hence the \(\mathrm{SU}(2)_{f}\) contribution (leading order of \(1/\lambda\) expansion) is found to be quantitatively dominant.
As spatial distribution, the \(\mathrm{SU}(2)_{f}\) and \(\mathrm{U}(1)\) energy densities are expressed as
\[r^{2}\mathcal{E}^{\mathrm{SU}(2)_{f}}(r)=\kappa\int_{-\infty}^{ \infty}dw\bigg{[}h(w)|D_{1}\phi|^{2}+k(w)|D_{2}\phi|^{2}\] \[+\frac{h(w)}{2r^{2}}\{1-|\phi|^{2}\}^{2}+\frac{r^{2}}{2}k(w)f_{1 2}^{2}\bigg{]}, \tag{44}\] \[\mathcal{E}^{\mathrm{U}(1)}(r)=\int_{-\infty}^{\infty}dw\] \[\bigg{[}\frac{1}{2}\hat{A}_{0}(r,w)K\hat{A}_{0}(r,w)+\frac{N_{c} }{2}\rho_{B}(r,w)\hat{A}_{0}(r,w)\bigg{]}. \tag{45}\]
Figure 6 shows \(\mathrm{SU}(2)_{f}\) energy density \(\mathcal{E}^{\mathrm{SU}(2)_{f}}(r)\) and \(\mathrm{U}(1)\) energy density \(\mathcal{E}^{\mathrm{U}(1)}(r)\) of the ground-state baryon, together with the total energy density \(\mathcal{E}(r)\) and baryon density \(\rho_{B}(r)\).
The dominant contribution is the \(\mathrm{SU}(2)_{f}\) part, and the total value \(\mathcal{E}(r)\) is well approximated by the \(\mathrm{SU}(2)_{f}\) energy density \(\mathcal{E}^{\mathrm{SU}(2)_{f}}(r)\), which seems consistent with the \(1/\lambda\) expansion. In particular, \(\mathcal{E}^{\mathrm{SU}(2)_{f}}(r)\) and \(\mathcal{E}(r)\) have the same slope at the origin \(r=0\) and show an enhancement in the small-\(r\) region, although it is masked by the integral measure \(r^{2}\). The shape of \(\mathcal{E}^{\mathrm{U}(1)}(r)\) seems to follow \(\rho_{B}(r)\), reflecting the direct coupling between \(\hat{A}_{0}\) and \(\rho_{B}\) in the CS term (23).
Finally, we investigate self-duality breaking of the holographic baryon. The different shapes of the topological and energy densities originate from the background gravity, \(h(w)\) and \(k(w)\), and the presence of the CS term. In the case of \(h(w)=k(w)=1\) without the CS term, the instanton solution has exact self-duality, \(F_{MN}=\tilde{F}_{MN}\), leading to \(\rho_{B}(r,w)\propto\mathcal{E}(r,w)\). Hence, the functional forms of \(\rho_{B}(r)\) and \(\mathcal{E}(r)\) are forced to be the same. In fact, the similarity between the shapes of \(\rho_{B}(r)\) and \(\mathcal{E}(r)\) indicates a self-dual tendency, while their difference indicates self-duality breaking.
For the single baryon case \(B=1\), we introduce the self-duality breaking parameter defined by
\[\Delta_{\mathrm{DB}} \equiv \frac{\int d^{3}xdw\ \mathrm{tr}(F_{MN}F_{MN}-F_{MN}\tilde{F}_{MN})}{ \int d^{3}xdw\ \mathrm{tr}(F_{MN}\tilde{F}_{MN})} \tag{46}\] \[= \frac{1}{32\pi^{2}}\int d^{3}xdw\ \mathrm{tr}(F_{MN}-\tilde{F}_{MN})^{2},\]
which is normalized by the topological quantity. This parameter is non-negative and becomes zero only in the exact self-dual case. The self-duality breaking of the ground-state baryon is found to be \(\Delta_{\mathrm{DB}}\simeq 0.17\), which is non-zero but seems small. The small value might indicate that the true holographic configuration is close to self-dual. Then, the ground-state baryon might be approximated by the self-dual 't Hooft instanton in holographic QCD.
Figure 5: Baryon density \(\rho_{B}(r)\) and total energy density \(\mathcal{E}(r)\) for the ground-state baryon. The lower panel shows \(4\pi r^{2}\rho_{B}(r)\) and \(4\pi r^{2}\mathcal{E}(r)\), including the integral measure factor \(r^{2}\). Despite a significant difference between \(\rho_{B}(r)\) and \(\mathcal{E}(r)\) around the origin, this difference is reduced by the \(r^{2}\) multiplication. Here, the \(M_{\mathrm{KK}}=1\) unit is taken.
## V Size-dependence of a holographic baryon
As mentioned above, by using the Witten Ansatz, 1+4 dimensional holographic QCD is reduced to a 1+2 dimensional Abelian Higgs theory. Accordingly, the topological description of a baryon changes from an instanton to a vortex. In the previous section, we numerically obtained the ground-state solution in holographic QCD, where the baryon size is automatically determined by minimizing the total energy.
Now, let us consider holographic baryons of various sizes. Note that the size is originally one of the moduli of an instanton in flat space, and baryons of different sizes would be degenerate in holographic QCD if one set \(h(w)=k(w)=1\) and neglected the CS term. In real holographic QCD, the size parameter is no longer a modulus, owing to the background gravity and the CS term. Nevertheless, the size might behave as a quasi-modulus of the holographic baryon, resulting in the physical appearance of a soft vibrational mode as a low-lying excitation. In this section, we therefore investigate baryons of various sizes and the size dependence of the energy in holographic QCD.
First, we consider the ordinary Yang-Mills theory and examine the moduli relation between an instanton and a vortex in the Witten Ansatz, as was originally shown by Witten [28].
Next, we proceed to the holographic baryon, and consider how to obtain an arbitrary-size baryon as a solution of holographic QCD, and investigate the size dependence of the baryon mass.
### Instanton-vortex correspondence in Yang-Mills theory
In the four-dimensional Euclidean Yang-Mills theory, there exist topological solutions, instantons, for which Euclidean time is necessary. A single-instanton solution (the BPST-'t Hooft solution [26; 27]) is written as
\[A_{\mu}(x)=-\eta^{a}_{\mu\nu}\tau^{a}\frac{(x-X)^{\nu}}{(x-X)^{2}+R^{2}}, \tag{47}\]
where the instanton center is located at \(x^{\mu}=X^{\mu}\), and \(R\) denotes the instanton size. Together with the color rotation, the location \(X^{\mu}\) and the size \(R\) are known as moduli, and they represent degrees of freedom of this topological solution, that is, their values do not affect the Yang-Mills action. Here, \(\eta^{a}_{\mu\nu}\) denotes the 't Hooft symbol [27] defined by
\[\eta^{a}_{\mu\nu}=-\eta^{a}_{\nu\mu}=\left\{\begin{array}{ll}\epsilon_{a\mu \nu}&\text{for}\ \ \mu,\nu=1,2,3\\ -\delta_{a\nu}&\text{for}\ \ \mu=4\\ \delta_{a\mu}&\text{for}\ \ \nu=4.\end{array}\right. \tag{48}\]
Taking its center \(X^{\mu}=(\vec{0},T)\), there is SO(3) rotational symmetry in \((x,y,z)\)-space, and the Witten Ansatz is applicable. With the Witten Ansatz, the four-dimensional Yang-Mills theory is reduced into a two-dimensional Abelian Higgs theory, and this instanton can be described by a single vortex.
To understand the relation between an instanton and a vortex, let us consider the 't Hooft solution in the form of the Witten Ansatz. This represents a single instanton solution and is rewritten as
\[A^{a}_{i} = \frac{2r}{r^{2}+(t-T)^{2}+R^{2}}\epsilon_{iaj}\hat{x}_{j} \tag{49}\] \[-\frac{2(t-T)}{r^{2}+(t-T)^{2}+R^{2}}(\hat{\delta}_{ia}+\hat{x}_{i}\hat{x}_{a}),\] \[A^{a}_{t} = \frac{2r}{r^{2}+(t-T)^{2}+R^{2}}\hat{x}_{a}. \tag{50}\]
By comparing the functional form with Eqs. (5) and (6), SU(2) gauge fields can be converted into the fields of the reduced Abelian Higgs theory, and their forms are obtained as
\[(\phi_{1},\phi_{2}+1)=\frac{2r}{r^{2}+(t-T)^{2}+R^{2}}\left(-(t-T), r\right), \tag{51}\] \[a_{1}=\frac{-2(t-T)}{r^{2}+(t-T)^{2}+R^{2}},\ a_{2}=\frac{2r}{r^ {2}+(t-T)^{2}+R^{2}}, \tag{52}\]
The complex Higgs field \(\phi=({\rm Re}\phi,\ {\rm Im}\phi)=(\phi_{1},\phi_{2})\) takes zero at \((r,t)=(R,T)\equiv\zeta\). The vortex number is counted as the zero-point number in the Higgs field \(\phi\) in the \((r,t)\)-plane. Now, there is one zero point at \(\zeta\), and this configuration represents a single vortex.
Thus, in the Witten Ansatz, the vortex corresponding to a single instanton has a zero point \(\zeta\) of the Higgs field \(\phi\). Remarkably, this Higgs-field zero point \(\zeta\) relates to instanton parameters [28],
\[\zeta=(\zeta_{r},\zeta_{t})=(R,\ T). \tag{53}\]
In fact, the instanton size \(R\) and Euclidean fourth-coordinate \(T\) of the instanton center correspond to this zero point \(\zeta\) in the Higgs field \(\phi\). In this configuration, the topological density is written by
\[\rho_{B}=\frac{6}{\pi^{2}}\frac{R^{4}}{[r^{2}+(t-T)^{2}+R^{2}]^{4}}. \tag{54}\]
The topological density localizes around \((r,t)=(0,T)\), and its extension is about the size parameter \(R\), which reflects the instanton size.
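As a simple cross-check, the unit baryon number carried by this density can be verified numerically; the short Python sketch below (illustrative only) integrates Eq. (54) over four-dimensional Euclidean space.

```python
import numpy as np
from scipy.integrate import quad

# Check that the 't Hooft topological density of Eq. (54) carries unit baryon
# number: in 4D Euclidean space, int d^4x f(|x|) = 2*pi^2 * int_0^inf dx x^3 f(x).
R = 1.7   # arbitrary size parameter; the result must be independent of R

def rho(x):
    return 6.0 / np.pi**2 * R**4 / (x**2 + R**2)**4

B, _ = quad(lambda x: 2.0 * np.pi**2 * x**3 * rho(x), 0.0, np.inf)
print(B)   # ~1.0
```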
### Various-size baryon mass in holographic QCD
In this subsection, using the above correspondence (53) between the Higgs zero-point \(\zeta\) and the instanton size \(R\) in the Witten Ansatz, we try to control the baryon size by changing the location of the zero point \(\zeta\) in the Higgs field \(\phi\) in holographic QCD. Using this new viewpoint, we obtain various sizes of holographic baryons and investigate size dependence of the single baryon energy.
To begin with, we reformulate the above argument for 1+4 dimensional holographic QCD. We recall the points that differ from the ordinary Yang-Mills theory. First, holographic QCD already has the four-dimensional Euclidean space \((x,y,z,w)\), and the instanton can be naturally defined on this space including the fourth spatial coordinate \(w\), without resorting to a Euclidean continuation. Second, there appear the gravitational factors \(h(w)\) and \(k(w)\) and the CS term, which results in a repulsive interaction among the baryon density \(\rho_{B}\) in holographic QCD. Owing to these effects, the 't Hooft solution is no longer the exact solution but an approximate one in holographic QCD.
Therefore, we use the 't Hooft solution as a starting point for the \(B=1\) configuration, and search for the true solution of holographic QCD by a numerical iterative method while keeping the topological property unchanged, as in Sect. IV.
In 1+4 dimensional holographic QCD, taking the \(A_{0}=0\) gauge, we use a holographic version of the 't Hooft solution (47), as a starting point of the topological configuration. Here, we locate the instanton center at the four-dimensional spatial origin \((x,y,z,w)=(0,0,0,0)\) for symmetry of \((x,y,z)\)-spatial rotation and \(w\)-reflection, and then the initial configuration is set to be
\[A_{0} = 0, \tag{55}\] \[A_{M} = -\eta_{MN}\frac{x^{N}}{x^{2}+R^{2}}, \tag{56}\]
which can be rewritten as
\[A_{i}^{a} = \frac{2r}{r^{2}+w^{2}+R^{2}}\epsilon_{iaj}\hat{x}_{j} \tag{57}\] \[-\frac{2w}{r^{2}+w^{2}+R^{2}}(\hat{\delta}_{ia}+\hat{x}_{i}\hat{ x}_{a}),\] \[A_{w}^{a} = \frac{2r}{r^{2}+w^{2}+R^{2}}\hat{x}_{a}. \tag{58}\]
For the \((x,y,z)\)-rotational symmetric system, through the Witten Ansatz, the non-Abelian gauge fields are converted into the Higgs field \(\phi\) and Abelian gauge field \(\vec{a}\), and they are expressed as
\[(\phi_{1},\phi_{2}+1)=\frac{2r}{r^{2}+w^{2}+R^{2}}\left(-w,r\right), \tag{59}\] \[a_{1}=\frac{-2w}{r^{2}+w^{2}+R^{2}},\ a_{2}=\frac{2r}{r^{2}+w^{2 }+R^{2}}. \tag{60}\]
In Eq. (59), the zero point \(\zeta\) of the Higgs field \(\phi\) is found to be located at \(\zeta=(\zeta_{r},\zeta_{w})=(R,\ 0)\), and this \(\zeta_{r}\) determines the initial size of the holographic baryon.
From this initial configuration, as in Sect. IV, we numerically search for the single-baryon solution in holographic QCD by minimizing the total energy \(E[\phi,\vec{a},\hat{A}_{0}]\), with the Higgs-zero location fixed at
\[\zeta=(\zeta_{r},\zeta_{w})=(R,0), \tag{61}\]
where \(\zeta_{r}=R\) corresponds to the instanton/baryon size. In fact, to consider baryons of various sizes, we here use the correspondence between the instanton/baryon size and the zero point \(\zeta\) of the Higgs field \(\phi\) composing the Abrikosov vortex [28]. Note that the Higgs field must vanish at the center of the Abrikosov vortex to keep the energy finite, and therefore the Higgs field \(\phi\) inevitably has a zero point \(\zeta\) in the presence of the vortex corresponding to \(B=1\) in holographic QCD. Changing the zero-point location \(\zeta_{r}=R\) thus corresponds to changing the baryon size, and different choices lead to baryons of different sizes.
In summary, to obtain various-size baryon solutions, we fix this zero-point location \(\zeta=(\zeta_{r},\zeta_{w})=(R,0)\) of the Higgs field \(\phi\) as a boundary condition and minimize the total energy \(E\) numerically. When the total energy
\(E[\phi,\vec{a},\hat{A}_{0}]\) is minimized against arbitrary local variations of \(\phi\), \(\vec{a}\) and \(\hat{A}_{0}\), these fields satisfy the EOM of holographic QCD.
For the numerical calculation on the \((r,w)\) plane, we use the same lattice used in Sect. IV, that is, a fine and large-size lattice with spacing of \(0.2M_{\rm KK}^{-1}\simeq 0.04\) fm and the extension of \(0\leq r\leq 250\) and \(-125\leq w\leq 125\), of which physical size is \((50~{}M_{\rm KK}^{-1})^{2}\simeq(10~{}{\rm fm})^{2}\).
After an iterative improvement of holographic fields with the constraint of the Higgs zero-point location, we eventually obtain the Higgs and Abelian gauge fields,
\[\phi(r,w;R),\quad\vec{a}(r,w;R),\quad\hat{A}_{0}(r,w;R) \tag{62}\]
for the holographic baryon corresponding to the instanton size \(R\). Note again that the presence of a zero point of the Higgs field indicates a \(B=1\) configuration, and the holographic fields obeying the local energy minimum condition also satisfy the EOM of holographic QCD.
Figure 7 shows the static energy \(V(R)\) of the baryon for various sizes \(R\). The potential minimum corresponds to the ground-state baryon. The numerical data are well fitted by a quadratic function, as shown in Fig. 7. Note that each symbol represents a solution of holographic QCD under the size-fixing constraint, which can be regarded as a boundary condition.
We obtain the fitting quadratic function,
\[\begin{split}& V(R)\simeq A(R-R_{0})^{2}+M_{0},\\ & A\simeq 0.063,\quad R_{0}\simeq 2.4,\quad M_{0}\simeq 1.25, \end{split} \tag{63}\]
in the \(M_{\rm KK}=1\) unit. The value \(M_{0}\) of potential minimum coincides with the previous calculation of the ground-state baryon mass \(M_{B}\simeq 1.25~{}M_{\rm KK}\) in Eq. (39).
The size parameter \(R=R_{0}\) which minimizes the static energy \(V(R)\) of the baryon is found to be \(R_{0}\simeq 2.4\) in the \(M_{\rm KK}=1\) unit. This result coincides with the ground-state result shown in Fig. 1, where the Higgs zero point \(\zeta_{r}\) is located at about \(R=R_{0}\). If there were no non-trivial gravity (\(h(w)\), \(k(w)\)) and no CS term, this size dependence would disappear and the potential \(V(R)\) would become flat, because the instanton size is originally a modulus, reflecting the classical scale invariance of the Yang-Mills theory. In fact, this size dependence of the holographic baryon mass originates from these gravitational effects and the CS term.
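For illustration, the quadratic fit of Eq. (63) can be reproduced with a few lines of Python; the data points below are synthetic (generated from the quoted fit values plus small noise), standing in for the lattice energies obtained at fixed \(\zeta_{r}=R\).

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fit of the static energy by the quadratic form of Eq. (63).
# The data are synthetic stand-ins for the lattice energies at fixed zeta_r = R.
rng = np.random.default_rng(0)
R_data = np.linspace(1.0, 4.0, 13)
V_data = 0.063 * (R_data - 2.4)**2 + 1.25 + rng.normal(0.0, 1e-3, R_data.size)

def V(R, A, R0, M0):
    return A * (R - R0)**2 + M0

(A, R0, M0), _ = curve_fit(V, R_data, V_data, p0=(0.1, 2.0, 1.2))
print(A, R0, M0)   # ~0.063, ~2.4, ~1.25 in the M_KK = 1 unit
```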
### Analysis of holographic baryon size
In this subsection, we investigate the actual size of the holographic baryons obtained in the previous subsection in terms of the size parameter \(R\), which corresponds to the instanton size.
In our calculation, for each constraint fixing the size \(R\), we have already obtained the topological density \(\rho_{B}\),
\[\rho_{B}~{}=~{}\rho_{B}(r,w;R). \tag{64}\]
Using this gauge-invariant local quantity, we investigate the size of the holographic baryon in the \(r\) and \(w\) directions separately. Here, we define the \(\rho_{B}\)-weighted average of an arbitrary \((x,y,z)\)-rotationally symmetric variable \(O(r,w)\) as
\[\langle O\rangle_{\rho_{B}(R)}~{}\equiv~{}\frac{\int_{0}^{\infty}drr^{2}\int _{-\infty}^{\infty}dw\rho_{B}(r,w;R)O(r,w)}{\int_{0}^{\infty}drr^{2}\int_{- \infty}^{\infty}dw\rho_{B}(r,w;R)}. \tag{65}\]
In the ordinary four-dimensional Euclidean Yang-Mills theory, the Pontryagin density \(\rho\) of a single instanton with the size parameter \(R\) and its center at the origin is given by
\[\rho=\frac{1}{16\pi^{2}}{\rm tr}\left(F_{\mu\nu}\tilde{F}_{\mu\nu}\right)= \frac{6}{\pi^{2}}\frac{R^{4}}{(x_{\mu}^{2}+R^{2})^{4}}, \tag{66}\]
and the mean square radius weighted with \(\rho\) is evaluated as
\[\langle r^{2}\rangle_{\rho}=\frac{3}{2}R^{2},\quad\langle t^{2}\rangle_{\rho }=\frac{1}{2}R^{2}\quad({\rm YM~{}instanton}), \tag{67}\]
or equivalently
\[\sqrt{\frac{2}{3}\langle r^{2}\rangle_{\rho}}=\sqrt{2\langle t^{2}\rangle_{ \rho}}=R\quad({\rm YM~{}instanton}), \tag{68}\]
for \(r\equiv(x^{2}+y^{2}+z^{2})^{1/2}\) and the Euclidean time \(t\).
The holographic baryon can be dilated simultaneously in both the \(r\) and \(w\) directions by imposing the Higgs-field zero-point constraint, and we thus obtain baryons of various sizes. Based on the above relation, we consider size
Figure 7: Static energy \(V(R)\) of the baryon with various size \(R\) in the \(M_{\rm KK}=1\) unit. The symbol denotes numerical data of the baryon energy with different sizes. The line denotes a fit curve of the potential by a quadratic function. This potential has a minimum corresponding to the ground-state, and its value is consistent with the previous ground-state calculation.
parameters of the holographic baryon in \(r\) and \(w\) directions, respectively. In a similar manner to Eq. (40) in Sect. IV, we define the radius weighted with the topological density \(\rho_{B}\) as
\[d_{r}(R)\equiv\sqrt{\frac{2}{3}\langle r^{2}\rangle_{\rho_{B}(R)}},\quad d_{w}( R)\equiv\sqrt{2\langle w^{2}\rangle_{\rho_{B}(R)}}, \tag{69}\]
in the \(r\) and \(w\) directions, respectively. To compare with the instanton size parameter \(R\), the factors \(2/3\) and \(2\) have been introduced, following Eq. (68). In fact, the ordinary self-dual Yang-Mills instanton satisfies \(d_{r}(R)=d_{w}(R)=R\).
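This benchmark can be checked directly; the short Python sketch below evaluates Eq. (69) for the self-dual 't Hooft density (Eq. (54) with \(t\to w\) and \(T=0\)) on a finite grid, and both radii come out close to \(R\).

```python
import numpy as np

# Consistency check of the weighted radii in Eq. (69): for the self-dual 't Hooft
# density (Eq. (54) with t -> w, T = 0), both d_r(R) and d_w(R) should reduce to
# the size parameter R, cf. Eq. (68).
R = 2.0
r = np.linspace(0.0, 60.0, 601)
w = np.linspace(-60.0, 60.0, 1201)
rg, wg = np.meshgrid(r, w, indexing="ij")

rho = 6.0 / np.pi**2 * R**4 / (rg**2 + wg**2 + R**2)**4
weight = rho * rg**2                    # measure factor r^2 from d^3x

def avg(obs):
    return np.sum(weight * obs) / np.sum(weight)

d_r = np.sqrt(2.0/3.0 * avg(rg**2))
d_w = np.sqrt(2.0 * avg(wg**2))
print(d_r, d_w)   # both ~R = 2.0
```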
Figure 8 shows the \(\rho_{B}\)-weighted radii \(d_{r}(R)\) and \(d_{w}(R)\) as functions of \(R\).
The solid line denotes \(d=R\) and is realized in the case of the ordinary four-dimensional YM theory (\(h(w)=k(w)=1\)) without the CS term. In other words, if the holographic baryon were described with the 't Hooft solution, one would find \(d_{r}(R)=d_{w}(R)=R\).
From Fig. 8, \(d_{r}\) and \(d_{w}\) are found to increase monotonically with \(R\). This monotonic increase reflects the fact that changing the Higgs zero point changes the holographic baryon (instanton) size. Later, we investigate the dilatation mode by regarding the size \(R\) as a dynamical degree of freedom.
Around the ground state, the value of \(d_{r}(R)\) is larger than \(d_{w}(R)\), i.e., \(d_{r}(R)>d_{w}(R)\), and this means an oblate-shaped instanton for the holographic baryon [31].
The slope of \(d_{r}(R)\) is larger than that of \(d_{w}(R)\), and \(d_{w}(R)\) is almost flat in \(R\). Therefore, the size change of holographic baryons is approximately three-dimensional, in the \(r\)-direction, rather than four-dimensional [18]. (See Appendices C and D for the four-dimensional size change using the BPS instanton.)
These directional differences in behavior should come from the nontrivial gravity fields \(h(w)\) and \(k(w)\), because the CS term (23) acts equally in the \(r\) and \(w\) directions and does not break the \((x,y,z,w)\) \(O(4)\) symmetry. In fact, if \(h(w)=k(w)=1\), the \(O(4)\) symmetry is exact and no \((r,w)\)-asymmetry appears. Therefore, \(h(w)\) and \(k(w)\) are the very origin of the \((r,w)\)-asymmetry.
The flatness of \(d_{w}(R)\) against \(R\) indicates that the gravity fields \(h(w)\) and \(k(w)\) suppress swelling in the \(w\) direction. Approximately, one finds \(d_{r}(R)\sim d_{w}(R)\), and \(d_{r}(R)\) seems to follow \(d_{w}(R)\) and its soft \(R\)-dependence, which might imply that a large deviation from the spherical shape is energetically disfavored. As a result, the slopes of both parameters become small.
In the original Yang-Mills theory, the (anti)instanton appears as the (anti)self-dual solution. In holographic QCD, owing to the presence of gravity (\(h(w)\) and \(k(w)\)) and the CS term, the self-duality of the solution is explicitly broken.
As for the ground-state baryon in Sec. IV, we investigate the self-duality breaking for holographic baryons of various sizes. Figure 9 shows the self-duality breaking parameter \(\Delta_{\rm DB}\) in Eq. (46) as a function of the size parameter \(R\). This quantity \(\Delta_{\rm DB}\) is non-negative and becomes zero only in the exact self-dual case.
One finds that the duality breaking parameter is minimized as \(\Delta_{\rm DB}\simeq 0.17\) at \(R\simeq 2.2\), which is close to the size \(R_{0}\simeq 2.4\) of the ground-state baryon. This value \(\Delta_{\rm DB}\simeq 0.17\) seems to be small, and one might expect
Figure 8: \(\rho_{B}\)-weighted radius \(d_{r}(R)\) and \(d_{w}(R)\) of the holographic baryon in the direction \(r\) and \(w\), respectively, as the function of the size parameter \(R\) in the \(M_{\rm KK}=1\) unit. The solid line denotes \(d=R\), which is to be realized in the case of the ordinary four-dimensional YM theory (\(h(w)=k(w)=1\)) without the CS term. If the holographic baryon were described with the ’t Hooft solution, \(d_{r}(R)=d_{w}(R)=R\) would be satisfied.
Figure 9: Duality breaking parameter \(\Delta_{\rm DB}\) for various baryon size \(R\) in the \(M_{\rm KK}\)=1 unit. The functional form of \(\Delta_{\rm DB}(R)\) seems to be quadratic and it takes minimum at \(R\simeq 2.2\).
that the approximation of using the self-dual solution is not so bad.
However, the duality breaking parameter \(\Delta_{\rm DB}\) never becomes zero in holographic QCD, and this fact might have an important physical meaning for the baryon-baryon interaction as follows.
In the original four-dimensional Yang-Mills theory, the energy is classically bounded by the BPS bound, and its minimum is achieved only if the configuration has self-duality. However, holographic QCD has a non-trivial gravity and the CS term, and thus they distort the self-duality. If there were multi-instantons satisfying BPS saturation, their action would be determined only by the topological charge, indicating "no interaction" between instantons. Then, as an interesting possibility, the self-duality breaking in holographic QCD might be related to the baryon-baryon interaction or the nuclear force [32].
## VI Dilatation mode of a single baryon
In the previous section, we investigated the size dependence of the static energy \(V(R)\) of a holographic baryon and showed that it is approximately quadratic in the size \(R\). Using this result, we numerically investigate time-dependent size-oscillation modes, i.e., dilatation modes, of a single baryon in holographic QCD in this section.
Since the instanton size \(R\) is a key parameter to determine the baryon size in holographic QCD, we describe the size oscillation of the holographic baryon by introducing time-dependence of size \(R\),
\[R\to R(t)=R_{0}+\delta R(t), \tag{70}\]
where \(R_{0}\) denotes the size of the ground-state baryon and the size \(R(t)\) is expected to oscillate around this \(R_{0}\).
Note again that the size of an instanton is originally a modulus, and, if there were only the Yang-Mills term in flat space, \(h(w)=k(w)=1\), its value would not affect the energy of holographic QCD and the dilatation mode would appear as an exact zero mode. In reality, the gravity effect (\(h(w)\), \(k(w)\)) and the CS term break this modulus property of the size; however, we expect that the dilatation mode is inherited as a soft mode of the ground-state baryon and appears as a low-lying excitation in the single-baryon spectrum in holographic QCD.
This dilatation differs from an ordinary \((x,y,z)\)-spatial size oscillation, because holographic QCD has an extra dimension \(w\) and the holographic baryon also extends in that direction. In fact, this extra-dimensional dilatation is peculiar to holographic QCD.
We consider a time-dependent variation of \(R(t)\) around the ground-state size \(R_{0}\), which physically means a dilatation of the holographic baryon. The Lagrangian of the size variable \(R(t)\) is written as
\[L[R] = \frac{1}{2}m_{R}\dot{R}^{2}-V(R) \tag{71}\] \[\simeq \frac{1}{2}m_{R}\dot{R}^{2}-\frac{1}{2}m_{R}\omega^{2}(R-R_{0})^ {2}\]
up to \({\cal O}((R-R_{0})^{2})\). We have already calculated the potential term \(V(R)\) and have shown it to be almost quadratic in the previous section.
We calculate the dilatation mode of the holographic baryon as a collective-coordinate motion of the size \(R(t)\). Note here that, to estimate the frequency \(\omega\) of the dilatational mode, one only has to calculate the mass parameter \(m_{R}\). (Of course, \(m_{R}\) is not equal to the baryon mass \(M_{B}\).) Here, we use the adiabatic approximation that the time dependence of the holographic fields arises only through the size \(R(t)\).
### Numerical calculation
In this subsection, we consider the baryon dilatation mode and investigate its excitation energy while keeping the gravity background \(h(w)\) and \(k(w)\). We treat the size oscillation as adiabatic, with the field motions relatively faster than the size motion. Within this adiabatic treatment, the time dependence of the holographic fields \(\phi\) and \(a_{i}\) is determined only through the baryon size \(R(t)\). Therefore, the field arguments can be written symbolically as
\[\phi=\phi(r,w;R(t)),\quad a_{i}=a_{i}(r,w;R(t)). \tag{72}\]
Note that there is no contribution from \(\hat{A}_{0}\) in calculating the kinetic term of size variable \(R(t)\), because \(\hat{A}_{0}\) has no time-derivative and never accompanies \(\dot{R}(t)\).
Based on this adiabatic treatment, time-derivative is converted to \(R\)-derivative as
\[\frac{d}{dt}O[R(t)]=\dot{R}\frac{d}{dR}O[R]. \tag{73}\]
In our framework, \(O[R]\) is (numerically) calculable for arbitrary \(R\), and the \(R\)-derivative is easily obtained numerically,
\[\frac{d}{dR}O[R]\simeq\frac{O[R+\delta R]-O[R]}{\delta R}. \tag{74}\]
We have already obtained holographic configurations \(\phi(R(t))\) with various size \(R\), and the kinetic term of \(R(t)\) is expressed as
\[S_{\rm kin} = 4\pi\kappa\int dt\ \int_{0}^{\infty}dr\ \int_{-\infty}^{\infty}dw \tag{75}\] \[\left[h|\partial_{0}\phi|^{2}+h\frac{r^{2}}{2}(\partial_{0}a_{1} )^{2}+k\frac{r^{2}}{2}(\partial_{0}a_{2})^{2}\right]\] \[= 4\pi\kappa\int dt\ \int_{0}^{\infty}dr\ \int_{-\infty}^{\infty}dw\] \[\left[h\dot{R}^{2}|\partial_{R}\phi|^{2}+h\frac{r^{2}}{2}\dot{R} ^{2}(\partial_{R}a_{1})^{2}+k\frac{r^{2}}{2}\dot{R}^{2}(\partial_{R}a_{2})^{ 2}\right]\] \[= \int dt\ \frac{1}{2}m_{R}\dot{R}^{2},\]
and mass parameter \(m_{R}\) on the size variation is given by
\[m_{R} \equiv 8\pi\kappa\int_{0}^{\infty}dr\int_{-\infty}^{\infty}dw \tag{76}\] \[\left[h|\partial_{R}\phi|^{2}+h\frac{r^{2}}{2}(\partial_{R}a_{1}) ^{2}+k\frac{r^{2}}{2}(\partial_{R}a_{2})^{2}\right].\]
Note that the CS term also contains a time derivative, but it is first order in the time derivative and gives no contribution to the kinetic term of the dilatation mode. The numerical result is found to be
\[m_{R}\simeq 0.34\ M_{\rm KK}\simeq 322\ {\rm MeV}. \tag{77}\]
The important point of this numerical calculation is that gravitational factors, \(h(w)\) and \(k(w)\), are exactly included.
With this value of the mass parameter \(m_{R}\) and the quadratic fitting of Eq. (63) for the potential \(V(R)\), we obtain the dilatational excitation energy
\[\omega=\sqrt{\frac{2A}{m_{R}}}\simeq 0.61M_{\rm KK}\simeq 577\ {\rm MeV} \tag{78}\]
for the holographic baryon.
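As a quick arithmetic check, combining the fit parameter \(A\simeq 0.063\) of Eq. (63) with \(m_{R}\simeq 0.34\ M_{\rm KK}\) indeed reproduces this value:

```python
import numpy as np

# Quick check of Eq. (78): omega = sqrt(2A/m_R) with A ~ 0.063 (Eq. (63)) and
# m_R ~ 0.34 M_KK (Eq. (77)); M_KK ~ 948 MeV.
A, m_R, M_KK = 0.063, 0.34, 948.0
omega = np.sqrt(2.0 * A / m_R)
print(omega, omega * M_KK)   # ~0.61 M_KK, ~577 MeV
```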
In terms of the \(1/N_{c}\) expansion, the mass parameter \(m_{R}\propto\kappa\) in Eq. (76) is \(O(N_{c})\), and the potential \(V(R)\) of the holographic baryon is \(O(N_{c})\), leading to \(A=O(N_{c})\). Then, the dilatation excitation energy \(\omega=\sqrt{2A/m_{R}}\) is an \(O(N_{c}^{0})=O(1)\) quantity.
In Appendix D, we also consider a rough analytical estimation of the dilatation mode when the 't Hooft instanton is used, i.e., the solution in the case of \(h(w)=k(w)=1\) without the CS term. This rough estimate gives a larger value of 771 MeV for the dilatation excitation energy, which seems consistent with the excitation energy \(0.816\ M_{\rm KK}\simeq 774\) MeV of the instanton-size fluctuation in the previous research [18] using the 't Hooft instanton.
### Discussion
In the previous subsection, we numerically calculated the dilatation excitation energy, keeping the gravitational effect of \(h(w)\) and \(k(w)\). Note again that this background gravity is physically important because it is inherited from the \(N_{c}\) D4 branes that express the original Yang-Mills theory. We have consistently performed the numerical calculation for both kinetic and potential terms so as to include the gravitational effect. We have eventually obtained the dilatation excitation energy of 577 MeV for the holographic baryon.
This dilatation mode appears as an excited baryon with the same quantum numbers as the ground state, since this rotationally symmetric dilatation does not change the flavor or rotational quantum numbers.
Similar to the Skyrme soliton, for the description of definite spin/isospin states like N and \(\Delta\), semi-classical quantization by adiabatic rotation [18] is used for the holographic baryon. In this quantization process on the spin/isospin, an \(O(1/N_{c})\) mass correction is added to the \(O(N_{c})\) baryon mass [18].
In terms of the \(1/N_{c}\) expansion, the dilatation excitation energy is \(O(N_{c}^{0})\) as mentioned before, and thus the dilatation mode is more significant than the \(O(1/N_{c})\) rotational energy, which leads to the N-\(\Delta\) mass splitting. Furthermore, the correction from this rotational effect to the dilatation mode is of higher order in the \(1/N_{c}\) expansion and is therefore negligible.
Figure 10 shows a schematic figure of the baryon mass splitting order by order in the \(1/N_{c}\) expansion. At the leading order \(O(N_{c})\), all the holographic baryons are degenerate. Up to \(O(N_{c}^{0})\), the mass splitting of the dilatation mode appears. Up to \(O(N_{c}^{-1})\), the mass splitting from the rotational effect is added and leads to the N-\(\Delta\) mass splitting. As a result, the ordering of the low-lying baryon masses becomes \({\rm N}<\Delta<{\rm N}^{*}<\Delta^{*}\) in terms of the \(1/N_{c}\) expansion.
From the quantum number, the dilatation excitation energy of 577 MeV indicates that the Roper resonance N\({}^{*}\)(1440) [19; 20; 21; 22; 23] is the candidate corresponding to this dilatation mode of the nucleon N(940). For the \(\Delta(1232)\), \(\Delta(1600)\) might be identified to be the dilatational excitation mode.
As an interesting possibility, a similar dilatation mode may exist for every baryon as a universal single-baryon phenomenon, because any extended stable soliton generally has such a dilatational mode.
The strange baryon mass is slightly larger, reflecting the non-zero strange quark mass; however, the SU(3)\({}_{f}\) flavor symmetry approximately holds. Here, we suppose that the mass excess of the strange quark is simply added to the holographic baryon, like the treatment of SU(3)\({}_{f}\) symmetry and its breaking in ordinary hadron physics [33]. Then, also for strange baryons, the dilatation mode would appear, and its excitation energy is expected to take a similar value as the above result of 577 MeV.
Table 1 presents the candidates of the dilatational excitation in various baryon channels, i.e., N, \(\Delta\), \(\Lambda\), \(\Sigma\), and \(\Sigma^{*}\).
Figure 10: Schematic figure for the mass splitting of the holographic baryon in terms of the \(1/N_{c}\) expansion. (The symbol * denotes the excitation mode.) The holographic baryon has \(O(N_{c}^{0})\) and \(O(N_{c}^{-1})\) splittings, corresponding to the dilatation and rotational effects, respectively. Therefore, the ordering of the low-lying baryon masses becomes \({\rm N}<\Delta<{\rm N}^{*}<\Delta^{*}\) in terms of the \(1/N_{c}\) expansion.
For each channel, we find that the first excitation energy of the state with the same quantum numbers seems to take a value consistent with the dilatational one, \(\omega\simeq 577\) MeV, theoretically obtained above.
Also for the multi-strange baryons in the \(\Xi\), \(\Xi^{*}\), and \(\Omega\) channels, we theoretically predict dilatation modes with an excitation energy of about 577 MeV. However, the dilatational excited baryons for \(\Xi(1320)\), \(\Xi^{*}(1530)\), and \(\Omega(1673)\) have not yet been identified experimentally, because the spin-parity information of their excited states is not yet confirmed. In this respect, further experimental analyses of excited baryons in terms of the dilatational mode in each baryon channel are much desired.
## VII Summary and concluding remarks
We have investigated the dilatational mode of baryons in holographic QCD based on the Sakai-Sugimoto model, which is constructed with \(N_{c}\) D4 branes and \(N_{f}\) D8/\(\bar{\rm D}\)8 branes in superstring theory. We have adopted the Witten Ansatz for the rotationally symmetric system and have reduced 1+4 dimensional holographic QCD to a 1+2 dimensional Abelian Higgs theory in a curved space. In this formulation, a four-dimensional instanton corresponding to a baryon is converted to a two-dimensional vortex. We have numerically calculated the baryon solution of holographic QCD using a fine and large-size lattice with a spacing of 0.04 fm and a size of 10 fm.
With the Witten Ansatz, the relation between the two topological objects is obtained, in particular the correspondence between the Higgs zero point and the instanton size. Using this relation, we have theoretically changed the size of the holographic baryon and have investigated its properties, such as the energy and self-duality breaking, as functions of the size parameter. Each configuration is a solution of the EOM of holographic QCD under the constraint of a fixed Higgs zero point.
As a time-dependent mode, we have investigated the size oscillation of a baryon and have found that such a dilatational mode takes an excitation energy of 577 MeV. Since the dilatation does not change the quantum numbers, we have identified this mode of the nucleon N(940) with N\({}^{*}(1440)\). We have conjectured that all baryons are expected to have such a dilatational excitation universally, with a similar excitation energy.
In this respect, further experimental analyses of excited baryons in terms of the dilatational mode in each baryon channel are much desired. In particular, the dilatational excited baryons for \(\Xi(1320)\), \(\Xi^{*}(1530)\), and \(\Omega(1673)\) have not yet been identified experimentally, because the spin-parity information of their excited states is not yet confirmed.
There is also a rotational effect of the hedgehog configuration, which is used in the semi-classical analysis of Skyrmions [35]. Its order relative to the dilatational excitation is \(1/N_{c}^{2}\), and we have ignored this higher-order effect. However, since it is necessary for the N-\(\Delta\) splitting, the higher-order correction is desirable.
We have used holographic QCD based on \(N_{f}=2\) in the chiral limit. Further extension including strangeness is an interesting subject in hadron physics, which can be done with \(N_{f}=3\)[36]. However, including quark mass is difficult for holographic QCD, and further theoretical development is required for quantitative argument of hyperons.
We have mainly presented the excitation energy of the dilatational mode of baryons, which appears with the same quantum numbers as the ground state. For theoretical progress, it is desirable to show how to distinguish the dilatational mode from other excitations experimentally. To this end, we have to find characteristic behavior of the dilatational mode, which is a future subject. The finding of such a peculiar quantity would lead to a better understanding of baryon spectra.
###### Acknowledgements.
H.S. is supported in part by the Grants-in-Aid for Scientific Research [19K03869] from Japan Society for the Promotion of Science.
\begin{table}
\begin{tabular}{l c c c} \hline baryon & excited baryon & excitation energy & theory \\ \hline N(940) & N\({}^{*}(1440)\) & 500 MeV & \(\omega\) \\ & N\({}^{*}(1710)\) & 770 MeV & \(2\omega\) \\ \(\Delta(1232)\) & \(\Delta(1600)\) & 368 MeV & \(\omega\) \\ & \(\Delta(1920)\) & 688 MeV & \(2\omega\) \\ \(\Lambda(1116)\) & \(\Lambda(1600)\) & 484 MeV & \(\omega\) \\ \(\Sigma(1193)\) & \(\Sigma(1660)\) & 467 MeV & \(\omega\) \\ \(\Sigma^{*}(1385)\) & \(\Sigma^{*}(1780)\) & 395 MeV & \(\omega\) \\ \hline \end{tabular}
\end{table}
Table 1: Experimental candidates of the dilatational excitation mode in various baryon channels [34]. Each ground-state baryon has an excitation with the same quantum numbers. Here, first-excited baryons are mainly listed, such as N\({}^{*}(1440)\) and \(\Delta(1600)\). For each channel, the excitation energy seems to take a value consistent with the theoretical one, \(\omega\simeq 577\) MeV. Here, N\({}^{*}(1710)\) and \(\Delta(1920)\) are identified with the second excitation mode, \(2\omega\).
## Appendix A Lattice formalism of holographic QCD
In Appendix A, we show our lattice formalism for holographic QCD in the Witten Ansatz, by introducing a sizable finite lattice with a small spacing \(a\).
Let us begin with the static energy \(E_{\rm 5YM}\) of the Yang-Mills term in the Witten Ansatz. For the explanation, we here repeat Eq. (18),
\[E_{\rm 5YM} = 4\pi\kappa\int_{0}^{\infty}dr\int_{-\infty}^{\infty}dw\bigg{[}h( w)|D_{1}\phi|^{2}+k(w)|D_{2}\phi|^{2} \tag{18}\] \[+\frac{h(w)}{2r^{2}}\{1-|\phi|^{2}\}^{2}+\frac{r^{2}}{2}k(w)f_{12 }^{2}\bigg{]},\]
of which the field variables \(\phi\) and \(a_{\mu}\) depend on the two-dimensional spatial coordinate \((r,w)\).
For the U(1) gauge variable \(a_{\mu}\) with \(\mu=r,w\), we define the link variable
\[U_{\mu}(s)\equiv\exp\{ia\ a_{\mu}(s+\frac{\hat{\mu}}{2})\}\in{\rm U}(1) \tag{19}\]
at the site \(s=(s_{r},s_{w})\) on the two-dimensional lattice. Here, \(\hat{\mu}\) denotes the \(\mu\)-directed vector with the length of \(a\).
On the lattice with a small spacing \(a\), one finds for the U(1) covariant derivative \(D_{\mu}\phi\) as
\[-\{\phi^{\dagger}(s)U_{\mu}(s)\phi(s+\hat{\mu})+\phi^{\dagger}(s+\hat{\mu})U_{\mu}(s)^{\dagger}\phi(s)\} \tag{20}\] \[+|\phi(s)|^{2}+|\phi(s+\hat{\mu})|^{2}\] \[= -(\phi-\frac{a}{2}\partial_{\mu}\phi)^{\dagger}(1+iaa_{\mu}-\frac{1}{2}a^{2}a_{\mu}^{2})(\phi+\frac{a}{2}\partial_{\mu}\phi)\] \[-(\phi+\frac{a}{2}\partial_{\mu}\phi)^{\dagger}(1-iaa_{\mu}-\frac{1}{2}a^{2}a_{\mu}^{2})(\phi-\frac{a}{2}\partial_{\mu}\phi)\] \[+2|\phi|^{2}+\frac{a^{2}}{2}|\partial_{\mu}\phi|^{2}+{\cal O}(a^{3})\] \[= a^{2}\phi^{\dagger}(\stackrel{{\leftarrow}}{{\partial_{\mu}}}-ia_{\mu})(\partial_{\mu}+ia_{\mu})\phi+{\cal O}(a^{3})\] \[= a^{2}|D_{\mu}\phi|^{2}+{\cal O}(a^{3}),\]
where all omitted arguments are \(s+\hat{\mu}/2\). The field strength \(f_{12}\) is expressed with the U(1) plaquette variable,
\[\Box_{12}(s) \equiv U_{1}(s)U_{2}(s+\hat{1})U_{1}^{*}(s+\hat{2})U_{2}^{*}(s)\in{\rm U}(1) \tag{21}\] \[f_{12}^{2}(s) = \frac{2}{a^{4}}[1-{\rm Re}[\Box_{12}(s)]]. \tag{22}\]
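As a minimal numerical illustration of this link/plaquette construction (an aside, using the smooth field of Eq. (60) with \(R=1\) as a test configuration), the plaquette phase divided by \(a^{2}\) converges to the continuum field strength \(f_{12}=\partial_{1}a_{2}-\partial_{2}a_{1}\) as the spacing shrinks:

```python
import numpy as np

# Check of the link/plaquette construction: for a smooth Abelian field a_mu(r, w),
# the plaquette phase divided by a^2 approaches the continuum field strength
# f_12 = d_1 a_2 - d_2 a_1 as a -> 0.  Test field: Eq. (60) with R = 1.
def a1(r, w): return -2.0 * w / (r**2 + w**2 + 1.0)
def a2(r, w): return  2.0 * r / (r**2 + w**2 + 1.0)

def f12_exact(r, w, eps=1e-6):
    d1a2 = (a2(r + eps, w) - a2(r - eps, w)) / (2 * eps)
    d2a1 = (a1(r, w + eps) - a1(r, w - eps)) / (2 * eps)
    return d1a2 - d2a1

def f12_lattice(r, w, a):
    U1  = np.exp(1j * a * a1(r + a/2, w))        # link variables, cf. Eq. (19)
    U2r = np.exp(1j * a * a2(r + a, w + a/2))
    U1w = np.exp(1j * a * a1(r + a/2, w + a))
    U2  = np.exp(1j * a * a2(r, w + a/2))
    plaq = U1 * U2r * np.conj(U1w) * np.conj(U2)  # plaquette, cf. Eq. (21)
    return np.angle(plaq) / a**2                  # O(a) offset: plaquette centered at (r+a/2, w+a/2)

for a in (0.4, 0.2, 0.1, 0.05):
    print(a, f12_lattice(1.3, 0.7, a), f12_exact(1.3, 0.7))
```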
The additional U(1) energy \(E^{\rm U(1)}\) in Eq. (28) is expressed by the coupling of the original U(1) gauge field \(\hat{A}_{0}\) and the topological density \(\rho_{B}\). Here, the temporal component \(\hat{A}_{0}(r,w)\) is treated as a (spatially) site-variable on the \((r,w)\) lattice. On the lattice, the baryon density is expressed as
\[\rho_{B} = \frac{1}{8\pi^{2}r^{2}}[-i\epsilon_{ij}(D_{i}\phi)^{*}D_{j}\phi+ \epsilon_{ij}\partial_{i}a_{j}(1-|\phi|^{2})] \tag{23}\] \[= \frac{1}{8\pi^{2}r^{2}}[2{\rm Im}\{(D_{1}\phi)^{*}D_{2}\phi\}+f_ {12}(1-|\phi|^{2})]\]
with
\[f_{12}(1-|\phi|^{2}) = \frac{1}{a^{2}}{\rm Im}\ \Box_{12}\times(1-|\phi|^{2}) \tag{24}\]
and
\[(D_{1}\phi)^{*}D_{2}\phi\] \[= \frac{1}{4a^{2}}\{U_{1}^{*}(s)\phi^{*}(s+\hat{1})-U_{1}(s-\hat{1 })\phi^{*}(s-\hat{1})\}\] \[\times\{U_{2}(s)\phi(s+\hat{2})-U_{2}^{*}(s-\hat{2})\phi(s-\hat{2 })\}\] \[= \frac{1}{4a^{2}}\left\{\phi^{*}(s+\hat{1})U_{1}^{*}(s)U_{2}(s) \phi(s+\hat{2})\right.\] \[\left.-\phi^{*}(s-\hat{1})U_{1}(s-\hat{1})U_{2}(s)\phi(s+\hat{2})\right.\] \[\left.-\phi^{*}(s+\hat{1})U_{1}^{*}(s)U_{2}^{*}(s-\hat{2})\phi(s- \hat{2})\right.\] \[\left.+\phi^{*}(s-\hat{1})U_{1}(s-\hat{1})U_{2}^{*}(s-\hat{2}) \phi(s-\hat{2})\right\}.\]
To reduce the discretization error, we have used the above form for \((D_{1}\phi)^{*}D_{2}\phi\), and its lattice formalism is symbolically written as
\[\text{[plaquette diagram not reproduced]} \tag{25}\]
where the horizontal and vertical arrows represent \(U_{1}\) and \(U_{2}\), respectively, and the dots represent \(\phi\). Thus, we define the topological density \(\rho_{B}(r,w)\) and formulate the U(1) energy \(E^{\rm U(1)}[\rho_{B}(r,w),\hat{A}_{0}(r,w)]\) in Eq. (28) on the \((r,w)\) lattice.
In this way, the total energy \(E\) in holographic QCD is expressed as \(E[\phi(s),\vec{U}(s),\hat{A}_{0}(s)]\), i.e., a function of \(\phi(s)=\phi_{1}(s)+i\phi_{2}(s)\in{\bf C}\), \(\vec{U}(s)\equiv(U_{1}(s),U_{2}(s))\) and \(\hat{A}_{0}(s)\) at each spatial site \(s=(s_{r},s_{w})\). To find the solution of holographic QCD, we minimize the total energy \(E\) by iterative improvement of \(\phi(s)\), \(\vec{U}(s)\) and \(\hat{A}_{0}(s)\). Looking at a specific site \(s_{0}\), we consider only the variable \(\phi(s_{0})\), with all other variables fixed, and vary it so as to minimize the total energy \(E\). Next, we consider only the link variable \(\vec{U}(s_{0})\), with all other variables fixed, and take its variation to minimize \(E\). Similarly, considering only the site variable \(\hat{A}_{0}(s_{0})\), with all other variables fixed, we take its variation to minimize \(E\). We repeat the above process at each site of the lattice and update \(\phi(s)\), \(\vec{U}(s)\) and \(\hat{A}_{0}(s)\). We iterate this sweep procedure many times so as to minimize the total energy \(E\) in holographic QCD, and the solution is eventually obtained.
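A schematic Python sketch of such a sweep is shown below; the lattice energy used here is a simple stand-in functional for a complex field with fixed boundary values, not the holographic energy itself, so only the update logic (vary one variable at a time, accept only energy-lowering changes) carries over.

```python
import numpy as np

# Toy illustration of the sweep update described above: visit one lattice variable
# at a time, vary it with all other variables fixed, and keep the change only if
# the energy decreases.  The "energy" is a stand-in functional, NOT the holographic one.
rng = np.random.default_rng(1)
N, lam, step = 20, 1.0, 0.1
phi = np.ones((N, N), dtype=complex)             # boundary fixed at phi = 1
phi[1:-1, 1:-1] += 0.5 * (rng.standard_normal((N-2, N-2)) + 1j * rng.standard_normal((N-2, N-2)))

def local_energy(i, j):
    """Part of the energy depending on phi[i, j]: four links plus the on-site potential."""
    e = lam * (1.0 - abs(phi[i, j])**2)**2
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        e += abs(phi[i + di, j + dj] - phi[i, j])**2
    return e

def total_energy():
    link = np.sum(abs(np.diff(phi, axis=0))**2) + np.sum(abs(np.diff(phi, axis=1))**2)
    return link + lam * np.sum((1.0 - abs(phi)**2)**2)

for sweep in range(201):
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            old, e_old = phi[i, j], local_energy(i, j)
            phi[i, j] = old + step * (rng.standard_normal() + 1j * rng.standard_normal())
            if local_energy(i, j) > e_old:
                phi[i, j] = old              # reject: keep the lower-energy value
    if sweep % 50 == 0:
        print(sweep, total_energy())          # non-increasing sweep by sweep
```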
## Appendix B Other expression of U(1)-part energy
For the U(1) sector, we have mainly used Eq. (28) for the numerical calculation of \(E^{\rm U(1)}\). However, there is another useful expression for the energy \(E^{\rm U(1)}\) of the U(1) sector without \(\hat{A}_{0}\), and we introduce this form in Appendix B.
By solving \(\hat{A}_{0}\) using Eq. (25), the energy (27) becomes
\[E^{\rm U(1)} = \frac{N_{c}^{2}}{8}\int d^{3}xdw\ \rho_{B}K^{-1}\rho_{B}\] \[= \frac{N_{c}^{2}}{8}\int d^{3}xdw\int d^{3}x^{\prime}dw^{\prime}\]
\[\rho_{B}(\vec{x},w)K^{-1}(\vec{x},w;\vec{x}^{\prime},w^{\prime})\rho_{B}(\vec{x}^{ \prime},w^{\prime}). \tag{104}\]
Since the kernel \(K\) and topological density \(\rho_{B}\) are SO(3) rotationally symmetric, the additional energy can be expressed only with the \((r,w)\)-coordinates:
\[E^{\rm U(1)} = 2\pi^{2}N_{c}^{2}\int_{0}^{\infty}dr\int_{-\infty}^{\infty}dw\int _{0}^{\infty}dr^{\prime}\int_{-\infty}^{\infty}dw^{\prime} \tag{105}\] \[\tilde{\rho}_{B}(r,w)\tilde{K}^{-1}(r,w;r^{\prime},w^{\prime}) \tilde{\rho}_{B}(r^{\prime},w^{\prime}),\]
using \(\tilde{\rho}_{B}(r,w)\equiv r^{2}\rho_{B}(r,w)\) and the Hermitian kernel \(\tilde{K}\equiv 4\pi r^{2}K\) in \((r,w)\)-space.
On the lattice, the kernel \(\tilde{K}\) in Eq. (29) is transformed into a differential form and is expressed as a matrix \(\tilde{K}_{L}(r,w;r^{\prime},w^{\prime})\). Then, the kernel inverse \(\tilde{K}_{L}^{-1}(r,w;r^{\prime},w^{\prime})\) is numerically obtained by taking the inverse of the matrix \(\tilde{K}_{L}(r,w;r^{\prime},w^{\prime})\). As a technical caution, the kernel \(\tilde{K}_{L}(r,w;r^{\prime},w^{\prime})\) has a translational zero mode, reflecting the derivative form of \(\tilde{K}\). Since this translational zero mode does not affect the total energy, the inverse of \(\tilde{K}\) has to be taken in the subspace excluding the spurious zero mode. In fact, using an orthogonal matrix \(O\), the kernel \(\tilde{K}_{L}\) is diagonalized as
\[\tilde{K}_{L}=O\ {\rm diag}(0,\lambda_{1},\lambda_{2},\cdots)\ O^{T} \tag{106}\]
with non-zero eigenvalues \(\lambda_{n}\). Then, we define its inverse \(\tilde{K}_{L}^{-1}\) to be
\[\tilde{K}_{L}^{-1}=O\ {\rm diag}(0,\lambda_{1}^{-1},\lambda_{2}^{-1},\cdots)\ O^{T}, \tag{107}\]
which is equivalent to the appropriate removal of the spurious translational zero mode. Using this kernel inverse \(\tilde{K}_{L}^{-1}\) and the baryon density \(\rho_{B}\), the energy \(E^{\rm U(1)}\) of the CS term is calculated as \(\rho_{B}\tilde{K}_{L}^{-1}\rho_{B}\) on the lattice.
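A minimal Python sketch of this zero-mode-safe inversion is given below; the small matrix used there is only a stand-in with a single zero mode, not the actual holographic kernel.

```python
import numpy as np

# Sketch of the zero-mode-safe inversion described above: diagonalize the symmetric
# kernel, invert only the non-zero eigenvalues, and rebuild K^{-1}.  K_L is a small
# stand-in matrix with exactly one zero mode (the constant vector).
def safe_inverse(K, tol=1e-10):
    vals, vecs = np.linalg.eigh(K)               # K = O diag(vals) O^T
    inv_vals = np.zeros_like(vals)
    keep = np.abs(vals) > tol
    inv_vals[keep] = 1.0 / vals[keep]
    return vecs @ np.diag(inv_vals) @ vecs.T

n = 6
K_L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D lattice Laplacian
K_L[0, 0] = K_L[-1, -1] = 1.0                              # free ends -> one zero mode

K_inv = safe_inverse(K_L)
rho = np.sin(np.linspace(0.0, np.pi, n))
rho -= rho.mean()                                 # remove the zero-mode component
print(np.allclose(K_L @ K_inv @ rho, rho))        # True: exact inverse on the non-zero sector
print(0.5 * rho @ K_inv @ rho)                    # quadratic form analogous to rho_B K^{-1} rho_B
```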
Thus, for the numerical calculation of \(E^{\rm U(1)}\), there are two different methods: one is to update the holographic fields \(\phi(r,w)\), \(\vec{a}(r,w)\) and \(\hat{A}_{0}(r,w)\) on the lattice based on Eq. (28); the other is to update only \(\phi(r,w)\) and \(\vec{a}(r,w)\) using Eq. (105). We have confirmed that both methods give the same numerical results for holographic baryons.
## Appendix C Holographic baryon using self-dual BPS instanton
In Appendix C, we investigate the holographic baryon when the self-dual BPS instanton in Eq. (56) is used. The self-dual BPS instanton is the 't Hooft solution of the ordinary Yang-Mills theory, and hence, to be strict, this usage is justified in the case of flat space \(h(w)=k(w)=1\) and ignoring the CS term. Substituting the BPS instanton with the size \(R\) into the total energy \(E\) in Eq. (30) including the CS term and the background gravity, \(k(w)=1+w^{2}\) and \(h(w)=k(w)^{-1/3}\), one obtains the static baryon energy \(E(R)=V^{\rm BPS}(R)\) as the function of the instanton size \(R\), and its minimum gives an approximate ground-state holographic baryon, which satisfies \(M_{B}^{\rm BPS}\simeq 1.35\ M_{\rm KK}\simeq 1.28\) GeV and \(\sqrt{\langle r^{2}\rangle_{B_{B}}^{\rm BPS}}=\sqrt{\frac{3}{2}}R_{0}^{\rm BPS }\simeq 2.2\ M_{\rm KK}^{-1}\simeq 0.46\ {\rm fm}\), as shown in Fig. 14. Thus, when the self-dual BPS instanton is used, the holographic ground-state baryon has larger mass and smaller size [18] than the true solution numerically obtained in Sec. IV.
For the approximate ground-state holographic baryon, the corresponding Higgs field \(\phi\) and gauge field \(\vec{a}\) are shown in Fig. 11. These fields give a topological density \(\rho_{B}\), and \(\hat{A}_{0}\) is obtained by solving EOM (25), as shown in Fig. 12.
Figure 13 shows the \(r^{2}\)-multiplied topological and the energy densities.
The static baryon energy \(V^{\rm BPS}(R)\) for the BPS configuration is shown in Fig. 14, and it is approximately fit
Figure 11: The Higgs field \(\phi(r,w)\) (upper) and the Abelian gauge field \(a(r,w)=(a_{1},a_{2})\) (lower) for the BPS instanton in the Landau gauge in the \(M_{\rm KK}=1\) unit. For iterative improvement in the main sections, this configuration is used as the starting point, i.e., the initial configuration of the iteration.
with a quadratic function,
\[\begin{split}& V^{\rm BPS}(R)\simeq A^{\rm BPS}(R-R_{0}^{\rm BPS})^ {2}+M^{\rm BPS},\\ & A^{\rm BPS}\simeq 0.39,\ R_{0}^{\rm BPS}\simeq 1.8,\ M_{0}^{\rm BPS }\simeq 1.4,\end{split} \tag{109}\]
in the \(M_{\rm KK}=1\) unit.
Thus, when the self-dual BPS instanton is used, the holographic baryon has larger mass and smaller size [18] than the true solution numerically obtained in Sec. IV.
## Appendix D Rough analytical estimation for dilatation modes using self-dual BPS instanton
In Appendix D, we consider a rough analytical estimation of the dilatation mode using the self-dual 't Hooft BPS instanton, which is justified in flat space, \(h(w)=k(w)=1\), without the CS term.
In this case, when the flat space approximation \(h(w)=k(w)=1\) is used, the kinetic term \(T\) of the size variable \(R(t)\) is analytically expressed as
\[T \simeq \kappa\int d^{3}x\ dw\,{\rm tr}\left[F_{0M}^{2}\right] \tag{110}\] \[\simeq \kappa\int d^{3}x\ dw\,\frac{24x^{2}}{(x^{2}+R_{0}^{2})^{4}}R_{0}^{2}\dot{R}^{2}\] \[= 48\pi^{2}\kappa\int dr\,\frac{r^{5}}{(r^{2}+R_{0}^{2})^{4}}R_{0}^{2}\dot{R}^{2}\] \[= 8\pi^{2}\kappa\ \dot{R}^{2}=\frac{1}{2}m_{R}\dot{R}^{2},\]
where \(\dot{R}\) means \(\partial_{t}R\). Here, \(h(w)=k(w)=1\) is used in the first line, and the instanton size \(R_{0}\) appearing in the middle does not affect the result. In this way, the mass parameter \(m_{R}\) on the size variation is estimated as
\[m_{R}\ =\ 16\pi^{2}\kappa\simeq 1.18. \tag{111}\]
From this mass parameter \(m_{R}\) and the potential \(V^{\rm BPS}(R)\) in Eq. (109), one obtains a rough estimate of the excitation energy of the dilatation mode,
\[\omega=\sqrt{\frac{2A^{\rm BPS}}{m_{R}}}=0.81\ M_{\rm KK}\simeq 771\ {\rm MeV}, \tag{112}\]
which is larger than the numerical result in Sect. VI. This estimation can be done analytically but
Figure 14: Static baryon energy \(V^{\rm BPS}(R)\) calculated from the BPS configurations in Eq. (56). The solid line is a fitting curve of quadratic function.
Figure 12: The U(1) gauge field \(\hat{A}_{0}\) in the case of BPS instanton in the \(M_{\rm KK}=1\) unit. This is obtained by solving Eq. (25) for the self-dual ’t Hooft instanton.
Figure 13: \(r^{2}\)-multiplied topological and energy densities, \(4\pi r^{2}\rho_{B}(r)\) and \(4\pi r^{2}\hat{\cal E}(r)\), in the case of the BPS instanton configuration in the \(M_{\rm KK}=1\) unit.
does not include the gravitational effects of \(h(w)\) and \(k(w)\) in either the kinetic or the potential term, in addition to relying on the self-dual BPS instanton. The numerical calculation presented in Sect. VI improves on these points.
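The numbers in this rough estimate can be reproduced with a few lines of Python (an illustrative check only):

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the rough BPS estimate above: the radial integral in the
# kinetic term, the mass parameter m_R = 16*pi^2*kappa, and the excitation energy.
kappa, M_KK, R0 = 7.46e-3, 948.0, 1.8

val, _ = quad(lambda r: r**5 / (r**2 + R0**2)**4, 0.0, np.inf)
print(6.0 * R0**2 * val)                    # ~1: the integral equals 1/(6 R0^2)

m_R   = 16.0 * np.pi**2 * kappa             # ~1.18 in the M_KK = 1 unit
omega = np.sqrt(2.0 * 0.39 / m_R)           # A^BPS ~ 0.39 from the quadratic fit
print(m_R, omega, omega * M_KK)             # ~1.18, ~0.81 M_KK, ~771 MeV
```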
|
2309.17322 | Assessing Look-Ahead Bias in Stock Return Predictions Generated By GPT
Sentiment Analysis | Large language models (LLMs), including ChatGPT, can extract profitable
trading signals from the sentiment in news text. However, backtesting such
strategies poses a challenge because LLMs are trained on many years of data,
and backtesting produces biased results if the training and backtesting periods
overlap. This bias can take two forms: a look-ahead bias, in which the LLM may
have specific knowledge of the stock returns that followed a news article, and
a distraction effect, in which general knowledge of the companies named
interferes with the measurement of a text's sentiment. We investigate these
sources of bias through trading strategies driven by the sentiment of financial
news headlines. We compare trading performance based on the original headlines
with de-biased strategies in which we remove the relevant company's identifiers
from the text. In-sample (within the LLM training window), we find,
surprisingly, that the anonymized headlines outperform, indicating that the
distraction effect has a greater impact than look-ahead bias. This tendency is
particularly strong for larger companies--companies about which we expect an
LLM to have greater general knowledge. Out-of-sample, look-ahead bias is not a
concern but distraction remains possible. Our proposed anonymization procedure
is therefore potentially useful in out-of-sample implementation, as well as for
de-biased backtesting. | Paul Glasserman, Caden Lin | 2023-09-29T15:30:32Z | http://arxiv.org/abs/2309.17322v1 | # Assessing Look-Ahead Bias in Stock Return Predictions Generated By GPT Sentiment Analysis
###### Abstract
Large language models (LLMs), including ChatGPT, can extract profitable trading signals from the sentiment in news text. However, backtesting such strategies poses a challenge because LLMs are trained on many years of data, and backtesting produces biased results if the training and backtesting periods overlap. This bias can take two forms: a _look-ahead bias_, in which the LLM may have specific knowledge of the stock returns that followed a news article, and a _distraction effect_, in which general knowledge of the companies named interferes with the measurement of a text's sentiment. We investigate these sources of bias through trading strategies driven by the sentiment of financial news headlines. We compare trading performance based on the original headlines with de-biased strategies in which we remove the relevant company's identifiers from the text. In-sample (within the LLM training window), we find, surprisingly, that the anonymized headlines outperform, indicating that the distraction effect has a greater impact than look-ahead bias. This tendency is particularly strong for larger companies -- companies about which we expect an LLM to have greater general knowledge. Out-of-sample, look-ahead bias is not a concern but distraction remains possible. Our proposed anonymization procedure is therefore potentially useful in out-of-sample implementation, as well as for de-biased backtesting.
## 1 Introduction
### Background
Many studies have found that news stories can be used to predict short-term market returns through sentiment analysis and other natural language processing tools; see, e.g., Das and Chen (2007), Tetlock (2007), Tetlock, Saar-Tsechansky, and Macskassy (2008), Calomiris and Mamaysky (2019), Ke, Kelly, and Xiu (2019), Garcia, Hu, and Rohrer (2023), and Glasserman, Li, and Mamaysky (2022). Work in this area has mostly relied on fixed lexicons for sentiment analysis or used predictive techniques that can be retrained on a rolling window of data.
More recently, large language models (LLMs) like the Generative Pre-trained Transformer (GPT) architecture are proving to be valuable tools for processing news text to forecast market responses, despite not being trained specifically for this task; see, in particular, Lopez-Lira and Tang (2023) and also Chen et al. (2023) and Hansen and Kazinnik (2023). LLMs are trained on massive datasets spanning many years. This feature, which is central to their success, poses a challenge for designing and backtesting trading strategies. If an LLM is asked to analyze a news article from within its training window, the LLM already "knows" something about events that followed the news article, which taints the model's forecasts. Indeed, in other contexts, LLMs have been found to memorize training data that can later influence the output they produce (Tirumala et al., 2022; Carlini et al., 2021; Steed et al., 2022).
Retraining an LLM on a rolling window would solve the problem of separating training and test data, but this is computationally infeasible and is likely to remain so for the foreseeable future. Researchers seeking to test LLM predictions have therefore been limited to the out-of-sample period that begins at the end of the model's training period. For ChatGPT, the training period runs through September 2021, leaving only the period since October 2021 for out-of-sample testing. (The engine behind ChatGPT is GPT-3.5-Turbo, which we refer to henceforth as GPT-3.5.)
To address this problem, we propose a simple tool: anonymizing company information in news text. This tool allows us to measure and mitigate the bias in testing an LLM's response to text from within its training window. To measure the bias, we compare the in-sample performance of trading strategies that use the original and anonymized text. We also compare the out-of-sample performance of the strategies and find evidence that anonymization is potentially useful even outside the training window.
We consider trading strategies that buy a stock following good news about the stock and sell the stock following bad news. An LLM is used to judge whether news is good, bad, or neutral. In the example in Figure 1, an LLM is asked whether the headline, "Google to announce 2019 Q3 earnings tomorrow" is good or bad news for the company. The correct answer is clearly neither. But in the figure's Scenario A, the LLM knows that Google's 2019 Q3 earnings were poor and responds that the news is bad. We use the term _look-ahead bias_ to refer to this type of scenario, in which the LLM has specific information about relevant events immediately following the news story.
In Scenario B, the LLM does not know specifically about Google's 2019 Q3 results, but it has general knowledge about Google, including the fact that Google, in general, did well in 2019. It therefore responds that the headline is good news for Google. We use the term _distraction effect_ to refer to the ways the LLM's general knowledge about a company might interfere with its analysis of the sentiment of news text. This interference may come from knowledge of the future, as in the example in the figure, but it could also come from past information about the company.
Part (b) of Figure 1 shows how we mitigate both types of bias. We anonymize the headline, and the LLM then correctly responds that the news is neutral. Comparing results with and without anonymization allows us to quantify the overall bias in in-sample backtesting. The performance of de-biased trading strategies provides a better indication of out-of-sample performance.
Look-ahead bias should be positive: knowing the short-term response to news should produce higher returns than could be achieved in practice. The direction of the distraction effect is unclear: general knowledge about a company could be helpful or harmful in sentiment analysis. Also, whereas look-ahead bias is restricted to in-sample results, distraction is relevant out-of-sample as well. Indeed, we use "distraction effect" to capture all the ways an LLM's response to a news prompt differs from its response to an anonymized prompt, aside from specific knowledge of events immediately following the
news, which is captured by look-ahead bias.
We measure the performance of long-short trading strategies (based on the approach in Lopez-Lira and Tang (2023)) that use GPT-3.5 for sentiment analysis, but we compare results using the original and replaced (anonymized) news headlines. To our surprise, we find that _the replaced headlines generate higher in-sample returns_. Since look-ahead bias must be positive, we conclude that the distraction effect must be negative: the general knowledge GPT-3.5 has about companies does more harm than good in judging news sentiment in in-sample backtesting. This conclusion is robust in that it holds across two different news sources. The effect is particularly strong among larger companies, about which we expect GPT-3.5 to have greater general knowledge.
When we dig deeper into the drivers of these results, we find that GPT-3.5 suffers from a kind of overconfidence. We compare two scenarios -- one in which GPT-3.5 misjudges the sentiment in the original headline and judges the replaced headline as neutral, and one in which GPT-3.5 misjudges the sentiment in the replaced headline and judges the original headline as neutral. These are scenarios in which the distraction effect is, respectively, harmful and helpful. The losses when distraction is harmful are greater than the losses from failing to benefit from distraction when its effect is helpful. GPT-3.5 would do better by better recognizing what it does not know.
The performance comparison between the original and replaced headlines loses statistical significance out-of-sample, suggesting that most of the distraction effect we observe in-sample comes from knowledge of the future. However, for larger companies, we find borderline significant evidence that the replaced headlines outperform even out-of-sample. In other words, anonymizing news text is not just a tool for measuring in-sample bias; it may also be helpful in improving out-of-sample performance.
Section 2 describes our data sources, trading strategies, and anonymization process. Section 3 presents our main findings, including in-sample and out-of-sample comparisons of trading strategies using original and replaced headlines; average returns conditional on original and replaced responses; the influence
Figure 1: A scenario where look-ahead bias could potentially impact stock return predictions, and how it can be avoided through entity anonymization.
of company size; the predictive power of GPT scores; and a comparison of market betas. Section 4 summarizes our conclusions. Supplementary results are included in an appendix.
## 2 Methods
### Data
Our in-sample period begins in January 2015 and ends in September 2021, when GPT-3.5's training data ends (OpenAI, 2023). Our out-of-sample period begins in October 2021 and ends in December 2022.
We use datasets from the Center for Research in Security Prices (CRSP) to obtain daily stock market data, specifically daily open and close prices, daily listed shares, and daily market cap. We use this information to calculate daily open-to-close returns and daily close-to-close returns for individual companies.
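As a rough illustration, the two return series can be built from a CRSP-style daily price file along the following lines (a minimal pandas sketch; the file name and column names are placeholders, not the actual CRSP field names):

```python
import pandas as pd

# Hypothetical CRSP-style daily file with columns: permno, date, open, close.
crsp = pd.read_csv("crsp_daily.csv", parse_dates=["date"]).sort_values(["permno", "date"])

# Open-to-close return: position opened at the open and closed out the same day.
crsp["ret_oc"] = crsp["close"] / crsp["open"] - 1.0

# Close-to-close return: previous close to today's close, computed within each security.
crsp["ret_cc"] = crsp.groupby("permno")["close"].pct_change()
```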
We use two sets of news headlines, which we refer to as the "scraped" and Thomson Reuters (or TR) datasets. To assemble our scraped collection, we use RavenPack as a guide. RavenPack is a popular provider of news from Dow Jones, but it prohibits uploading of its text to third parties, including GPT-3.5. We therefore search the web to find news headlines about specific companies on days RavenPack has news about those companies to assemble our own collection of headlines. We map each company's RavenPack ID to its CRSP security ID to match each company-headline pair with stock return data. This process yields 129,431 headlines over our sample period, representing 6,723 companies that were both mentioned in a RavenPack news item and included in the CRSP universe of stocks. The Dow Jones data underlying RavenPack covers a broad range of companies.
For our second news source we use the Thomson Reuters data assembled in Glasserman, Li, and Mamaysky (2022) for S&P 500 companies. That collection includes full news articles, but here we use only the headlines. Each article was previously linked to the CRSP ID of each S&P 500 company mentioned in the article through a procedure described in Glasserman, Li, and Mamaysky (2022). This dataset yields 181,908 headlines representing 678 S&P 500 companies over the sample period, all of which were both mentioned in at least one Thomson Reuters news headline and contained in the CRSP stock return dataset. The most important difference between the two collections is the median market cap (2.59 billion USD for scraped, 56.74 billion USD for Thomson Reuters). For a more extensive comparison of the Thomson Reuters and Dow Jones news sources, see Glasserman, Li, and Mamaysky (2022).
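The matching step for either collection can be sketched as a pair of joins (hypothetical file and column names; the exact market-period alignment described in Section 2.2 is omitted here):

```python
import pandas as pd

# Hypothetical inputs: headlines carry a RavenPack entity id; a crosswalk maps it to a CRSP permno.
headlines = pd.read_csv("headlines.csv", parse_dates=["timestamp"])  # rp_entity_id, timestamp, headline
crosswalk = pd.read_csv("rp_to_crsp.csv")                            # rp_entity_id, permno

matched = headlines.merge(crosswalk, on="rp_entity_id", how="inner")
matched["date"] = matched["timestamp"].dt.normalize()                # calendar day of the headline

# Attach that security's daily returns (the `crsp` frame from the sketch above).
matched = matched.merge(crsp, on=["permno", "date"], how="inner")
```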
### GPT Usage
We follow the methodology of Lopez-Lira and Tang (2023) to perform sentiment analysis of news headlines using GPT-3.5. We use the following prompt as input to the language model:
Forget all your previous instructions. Pretend you are a financial expert. You are a financial expert with stock recommendation experience. Answer "YES" if good news, "NO" if bad news, or "UNKNOWN" if uncertain in the first line. Then elaborate with one short and concise sentence on the next line. Is this headline good or bad for the stock price of [company name] in the short term?
Headline: [headline]
We use the GPT-3.5-Turbo version from OpenAI, which is the language model that powers ChatGPT. We set the model's temperature, a parameter that controls the determinism of the output, to 0 (most predictable) in order to maximize reproducibility of results.
Using the prompt, we ask the model to provide a recommendation and analysis for each headline, quantifying the results by mapping "YES" to 1, "UNKNOWN" to 0, and "NO" to -1. We then align each score with the appropriate market period, following Lopez-Lira and Tang (2023). For headlines that appear after 4 P.M. ET and before 6 A.M., the stock is bought at the next opening price and sold at the next closing price; for headlines after 6 A.M. and before 4 P.M., the stock is bought at the closing
price and sold at the next day's close. Scores are averaged if there are multiple headlines for a company within the same market period.
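A minimal sketch of the scoring step is shown below, assuming the pre-1.0 `openai` Python package interface for GPT-3.5-Turbo; the prompt text and the label-to-score mapping are taken from above, while the function name and the fallback handling are our own:

```python
import openai

PROMPT = (
    "Forget all your previous instructions. Pretend you are a financial expert. "
    "You are a financial expert with stock recommendation experience. "
    'Answer "YES" if good news, "NO" if bad news, or "UNKNOWN" if uncertain in the first line. '
    "Then elaborate with one short and concise sentence on the next line. "
    "Is this headline good or bad for the stock price of {company} in the short term?\n"
    "Headline: {headline}"
)

LABEL_TO_SCORE = {"YES": 1, "UNKNOWN": 0, "NO": -1}

def gpt_score(company: str, headline: str) -> int:
    """Return the +1 / 0 / -1 sentiment score for a single headline."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # most deterministic setting, for reproducibility
        messages=[{"role": "user",
                   "content": PROMPT.format(company=company, headline=headline)}],
    )
    first_line = resp["choices"][0]["message"]["content"].strip().splitlines()[0]
    # Fall back to 0 (neutral) if the first line is not one of the three expected labels.
    return LABEL_TO_SCORE.get(first_line.strip(' ."').upper(), 0)
```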
### Trading Strategy
To evaluate the performance of the GPT recommendations and to analyze look-ahead bias, we examine three types of trading strategies advised by GPT scores: long-only, short-only, and long-short. The long-only strategy buys all companies with positive GPT scores and holds them for one day (open to close or close to close, as explained in Section 2.2). The short-only strategy short-sells all companies with negative GPT scores for one day. The long-short strategy does both. If a strategy does not have any corresponding recommendations within a given market period, it does not execute any trades. All portfolios are equal-weighted and rebalanced at the end of each market day.
In more detail, the long-only strategy works as follows. At each market open, we identify all companies with a GPT score greater than 0 (corresponding to positive recommendations from news headlines) and calculate their average intraday market return over the subsequent day. At market close, we again identify all companies with a GPT score greater than 0 and calculate their average subsequent close-to-close market return. We then calculate the total return for the day by computing the weighted average of the intraday market return and the overnight market return, weighted based on the number of companies contributing to each average. The short-only strategy works analogously, based on scores less than 0.
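The daily long-short return can be computed roughly as follows (a simplified pandas sketch with hypothetical column names; pooling all positions and taking an equal-weighted mean is equivalent to the count-weighted average of the intraday and overnight legs described above):

```python
import numpy as np
import pandas as pd

def daily_long_short_return(day: pd.DataFrame) -> float:
    """
    `day` holds one trading day's observations with columns:
      score : averaged GPT score for a company in its market period
      ret   : the matching return (open-to-close or close-to-close)
    """
    longs = day.loc[day["score"] > 0, "ret"]
    shorts = -day.loc[day["score"] < 0, "ret"]          # a short position earns the negated return
    legs = pd.concat([longs, shorts])
    return float(legs.mean()) if len(legs) else np.nan  # no trades if there are no recommendations
```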
We stress that these strategies are not intended to be feasible; the in-sample versions clearly are not, and even out-of-sample we are ignoring transaction costs and short-sale constraints. We use these strategies as a mechanism for analyzing the de-biasing effect of anonymizing the input news text.
### Headline Anonymization
We explore the potential impact of look-ahead bias by first anonymizing the original headlines to hide the pertinent company name through a string replacement algorithm detailed below. Then, we repeat the GPT processing and implement trading strategies with these revised GPT scores.
First, we identify the official company name of the company that the headline discusses, which is available for all articles through RavenPack and our Thomson Reuters collection. We clean the official name by removing all punctuation, as well as common endings like "Inc", "LLC", or "Limited", which are unlikely to ever appear in standard news headlines.
For each headline, we iterate through each substring with length equal to the length of the cleaned company name and compare it to the cleaned company name using fuzzy string matching. Specifically, we perform a set operation that isolates common tokens, then sort the strings, and finally paste the tokens back together. Next, we compute the normalized InDel similarity score between the two remaining strings, which is a score between 0 and 100, where 100 represents two identical strings. If the similarity score is greater than a threshold of 80, we identify a match between the substring and the company name.
The InDel distance measures the number of insertions and deletions needed to transform one string into another. In the case of company names, it is common for companies to be referred to through condensed versions of their official names, with certain words added or removed, making the InDel distance a suitable metric to fulfill our purpose.
We then replace the substring with a random string, "Blahblahblah", thus anonymizing the actual company name.1 We then repeat this with all substrings of length less than the length of the company name to account for partial matches, for example to ensure that "Costco" is identified from the official name "Costco Wholesale Corporation". In addition, for companies with names that are at least two words long, we perform a case-sensitive search for an abbreviated version of the company's name (e.g., to find "IBM" from "International Business Machines" or "GE" from "General Electric").
Footnote 1: We obtained similar results using other random strings such as “abcdefghijklmnoportsuwxyz” or ”JREBEvphod” instead of “Blahblahblah”. Therefore, we continue to use “Blahblahblah” as our anonymized replacement string.
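A simplified sketch of the replacement step is given below, assuming the `rapidfuzz` library for the token-sort plus normalized InDel score; the shorter-substring and abbreviation passes described above are omitted for brevity, and the suffix list is illustrative:

```python
import re
from rapidfuzz import fuzz

REPLACEMENT = "Blahblahblah"
THRESHOLD = 80  # normalized InDel similarity on a 0-100 scale

def clean_name(name: str) -> str:
    """Strip punctuation and common corporate suffixes from the official company name."""
    name = re.sub(r"[^\w\s]", " ", name)
    name = re.sub(r"\b(Inc|LLC|Limited|Corp|Corporation|Co)\b", "", name, flags=re.I)
    return " ".join(name.split())

def anonymize(headline: str, official_name: str) -> str:
    """Replace a headline window that fuzzily matches the cleaned company name."""
    target = clean_name(official_name)
    words = headline.split()
    width = len(target.split())
    for i in range(max(len(words) - width + 1, 0)):
        window = " ".join(words[i:i + width])
        # Sort tokens, then score with the normalized InDel similarity (token_sort_ratio).
        if fuzz.token_sort_ratio(window, target) > THRESHOLD:
            return " ".join(words[:i] + [REPLACEMENT] + words[i + width:])
    return headline
```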
We also observe that with some headlines, it may not be sufficient to conceal just the name of the company, as related information can be equally revealing of its identity. For instance, most LLMs will
be able to identify that a replaced headline "Blahblahblah announces release of new iPhone" is relevant to Apple. This will be especially important for the Thomson Reuters dataset, which generally reports on larger companies that the LLM will possess more knowledge of.
To account for this, we make use of the Google Knowledge Graph, Google's public database that contains billions of facts about people, places, and things. For each company, we query the Knowledge Graph to extract the most relevant products and services, up to a limit of 20 (see Table 1 for example queries). We then search for each product or service in a given headline, and if found, replace it with the random string. Table 2 provides examples of the replacement process; note, for example, that "Xbox" is replaced along with "Microsoft."
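The lookup can be sketched with the public Knowledge Graph Search API endpoint (the API key is a placeholder, and error handling is omitted):

```python
import requests

KG_ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def related_products(company: str, api_key: str, limit: int = 20) -> list[str]:
    """Return the most relevant Knowledge Graph entity names for a company query."""
    params = {"query": company, "key": api_key, "limit": limit}
    items = requests.get(KG_ENDPOINT, params=params, timeout=10).json().get("itemListElement", [])
    return [it["result"]["name"] for it in items if "result" in it]
```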
We also tried identifying company names with the Python library spaCy using named entity recognition, a natural language processing method that detects and categorizes important information in text. However, we observed that the library often failed to properly identify companies and organizations and frequently removed important content within the headline. Because such inaccuracies could significantly alter the meaning or substance of an article, we instead chose to develop the replacement algorithm detailed above.
Our anonymization algorithm identifies and replaces the relevant company name or a relevant product or service in 100,401 out of 129,431 cases in the scraped headlines and 180,237 out of 181,908 cases in the Thomson Reuters headlines. Headlines that were not modified by the algorithm were processed as is in both sets of headlines, as they do not directly mention the relevant company and therefore can be used as textual input for both the original trading strategy and the strategy using replaced headlines.
Throughout the paper, we will refer to news headlines as either "original" or "replaced", where the latter indicates that the headline has been modified such that relevant company identifiers have been replaced and anonymized. Original headlines will produce "original scores" that advise "original portfolios," and replaced headlines will produce "replaced scores" that generate "replaced portfolios". Our anonymization procedure yields separate sets of replaced and original headlines for the scraped data and the Thomson Reuters data.
| Query | Results |
| --- | --- |
| Apple | Apple, Apple, Apple Watch, AirPods, iPhone, iPad, Apples, AirPods Pro, iPhone 11, Apple TV |
| Microsoft | Microsoft Corporation, Microsoft 365, MSN, Microsoft Bing, OneDrive, Microsoft Word, Microsoft PowerPoint, Microsoft Office, Xbox, Microsoft Edge |
| Amazon | Amazon.com, Amazon Prime Video, Amazon Prime, Amazon Alexa, Amazon Music, Amazon Kindle, Amazon Fresh, Amazon Web Services, Amazon Echo, Amazon Luna |
Table 1: **Examples of Google Knowledge Graph Queries.** In a headline about Apple, we would anonymize all the terms related to Apple.
## 3 Results
We now compare the performance of trading strategies using original and replaced headlines, conducting the comparison separately in-sample and out-of-sample and separately for each news source. We then drill down to compare returns conditional on various combinations of responses from GPT-3.5 to the original and replaced headlines. We investigate how company size influences the response to the original and replaced headlines. We compare the predictive value of sentiment scores based on the two types of headlines, and we compare market betas for original and replaced portfolios.
### Trading Performance
We begin by confirming the out-of-sample profitability of trading on GPT-3.5's sentiment scores. Figure 2 shows that the long-short trading strategies implemented using the original scraped and original TR headlines as input and GPT-3.5's output substantially outperform market returns over the out-of-sample period. The results in the figure generally align with Lopez-Lira and Tang (2023), although Lopez-Lira and Tang (2023) report even higher long-short returns. We are using a simplified version of their trading strategy (and not using RavenPack's relevance score, for example) because our objective is to measure and mitigate in-sample bias. A simpler strategy with less fine-tuning provides greater transparency for the effect of our de-biasing approach. In addition, our headline collections differ from the headlines used in Lopez-Lira and Tang (2023).
In Figure 2, trading on the scraped headlines generates higher returns than trading on the TR headlines. This may be due to a greater supply of bad-news stories in the scraped dataset, which provide short-selling opportunities: the scraped "All News" portfolio substantially underperforms the market in Figure 2. The TR "All News" portfolio is roughly in line with the market, suggesting that the TR headlines provide fewer profitable signals during the out-of-sample period. We will have more to say about the news sources in subsequent sections, and we examine returns on the long and short sides separately in the appendix.
| Official company name | Original headline | Replaced headline |
| --- | --- | --- |
| Walgreens Boots Alliance Inc. | Walgreens 4Q Profit Falls as Oversea Unit Struggles Amid Pandemic | Blahblahblah 4Q Profit Falls as Oversea Unit Struggles Amid Pandemic |
| Microsoft Corporation | 'Halo Infinite' delayed, will not debut with new Xbox; Microsoft stock slips | 'Halo Infinite' delayed, will not debut with new Blahblahblah; Blahblahblah stock slips |
| Phillips Petroleum Company | Analysis: Market for U.S. oil acreage booms along with crude price recovery | Analysis: Market for U.S. oil acreage booms along with crude price recovery |
| Advanced Micro Devices Inc. | AMD's Stock Sinks, Xilinx Soars After WSJ Report Of Advanced Merger Talks – MarketWatch | Blahblahblah's Stock Sinks, Xilinx Soars After WSJ Report Of Advanced Merger Talks – MarketWatch |
| Xilinx Inc. | AMD's Stock Sinks, Xilinx Soars After WSJ Report Of Advanced Merger Talks – MarketWatch | AMD's Stock Sinks, Blahblahblah Soars After WSJ Report Of Advanced Merger Talks – MarketWatch |
Table 2: Select samples of the results of anonymization algorithm. The algorithm extracts and replaces words and phrases that can identify a company, such as its name, abbreviation, or products. In the first row, just the company name is replaced, and in the second example, the company name and a relevant product are replaced. In the third example, the original and replaced headlines are identical because the company is not directly mentioned, while in the fourth example, the company’s abbreviation is identified and replaced. The fifth example, a repeat of the fourth but with a different company to be identified, shows that the replaced strings are contingent on the company to be named in the GPT prompt.
### Comparing Daily Average Portfolio Returns
We now turn to comparisons of the average daily returns using original and replaced headlines. We compare results for these two strategies separately during the in-sample and out-of-sample periods and separately for each of our two news sources.
Table 3 summarizes our results. The top panel compares the original and replaced strategies during the in-sample period. For both news sources _the replaced headlines generate higher average returns than the original headlines._ (Specifically, we see an outperformance of \(30.97-25.08=5.89\) basis points per day using the scraped headlines and \(81.64-77.27=4.37\) using the TR headlines.) The difference between the replaced and original mean returns is statistically significant using the TR data (\(p=0.017\)) and borderline significant (\(p=0.066\)) using the scraped data because of the higher volatility of those returns.
The outperformance by the replaced headlines is surprising -- we would expect that the strategy using the original headlines would benefit from look-ahead bias in its in-sample performance. The fact that the original headlines underperform indicates that GPT's general knowledge about companies -- the distraction effect -- negatively affects its ability to evaluate news sentiment and outweighs any positive look-ahead bias. We will examine this effect in greater detail in the next sections.
The lower panel of Table 3 compares the performance of original and replaced headlines in the out-of-sample period. Using the scraped data, the difference in means changes sign but the difference is not statistically significant. The comparison using the TR data suggests that the negative distraction effect persists out-of-sample, although the difference is now only borderline significant. We will return to this contrast between the two news sources in Section 3.4.
### Performance of Replaced Headlines
#### 3.3.1 Illustrative Examples
To better illustrate the phenomenon suggested by Table 3, we analyze some specific examples of how entity anonymization can affect the output that GPT-3.5 produces.
Figure 2: **Out-of-sample trading performance using the original scraped and original TR headlines.** The dotted red line and the dashed red line show cumulative returns for the long-short strategy using original scraped and original TR headlines, respectively. The dotted green line and the dashed green line show cumulative returns from holding all companies with any news on the previous day, based on the scraped and TR headlines, respectively. For comparison, the yellow and purple lines show equal-weighted and value-weighted cumulative returns to the overall market.
The first example of Table 4 relates to Facebook (now Meta) through a headline reporting on their decision to not extend European privacy laws internationally. GPT-3.5 interprets this headline as good news for Facebook's stock, as it predicts that fewer privacy laws globally will reduce regulatory challenges. When the headline is modified such that the word Facebook is replaced with a random string, GPT interprets the headline as bad news due to potentially reduced user base as a result of fewer privacy laws. In reality, following the headline, the stock of Facebook decreased by 0.6%, meaning that the replaced response produced the correct investment decision. However, in the longer term, throughout the remainder of the quarter, the stock of Facebook increased by over 25% due to greater worldwide
| Firm | Headline | Original Response | Replaced Response | Short-term | Long-term |
| --- | --- | --- | --- | --- | --- |
| Facebook | Facebook not planning to extend European privacy law globally | YES - good for the stock in the short term, as not extending European privacy laws globally avoids additional regulatory challenges and potential fines. | NO - bad for the stock in the short term, as it indicates the company may face regulatory challenges and potential limitations in expanding its user base globally. | Stock down 0.6% | Stock up over 25% through the remainder of the quarter |
| Allergan | Pfizer to buy Allergan in 160 billion deal | NO - bad for the stock in the short term, as the acquisition may bring increased debt and integration challenges. | YES - good for the stock in the short term, as the acquisition by Pfizer in a significant deal indicates potential growth and expansion for the company. | Stock up 6% | Deal failed 6 months later, stock down 15% |
Table 4: **Select examples where entity anonymization significantly affects the output that GPT-3.5 produces.** The original response is GPT-3.5’s response when provided the headline as is. The replaced response is GPT-3.5’s response when provided the headline with the relevant company name replaced with a random string. The short-term outcome is the immediate movement of the company’s stock in the next trading period (i.e. the return that the trading strategies advised by GPT output would attain). The long-term outcome is the long-term impact of the given headline on the company’s stock.
| | | Scraped, Original | Scraped, Replaced | Thomson Reuters, Original | Thomson Reuters, Replaced |
| --- | --- | --- | --- | --- | --- |
| In-sample | No. of obs. | 1699 | 1699 | 1695 | 1695 |
| | Mean | 25.08 | 30.97 | 10.74 | 13.84 |
| | Std. dev. | 183.87 | 219.32 | 77.27 | 81.64 |
| | t-stat | -1.84 | | -2.38 | |
| | p-value | 0.066 | | 0.017 | |
| Out-of-sample | No. of obs. | 314 | 314 | 314 | 314 |
| | Mean | 16.32 | 11.09 | 6.07 | 12.23 |
| | Std. dev. | 141.52 | 148.26 | 85.25 | 98.21 |
| | t-stat | 1.20 | | -1.86 | |
| | p-value | 0.23 | | 0.064 | |
Table 3: **Summary statistics and significance tests comparing long-short portfolios advised by original headlines and replaced headlines.** Each observation represents the average return of the portfolio on a given trading day. P-values are calculated from paired t-tests, with returns on the same day paired. Each t-stat tests the difference in the means immediately above it. For analysis with long-only portfolios and short-only portfolios, see the appendix.
adoption. While it is difficult to determine precisely what influences the LLM to make its decisions, given that the only modification to the headline between these two responses was the anonymization of Facebook, it appears likely that GPT-3.5--when processing the original headline--leverages some type of existing knowledge of Facebook's stock performance rather than solely its language processing capabilities when making this prediction.
In the second example, the headline reports that pharmaceutical company Pfizer is set to buy its counterpart Allergan in a 160 billion dollar deal. Given the original headline, GPT-3.5's response is that the news is bad for Allergan's stock, as the acquisition may cause increased debt and integration challenges. With the anonymized headline, the replaced response is that the headline is good news because it indicates potential growth and expansion for the company. Similarly, given that the only change to the headline was the presence or absence of Allergan's name, it seems likely that GPT-3.5 attempted to utilize its preexisting knowledge of Allergan to generate a prediction given the original headline. Indeed, while Allergan's stock significantly increased immediately following the headline--making the replaced response again correct--the deal ultimately fell through 6 months later, causing the stock of Allergan to plunge 15%. Given the size and scale of this attempted acquisition in the pharma industry, GPT-3.5 may have gained knowledge of the event and its aftermath through its training data, which then influenced or interfered with its evaluation of the headline's sentiment.
#### 3.3.2 Classification Analysis Based on Response Accuracy
We now argue that the in-sample outperformance of the replaced strategy is due, in part, to a type of "overconfidence" in GPT-3.5: some of the largest losses affecting the original strategy occur in scenarios where GPT would provide a neutral response to the replaced headline, but it responds positively or negatively to the original headline.
To make this type of comparison, we classify every observation into one of nine categories, depending on the "correctness" of GPT-3.5's original response and replaced response. A response is deemed correct if the recommendation aligns with the direction of the daily stock return; for instance, a GPT score of 1 is correct if the relevant company experiences positive returns on the next trading day.
We begin by dividing the headlines into the in-sample period and the out-of-sample period. We then categorize every observation. For each category, we compute an original return (the average return of a long-short portfolio generated with the original GPT scores of the observations in the given subset) and a replaced return (average return of a long-short portfolio generated with the replaced GPT scores of the observations in the given subset). Additionally, we document the total proportion of observations within each category.
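A compact sketch of this categorization (hypothetical column names; returns and scores are assumed to be aligned to the correct market period) is:

```python
import numpy as np
import pandas as pd

def label(score: float, ret: float) -> str:
    """'correct' if the sign of the GPT score matches the sign of the next-period return."""
    if score == 0:
        return "neutral"
    return "correct" if np.sign(score) == np.sign(ret) else "incorrect"

def classify(df: pd.DataFrame) -> pd.DataFrame:
    """df holds one row per headline with columns orig_score, rep_score, ret."""
    cats = df.assign(
        orig_cat=[label(s, r) for s, r in zip(df["orig_score"], df["ret"])],
        rep_cat=[label(s, r) for s, r in zip(df["rep_score"], df["ret"])],
    )
    # Share of observations in each of the nine cells, in percent.
    return pd.crosstab(cats["orig_cat"], cats["rep_cat"], normalize="all") * 100
```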
Table 6 summarizes this analysis. We focus on the in-sample comparison. From the table we see that the replaced headlines produce the wrong answer \(38.95\%\) (\(=2.59\%+20.48\%+15.88\%\)) of the time whereas the original headlines are wrong only \(17.6\%\) (\(=0.02\%+1.7\%+15.88\%\)) of the time. But when the original response is wrong and the replaced response is neutral, the original strategy loses 242 basis points, on average. In contrast, when the replaced response is wrong and the original response is neutral, the replaced strategy loses only 97 basis points, on average. Thus, although the original response is wrong less often, when it is wrong (and the replaced response remains neutral) it suffers comparatively large losses. The original headlines lead GPT to make some very bad trades that the replaced strategy
| Category | Definition |
| --- | --- |
| orig ✓ | The original GPT score is correct |
| orig ∅ | The original GPT score is zero |
| orig X | The original GPT score is incorrect |
| rep ✓ | The replaced GPT score is correct |
| rep ∅ | The replaced GPT score is zero |
| rep X | The replaced GPT score is incorrect |
Table 5: The classifications that place each observation into one of nine categories, depending on the correctness of its original GPT score and its replaced GPT score.
sits out. We see a similar pattern out-of-sample, where we can compare the losses of 289 basis points and 130 basis points in the corresponding scenarios.
Table 7 reports the corresponding results using the TR headlines. The results are similar but show an even greater advantage to the replaced headlines. In-sample, the original response is now incorrect in over 16% of observations, while the replaced response is incorrect in fewer than 13% of observations. We again see an "overconfidence" effect: when the original response is wrong and the replaced response is neutral, the original strategy loses 138 basis points; the reverse scenario has an average loss of only 91 basis points. A similar comparison holds out-of-sample.
The results of Tables 6 and 7 suggest that the types of errors shown in Table 4 occur on a broader scale.
The analysis in Tables 6 and 7 is based on the sentiment scores of individual headlines. Our trading strategy averages scores for all headlines about a given company each market period, and the weight given to each company depends on the total number of companies with news that day. The mean returns in Tables 6 and 7 are therefore not directly comparable to the mean strategy returns in Table 3. Tables 6 and 7 allow us to focus more directly on the classification errors in individual headlines, while the analysis performed in Table 3 is conducted at the portfolio level.
### Market Cap of Recommended Companies
We expect GPT-3.5 to be exposed to more text about larger companies in its training and therefore to be more likely to return positive or negative responses to news headlines about these companies. With this idea in mind, we examine the market cap of companies for which GPT-3.5 makes buy or sell recommendations. We do this using the scraped headlines because our TR data is already restricted to S&P 500 companies.
Table 8 compares the average market cap of companies in the long-only and short-only portfolios and all other stocks mentioned in our scraped headlines. The table shows that, indeed, the companies
In-sample:

| | rep ✓ (orig_ret / rep_ret / prop) | rep ∅ (orig_ret / rep_ret / prop) | rep X (orig_ret / rep_ret / prop) |
| --- | --- | --- | --- |
| orig ✓ | 318 / 318 / 15.11% | 269 / 0 / 1.69% | 278 / -4 / 2.59% |
| orig ∅ | 0 / 350 / 1.27% | 0 / 0 / 41.26% | 0 / -97 / 20.48% |
| orig X | -208 / 209 / 0.02% | -242 / 0 / 1.70% | -268 / -267 / 15.88% |

Out-of-sample:

| | rep ✓ (orig_ret / rep_ret / prop) | rep ∅ (orig_ret / rep_ret / prop) | rep X (orig_ret / rep_ret / prop) |
| --- | --- | --- | --- |
| orig ✓ | 313 / 313 / 20.42% | 319 / 0 / 1.81% | 283 / -5 / 2.08% |
| orig ∅ | 0 / 224 / 1.52% | 0 / 0 / 37.51% | 0 / -130 / 13.50% |
| orig X | -658 / 670 / 0.01% | -289 / 0 / 1.92% | -290 / -290 / 21.23% |
Table 6: **Classification analysis of scraped headlines.** The orig_ret and rep_ret columns refer to the original return and replaced return, measured in basis points, and the prop column refers to the proportion of observations in each category.
In-sample:

| | rep ✓ (orig_ret / rep_ret / prop) | rep ∅ (orig_ret / rep_ret / prop) | rep X (orig_ret / rep_ret / prop) |
| --- | --- | --- | --- |
| orig ✓ | 185 / 185 / 11.70% | 141 / 0 / 5.17% | 24 / -10 / 0.20% |
| orig ∅ | 0 / 98 / 0.97% | 0 / 0 / 64.19% | 0 / -91 / 1.61% |
| orig X | -16 / 16 / 0.09% | -138 / 0 / 4.99% | -164 / -164 / 11.10% |

Out-of-sample:

| | rep ✓ (orig_ret / rep_ret / prop) | rep ∅ (orig_ret / rep_ret / prop) | rep X (orig_ret / rep_ret / prop) |
| --- | --- | --- | --- |
| orig ✓ | 222 / 222 / 13.88% | 187 / 0 / 4.62% | 27 / -6 / 0.25% |
| orig ∅ | 0 / 118 / 0.95% | 0 / 0 / 62.60% | 0 / -101 / 1.53% |
| orig X | -11 / 11 / 0.07% | -176 / 0 / 2.66% | -213 / -213 / 13.39% |
Table 7: **Classification analysis of Thomson Reuters headlines.** The orig_ret and rep_ret columns refer to the original return and replaced return, measured in basis points, and the prop column refers to the proportion of observations in each category.
traded by GPT-3.5 are significantly larger than the average company with news. In-sample, this pattern is consistent with both look-ahead bias and a distraction effect. But the pattern persists out-of-sample, where it cannot be explained by look-ahead bias and is thus more likely due to a distraction effect: GPT is more likely to provide a positive or negative recommendation when it knows more about a company, even in the absence of specific information relevant to a headline, and we expect it to know more about larger companies.
This tendency may also explain why, in out-of-sample trading on TR headlines, the replaced strategy outperforms the original strategy by a borderline significant amount; see Table 3. The stocks covered by our TR headlines are all S&P 500 stocks and are therefore larger, on average, than most other stocks. If GPT-3.5 tends to be misled by exogenous information, then anonymizing headlines should be most effective for larger stocks.
### Predictive Power of GPT Scores
Section 3.2 compared original and replaced scores through trading strategies. We can also measure the predictive power of the scores directly by regressing returns on the scores. Our basic specification takes the form
\[r_{i,t+1}=a_{i}+b_{t}+\beta x_{i,t}+\epsilon_{i,t+1}, \tag{1}\]
where \(r_{i,t+1}\) is the single-period return of firm \(i\) from \(t\) to \(t+1\); \(a_{i}\) and \(b_{t}\) are firm-fixed and time-fixed effects, respectively; and \(x_{i,t}\) is the GPT score for firm \(i\) in period \(t\). Recall that the return \(r_{i,t+1}\) is either an intraday return (in which case \(x_{i,t}\) is calculated from news in the prior 4 P.M. to 6 A.M. period) or a close-to-close return (in which case \(x_{i,t}\) is calculated from news in the prior 6 A.M. to 4 P.M. period).
The score \(x_{i,t}\) may be based on original or replaced headlines, resulting in respective coefficients \(\beta_{1}\) and \(\beta_{2}\). We want to compare these coefficients and, in particular, test the hypothesis
\[H_{0}:\beta_{1}=\beta_{2}. \tag{2}\]
The original and replaced scores are on the same scale, so comparing the coefficients is meaningful.
To test (2), we set up two versions of equation (1)--one using original headlines and one using replaced headlines--as a system of seemingly unrelated equations to estimate them jointly. We stack \(r_{i,t+1}\) on top of itself, include both sentiment scores as independent variables, and pad the remaining values with 0 to form the following equation:
\[\begin{pmatrix}r_{i,t+1}\\ r_{i,t+1}\end{pmatrix}=\begin{pmatrix}x_{1,t}&0\\ 0&x_{2,t}\end{pmatrix}\begin{pmatrix}\beta_{1}\\ \beta_{2}\end{pmatrix}+\begin{pmatrix}a_{1,i}\\ a_{2,i}\end{pmatrix}+\begin{pmatrix}b_{1,t}\\ b_{2,t}\end{pmatrix}+\begin{pmatrix}e_{1,i,t+1}\\ e_{2,i,t+1}\end{pmatrix} \tag{3}\]
This regression estimates the two coefficients jointly and allows us to test (2) using a Wald chi-squared
| | | long | short | other | long - other | short - other |
| --- | --- | --- | --- | --- | --- | --- |
| In-sample | Mean | 35.37 | 42.37 | 17.91 | 17.46*** | 24.46*** |
| | Std. dev. | 129.17 | 139.27 | 93.22 | | |
| | No. of obs. | 27454 | 9473 | 62898 | | |
| Out-of-sample | Mean | 46.30 | 70.32 | 32.18 | 14.12*** | 38.15*** |
| | Std. dev. | 143.74 | 217.04 | 123.47 | | |
| | No. of obs. | 9602 | 4452 | 15552 | | |

\* p < 0.10, ** p < 0.05, *** p < 0.01
Table 8: Comparison of the average market cap of companies recommended by GPT-3.5 to other companies with news, using scraped headlines. Each observation is the market cap of a company within the given category, as measured in billions of USD. The “long” and “short” categories represent the average market cap of companies that GPT-3.5 recommended to buy or recommended to short-sell, respectively. The “other” category represents the average market cap of all other companies with news.
test.
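A much-simplified version of this stacked estimation can be sketched with statsmodels as below; for brevity the firm and time fixed effects and the two-way clustering used in the paper are omitted, and the column names are placeholders:

```python
import pandas as pd
import statsmodels.formula.api as smf

def stacked_wald(df: pd.DataFrame):
    """df has columns ret, orig_score, rep_score, one row per firm-period observation."""
    # Stack the panel twice: the top copy carries the original score, the bottom copy the
    # replaced score, with the other regressor padded with zeros, so both slopes are
    # estimated jointly in a single regression.
    top = df.assign(x_orig=df["orig_score"], x_rep=0.0)
    bottom = df.assign(x_orig=0.0, x_rep=df["rep_score"])
    stacked = pd.concat([top, bottom], ignore_index=True)
    fit = smf.ols("ret ~ x_orig + x_rep", data=stacked).fit(cov_type="HC1")
    return fit, fit.wald_test("x_orig = x_rep")  # H0: the two coefficients are equal
```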
Table 9 reports results for four regressions of the form in (3), using scraped and TR headlines, in-sample and out-of-sample. Each column reports the coefficients for the original and replaced scores. The Wald test and the p-value at the bottom of each column test equality of the coefficients for original and replaced scores.
In each of the four columns, the replaced-score coefficient is at least as large as the original-score coefficient, indicating that anonymizing the headlines increases (or does not decrease) the predictive value of the scores. Using the scraped headlines, the difference between the original and replaced coefficients is not statistically significant, but the magnitude of the difference is greater in-sample than out-of-sample. Using the TR headlines, the difference is highly significant in-sample and borderline significant out-of-sample. These findings are consistent with patterns discussed previously: the distraction effect appears to be stronger than the look-ahead bias, particularly among the larger companies covered by the TR data; the distraction effect can persist even out-of-sample, indicating that anonymization can be helpful even for out-of-sample performance.
### Comparing Market Betas of the Strategies
Investors generally prefer strategies with lower correlation to overall market performance. Lower correlation offers greater diversification, and it suggests that a strategy's returns are driven by information unrelated to the overall market. We are therefore interested in comparing the performance of the original and replaced strategies through regression on the market. We run regressions of the form
\[r_{t+1}=\beta_{0}+\beta_{1}(rm-rf)_{t}+\epsilon_{t+1}, \tag{4}\]
where \(r_{t+1}\) is a portfolio return from \(t\) to \(t+1\), and \((rm-rf)_{t}\) is the market return minus the risk-free rate at time \(t\), obtained from the Kenneth French data library French (2023). We measure returns in basis points.
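A minimal sketch of this regression, assuming `pandas_datareader`'s Fama-French reader for the daily market factor, is:

```python
import pandas_datareader.data as web
import statsmodels.formula.api as smf

# Daily factors from the Kenneth French data library; column 'Mkt-RF' is in percent.
ff = web.DataReader("F-F_Research_Data_Factors_daily", "famafrench")[0]
ff = ff.rename(columns={"Mkt-RF": "mkt_rf"})

def market_beta(strategy_returns):
    """strategy_returns: datetime-indexed pd.Series of daily portfolio returns in basis points."""
    df = strategy_returns.rename("ret").to_frame().join(ff["mkt_rf"] * 100)  # percent -> bps
    fit = smf.ols("ret ~ mkt_rf", data=df.dropna()).fit()
    return fit.params["mkt_rf"], fit.rsquared
```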
Given that our trading strategies rely on GPT recommendations from real-time news headlines, our portfolio returns should generally not be highly correlated with the return of the market. For instance, if the overall market is trending downwards, our model can still profit by buying good-news stocks and shorting bad-news stocks, as long as there is some short-term price underreaction to news about individual stocks.
Table 10 reports results for eight regressions of the form in (4), corresponding to the news source (scraped or TR), the time period (in-sample or out-of-sample), and the headline type (original or replaced). The market betas (the coefficients for rm-rf) are all small. More importantly, in every combination of source and period, the replaced R\({}^{2}\) is lower than the original R\({}^{2}\), and the replaced market beta is
| | Scraped, In-Sample | Scraped, Out-of-Sample | Thomson Reuters, In-Sample | Thomson Reuters, Out-of-Sample |
| --- | --- | --- | --- | --- |
| orig_score | 45.9*** | 23.5*** | 15.7*** | 13.4*** |
| std. error | 10.06 | 7.54 | 2.15 | 5.03 |
| t-stat | (4.556) | (3.116) | (7.328) | (2.661) |
| rep_score | 49.3*** | 23.5 | 20.9*** | 22.36*** |
| std. error | 10.83 | 7.60 | 2.64 | 6.15 |
| t-stat | (4.55) | (3.086) | (7.881) | (2.867) |
| Wald test | 1.587 | 0.000391 | 24.239 | 3.315 |
| p-value | 0.2077 | 0.9842 | 0.0000 | 0.0686 |
| No. of obs. | 199650 | 59212 | 314376 | 49440 |
Table 9: Stacked regressions to compare the coefficients of the original GPT score and the replaced GPT score for statistically significant differences. The formula of the regression is given by equation (3). Standard errors are clustered by time and firm. Each Wald test tests equality of the corresponding orig_score and rep_score coefficients.
lower than the original market beta. In fact, in all cases, the replaced market beta fails to be statistically significantly different from zero at the 5% level. The replaced strategy is less correlated with the market and less sensitive to the direction of the market.
The results in Table 10 may also shed light on the relative influences of look-ahead bias and the distraction effect. If GPT-3.5 were able to exploit look-ahead bias effectively, its in-sample profits should be highly idiosyncratic, driven by stocks with company-specific good or bad news. But for both news sources we see that the in-sample beta is larger than the out-of-sample, suggesting, once again, that the distraction effect outweighs GPT's ability to trade on look-ahead bias. The lowest betas of all are found for the out-of-sample performance of the replaced strategy. This is precisely the case in which we expect the distraction effect to be weakest and the sentiment signal in headlines about individual companies to be most idiosyncratic.
Table 11 confirms that the difference between in-sample and out-of-sample market betas reported in Table 10 is statistically significant for both news sources. This comparison treats returns in the two time periods as independent of each other.
## 4 Discussion
The power of large language models lies in their ability to learn from massive training data. This feature enables LLMs to make short-term stock price predictions from news text without having been trained to do so. But the use of massive training data also limits opportunities to design and backtest strategies that leverage LLMs. Any backtest that includes an LLM's training window is tainted by the LLM's exposure to information about subsequent events.
Our analysis shows that an LLM's in-sample knowledge can usefully be thought of in two parts, which we refer to as look-ahead bias and a distraction effect. Look-ahead bias overstates the performance of trading strategies based on an LLM's predictions, but the distraction effect can be positive or negative.
| | Scraped, In-Sample, Original | Scraped, In-Sample, Replaced | Scraped, Out-of-Sample, Original | Scraped, Out-of-Sample, Replaced | TR, In-Sample, Original | TR, In-Sample, Replaced | TR, Out-of-Sample, Original | TR, Out-of-Sample, Replaced |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| const | 12.77*** | 14.74*** | 18.78*** | 14.13*** | 9.09*** | 11.44*** | 3.77 | 3.54 |
| std. error | 3.917 | 4.389 | 6.623 | 6.384 | 1.783 | 2.007 | 4.117 | 4.979 |
| t-stat | (3.26) | (3.358) | (2.837) | (2.213) | (5.102) | (5.701) | (0.917) | (0.712) |
| rm-rf | 0.429*** | 0.348* | 0.273*** | 0.0850 | 0.317*** | 0.0677 | 0.220*** | 0.0221 |
| std. error | 0.0420 | 0.215 | 0.0532 | 0.0694 | 0.0192 | 0.0488 | 0.0331 | 0.0677 |
| t-stat | (10.217) | (1.620) | (5.131) | (1.226) | (16.571) | (1.389) | (6.654) | (0.326) |
| No. of obs. | 1699 | 1699 | 314 | 314 | 1695 | 1695 | 314 | 314 |
| R² | 0.058 | 0.033 | 0.078 | 0.009 | 0.14 | 0.005 | 0.124 | 0.001 |

\* p < 0.10, ** p < 0.05, *** p < 0.01
Table 10: Regression results of overall long-short portfolio returns on the excess return of the market.
| | Scraped | Thomson Reuters |
| --- | --- | --- |
| abs. diff. in coefficients | 0.1557** | 0.0973*** |
| std. error | 0.0678 | 0.0382 |
| t-stat | 2.30 | 2.55 |
| p-value | 0.011 | 0.005 |
Table 11: Independent samples t-test comparing the coefficient of the excess market return in-sample and the coefficient of the excess market return out-of-sample.
To measure and mitigate the bias in LLM in-sample predictions, we proposed an anonymization procedure that removes a company's name and other identifying information from the news text provided to the LLM. Surprisingly, anonymization _improves_ the in-sample performance of a long-short trading strategy driven by GPT-3.5 sentiment analysis of news headlines. We confirm this pattern in two different collections of news headlines. We conclude that the overall distraction effect must be negative and that it outweighs the positive gains from look-ahead bias. GPT-3.5 does not do a good job exploiting look-ahead bias, and the distraction effect adversely interferes with its ability to evaluate the sentiment in news text. This pattern is particularly evident for larger companies -- the companies we expect to be most represented in the GPT training data.
Out-of-sample performance is immune to look-ahead bias but not to the distraction effect. We find some evidence, although only borderline statistically significant, that anonymization is helpful in this setting as well. In other words, anonymization may be useful in improving GPT's out-of-sample sentiment analysis as well as in removing in-sample bias. |
2310.01423 | An Empirical Study of AI Generated Text Detection Tools | Since ChatGPT has emerged as a major AIGC model, providing high-quality
responses across a wide range of applications (including software development
and maintenance), it has attracted much interest from many individuals. ChatGPT
has great promise, but there are serious problems that might arise from its
misuse, especially in the realms of education and public safety. Several AIGC
detectors are available, and they have all been tested on genuine text.
However, more study is needed to see how effective they are for multi-domain
ChatGPT material. This study aims to fill this need by creating a multi-domain
dataset for testing the state-of-the-art APIs and tools for detecting
artificially generated information used by universities and other research
institutions. A large dataset consisting of articles, abstracts, stories, news,
and product reviews was created for this study. The second step is to use the
newly created dataset to put six tools through their paces. Six different
artificial intelligence (AI) text identification systems, including "GPTkit,"
"GPTZero," "Originality," "Sapling," "Writer," and "Zylalab," have accuracy
rates between 55.29 and 97.0%. Although all the tools fared well in the
evaluations, originality was particularly effective across the board. | Arslan Akram | 2023-09-27T12:44:12Z | http://arxiv.org/abs/2310.01423v1 | # An Empirical Study of AI Generated Text Detection Tools
###### Abstract
Since ChatGPT has emerged as a major AIGC model, providing high-quality responses across a wide range of applications (including software development and maintenance), it has attracted much interest from many individuals. ChatGPT has great promise, but there are serious problems that might arise from its misuse, especially in the realms of education and public safety. Several AIGC detectors are available, and they have all been tested on genuine text. However, more study is needed to see how effective they are for multi-domain ChatGPT material. This study aims to fill this need by creating a multi-domain dataset for testing the state-of-the-art APIs and tools for detecting artificially generated information used by universities and other research institutions. A large dataset consisting of articles, abstracts, stories, news, and product reviews was created for this study. The second step is to use the newly created dataset to put six tools through their paces. Six different artificial intelligence (AI) text identification systems, including "GPTkit," "GPTZero," "Originality," "Sapling," "Writer," and "Zylalab," have accuracy rates between 55.29 and 97.0%. Although all the tools fared well in the evaluations, originality was particularly effective across the board.
AI generated content detection, large language models, Classification, Text evaluation.
## 1 Introduction
Natural language generation (NLG) models developed more recently have significantly improved the control, variety, and quality of text generated by machines. Phishing [1], disinformation [2], fraudulent product reviews [3], academic dishonesty [4], and toxic spam all take advantage of NLG models' ability to generate novel, manipulable, human-like text at breakneck speeds and efficiencies. A major principle of trustworthy AI [5] addresses the possibility for misuse, such that benefits can be maximized while downsides are minimized. Annie Chechitelli, who serves as Turnitin's chief product officer, provided different instances in which the service discovered false positives. Chechitelli announced on May 14 that the program had processed 38.4 million entries; 9.6% of these submissions featured over 20% artificial intelligence writing, and 3.5% reported employing over 80% artificial intelligence writing [6].
NLG models are quite robust, but they still have shortcomings, such as generating grammatically valid yet semantically incoherent or counterfactual language. Even worse, such output can potentially influence public opinion [7], [8], and language models themselves may be contaminated by mass-produced text [9]. This approach to producing manuscripts can pose fresh and significant challenges to the veracity of scientific publishing and research. Recognizing literature produced by AI saves reviewers time, contributes to maintaining the integrity of the scientific community, and protects the public from receiving erroneous information. To enhance AI models and human cooperation with AI, researchers need to investigate the gaps between scientific literature generated by AI and scientific material written by humans. As a result, we focus on a scenario in which an AI writing assistant plays an important role in the
scientific community, and we evaluate the quality of scientific literature generated by AI compared to scientific language authored by humans.
In recent years, a lot of interest has been directed toward the potential of generative models such as ChatGPT to generate human-like writing, images, and other forms of media. An OpenAI-developed variant of the ubiquitous GPT-3 language model, ChatGPT is designed specifically for generating conversational text and can be adapted to tasks including answering questions, translating text, and generating new language [10]. Even though ChatGPT and other generative models have made great advances in creating human-like language, distinguishing between text generated by a machine and text written by a human remains difficult. This is of the utmost importance in applications such as content moderation, where it is important to identify and remove hazardous information and automated spam [11].
Recent work has centered on enhancing pre-trained models' capacity to identify text generated by artificial intelligence. Along with GPT-2, OpenAI also released a detection model consisting of a RoBERTa-based binary classifier trained to distinguish between human-written and GPT-2-generated text. Black et al. (2021) integrate source-domain data with in-domain labeled data to overcome the difficulty of detecting GPT-2-generated technical research literature [12]. DagPap22, a challenge and dataset on detecting machine-created scientific publications, was proposed by Kashnitsky et al. [13] during the COLING 2022 session on Scholarly Document Processing. Models such as GPT-3, GPT-Neo, and led-large-book-summary are examples of the generators used to produce abstracts. DagPap22's prompt templates necessitate including information on the primary topic and scientific structural function, making it more probable that the tool will collect problematic and easily-discoverable synthetic abstracts [14], [15]. More recently, GPTZero has been proposed to detect ChatGPT-generated text, relying primarily on perplexity. Recent studies have revealed two major issues that need addressing. First, every study cited here had to make do with small data samples, so a larger, more robust dataset is required to advance our understanding. Second, researchers have typically used mock data to fine-tune final versions of pre-trained models. Ideally, text created with various artificial intelligence programs should all be detectable by the same approach.
Recently developed algorithms for detecting AI-generated text can tell the difference between the two. These systems employ state-of-the-art models and algorithms to decipher text created by artificial intelligence. One API that can accurately identify AI-generated content is Check For AI, which analyses text samples. A further tool called Compilatio uses sophisticated algorithms to identify instances of plagiarism, even when they are present in AI-generated content. Similarly, Content at Scale assesses patterns, writing style, and other language properties to spot artificially generated text. Crossplag is an application programming interface (API) that can detect AI-generated text in a file. The DetectGPT artificial intelligence content detector can easily identify GPT model-generated text. We use Go Winston to identify artificially manufactured news content and social media content. Machine learning and linguistic analysis are used by GPT Zero to identify AI-generated text. The GPT-2 Output Detector Demo over at OpenAI makes it simple to test if a text was produced by a GPT-2 model. OpenAI Text Classifier uses an application programming interface to categorize text, including text generated by artificial intelligence. In addition to other anti-plagiarism features, PlagiarismCheck may identify information created by artificial intelligence. Turnitin is another well-known tool that uses AI-generated text detection to prevent plagiarism. The Writeful GPT Detector is a web-based tool that uses pattern recognition to identify artificially produced text. Last but not least, the Writer can spot computer-generated text and check the authenticity and originality of written materials. Academics, educators, content providers, and businesses must deal with the challenges of AI-generated text, but new detection techniques are making it easier.
This research conducts a comparative analysis of AI-generated text detection tools using a self-generated custom dataset. To accomplish this, datasets are collected using a variety of artificial intelligence (AI) text generators as well as human writers. This study, in contrast to others that have been reported, incorporates a wide range of text formats, sizes, and organizational patterns. The next stage is to test the tools and compare them against one another. The following bullet points present the most significant takeaways from this study.
* Collection of a dataset using different LLMs, covering different topics, lengths, and writing styles.
* Investigation of AI text detection tools on the collected dataset.
* Comparison of the evaluated cutting-edge tools against one another to identify and interpret the best-performing tool among them.
The remainder of this article is structured as follows: Section 2 provides a brief literature review, Section 3 describes the materials and methods, and Section 4 presents the results and discussion. The final section summarizes the work and suggests directions for future research.
**2. Literature Review**
Artificial intelligence-generated text identification has sparked a paradigm change in the ever-evolving fields of both technology and literature. This innovative approach arises from the combination of artificial intelligence and language training, in which computers are given access to reading comprehension strategies developed by humans. Like human editors have done for decades in the physical world, AI-generated text detection is now responsible for determining the authenticity of literary works in the digital world. The ability of algorithms to tell the difference between human-authored material and that generated by themselves is a remarkable achievement of machine learning. As we venture into new waters, there are serious implications for spotting plagiarized work, gauging the quality of content, and safeguarding writers' rights. AI-generated text detection is a guardian for literary originality and reassurance that the spirit of human creativity lives on in future algorithms, connecting the past and future of written expression.
In their detailed analysis of three custom-built models on a dataset they created, the ChatGPT Comparison Corpus (HC3), Guo et al. [16] evaluated the performance of the three models using F1 scores at the corpus and sentence levels and found that the best model achieved a maximum F1 score of 98.78%. The results indicated superiority over other SOTA strategies. Because the authors of the corpus only used abstract paragraphs from a small subset of the available research literature, the dataset is skewed toward that subset, and the presented models may not perform as well on a general-purpose data set. Wang et al. [17] have offered a benchmarked dataset-based comparison of several AI content detection techniques. For evaluation, they have used question-and-answer, code-summarization, and code-generation databases. The study found an average AUC of 0.40 across all selected AI identification techniques, with the datasets used for comparison containing 25k samples for both human and AI-generated content. Due to the lack of diversity in the dataset, the chosen tools performed poorly; a biased dataset cannot demonstrate effective performance. On a more diverse dataset, the tools might achieve higher accuracy or a larger area under the curve (AUC).
Catherine et al. [18] employed the 'GPT-2 Output Detector' to assess the quality of generated abstracts. The study's findings revealed a significant disparity between the generated abstracts and the actual abstracts. The AI output detector consistently assigned high 'false' scores to the generated abstracts, with a median score of 99.98% (interquartile range: 12.73%, 99.98%), suggesting a strong likelihood of machine-generated content. The original abstracts exhibited far lower 'false' ratings, with a median of 0.02% and an interquartile range (IQR) ranging from 0.02% to 0.09%. The AI output detector exhibited a robust discriminatory ability, evidenced by its AUROC (Area Under the Receiver Operating Characteristic) value of 0.94. A plagiarism detection assessment conducted with a website and iThenticate software revealed that the generated abstracts obtained higher scores, suggesting a greater linguistic similarity to other sources. Remarkably, human evaluators had difficulties in distinguishing between authentic and generated abstracts: they correctly identified only 68% of the abstracts generated by ChatGPT, while incorrectly flagging 14% of the original abstracts as machine-generated. These critiques have brought attention to the issue of abstracts that are suspected to be generated by artificial intelligence (AI).
Using a dataset generated by the users themselves, Debora et al. [19] compared and contrasted multiple AI text detection methods. The research compared 12 publicly available tools and two proprietary systems available only to qualified academic institutions and other research groups. The researchers' primary focus has been explaining why and how artificial intelligence (AI) techniques are useful in the academy and the
sciences. The results of the comparison were then shown and discussed. Finally, the limitations of AI technologies regarding evaluation criteria were discussed.
To fully evaluate such detectors, the team [20] first trained DIPPER, a paraphrase generation model with 11 billion parameters, to rephrase entire texts in response to contextual information such as user-generated cues. Using scalar controls, DIPPER's paraphrased results can be tailored to vocabulary and sentence structure. Extensive testing proved that DIPPER's paraphrase of AI-generated text could evade watermarking techniques and GPTZero, DetectGPT, and OpenAI's text classifier. The detection accuracy of DetectGPT was decreased from 70.3% to 4.6% while maintaining a false positive rate of 1% when DIPPER was used to paraphrase text generated by three well-known big language models, one of which was GPT3.5-davinci-003. These rephrases were impressive since they didn't alter the original text's meaning. The study developed a straightforward defense mechanism to safeguard AI-generated text identification from paraphrase-based attacks. Language model API providers were required to get semantically identical texts for this defense strategy to work. To find sequences comparable to the candidate text, the algorithm looked through a collection of already generated sequences. A 15-million-generation database derived from a finely tuned T5-XXL model confirmed the efficacy of this defense strategy. The software identified paraphrased generations in 81% to 97% of test cases, demonstrating its efficacy. Remarkably, only 1% of human-written sequences were incorrectly labeled as AI-generated by the software. The project made its code, models, and data publicly available to pave the way for additional work on detecting and protecting AI-generated text.
OpenAI [21], an AI research company, compared manual and automatic ML-based synthetic text recognition methods. Utilizing models trained on GPT-2 datasets enhances the inherent authenticity of the text created by GPT-2, which affects how easily human evaluators can identify it. The team evaluated a rudimentary logistic regression model, a fine-tuning-based detection model, and a zero-shot detection model. The logistic regression model was trained using TF-IDF, unigram, and bigram features and was then evaluated across various generation processes and model parameters. Even the most basic classifiers demonstrated an accuracy rate of 97% or higher, although the models struggled to identify shorter outputs. Topological Data Analysis (TDA) was utilized by Kushnareva et al. [22] to count graph components, edges, and cycles, and these features were then used for machine-learning-based text recognition. A logistic regression classifier trained on these features was evaluated on WebText, Amazon Reviews, RealNews, and GROVER [23]. Since the approach has not been thoroughly tested on ChatGPT output, its success there remains uncertain.
In another investigation [24], the online application DetectGPT was used to identify and separate AI-generated text from human-written text in a zero-shot manner, using log probabilities from the generative model. The researchers observed that machine-generated text tends to lie in regions of negative curvature of the model's log-probability function. The authors assumed that the log probabilities of the models in question could always be evaluated, and according to them the method only works with GPT-2 outputs. In another study, Mitrovic et al. trained an ML model to distinguish ChatGPT-generated text from human-written text [25]. ChatGPT-generated two-line restaurant reviews were recognized by DistilBERT, a lightweight Transformer model distilled from BERT, and SHAP was used to explain the model's predictions. The researchers observed that the ML model struggled to recognize ChatGPT messages. The authors of [26] introduced AICheatCheck, a web-based AI detection tool that can determine whether a text was produced by ChatGPT or a human. AICheatCheck analyzes text patterns to detect origin. The authors relied on the data of Guo et al. [16] and education-domain samples to cope with limited data, and the study leaves AICheatCheck's precision insufficiently explained. The topic was also recently investigated by Cotton et al. [27], who discuss the benefits and drawbacks of using ChatGPT in the classroom with respect to plagiarism. In another work [28], the authors used statistical distributions to analyze generated text: using the GLTR application, input text is highlighted in different colors according to how predictable each token is under the model. The test items were written by the general public and based on publicly accessible human-written and GPT-generated content from the 1.5B-parameter model [10]. The authors also studied human subjects by having students spot instances of fabricated news.
Different AI-generated text classification models have been presented in recent years, with approaches ranging from deep learning and transfer learning to machine learning. Furthermore, software incorporated the most effective models to help end users verify AI-generated writing. Some studies evaluate various AI
text detection tools by comparing their performance on extremely limited and skewed datasets. Therefore, it is necessary to have a dataset that includes samples from many domains written in the same language that the models were trained in. It needs to be clarified which of the many proposed tools for AI text identification is the most effective. To find the best tool for each sort of material, whether from a research community or content authors, it is necessary to do a comparative analysis of the top-listed tools.
## 3 Material and Methods
Many different tools for recognizing artificial-intelligence-created text were compared in this analysis. The approach outlined here consists of three distinct phases. First, examples of human writing are collected from many online sources, and OpenAI frameworks are used to generate examples of AI writing from various prompts (such as articles, abstracts, stories, and comments). In the following stage, six applications are selected and run on the newly formed dataset. Finally, the performance of the tools is reported based on several state-of-the-art measurements, allowing end users to pick the best alternative. Figure 1 depicts the overall structure of the evaluation process.
### Datasets
The primary goal of this study is to amass human-written samples from many sources, including academic databases such as Google Scholar and ResearchGate, content-producer and blogger platforms such as Wikipedia, and other knowledge aggregators. The dataset collected from the above-mentioned sources is named AH&AITD (Arslan's Human and AI Text Database) and is available at this link (). The samples used for testing are divided into two groups in Table 1: "Human Written" and "AI Generated." The "Human Written" part of the dataset is further broken down into subheadings such as "Open Web Text," "Blogs," "Web Text," "Q&A," "News Articles," "Opinion Statements," and "Scientific Research." The number of samples from each source is also included in this group's total of 5,790 samples. The "AI Generated" group, on the other hand, covers a wide range of AI models, each with its own sample size (such as ChatGPT [29], GPT-4 [30],
Figure 1: Implementation framework for testing tools
Paraphrase [31], GPT-2 [32], GPT-3 [33], DaVinci, GPT-3.5, OPT-IML [34], and Flan-T5 [35]). The final number in the table is 11,580, the total number of samples in both groups. This table is useful since it shows where the testing dataset came from and how the AI models were represented. Table 1 presents details of the testing samples used for the evaluation of the targeted AI-generated text detection tools.
The method of data collection for human-written samples is based on a variety of approaches. Most human-written samples were manually obtained from research articles, of which only the abstracts were used. Additionally, open web texts were harvested from various websites, including Wikipedia.
_3.2 AI Generated Text Detection Tools_
Without AI-generated text detection systems, which monitor automated content distribution online, the modern digital ecosystem would collapse. These systems employ state-of-the-art machine learning and natural language processing methods to identify and label data generated by AI models like GPT-3, ChatGPT, and others. They play a vital role in the moderation process by preventing the spread of misinformation, protecting online communities, and identifying and removing fake news. Platform administrators and content moderators can use these tools to spot literature generated by artificial intelligence by seeing patterns, language quirks, and other telltale signals. The importance of AI-created text detection tools for user security, ethical online discourse, and legal online material has remained strong despite the advancements in AI. In this section, AI generated text detection tools are described briefly.
_3.2.1 AI Text Detection API (Zylalab)_
Zylalab's AI Text Detection API [36] makes locating and analyzing text in various content types simple. This API employs cutting-edge artificial intelligence (AI) technology to precisely recognize and extract textual content from various inputs, including photos, documents, and digital media. The AI Text Detection API uses OpenAI technology to identify ChatGPT content. Its high accuracy and simple interface
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Class** & **Source** & **Number of Samples** & **Total** \\ \hline
\multirow{7}{*}{Human Written} & Open Web Text & 2343 & \multirow{7}{*}{5790} \\
 & Blogs & 196 & \\
 & Web Text & 397 & \\
 & Q\&A & 670 & \\
 & News Articles & 430 & \\
 & Opinion Statements & 1549 & \\
 & Scientific Research & 205 & \\ \hline
\multirow{9}{*}{AI Generated} & ChatGPT & 1130 & \multirow{9}{*}{5790} \\
 & GPT-4 & 744 & \\
 & Paraphrase & 1694 & \\
 & GPT-2 & 328 & \\
 & GPT-3 & 296 & \\
 & Davinci & 433 & \\
 & GPT-3.5 & 364 & \\
 & OPT-IML & 406 & \\
 & Flan-T5 & 395 & \\ \hline
\multicolumn{3}{|c|}{**Total**} & **11580** \\ \hline
\end{tabular}
\end{table}
Table 1: Testing dataset samples (AH&AITD).
let instructors spot plagiarism in student essays and other AI-generated material. Its ease of integration into workflows and use by non-technical users is a major asset. Due to OpenAI's natural language processing powers, the API can detect even mild plagiarism, ensuring the information's uniqueness. It helps teachers grade essays by improving the efficiency of checking student work for originality. In conclusion, the AI Text Detection API simplifies, accurately, and widely applies plagiarism detection and essay grading for content suppliers, educators, and more. Due to its ability to analyze text and provide a detailed report, this tool can be used for plagiarism detection, essay grading, content generation, chatbot building, and machine learning research. There are no application type constraints, merely API request limits. The API makes use of OpenAI technology. It has a simple interface and high accuracy, allowing it to detect plagiarism in AI-generated writing and serve as an essay detector for teachers.
#### 3.2.2 Gptkit
The innovators of GPTKit [37] saw a need for an advanced tool to accurately identify Chat GPT material, so they built one. GPTKit is distinguished from other tools because it utilizes six distinct AI-based content recognition methods, all working together to considerably enhance the precision with which AI-generated content may be discovered. Educators, professionals, students, content writers, employees, and independent contractors worried about the accuracy of AI-generated text will find GPTKit highly adaptable. When users input text for analysis, GPTKit uses these six techniques to assess the content's authenticity and accuracy. Customers can try out GPTKit's features for free by having it return the first 2048 characters of a response to a request. Due to the team's dedication to continuous research, the detector in GPTKit now claims an impressive accuracy rate of over 93% after being trained on a big dataset. You can rest easy knowing that your data will remain private during detection and afterward, as GPTKit only temporarily stores information for processing and promptly deletes it from its servers. GPTKit is a great tool to use if you wish to validate information with artificial intelligence (AI) for authenticity or educational purposes.
#### 3.2.3 GPTZero
GPTZero is the industry standard for identifying documents produced by Large Language Models like ChatGPT. It detects AI content at the phrase, paragraph, and document levels, making it adaptable. The GPTZero model was trained on a wide range of human-written and AI-generated text, focusing on English prose. After servicing 2.5 million people and partnering with 100 education, publishing, law, and other institutions, GPTZero is a popular AI detector. Users may easily enter text for analysis using its simple interface, and the system returns detailed detection findings, including sentence-by-sentence highlighting of AI-detected material, for maximum transparency. GPTZero supports numerous AI language models, making it a versatile AI detection tool. ChatGPT, GPT-4, GPT-3, GPT-2, LLaMA, and AI services are included. It was the most accurate and trustworthy AI detector of seven tested by TechCrunch. Customized for student writing and academic prose, GPTZero is ideal for school. Despite its impressive capabilities, GPTZero [38] admits it has limitations in the ever-changing realm of AI-generated content. Thus, teachers should combine its findings into a more complete assessment that prioritizes student comprehension in safe contexts. To help teachers and students address AI misuse and the significance of human expression and real-world learning, GPTZero emphasizes these topics. In-person evaluations, edit history analysis, and source citations can help teachers combat AI-generated content. Due to its commitment to safe AI adoption, GPTZero may be a reliable partner for educators facing AI issues.
#### 3.2.4 Sapling
Sapling AI Content Detector [39] is a cutting-edge program that accurately recognizes and categorizes AI-generated media. This scanner utilizes state-of-the-art technology to verify the authenticity and integrity of text by checking for the existence of AI-generated material in various contexts. Whether in
the courtroom, the publishing industry, or the classroom, Sapling AI Content Detector is a potent solution to the issue of AI-generated literature. Its straightforward interface and comprehensive detection results equip users to make informed judgments about the authenticity of the material. Sapling AI Content Detector's dedication to precision and dependability makes it a valuable resource for companies and individuals serious about preserving the highest possible content quality and originality requirements.
#### 3.2.5 Originality
The Originality AI Content Detector [40] was intended to address the growing challenge of identifying AI-generated text. This cutting-edge artificial intelligence influence detector can tell if human-written content has been altered. It examines every word, every sentence, every paragraph, and every document. Human-made and computer-generated texts may be distinguished with confidence thanks to the rich variety of training data. Educators, publishers, academics, and content producers will find this tool invaluable for guarding the authenticity and integrity of their own work. The Originality AI Content Detector highlights potential instances of AI-generated literature to increase awareness and promote the responsible use of AI technologies in writing. The era of AI-driven content creation gives users the knowledge to make purposeful decisions that preserve the quality and originality of their writing.
#### 3.2.6 Writer
The Writer AI Content Detector [41] is cutting-edge software for spotting content created by artificial intelligence. This program utilizes advanced technologies to look for signs of artificial intelligence in the text at the phrase, paragraph, and overall document levels. Since it was trained on a large dataset including both human-authored and AI-generated content, the Writer detector is very good at telling them apart. This tool is valuable for every instructor, publisher, or content provider serious about their craft. By alerting users to the presence of AI content and offering details about it, the Writer arms them with the information they need to protect the authenticity of their works. The tool is an honest champion of originality, advocating for responsible and ethical content generation in an era when AI is increasingly involved in the creative process. Developers can get the Writer AI Content Detector SDK by running "pip install writer." The use of API credentials for Writer authentication is critical [42]. These API keys can be found on the account's dashboard; users can simply replace the sample API keys in the code snippets with their own or sign in for individualized code snippets. Users who do not see their secret API keys on the control panel should contact the account's owner to be added to the Writer account's development team. Developers can access the Writer SDK and AI Content Detector once signed in. The SDK includes document and user management tools, content identification, billing information retrieval, content production, model customization, file management, snippet handling, access to a style guide, terminology management, user listing, and management. With this full suite of resources, customers can confidently include AI-driven content recognition in their projects and apps without compromising safety or precision.
#### 3.3 Experimental Setup
Six distinct content identification approaches developed using artificial intelligence were evaluated in depth for this study. Each tool has an API that can be used with various languages and frameworks. To take advantage of these features, subscriptions have been obtained for each API, and the software has been put through its paces with Python scripts. The results were produced using the testing dataset discussed above. All experiments have been run on a Dell system with a sixth-generation Intel i7 processor, 24 GB of RAM, and a 256 GB SSD, using Python 3.11 in MS Code with Jupyter Notebook integration.
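To make the testing procedure concrete, a minimal sketch of the Python loop used to exercise one of the subscription APIs is given below. The endpoint URL, authentication header, and response field names are placeholders rather than any provider's actual schema, since each of the six services exposes its own interface; only the overall pattern of iterating over AH&AITD, submitting each text, and recording the predicted class reflects the setup described above.

```python
import requests

# Placeholder endpoint and key: each provider (GPTZero, Sapling, Originality,
# Zylalab, GPTKit, Writer) exposes its own URL, headers, and response schema.
API_URL = "https://api.example-detector.com/v1/detect"   # hypothetical
API_KEY = "YOUR_API_KEY"

def detect(text: str) -> str:
    """Send one sample to a detection API and map its score to a class label."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    score = response.json().get("ai_probability", 0.0)   # hypothetical field name
    return "AI Generated" if score >= 0.5 else "Human Written"

def run_benchmark(samples):
    """samples: iterable of (text, true_label) pairs loaded from AH&AITD."""
    return [(true_label, detect(text)) for text, true_label in samples]
```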
#### 3.4 Evaluation Policy
To ensure the robustness, dependability, and usefulness of a company's machine-learning models, the company should develop and adhere to an evaluation policy. This policy spells out the evaluation, validation, and
application of models in detail. As a first step, it converges on a standardized approach to evaluation, allowing for fair and uniform assessment of model performance across projects. Comparing projects, identifying best practices, and maximizing model development are all made easier with the introduction of uniform standards. Second, a policy for assessing model performance guarantees that they hit targets for measures like accuracy, precision, and recall. As a result, only high-quality, reliable models with strong performance are deployed. Reduced implementation risks are achieved through the policy's assistance in identifying model inadequacies, biases, and inaccuracies. An assessment policy fosters accountability and trustworthiness in data science by requiring uniformity and transparency in model construction.
Accuracy is an important metric in machine learning and statistics because it measures the correctness of model predictions. It is the percentage of correctly predicted cases out of the total number of instances in the dataset and is defined as:
\[Accuracy=\frac{Number\ of\ correct\ predictions}{Total\ Number\ of\ predictions}\times 100\]
In this formula, the "Total Number of Predictions" represents the size of the dataset, while the "Number of Correct Predictions" is the number of predictions made by the model that corresponds to the actual values. A quick and dirty metric to gauge a model's efficacy is accuracy, but when one class greatly outnumbers the other in unbalanced datasets, this may produce misleading results.
Precision is the degree to which a model's positive predictions are correct. It is a common metric in statistics and machine learning, defined as the ratio of true positive predictions to all positive predictions. The precision equation can be described as follows:
\[Precision=\frac{True\ Positives}{True\ Positives+False\ Positives}\]
In practical use, precision quantifies how well false positives are avoided. A high precision score indicates that when the model predicts a positive outcome, it is more likely to be true, which is especially important in applications where false positives could have major consequences, such as medical diagnosis or fraud detection.
Recall (true positive rate or sensitivity) is an important performance metric in machine learning and classification applications. It measures a model's ability to discover and label every instance of interest in a given dataset. Recall is computed as:
\[Recall=\frac{True\ Positives}{True\ Positives+False\ Negatives}\]
In this formula, true positives (TP) are the correctly identified positive instances, whereas false negatives (FN) are the positive instances the model missed. Medical diagnosis and fraud detection are two examples of areas where missing a positive instance can have serious effects; such applications profit greatly from a model with high recall, which indicates that it effectively catches a large proportion of the true positive cases.
The F1 score is a popular metric in machine learning that combines precision and recall into a single value, offering a fairer evaluation of a model's efficacy, especially when working with unbalanced datasets. The formula for its determination is as follows:
\[F1\ Score=2\times\frac{Precision\times Recall}{Precision+Recall}\]
Precision is the proportion of true positives among all positive predictions made by the model, whereas recall is the proportion of true positives among all genuine positive cases in the dataset. The F1 score excels when a compromise between reducing false positives and false negatives is required, as in medical diagnosis, information retrieval, and anomaly detection. By factoring in both precision and recall, the F1 score is a well-rounded measure of a classification model's efficacy.
A machine learning classification model's accuracy can be evaluated using the ROC curve and the Confusion Matrix. The ROC curve compares the True Positive Rate (Sensitivity) to the False Positive Rate (1-Specificity) at different cutoffs to understand a model's discriminatory ability. The Confusion Matrix provides a more detailed assessment of model accuracy, precision, recall, and F1-score, which meticulously
tabulates model predictions into True Positives, True Negatives, False Positives, and False Negatives. Data scientists and analysts can use these tools to learn everything they need to know about model performance, threshold selection, and striking a balance between sensitivity and specificity in classification jobs.
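Assuming the predictions of each tool have been collected as binary labels (and, where available, AI-probability scores), the metrics above can be computed directly with scikit-learn; the short sketch below is illustrative, with toy label vectors standing in for the real AH&AITD results.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix, roc_auc_score)

# 1 = "AI Generated", 0 = "Human Written"; toy vectors for illustration
y_true  = [1, 1, 0, 0, 1, 0]              # ground-truth labels from AH&AITD
y_pred  = [1, 0, 0, 0, 1, 1]              # hard labels returned by a detector
y_score = [0.9, 0.4, 0.2, 0.1, 0.8, 0.6]  # detector's AI-probability, for the ROC

accuracy  = accuracy_score(y_true, y_pred) * 100   # percentage of correct predictions
precision = precision_score(y_true, y_pred)        # TP / (TP + FP)
recall    = recall_score(y_true, y_pred)           # TP / (TP + FN)
f1        = f1_score(y_true, y_pred)               # harmonic mean of precision and recall
cm        = confusion_matrix(y_true, y_pred)       # [[TN, FP], [FN, TP]]
auc       = roc_auc_score(y_true, y_score)         # area under the ROC curve

print(f"Accuracy {accuracy:.1f}%  Precision {precision:.2f}  "
      f"Recall {recall:.2f}  F1 {f1:.2f}  AUC {auc:.2f}")
print(cm)
```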
**4. Results and Analysis**
The results and discussion surrounding these tools reveal intriguing insights into the usefulness and feasibility of six AI detection approaches for differentiating AI-generated text from human-authored content. Detection technologies, including GPTZero, Sapling, Writer, AI Text Detection API (Zyalab), Originality AI Content Detector, and GPTKIT, were ranked based on several factors, including accuracy, precision, recall, and f1-score. Table 2 compares different AI text detection approaches that can be used to tell the difference between AI-written and -generated text.
First, GPTKIT shows high precision (90) in detecting AI-generated text but strikingly low recall (12), which results in a low F1 Score (21). This suggests that GPTKIT is overly conservative, giving rise to many false negatives. On the other hand, its recall (99) and F1 Score (69) are much better when recognizing language created by humans. GPTZero's performance is more uniform across the board: on AI-generated text it reaches a precision of 65, a recall of 60, and an F1 Score of 62, while an F1 Score of 65 for human-written text reflects a reasonable balance between precision (63) and recall (68).
When distinguishing AI-generated content from human-written content, Originality shines. Its F1 Score of 97 reflects its remarkable precision (98), recall (96), and overall effectiveness. It also excels on text created by humans, with an F1 Score of 97, recall of 98, and precision of 96. The high precision (86) but low recall (40) on AI-generated text give Sapling an F1 Score of 54. Despite a high recall (94) and modest precision (61) in identifying human-written text, its F1 Score of 74 leaves room for improvement. Writer is fairly balanced in assessing AI-generated and human-written content. It has a moderate F1 Score of 62 on AI-generated text, with a precision of 79 and a recall of 52. Its F1 Score on human-written text is 74, reflecting a balance of precision (64) and recall (87). Regarding AI-generated content, Zylalab has a good precision of 84, a recall of 45, and an F1 Score of 59. It does better on human-written text, with an F1 Score of 74, a recall of 91, and a precision of 62.
As a result of its superior performance in terms of precision, recall, and F1 Score across both classes, we have concluded that Originality is the most reliable alternative for AI text identification. Additionally, GPTZERO displays all-around performance, making it a practical option. However, Sapling shows skills in identifying AI-generated text whereas GPTKIT demonstrates remarkable precision but needs better recall.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Tools** & **Classes** & **Precision** & **Recall** & **F1 Score** \\ \hline
\multirow{2}{*}{GPTKIT} & AI Generated & 90 & 12 & 21 \\
 & Human Written & 53 & 99 & 69 \\ \hline
\multirow{2}{*}{GPTZERO} & AI Generated & 65 & 60 & 62 \\
 & Human Written & 63 & 68 & 65 \\ \hline
\multirow{2}{*}{Originality} & AI Generated & **98** & **96** & **97** \\
 & Human Written & **96** & **98** & **97** \\ \hline
\multirow{2}{*}{Sapling} & AI Generated & 86 & 40 & 54 \\
 & Human Written & 61 & 94 & 74 \\ \hline
\multirow{2}{*}{Writer} & AI Generated & 79 & 52 & 62 \\
 & Human Written & 64 & 87 & 74 \\ \hline
\multirow{2}{*}{Zylalab} & AI Generated & 84 & 45 & 59 \\
 & Human Written & 62 & 91 & 74 \\ \hline
\end{tabular}
\end{table}
Table 2: Comparative results of AI text detection tools on AH&AITD.
Writer finds a comfortable middle ground but does not clearly differentiate itself. Zylalab performs about as well as the best of the remaining tools, but it still has room to grow. Before selecting a tool, it is crucial to consider the needs and priorities of the job.
Figure 2 provides a visual representation of a comparison between the accuracy of six different AI text identification systems, including "GPTkit," "GPTZero," "Originality," "Sapling," "Writer," and "Zylalab." The data visualization demonstrates that "GPTkit" has a 55.29 percent accuracy rate, "GPTZero" has a 63.7 percent accuracy rate, "Originality" has a spectacular 97.0 percent accuracy rate, "Sapling" has a 66.6 percent accuracy rate, "Writer" has a 69.05 percent accuracy rate, and "Zylalab" has a 68.23 percent accuracy rate. These accuracy ratings demonstrate how well the tools distinguish between natural and computer-generated text. When contrasting the two forms of writing, "Originality" achieves the highest degree of accuracy. Compared to the other tools, "GPTkit" has the lowest detection accuracy and thus the most room for improvement. This visual representation of the performance of various AI text detection tools will be an important resource for users looking for the most precise tool for their needs.
Artificial Intelligence-Generated and Human-Written Text Detection testing confusion matrices are shown in Figure 3 for simple comparison. Confusion matrices like this demonstrate visually how well certain technologies can distinguish between text generated by AI and that authored by humans. The use of blue to illustrate the matrices aids in their readability. The actual labels appear in the matrix rows, while the predicted labels are displayed in the columns. The total number of occurrences that match that criterion is in each matrix cell. These matrices allow us to compare various tools based on their ability to classify texts accurately, recall, and overall performance. This graphic is an excellent reference for consumers, researchers, and decision-makers because it visually compares the accuracy of various AI text detection technologies.
Figure 2: Accuracy comparison of AI text detection tools on AH&AITD
Figure 4 displays the Testing Receiver Operating Curves (ROCs) for the selected AI text detection algorithms, visually comparing their relative strengths and weaknesses. These ROC curves, one for each tool, are essential for judging how well they can tell the difference between AI-generated and human-written material. Values for "GPTkit," "GPTZero," "Originality," "Sapling," "Writer," and "Zylalab" in terms of Area Under the Curve (AUC) are 0.55, 0.64, 0.97, 0.67, 0.69, and 0.68, respectively. The Area Under the Curve (AUC) is a crucial parameter for gauging the precision and efficiency of such programs. A bigger AUC suggests that the two text types can be distinguished with more accuracy. To help users, researchers, and decision-makers choose the best AI text recognition tool for their needs, Figure 4 provides a visual summary of how these tools rank regarding their discriminative strength.
Figure 4: Testing Receiver Operating Curves of AI text detection tools on AH&AITD
Figure 3: Testing confusion matrices of AI text detection tools on AH&AITD
## 5 Conclusion
We learned much about six AI-generated text identification tools, GPTkit, GPTZero, Originality, Sapling, Writer, and Zylalab, by evaluating their strengths and limitations across numerous criteria. These systems have varying precision, recall, and F1 scores for distinguishing AI-generated and human-written material. Originality is impressive because it balances precision and recall on both classes. GPTkit excels at human-written text recognition, whereas GPTZero is competitive in AI-generated text precision and recall. Sapling, Writer, and Zylalab offer decent solutions. Future research should focus on improving these tools' capabilities; in particular, detection of heavily paraphrased AI-generated writing must become more accurate. More work is needed to eliminate false positives because of their impact on users. These tools must also become more linguistically flexible and domain-specific to reach more people. Text recognition systems must keep up with AI development to detect AI-generated content in academic, creative, and social media environments. In the digital age, joint research and ongoing refinement are needed to solve new AI text detection challenges and ensure digital content accuracy.
#### Availability of Data and Materials:
Data will be provided on request. It is also publicly available.
#### Conflicts of Interest:
The authors declare no conflicts of interest to report regarding the present study.
|
2309.15535 | From LAION-5B to LAION-EO: Filtering Billions of Images Using Anchor
Datasets for Satellite Image Extraction | Large datasets, such as LAION-5B, contain a diverse distribution of images
shared online. However, extraction of domain-specific subsets of large image
corpora is challenging. The extraction approach based on an anchor dataset,
combined with further filtering, is proposed here and demonstrated for the
domain of satellite imagery. This results in the release of LAION-EO, a dataset
sourced from the web containing pairs of text and satellite images in high
(pixel-wise) resolution. The paper outlines the acquisition procedure as well
as some of the features of the dataset. | Mikolaj Czerkawski, Alistair Francis | 2023-09-27T09:53:38Z | http://arxiv.org/abs/2309.15535v1 | # From LAION-5B to LAION-EO:
###### Abstract
Large datasets, such as LAION-5B, contain a diverse distribution of images shared online. However, extraction of domain-specific subsets of large image corpora is challenging. The extraction approach based on an anchor dataset, combined with further filtering, is proposed here and demonstrated for the domain of satellite imagery. This results in the release of LAION-EO, a dataset sourced from the web containing pairs of text and satellite images in high (pixel-wise) resolution. The paper outlines the acquisition procedure as well as some of the features of the dataset. Access at [https://huggingface.co/datasets/mikonvergence/LAION-EO](https://huggingface.co/datasets/mikonvergence/LAION-EO).
## 1 Introduction
Large collections of images have been instrumental for building the recent generative models, such as DALLE [11], Imagen [14], or StableDiffusion [12]. LAION-5B [15] is one of the largest open-source image datasets containing pairs of image and text and was used to train some of the state-of-the-art generative models, including StableDiffusion [12].
However, the datasets used for training the general text-to-image generative models capture a correspondingly wide domain of visual data. Hence, they may be less than ideal for individual domains, such as those involving processing of satellite-images. There are several potential benefits of more specialised vision-language datasets derived from general corpora. First, they can provide a way to fine-tune existing models on the domain of interest in order to improve the prediction quality further, tangentially explored in the works on subject-driven image editing [5, 4, 13]. Secondly, they offer insight into the representation of the given domain within the parent distribution of data and consequently, the representation learned by models trained on that broader distribution.
This work demonstrates an approach to filtering a large-scale dataset of LAION-5B to extract a satellite image subset from it and analyses the resulting collection of images. Several attempts have been made to filter large datasets like LAION-5B [9, 7], however, the previous studies focused on improving the general quality of the dataset as opposed to extracting a specific domain like herein.
## 2 LAION-5B for Satellite Images
A common way to navigate the space of very large datasets relies on embeddings of a general vision-language model, such as CLIP [10], and this has been done for the official demonstration of LAION-5B dataset [15]. The CLIP model, which is trained with a contrastive loss, paves the way for assessing pair-wise similarity of image and text samples. By encoding a piece of text or an image into the embedding space of CLIP, it is possible to identify meaningful nearest neighbours by comparing to the embedding vectors of other samples. However, it has not yet been demonstrated how the entire domain of satellite images can be identified.
Figure 1: Random samples from the anchor dataset derived from CloudSEN12.
### Anchor Dataset
In this work, the technique of using a reference anchor dataset is introduced. An anchor dataset is selected as a representative subset of the domain of interest. By iterating over each sample in the anchor dataset, a subset of LAION-5B can be identified that resembles the original dataset, and to some degree, the domain of similar-looking images in the dataset. For that reason, it is important for the anchor dataset to capture a faithful representation of the domain of interest, the satellite images.
In this case, Sentinel-2 images with a resolution of 512 by 512 pixels and minimal (mostly no) cloud coverage are used. To enable that, the cloud-free samples are extracted from the training subset of the CloudSEN12 dataset [1] (the samples with high-quality and scribble annotations are organised into training, validation, and test sets in the official release). The selection is motivated by two key features of the dataset, despite it being designed for the task of cloud detection: i) it contains Sentinel-2 images with large patches of 512 pixels, larger than other similar datasets with Sentinel-2 data (WorldStrat contains 160 by 160 pixels [3], BigEarthNet contains 120 by 120 pixels [16]), and ii) the data is sampled globally, with 9,880 unique locations. Furthermore, the use of an official training subset makes it possible to later evaluate on the test subset of the same dataset, for example, when the LAION-EO dataset is used as a substitute source of training samples.
The data is obtained in the standard Sentinel-2 level 2A format, with values ranging between 0 and 10,000. To make the images suitable for the CLIP model (which has been trained with images from the web in a standard format), the following normalization approach is conducted. First, the images are divided by 10,000 to fit into the standard range of 0-1. Then, if the image is too dark (which is often the case for images with no snow or clouds), a gain is applied. Specifically, if the mean reflectance is less than 0.25, the entire image is multiplied by the ratio of 0.5 to the image mean (clipped for maximum gain of 8). This approach may be unconventional in the Earth Observation pipelines, but should be taken in order to increase the similarity between the CloudSEN12 anchor images and the images that may potentially be found in LAION-5B. Examples of samples from the used CloudSEN12 subset and the effect of normalization are shown in Figure 1.
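A minimal NumPy sketch of this normalization is given below; the scaling factor, the 0.25 brightness threshold, the 0.5 target mean, and the maximum gain of 8 follow the description above, while the final clip to [0, 1] is an added assumption to keep values in a display-friendly range.

```python
import numpy as np

def normalize_s2(image: np.ndarray) -> np.ndarray:
    """Normalize a Sentinel-2 L2A patch (reflectance values in 0..10000)."""
    img = image.astype(np.float32) / 10_000.0      # scale reflectance to roughly [0, 1]
    mean = img.mean()
    if mean < 0.25:                                # image too dark (no snow or clouds)
        gain = min(0.5 / max(mean, 1e-6), 8.0)     # brighten towards a mean of 0.5, gain capped at 8
        img = img * gain
    return np.clip(img, 0.0, 1.0)                  # clipping is an assumption, not from the paper
```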
### Nearest Neighbour Search
As described in the appendices C.2 and C.3 of the LAION-5B paper [15], the dataset images can be compressed into a large k-NN index of CLIP embeddings. The nearest neighbour search has been implemented based on the FAISS search algorithm (using approximate similarity) [6]. The clip-retrieval open-source tool [2] is employed to find 100 nearest neighbours with FAISS in the LAION-5B dataset for each of the 3,456 anchor images extracted from CloudSEN12. Notably, each query result may contain a certain number of duplicates, so 100 corresponds to the maximum number of neighbours found.
The process involved querying 100 nearest neighbour images, ignoring duplicates, checking if it is possible to download the image, and if so, checking if the image has at least 256 pixels in both width and height.
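A sketch of this query-and-filter loop is shown below. It assumes the ClipClient interface of the clip-retrieval package [2] and the public LAION-5B backend; the exact constructor arguments and result fields should be checked against the tool's documentation, and the duplicate, download, and 256-pixel checks follow the procedure described above.

```python
import io
import requests
from PIL import Image
from clip_retrieval.clip_client import ClipClient  # interface assumed from [2]

# Backend URL and index name are assumptions based on the public LAION-5B service.
client = ClipClient(url="https://knn.laion.ai/knn-service",
                    indice_name="laion5B-L-14",
                    num_images=100)

def neighbours_for_anchor(anchor_path: str, seen_urls: set) -> list:
    """Query up to 100 nearest neighbours for one anchor image and filter them."""
    kept = []
    for result in client.query(image=anchor_path):
        url = result["url"]
        if url in seen_urls:                       # ignore duplicates
            continue
        try:
            data = requests.get(url, timeout=10).content
            img = Image.open(io.BytesIO(data))     # check that the image can be downloaded
        except Exception:
            continue
        if img.width >= 256 and img.height >= 256: # keep images of at least 256 px per side
            seen_urls.add(url)
            kept.append(result)
    return kept
```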
As a result, 28,572 neighbour images have been extracted from LAION-5B, derived from 2,582 out of the initial 3,456 anchor images (74.71%). This means that for 874 images, the results contained images that were too small or inaccessible. The average number of images extracted based on a single anchor image is 11.1, as shown in Figure 2.
Figure 3: Comparison of the actual CLIP image similarity and the fast similarity computed by FAISS within the nearest neighbour pipeline. Colour of the markers corresponds to the fraction of all samples occupying given region.
Figure 2: Distribution of the number of neighbours found for a single anchor image.
### Filtering
The acquired 28,572 images are likely to include some incorrect samples due to several reasons. First, the embeddings are compared using FAISS [6], which introduces an approximation error. Second, not all images similar to satellite images are satellite images themselves. To minimize this effect, another filtering stage is carried out by recomputing the CLIP similarity of each found image to (i) the anchor image (visual similarity), and (ii) a text prompt "a satellite image" (semantic similarity), attempting to reduce the two types of errors described above.
Figure 3 shows the divergence between the computed fast CLIP similarity and the actual CLIP image similarity to the anchor image, unveiling a number of examples where FAISS overestimates the actual image similarity (orange area). The recomputed values of image similarity are used consequently for further filtering and the initial fast similarity is ignored.
The distribution of text similarity (to the "a satellite image" text) and image similarity (to the anchor image) is shown in Figure 5. As in Figure 3, many samples with low image similarity can be found as well as a set of samples with low text similarity. These samples with either or both similarity values too low can be filtered out using thresholds (\(\tau_{I}\) for image similarity and \(\tau_{T}\) for text similarity), marked by golden lines in Figure 5. The exact values of thresholds have been set heuristically to one-and-a-half standard deviation below the mean of each distribution. The errors corresponding to the four quadrants separated by threshold values are shown in Figure 4, where the found image is visually similar to the anchor, but not to a satellite image (top left), similar in both respects (top right), or dissimilar visually (bottom).
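A sketch of the re-scoring and thresholding step is given below. The Hugging Face CLIP implementation is used as a stand-in for the encoder behind the LAION-5B index (the exact checkpoint is an assumption), and the thresholds \(\tau_{I}\) and \(\tau_{T}\) are set to one-and-a-half standard deviations below the mean of the corresponding similarity distribution, as described above.

```python
import numpy as np
import torch
from transformers import CLIPModel, CLIPProcessor

# Checkpoint name is an assumption; the LAION-5B indices are built with a ViT-L/14 encoder.
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

@torch.no_grad()
def clip_similarities(candidate, anchor, prompt="a satellite image"):
    """Return (image similarity to the anchor, text similarity to the prompt)."""
    img = model.get_image_features(**processor(images=[candidate, anchor], return_tensors="pt"))
    txt = model.get_text_features(**processor(text=[prompt], return_tensors="pt"))
    img = img / img.norm(dim=-1, keepdim=True)     # cosine similarity via normalised dot products
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return float(img[0] @ img[1]), float(img[0] @ txt[0])

def similarity_thresholds(image_sims, text_sims):
    """tau_I and tau_T: 1.5 standard deviations below the mean of each distribution."""
    tau_i = np.mean(image_sims) - 1.5 * np.std(image_sims)
    tau_t = np.mean(text_sims) - 1.5 * np.std(text_sims)
    return tau_i, tau_t   # keep samples with image_sim >= tau_i and text_sim >= tau_t
```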
Figure 4: Types of matches found in each threshold quadrant, corresponding to the four areas in Figure 5. These correspond to the four scenarios based on whether the extracted image matches the anchor image or semantic text (“a satellite image”).
Figure 5: Distribution of image and text similarity used for filtering with corresponding threshold values based on the mean and standard deviation of each distribution.
### Final Product: LAION-EO
The final dataset is obtained after applying the image and text similarity thresholds and removing 273 duplicate images. The result is a set of 24,933 samples (from the initial 28,572), with a mean height of 633.0 pixels (up to 9,999) and a mean width of 843.7 pixels (up to 19,687). Examples of square crops from random samples in the dataset are shown in Figure 6.
The dataset contains English captions (40.1%) as well as other common languages in LAION-5B. This has been approximated by applying an open-source language detection model [8], as shown in Figure 7.
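The language distribution can be approximated with an off-the-shelf XLM-RoBERTa language identification model, as sketched below; the specific checkpoint named here is an assumption standing in for the detector of [8].

```python
from collections import Counter
from transformers import pipeline

# The checkpoint is an assumption: any XLM-RoBERTa model fine-tuned for
# language identification could stand in for the detector of [8].
detector = pipeline("text-classification",
                    model="papluca/xlm-roberta-base-language-detection")

def language_distribution(captions):
    """Estimate the language of each caption and count occurrences per language code."""
    counts = Counter()
    for caption in captions:
        label = detector(caption[:512])[0]["label"]   # e.g. "en", "fr", "de"
        counts[label] += 1
    return counts
```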
## 3 Conclusion
Powerful general visual generative models for satellite images require diverse data to train on. Currently, the majority of Earth observation datasets are collected manually with a specialised acquisition pipeline. In contrast, LAION-EO is a dataset consisting of satellite images shared on web by humans, accompanied by text. It represents a fundamental change of perspective, where the images are collected because they have been found meaningful by the internet users, not necessarily experts. LAION-EO can open doors to training or fine-tuning vision-language models focused on Earth Observation data and understanding the representation of satellite images present online.
The approach of using an anchor dataset for nearest neighbour search makes it possible to leverage existing satellite image datasets to factorise and inspect the 5 billion images contained in LAION-5B. This was followed by a filtering stage, where the embedding of a reference text "a satellite image" was used to clean up the dataset and remove images that do not resemble satellite images.
Several directions for developing the LAION-EO dataset
Figure 6: Random samples from LAION-EO with English captions and high semantic similarity (over 0.22), cropped to 1:1 ratio. There are 4,116 high-quality images under this criterion.
Figure 7: Distribution of language detected by the XLM-RoBERTa model [8].
and increasing the quality of the contained data include expanding the coverage of the anchor dataset and applying more complex filtering approaches. LAION-5B likely contains even more high-quality Earth Observation images than the ones extracted in the prototype version of LAION-EO.
|
2309.05040 | Comparison of viscosity solutions for a class of second order PDEs on
the Wasserstein space | We prove a comparison result for viscosity solutions of second order
parabolic partial differential equations in the Wasserstein space. The
comparison is valid for semisolutions that are Lipschitz continuous in the
measure in a Fourier-Wasserstein metric and uniformly continuous in time. The
class of equations we consider is motivated by Mckean-Vlasov control problems
with common noise and filtering problems. The proof of comparison relies on a
novel version of Ishii's lemma, which is tailor-made for the class of equations
we consider. | Erhan Bayraktar, Ibrahim Ekren, Xin Zhang | 2023-09-10T14:30:53Z | http://arxiv.org/abs/2309.05040v2 | # Comparison of viscosity solutions for a class of second-order PDEs on the Wasserstein space
###### Abstract.
We prove a comparison result for viscosity solutions of second-order parabolic partial differential equations in the Wasserstein space. The comparison is valid for semisolutions that are Lipschitz continuous in the measure in a Fourier-Wasserstein metric and uniformly continuous in time. The class of equations we consider is motivated by Mckean-Vlasov control problems with common noise and filtering problems. The proof of comparison relies on a novel version of Ishii's lemma, which is tailor-made for the class of equations we consider.
Key words and phrases: Wasserstein space, second-order PDEs, viscosity solutions, comparison principle, Ishii's Lemma.

2020 Mathematics Subject Classification: 58E30, 90C05.

E. Bayraktar is partially supported by the National Science Foundation under grant DMS-2106556 and by the Susan M. Smith chair. I. Ekren is supported in part by NSF Grant DMS 2007826.
## 1. Introduction
\[\begin{cases}\partial_{t}u=0,&\partial_{x}u=0,\\ \partial_{x}u=0,&\partial_{x}u=0,\end{cases} \tag{1.5}\]
where \(\partial_{t}u\) is a smooth function and \(\mathcal{F}\) is a smooth function. The _linear differential equations_ are the solutions of the following differential equations:
\[\begin{cases}\partial_{t}u=0,&\partial_{x}u=0,\\ \partial_{x}u=0,&\partial_{x}u=0,\end{cases} \tag{1.6}\]
where \(\partial_{x}u\) is a smooth function and \(\mathcal{F}\) is a smooth function. The _linear differential equations_ are the solutions of the following differential equations:
\[\begin{cases}\partial_{t}u=0,&\partial_{x}u=0,\\ \partial_{x}u=0,&\partial_{x}u=0,\end{cases} \tag{1.7}\]
where \(\partial_{x}u\) is a smooth function and \(\mathcal{F}\) is a smooth function. The _linear differential equations_ are the solutions of the following differential equations:
\[\begin{cases}\partial_{t}u=0,&\partial_{x}u=0,\\ \partial_{x}u=0,&\partial_{x}u=0,\end{cases} \tag{1.8}\]
where \(\partial_{x}u\) is a smooth function and \(\mathcal{F}\) is a smooth function. The _linear differential equations_ are the solutions of the following differential equations:
\[\begin{cases}\partial_{t}u=0,&\partial_{x}u=0,\\ \partial_{
Viewing \(\mathcal{P}_{2}(\mathbb{R}^{d})\) as a quotient of a Hilbert space, [3] lifts PDEs on Wasserstein space to ones on Hilbert space and provides a uniqueness result. However, the relation between solutions of the lifted PDEs and the original ones is unclear. In this paper, we show a comparison principle for a class of nonlinear degenerate second-order parabolic PDEs on Wasserstein space, and thus give a PDE characterization of the value function in the filtering problem. Moreover, (1.1) also covers first-order HJB equations corresponding to McKean-Vlasov control problems.
There is another motivation for this paper. It is well known that solutions to PDEs can be approximated using machine learning-based numerical methods. On the other hand, some learning problems, such as expert prediction, can be viewed as finite difference algorithms for PDEs; see [10, 11, 6, 28, 16, 27, 8, 7]. Therefore, the long-time behavior of such learning problems is characterized by certain parabolic PDEs. In [8], the authors interpreted a learning problem as a discrete-time zero-sum game with incomplete information. After properly scaling in both the time and space variables, the value function converges formally to a second-order parabolic equation on Wasserstein space of type (1.1); see [8, Equation (4.11)]. To verify this convergence rigorously, a viscosity theory has to be established. This paper provides the uniqueness result as a first step, and the stability of viscosity solutions of (1.1) is an ongoing project.
Let us mention some related references. The decomposition of \(\mathcal{P}_{2}(\mathbb{R}^{d})\) into \(\mathbb{R}^{d}\times\mathcal{P}_{2,0}(\mathbb{R}^{d})\) can be found in [32], where the authors define intrinsic viscosity solutions, a notion that is the closest to ours. Compared to [32], when defining second-order jets we do not require the existence of derivatives in all directions but only of an integral of these derivatives. Our second-order jets being larger, our definition is more stringent than the definition of intrinsic viscosity solutions in [32], and any viscosity solution in our sense is an intrinsic viscosity solution in the sense of [32]. Note that [32] does not provide a comparison result for intrinsic viscosity solutions of second-order equations; their main comparison result relies on lifting the functions of measures into functionals on an \(L^{2}\) space and using the comparison of viscosity solutions on a Hilbert space. For the viscosity theory of PDEs on Hilbert space, we refer the readers to [39, 41, 40, 31]. Viscosity solutions of first-order PDEs on the Wasserstein space have been studied in [49, 44, 47, 15, 4, 22, 12, 26]. It is worth noting that our formulation of the Fourier-Wasserstein metric, denoted by \(\rho_{F}\), as well as the variational function, denoted by \(\rho_{\sigma}\), drew significant inspiration from the first-order case treated in [47] and [22], respectively. We should mention that another viscosity solution notion and a corresponding comparison result were given in [12] to analyze optimal transport problems, in which the PDEs, which are first-order in the measure and second-order in the space variable, are stated on compact domains. The proof in that paper relies on lifting to a Hilbert space as described above and on Ishii's lemma developed for Hilbert spaces in [40]. As shown in [24], second-order PDEs on the Wasserstein space are also related to measure-valued martingale optimization problems. Relying on the specific martingale structure, that paper manages to reduce the problem to finitely supported measures, where the usual viscosity theory can be applied. [24] proves a very general uniqueness result under a novel definition of viscosity solution which might not enjoy the stability property of viscosity solutions. PDEs on the Wasserstein space also appear in mean-field optimal stopping problems [49, 48, 46].
The remainder of the paper is organized as follows. In Section 2, we introduce the different topologically equivalent metrics used in this paper and provide precise definitions
of differentiability for functions on Wasserstein space. In Section 3, we present a version of Ishii's lemma on Wasserstein space, Theorem 3.1, and the comparison result, Theorem 3.2. Section 4 is devoted to an application of the comparison principle to stochastic control with partial observation. Section 5 provides some preliminary estimates for the derivatives of \(\rho_{F}\) and \(\rho_{\sigma}\). The proofs of Theorem 3.1, Theorem 3.2, and all the arguments in Section 4 can be found in Sections 6, 7, and 8, respectively.
We will end this section by explaining the frequently used notation.
**Notation.** We denote by \(\mathcal{P}_{2}(\mathbb{R}^{d})\) the set of Borel probability measures \(\mu\) in \(\mathbb{R}^{d}\) with a finite second moment, define \(m(\mu):=\int x\,\mu(dx)\in\mathbb{R}^{d}\), and by \(\mathcal{P}_{2,0}(\mathbb{R}^{d})\) the elements of \(\mathcal{P}_{2}(\mathbb{R}^{d})\) with mean \(0\). Both the identity matrix of dimension \(d\) and the identity map from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{d}\) are denoted by \(I_{d}\). The centered part of measure \(\mu\) is given by \(\mathcal{S}_{0}(\mu):=(I_{d}-m(\mu))_{\sharp}\mu\in\mathcal{P}_{2,0}(\mathbb{R }^{d})\), and the covariance matrix of \(\mu\) is defined as
\[V(\mu)=V(\mathcal{S}_{0}(\mu)):=\int(x-m(\mu))(x-m(\mu))^{\top}\,\mu(dx)\in \mathcal{S}_{d},\]
where \(\mathcal{S}_{d}\) stands for the set of symmetric matrices of dimension \(d\). For \(M,N\in\mathcal{S}_{d}\), \(|M-N|\) denotes the Frobenius norm, i.e., \(|M-N|^{2}:=Tr((M-N)^{\top}(M-N))\). For any complex number \(z\), we denote its conjugate by \(z^{*}\) and its real part by \(Re(z)\).
We denote by \(\mu*\nu\) the convolution of \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) defined via
\[\int f(x)(\mu*\nu)(dx)=\iint f(x+y)\mu(dx)\nu(dy)\quad\text{ for all }f\in C_{b}(\mathbb{R}^{d}).\]
For any \(\sigma>0\), \(N_{\sigma}\) denotes the \(d\)-dimensional Gaussian distribution with mean \(0\) and covariance matrix \(\sigma I_{d}\).
For \(k\in\mathbb{R}^{d}\), we define the Fourier basis function \(f_{k}\) by \(f_{k}(x)=(2\pi)^{-d/2}e^{-ik\cdot x}\), and for any \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) we take \(F_{k}(\mu)=\int f_{k}(x)\,\mu(dx)\). The Fourier transform \(\mathcal{F}g\) of a function \(g\in L^{2}(\mathbb{R}^{d})\) is defined as \(\mathcal{F}g(k):=\int f_{k}(x)g(x)\,dx.\) Let \(\mathbb{H}_{\lambda}\) be the Hilbert space
\[\mathbb{H}_{\lambda}:=\left\{f\in L^{2}(\mathbb{R}^{d}):\,(1+|k|^{2})^{ \frac{\lambda}{2}}\mathcal{F}f\in L^{2}(\mathbb{R}^{d})\right\},\]
where for all \(f\in\mathbb{H}_{\lambda}\) we define
\[\|f\|_{\lambda}^{2}:=\int(1+|k|^{2})^{\lambda}|\mathcal{F}f(k)|^{2}\,dk.\]
For any \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\) and measurable function \(f:\mathbb{R}^{d}\to\mathbb{R}\), we use the short notation \(\mu(f)=f([\mu]):=\int f(x)\,\mu(dx)\) if the integral is well-defined. For two probability measures \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), we define \(\|\mu-\nu\|_{\lambda}:=\sup_{\|f\|_{\lambda}\leq 1}(\mu-\nu)(f)\). For any \(f:\mathbb{R}^{d}\to\mathbb{R}\), let us also define its Lipschitz constant and supnorm
\[Lip(f)=\sup_{x,y\in\mathbb{R}^{d},x\neq y}\frac{|f(x)-f(y)|}{|x-y|},\quad||f|| _{\infty}:=\sup_{x\in\mathbb{R}^{d}}|f(x)|.\]
## 2. Metrics and derivatives on the space of measures
### Different metrics
We endow \(\mathcal{P}_{2}(\mathbb{R}^{d})\) with the \(2\)-Wasserstein distance \(W_{2}\), that is, for any \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\)
\[W_{2}(\mu,\nu)^{2}:=\inf_{\pi\in\Pi(\mu,\nu)}\int\frac{1}{2}|x-y|^{2}\,\pi(dx,dy),\]
where \(\Pi(\mu,\nu)\) denotes the collection of probability measures on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) with first and second marginals \(\mu\) and \(\nu\) respectively. By [1, Lemma 2], we have
\[W_{2}(\mu,\nu)^{2}=\frac{1}{2}|m(\mu)-m(\nu)|^{2}+W_{2}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\nu))^{2}. \tag{2.1}\]
As such, \(\mathcal{P}_{2}(\mathbb{R}^{d})\) admits a natural decomposition as the product of \(\mathbb{R}^{d}\) and \(\mathcal{P}_{2,0}(\mathbb{R}^{d})\) that will be crucial for our comparison result.
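For instance, for Dirac masses \(\mu=\delta_{a}\) and \(\nu=\delta_{b}\) with \(a,b\in\mathbb{R}^{d}\), the only coupling is \(\delta_{(a,b)}\) and \(\mathcal{S}_{0}(\delta_{a})=\mathcal{S}_{0}(\delta_{b})=\delta_{0}\), so both sides of (2.1) reduce to the same quantity:
\[W_{2}(\delta_{a},\delta_{b})^{2}=\frac{1}{2}|a-b|^{2}=\frac{1}{2}|m(\delta_{a})-m(\delta_{b})|^{2}+W_{2}(\mathcal{S}_{0}(\delta_{a}),\mathcal{S}_{0}(\delta_{b}))^{2}.\]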
Similarly to [22], we also endow \(\mathcal{P}_{2}(\mathbb{R}^{d})\) with the metric
\[W_{2}^{(\sigma)}(\mu,\nu):=W_{2}(\mu*N_{\sigma},\nu*N_{\sigma})\]
so that \((\mathcal{P}_{2}(\mathbb{R}^{d}),W_{2}^{(\sigma)})\) is a complete metric space topologically equivalent to \((\mathcal{P}_{2}(\mathbb{R}^{d}),W_{2})\) thanks to [22, Lemma 4.2]. Fix \(\sigma>0\) and let \(B_{0}=(-1,1]^{d}\). For \(l\geq 0\), \(\mathfrak{P}_{l}\) denotes the partition of \(B_{0}\) into \(2^{dl}\) translations of \((-2^{-l},2^{-l}]^{d}\) and for every \(n\geq 1\), \(B_{n}=(-2^{n},2^{n}]^{d}\backslash(-2^{n-1},2^{n-1}]^{d}\). With \(\delta_{n,l}:=2^{-(4n+2dl)}\), we define
\[\rho_{\sigma}^{2}(\mu,\nu):=|m(\mu)-m(\nu)|^{2}+\sum_{n\geq 0}2^{2n}\sum_{l\geq 0}2^{-2l}\sum_{B\in\mathfrak{P}_{l}}\left(\sqrt{|((\mathcal{S}_{0}(\mu)-\mathcal{S}_{0}(\nu))*N_{\sigma})((2^{n}B)\cap B_{n})|^{2}+\delta_{n,l}^{2}}-\delta_{n,l}\right) \tag{2.2}\]
which is a gauge-type function for the metric space \(\left(\mathcal{P}_{2}(\mathbb{R}^{d}),W_{2}^{(\sigma)}\right)\) due to [22, Lemma 4.3] and (2.1).
Besides \(W_{2}^{(\sigma)}\), we define another metric \(\rho_{F}:\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathcal{P}_{2}(\mathbb{R}^{d} )\to[0,\infty)\) via
\[\rho_{F}^{2}(\mu,\nu):=|m(\mu)-m(\nu)|^{2}+|V(\mu)-V(\nu)|^{2}+\int_{\mathbb{R }^{d}}\frac{|F_{k}(\mathcal{S}_{0}(\mu))-F_{k}(\mathcal{S}_{0}(\nu))|^{2}}{(1+ |k|^{2})^{\lambda}}\,dk. \tag{2.3}\]
Note that \(\rho_{F}\) is well defined on \(\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathcal{P}_{2}(\mathbb{R}^{d})\) for any \(\lambda>\frac{d}{2}\). For our purposes, we fix \(\lambda=d+7\) in the rest of the paper. According to the Sobolev embedding theorem [14, Corollary 9.13], given that \(\lambda-\frac{d}{2}=\frac{d}{2}+7>1\), the Lipschitz constant and supnorm are bounded by the \(\lambda\)-Sobolev norm, i.e.,
\[Lip(f)+\|f\|_{\infty}\leq C\|f\|_{\lambda}.\]
Additionally, for any \(\eta\in\mathcal{P}_{2}(\mathbb{R}^{d})\) and \(f\in\mathbb{H}_{\lambda}\), due to the Fourier inversion formula, it can be easily seen that
\[\eta(f)=\eta(\mathcal{F}^{-1}\mathcal{F}f) =\frac{1}{(2\pi)^{d}}\int\,\eta(dy)\int e^{iy\cdot k}\mathcal{F}f(k)\,dk=\frac{1}{(2\pi)^{d}}\int F_{-k}(\eta)\,\mathcal{F}f(k)\,dk\] \[\leq\frac{1}{(2\pi)^{d}}\|f\|_{\lambda}\sqrt{\int\frac{|F_{k}(\eta)|^{2}}{(1+|k|^{2})^{\lambda}}\,dk},\]
and
\[\sup_{\|f\|_{\lambda}\leq 1}\eta(f)=\frac{1}{(2\pi)^{d}}\sqrt{\int\frac{|F_{k}( \eta)|^{2}}{(1+|k|^{2})^{\lambda}}\,dk}.\]
This observation leads to the following lower bound on \(\rho_{F}\).
**Lemma 2.1**.: _There is a constant \(C\) such that for any \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\)_
\[\|\mu-\nu\|_{\lambda}:=\sup_{\|f\|_{\lambda}\leq 1}\int f\,d(\mu-\nu)\leq C \rho_{F}(\mu,\nu).\]
Proof.: Taking \(\tilde{\mu}=(I_{d}+m(\nu-\mu))_{\sharp}\mu\), for any \(f\in\mathbb{H}_{\lambda}\) we have that
\[\int f\,d(\mu-\nu)= \int f(x)\,(\mu-\tilde{\mu})(dx)+\int f(x)\,(\tilde{\mu}-\nu)(dx)\] \[= \int f(x)-f(x+m(\nu-\mu))\,\mu(dx)+\int f(x)\,(\tilde{\mu}-\nu)(dx)\] \[\leq |m(\mu)-m(\nu)|Lip(f)+\frac{1}{(2\pi)^{d}}\|f\|_{\lambda}\sqrt{ \int\frac{|F_{k}(\tilde{\mu})-F_{k}(\nu)|^{2}}{(1+|k|^{2})^{\lambda}}\,dk}\leq C \rho_{F}(\mu,\nu)\]
where in the last inequality we use the fact that thanks to \(m(\tilde{\mu})=m(\nu)\) one has
\[|F_{k}(\tilde{\mu})-F_{k}(\nu)|^{2}=|F_{k}(\mathcal{S}_{0}(\mu))-F_{k}( \mathcal{S}_{0}(\nu))|^{2}.\]
Before stating the next lemma, we recall that the completeness of a metric space is a property of its metric and not its topology.
**Lemma 2.2**.: \((\mathcal{P}_{2}(\mathbb{R}^{d}),\rho_{F})\) _is a non-complete metric space which is topologically equivalent to \((\mathcal{P}_{2}(\mathbb{R}^{d}),W_{2})\) and \((\mathcal{P}_{2}(\mathbb{R}^{d}),W_{2}^{(\sigma)})\)._
Proof.: Note that for \(\mu_{n},\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), the convergence \(\rho_{F}(\mu_{n},\mu)\to 0\) means the convergence of the first two moments of the measures and the pointwise convergence of characteristic functions. Thanks to [51, Theorem 6.9] and [36, Theorem 6.3], this implies distributional convergence and also that \(W_{2}(\mu_{n},\mu)\to 0\). Together with [1, Lemma 2], we conclude that \(\rho_{F}\) and \(W_{2}\) are topologically equivalent. Due to [22, Lemma 4.3], the topologies generated by \(W_{2}\) and \(W_{2}^{(\sigma)}\) are equal as well.
The following example shows the lack of completeness of \(\rho_{F}\) which also implies that the metrics are not metrically equivalent. Let us take
\[\mu_{n}=\left(1-\frac{1}{n}\right)\delta_{0}+\frac{1}{2n}\delta_{-\sqrt{n}e_{ 1}}+\frac{1}{2n}\delta_{\sqrt{n}e_{1}},\quad n\in\mathbb{N},\]
where \(e_{1}=(1,0,0,\ldots,0)\in\mathbb{R}^{d}\). For each \(n\), \(\mu_{n}\) has mean \(0\) and covariance matrix \(e_{1}e_{1}^{\top}\). Due to the Sobolev embedding, for any \(n,m\geq 1\)
\[\rho_{F}(\mu_{n},\mu_{m})=\sqrt{\int\frac{|F_{k}(\mu_{n})-F_{k}(\mu_{m})|^{2}}{(1+|k|^{2})^{\lambda}}\,dk}\leq C\sup_{\|f\|_{\lambda}\leq 1}\int f\,d(\mu_{n}-\mu_{m})\leq C\sup_{Lip(f)\leq 1}\int f\,d(\mu_{n}-\mu_{m})\leq CW_{1}(\mu_{n},\mu_{m}).\]
Since \(W_{1}(\mu_{n},\delta_{0})\to 0\) as \(n\to\infty\), \((\mu_{n})_{n\geq 1}\) is a Cauchy sequence w.r.t. \(W_{1}\) and also Cauchy w.r.t. \(\rho_{F}\). However, \(\rho_{F}(\mu_{n},\delta_{0})\geq 1\) due to the variance part, and hence there is no limit point w.r.t. \(\rho_{F}\). Therefore \(\rho_{F}\) is not complete.
We have introduced two functions on \((\mathcal{P}_{2}(\mathbb{R}^{d}))^{2}\). \(\rho_{F}\) is a non-complete metric associated with the Fourier-Wasserstein metrics in [1, 20, 50]. As observed in [47], for first-order equations on compact domains, this function enjoys good differentiability and symmetry properties and will be used for the classical doubling of variables method of viscosity solutions. Compared to [47], the mean term \(m\) in (2.3) allows us to easily compute the partial Hessian of \(\rho_{F}\). The presence of the \(V\) term allows us to claim the topological equivalence of \(\rho_{F}\) to \(W_{2}\). However, this \(V\) term is also the precise reason why completeness fails (owing to the unboundedness of the domain). Indeed, in the proof of Lemma 2.2, the sequence \(V(\mu_{m})\) converges; however, its limit is not \(V(\delta_{0})\), and the space \((\mathcal{P}_{2}(\mathbb{R}^{d}),\rho_{F})\) lacks completeness. If the domain were compact, as noted in [47], these metrics would be equivalent and \((\mathcal{P}_{2}(\mathbb{R}^{d}),\rho_{F})\) would be complete.
\(\rho_{\sigma}\) is a modification of the gauge-type function of [22] allowing us to use the Borwein-Preiss variational principle in [13]. Note that the two metrics \(\rho_{F},W_{2}^{(\sigma)}\) are topologically equivalent to \(W_{2}\). Although these metrics lead to different sets of uniformly continuous functions, they have the same set of continuous and semicontinuous functions.
### First-order derivatives
For a function \(f:\mathbb{R}^{d}\mapsto\mathbb{R}\) we define the quadratic weighted norm
\[||f||_{\mathfrak{q}}:=\sup_{x\in\mathbb{R}^{d}}\frac{|f(x)|}{1+|x|^{2}} \tag{2.4}\]
and denote by \(B_{\mathfrak{q}}\) the set of Borel measurable functions \(f\) with at most quadratic growth, i.e. \(||f||_{\mathfrak{q}}<\infty\).
Following the definition of [17], a function \(u:\mathcal{P}_{2}(\mathbb{R}^{d})\mapsto\mathbb{R}\) is said to be linearly differentiable if there exists a continuous function1
Footnote 1: \((\mathcal{P}_{2}(\mathbb{R}^{d}),\rho_{F})\) and \((\mathcal{P}_{2}(\mathbb{R}^{d}),W_{2})\) being topologically equivalent, we do not need to specify the underlying metric on \(\mathcal{P}_{2}(\mathbb{R}^{d})\).
\[\frac{\delta u}{\delta\mu}:\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathbb{R}^{d }\mapsto\mathbb{R}\]
so that
\[\left\|\frac{\delta u}{\delta\mu}(\mu,\cdot)\right\|_{\mathfrak{q}}<\infty\]
and for all \(\mu,\mu^{\prime}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) we have that
\[\lim_{h\to 0}\frac{u(\mu+h(\mu^{\prime}-\mu))-u(\mu)}{h}=\int\frac{\delta u}{ \delta\mu}(\mu,x)d(\mu^{\prime}-\mu)(x).\]
In case that \(x\mapsto\frac{\delta u}{\delta\mu}(\mu,x)\) is differentiable, we say \(u\) is \(C^{1}\) and define
\[D_{\mu}u(\mu,x)=D_{x}\frac{\delta u}{\delta\mu}(\mu,x)\in\mathbb{R}^{d}.\]
If \(\left\|D_{\mu}u(\mu,\cdot)\right\|_{\mathfrak{q}}<\infty\), for notational simplicity we denote for all \(\mu^{\prime}\in\mathcal{P}_{2}(\mathbb{R}^{d})\)
\[D_{\mu}u(\mu,[\mu^{\prime}])=\int D_{\mu}u(\mu,x)d\mu^{\prime}(x)\in\mathbb{R }^{d}.\]
It is shown in [17, Proposition 2.3] that \(D_{\mu}u\) can be understood as a derivative of \(u\) along push-forward directions, meaning that for every bounded Borel measurable vector field \(\phi:\mathbb{R}^{d}\mapsto\mathbb{R}^{d}\) we have
\[\lim_{h\to 0}\frac{u((I_{d}+h\phi)_{\sharp}\mu)-u(\mu)}{h}=\int\phi^{\top}(x)D_{ \mu}u(\mu,x)\,d\mu(x).\]
In particular for \(\phi(x)=v\in\mathbb{R}^{d}\), we have that
\[\lim_{h\to 0}\frac{u((I_{d}+hv)_{\sharp}\mu)-u(\mu)}{h}=v^{\top}D_{\mu}u(\mu,[\mu]).\]
### Second-order derivatives
#### 2.3.1. Full \(C^{2}\)-regularity
Similarly to [17, 21], if \(D_{\mu}u(\mu,x)\) is sufficiently smooth, we also define \(D_{x\mu}u(\mu,x)=D_{x}D_{\mu}u(\mu,x)=D_{xx}^{2}\frac{\delta u}{\delta\mu}(\mu,x)\in\mathcal{S}_{d}\) and \(D_{\mu\mu}^{2}u(\mu,x,z)=D_{\mu}(D_{\mu}u(\mu,x))(z)\). If \(\left\|D_{x\mu}u(\mu,\cdot)\right\|_{\mathfrak{q}}<\infty\) and
\[\left\|D_{\mu\mu}^{2}u(\mu,\cdot)\right\|_{\mathfrak{q}}:=\sup_{x,z\in\mathbb{R}^{d}}\frac{|D_{\mu\mu}^{2}u(\mu,x,z)|}{1+|x|^{2}+|z|^{2}}<\infty, \tag{2.5}\]
we define the partial Hessian
\[\mathcal{H}u(\mu)=D_{\mu\mu}^{2}u(\mu,[\mu],[\mu])+D_{x\mu}u(\mu,[\mu])\in \mathcal{S}_{d}. \tag{2.6}\]
For a function \(U:\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\mapsto\mathbb{R}\) we also define the cross derivative
\[D_{y\mu}^{2}U(y,\mu,x):=D_{y}D_{\mu}U(y,\mu,x)\]
and the partial cross derivative
\[\mathcal{D}_{y\mu}^{2}U(y,\mu):=\int D_{y\mu}^{2}U(y,\mu,x)\,\mu(dx)=D_{y\mu} ^{2}U(y,\mu,[\mu])\in\mathbb{R}^{d\times d}. \tag{2.7}\]
For fixed maturity \(T>0\), we denote \(\Theta:=[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\), \(\mathring{\Theta}:=[0,T)\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\), and \(\theta=(t,y,\mu)\in\Theta\) the generic element of \(\Theta\). We define the following metric and gauge-type function on \(\Theta\) by
\[d_{F}^{2}(\theta,\tilde{\theta}):=d_{F}^{2}((t,y,\mu),(\tilde{t},\tilde{y},\tilde{\mu})):=|t-\tilde{t}|^{2}+|y-\tilde{y}|^{2}+\rho_{F}^{2}(\mu,\tilde{\mu}) \tag{2.8}\] \[d_{\sigma}^{2}(\theta,\tilde{\theta}):=d_{\sigma}^{2}((t,y,\mu), (\tilde{t},\tilde{y},\tilde{\mu})):=|t-\tilde{t}|^{2}+|y-\tilde{y}|^{2}+\rho_{ \sigma}^{2}(\mu,\tilde{\mu}) \tag{2.9}\]
for \((\theta,\tilde{\theta})=((t,y,\mu),(\tilde{t},\tilde{y},\tilde{\mu}))\in \Theta^{2}\).
Our goal is to provide assumptions on the function
\[h:\Theta\times\mathbb{R}\times\mathbb{R}^{d}\times\mathcal{S}_{d}\times B_{\mathfrak{q}}^{d}\times B_{\mathfrak{q}}^{d\times d}\times\mathbb{R}^{d\times d}\times\mathcal{S}_{d}\rightarrow\mathbb{R}\]
to have comparison of Lipschitz continuous viscosity solutions of
\[-\partial_{t}U(\theta)-h(\theta,U(\theta),D_{y}U(\theta),D_{yy}^{2}U(\theta),D _{\mu}U(\theta),D_{x\mu}U(\theta),\mathcal{D}_{y\mu}^{2}U(\theta),\mathcal{H}U (\theta))=0, \tag{2.10}\]
for \(\theta=(t,y,\mu)\in\mathring{\Theta}\). In equation (2.10), the subscript \(y\) denotes differentiation with respect to the state variable \(y\); \((D_{\mu}U(\theta),D_{x\mu}U(\theta))=(D_{\mu}U(\theta,x),D_{x\mu}U(\theta,x))_{x\in\mathbb{R}^{d}}\) are functions of the dummy variable \(x\); and \((D_{y}U(\theta),D_{yy}^{2}U(\theta),\mathcal{D}_{y\mu}^{2}U(\theta),\mathcal{H}U(\theta))\in\mathbb{R}^{d}\times\mathcal{S}_{d}\times\mathbb{R}^{d\times d}\times\mathcal{S}_{d}\) are finite dimensional.
We now define the concept of full \(C^{2}\)-regular functions and full \(C^{2}\)-regular solutions of (2.10).
**Definition 2.1**.: _We say that a function \(U:\Theta\mapsto\mathbb{R}\) is full \(C^{2}\)-regular if_
\[(\partial_{t}U,D_{y}U,D_{yy}^{2}U,D_{\mu}U,D_{x\mu}U,D_{y\mu}^{2}U,D_{\mu\mu}^{2}U)\in\mathbb{R}\times\mathbb{R}^{d}\times\mathcal{S}_{d}\times B_{\mathfrak{q}}^{d}\times B_{\mathfrak{q}}^{d\times d}\times B_{\mathfrak{q}}^{d\times d}\times B_{\mathfrak{q}}^{d\times d}\]
_is continuous in \((t,x,z,y,\mu)\)._
\(U\) is a full \(C^{2}\)-regular solution (resp. subsolution, supersolution) of (2.10), if it is a full \(C^{2}\)-regular function and_
\[-\partial_{t}U(\theta)-h(\theta,U(\theta),D_{y}U(\theta),D_{yy}^{2 }U(\theta),D_{\mu}U(\theta),D_{x\mu}U(\theta),\mathcal{D}_{y\mu}^{2}U(\theta), \mathcal{H}U(\theta))\] \[=0\,(\text{resp.}\ \ \leq 0,\,\geq 0),\]
_for all \(\theta\in\hat{\Theta}\) where \(\mathcal{D}_{y\mu}^{2}U(\theta)\) is defined by (2.7) and \(\mathcal{H}U(\theta)\) is defined by (2.6)._
#### 2.3.2. Partial \(C^{2}\)-regularity
In (2.10), we do not need \(D_{\mu\mu}^{2}U(\theta,x,z)\) and \(D_{y}D_{\mu}U(\theta,x)\) for all \((x,z)\), but only an integral of these functions. Inspired by [21, 32], we aim to define the concept of partial \(C^{2}\)-regularity, which is, conveniently, exactly the regularity of the test functions that we are able to construct with the doubling of variables methodology in our proof of the comparison of viscosity solutions. Our definition of partial regularity is motivated by the following lemma, which gives an intrinsic definition of \(\mathcal{H}U\) and \(\mathcal{D}_{y\mu}^{2}U\) without relying on the existence of \(D_{\mu\mu}^{2}U(\theta,x,z)\) and \(D_{y}D_{\mu}U(\theta,x)\).
**Lemma 2.3**.: _Let \(U\) be a full \(C^{2}\)-regular function. Then_
\[\frac{\partial^{2}}{\partial z^{2}}U(t,y,(I_{d}+z)_{\sharp}\mu)|_ {z=0} =\mathcal{H}U(t,y,\mu)\] \[\frac{\partial^{2}}{\partial z\partial y}U(t,y,(I_{d}+z)_{\sharp }\mu)|_{z=0} =\mathcal{D}_{y\mu}^{2}U(t,y,\mu).\]
Proof.: The result is a consequence of [21, Proposition 3.4].
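For example, for \(U(t,y,\mu)=\mu(g)\) with \(g\in C^{2}_{b}(\mathbb{R}^{d})\), one has \(D_{\mu}U(\theta,x)=Dg(x)\), \(D_{x\mu}U(\theta,x)=D^{2}g(x)\) and \(D_{\mu\mu}^{2}U\equiv 0\), so that (2.6) gives \(\mathcal{H}U(\theta)=\int D^{2}g(x)\,\mu(dx)\). This agrees with the intrinsic expression of Lemma 2.3, since
\[\frac{\partial^{2}}{\partial z^{2}}U(t,y,(I_{d}+z)_{\sharp}\mu)\Big|_{z=0}=\frac{\partial^{2}}{\partial z^{2}}\int g(x+z)\,\mu(dx)\Big|_{z=0}=\int D^{2}g(x)\,\mu(dx),\]
while both expressions for \(\mathcal{D}_{y\mu}^{2}U\) vanish.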
Now we define the concept of partial \(C^{2}\)-regularity and the solution property of partial \(C^{2}\)-regular functions.
**Definition 2.2**.: _We say that a function \(U:\Theta\mapsto\mathbb{R}\) is partial \(C^{2}\)-regular if the function \((t,y,z,\mu)\in\mathbb{R}^{d}\times\Theta\mapsto U(t,y,(I_{d}+z)_{\sharp}\mu)\) is \(C^{1,2,2}\) in \((t,y,z)\) and continuous in all its variables, and \((D_{\mu}U,D_{x\mu}U)\in B_{\mathfrak{q}}^{d}\times B_{\mathfrak{q}}^{d\times d}\) exist and are continuous in \((t,x,y,\mu)\)._
_For a partial \(C^{2}\)-regular function, we define \(\mathcal{H}U\) and \(\mathcal{D}_{y\mu}^{2}\) as_
\[\mathcal{H}U(\theta) :=\frac{\partial^{2}}{\partial z^{2}}U(t,y,(I_{d}+z)_{\sharp}\mu )|_{z=0},\] \[\mathcal{D}_{y\mu}^{2}U(\theta) :=\frac{\partial^{2}}{\partial z\partial y}U(t,y,(I_{d}+z)_{ \sharp}\mu)|_{z=0}.\]
_We say that a partial \(C^{2}\)-regular function \(U:\Theta\mapsto\mathbb{R}\) is a partial \(C^{2}\)-regular solution (resp. subsolution, supersolution) of (2.10), if_
\[(\partial_{t}U,D_{y}U,D_{yy}^{2}U,D_{\mu}U,D_{x\mu}U,\mathcal{D}_{y\mu}^{2}U, \mathcal{H}U)\in\mathbb{R}\times\mathbb{R}^{d}\times\mathcal{S}_{d}\times B_{ \mathfrak{q}}^{d}\times B_{\mathfrak{q}}^{d\times d}\times\mathbb{R}^{d\times d }\times\mathcal{S}_{d}\]
_is continuous in all of its variables and_
\[-\partial_{t}U(\theta)-h(\theta,U(\theta),D_{y}U(\theta),D_{yy}^ {2}U(\theta),D_{\mu}U(\theta),D_{x\mu}U(\theta),\mathcal{D}_{y\mu}^{2}U( \theta),\mathcal{H}U(\theta))\] \[=0\,(\text{resp.}\ \ \leq 0,\,\geq 0),\]
_for all \(\theta\in\hat{\Theta}\)._
**Remark 2.1**.:
1. _Thanks to (_2.8_), for fixed_ \(\tilde{\mu}\in\mathcal{P}_{2}(\mathbb{R}^{d})\)_, the mapping_ \[\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\mapsto d_{F}^{2}(\mu,\tilde{\mu})\] _is partial_ \(C^{2}\)_-regular with_ \[\mathcal{H}d_{F}^{2}(\mu,\tilde{\mu})=2I_{d}\text{ and }\mathcal{D}_{y\mu}^{2}d_{F}^{2}((t,y,\mu),\tilde{\theta})=0,\] _as verified by a direct computation after this remark._
2. _Note that although_ \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\mapsto W_{2}(\mu,\nu)^{2}\) _admits a partial Hessian_ \(\mathcal{H}\)_, it does not admit a first-order derivative at some points of_ \(\mathcal{P}_{2}(\mathbb{R}^{d})\)_. As such, it is not partial_ \(C^{2}\)_-regular._
3. _Given that we have relaxed the definition of smoothness, a natural question is for which class of stochastic dynamics on the Wasserstein space an Ito formula still holds. We provide an answer to this question in Section_ 4_._
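Regarding the first item of Remark 2.1, fix \(\tilde{\theta}=(\tilde{t},\tilde{y},\tilde{\mu})\in\Theta\). A translation of \(\mu\) changes neither its centered part nor its covariance, so that
\[d_{F}^{2}\big((t,y,(I_{d}+z)_{\sharp}\mu),\tilde{\theta}\big)=|t-\tilde{t}|^{2}+|y-\tilde{y}|^{2}+|m(\mu)+z-m(\tilde{\mu})|^{2}+|V(\mu)-V(\tilde{\mu})|^{2}+\int\frac{|F_{k}(\mathcal{S}_{0}(\mu))-F_{k}(\mathcal{S}_{0}(\tilde{\mu}))|^{2}}{(1+|k|^{2})^{\lambda}}\,dk.\]
Only the third term depends on \(z\), and differentiating twice in \(z\), respectively once in \(z\) and once in \(y\), at \(z=0\) yields the values \(2I_{d}\) and \(0\) claimed above.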
## 3. Notion of Viscosity solutions and our main results
The objective of this section is to use the fact that (2.10) only requires the partial Hessian and partial cross derivative in second-order terms to define the (partial) second-order jets. This leads to a novel definition of the viscosity property and an Ishii lemma. As an application of this result, we provide the promised comparison result.
### Second-order jets
**Definition 3.1**.: _Let \(U:\Theta\mapsto\mathbb{R}\) be a locally bounded function and \(\theta\in\mathring{\Theta}\). We define the (partial) second-order superjet \(J^{2,+}U(\theta)\subset\mathbb{R}\times\mathbb{R}^{d}\times\mathcal{S}_{d} \times B_{\mathfrak{q}}^{d}\times B_{\mathfrak{q}}^{d\times d}\times\mathbb{R} ^{d\times d}\times\mathcal{S}_{d}\) of \(U\) at \(\theta\) by_
\[J^{2,+}U(\theta):=\Big{\{}(\partial_{t}\phi(\theta),D_{y}\phi(\theta),D_{yy}^{ 2}\phi(\theta),D_{\mu}\phi(\theta),D_{x\mu}\phi(\theta),\mathcal{D}_{y\mu}^{2} \phi(\theta),\mathcal{H}\phi(\theta)):\]
\[U-\phi\text{ has a local maximum at }\theta,\,\phi\text{ is partial }C^{2}\text{-regular}\Big{\}}\]
and the subjet by \(J^{2,-}U(\theta)=-J^{2,+}(-U)(\theta)\). Recalling that the topologies of the sets \(B_{\mathfrak{q}}^{d}\) and \(B_{\mathfrak{q}}^{d\times d}\) are induced by (2.4) and (2.5), respectively, we define the closure of the jets as
\[\bar{J}^{2,+}U(\theta)= \Big{\{}(b,p,X^{11},f,g,X^{12},X^{22})\in\mathbb{R}\times\mathbb{ R}^{d}\times\mathcal{S}_{d}\times B_{\mathfrak{q}}^{d}\times B_{\mathfrak{q}}^{d \times d}\times\mathbb{R}^{d\times d}\times\mathcal{S}_{d}:\] \[U-\phi^{n}\text{ has a local maximum at }\theta^{n},\,\phi^{n}\text{ is partial }C^{2}\text{-regular}\] \[(\partial_{t}\phi^{n}(\theta^{n}),D_{y}\phi^{n}(\theta^{n}),D_{yy}^ {2}\phi^{n}(\theta^{n}),D_{\mu}\phi^{n}(\theta^{n}),D_{x\mu}\phi^{n}(\theta^{n }),\mathcal{D}_{y\mu}^{2}\phi^{n}(\theta^{n}),\mathcal{H}\phi^{n}(\theta^{n}))\] \[\to(b,p,X^{11},f,g,X^{12},X^{22}),\,\theta^{n}\to\theta,\,U( \theta^{n})\to U(\theta)\Big{\}},\]
_and also \(\bar{J}^{2,-}U(\theta)=-\bar{J}^{2,+}(-U)(\theta)\)._
**Definition 3.2**.: _We say that a locally bounded function \(U:\Theta\mapsto\mathbb{R}\) is a viscosity subsolution (resp. supersolution) to (2.10) if for all \(\theta\in\mathring{\Theta}\) and \((b,p,X^{11},f,g,X^{12},X^{22})\in J^{2,+}U(\theta)\) (resp. \(\in J^{2,-}U(\theta)\)), one has the inequality_
\[-b-h(\theta,U(\theta),p,X^{11},f,g,X^{12},X^{22})\leq 0\,(\text{resp.}\geq 0). \tag{3.1}\]
**Remark 3.1**.: _In our definition of viscosity solution, the test functions are allowed to have partial regularity rather than full regularity as it is the case in [32, Section 5]. Thus, compared to a definition with full regularity, with our definition one needs to check the inequality (3.1) with more test functions \(\phi\) as the jets are larger. As noted in [30, Remark
3.12], our more stringent definition of viscosity property (with a larger set of test functions) is expected to simplify the proof of the comparison of viscosity solutions._
### An Ishii lemma on the Wasserstein space
The main technical contribution of the paper is the following version of Ishii's Lemma, whose proof is given in Section 6. The result can be seen as an infinite-dimensional extension of [25, Theorem 3.2 and Theorem 8.3] or [35, Proposition II.3]. Unlike [31, 40], we cannot rely on any underlying Hilbert space structure. The finite dimensionality of the partial Hessian \(\mathcal{H}u(\theta)\) is crucial in the proof of the Theorem.
**Theorem 3.1**.: _Assume that \(u\), \(-v:\Theta\mapsto\mathbb{R}\) are upper semicontinuous and that the function_
\[(t,y,z)\in[0,T]\times(\mathbb{R}^{d})^{2}\mapsto(u(t,y,(I_{d}+z)_{\sharp}\mu), v(t,y,(I_{d}+z)_{\sharp}\mu)) \tag{3.2}\]
_is continuous for fixed \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\). Let \(\alpha>0\) and \(((t^{*},y^{*},\mu^{*}),(\tilde{t}^{*},\tilde{y}^{*},\tilde{\mu}^{*}))=(\theta ^{*},\tilde{\theta}^{*})\in\mathring{\Theta}^{2}\) be a strict global maximum of_
\[(\theta,\tilde{\theta})\in\Theta^{2}\mapsto u(\theta)-v(\tilde{\theta})-\frac {\alpha}{2}d_{F}^{2}(\theta,\tilde{\theta}).\]
_Denote_
\[\mathcal{L}(\mu,\eta,\nu):= 2\int\frac{Re(F_{k}(\mathcal{S}_{0}(\mu))(F_{k}(\mathcal{S}_{0}( \eta))-F_{k}(\mathcal{S}_{0}(\nu)))^{*})}{(1+|k|^{2})^{\lambda}}dk \tag{3.3}\] \[+2Tr(V^{\top}(\mathcal{S}_{0}(\mu))(V(\mathcal{S}_{0}(\eta))-V( \mathcal{S}_{0}(\nu)))),\] \[\Psi(\mu):= 2\rho_{F}^{2}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\mu^{*}))+ \mathcal{L}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\mu^{*}),\mathcal{S}_{0}( \tilde{\mu}^{*})),\] (3.4) \[\tilde{\Psi}(\mu):= 2\rho_{F}^{2}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\tilde{\mu}^{ *}))-\mathcal{L}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\mu^{*}),\mathcal{S}_{0} (\tilde{\mu}^{*})).\]
_Then for any \(\epsilon>0\), there exist \(X=\begin{pmatrix}X^{11}&X^{12}\\ X^{21}&X^{22}\end{pmatrix},\tilde{X}=\begin{pmatrix}\tilde{X}^{11}&\tilde{X}^{ 12}\\ \tilde{X}^{21}&\tilde{X}^{22}\end{pmatrix}\in\mathcal{S}_{2d}\) so that_
\[\left(\alpha(t^{*}-\tilde{t}^{*}),\alpha(y^{*}-\tilde{y}^{*}),X^{11},\alpha(m(\mu^{*})-m(\tilde{\mu}^{*}))+\alpha\frac{D_{\mu}\Psi(\mu^{*})}{2},\alpha\frac{D_{x\mu}\Psi(\mu^{*})}{2},X^{12},X^{22}\right)\] \[\in\bar{J}^{2,+}u(\theta^{*}),\] \[\left(\alpha(t^{*}-\tilde{t}^{*}),\alpha(y^{*}-\tilde{y}^{*}),-\tilde{X}^{11},\alpha(m(\mu^{*})-m(\tilde{\mu}^{*}))-\alpha\frac{D_{\mu}\tilde{\Psi}(\tilde{\mu}^{*})}{2},-\alpha\frac{D_{x\mu}\tilde{\Psi}(\tilde{\mu}^{*})}{2},-\tilde{X}^{12},-\tilde{X}^{22}\right)\] \[\in\bar{J}^{2,-}v(\tilde{\theta}^{*}),\text{ and }\] \[-\left(\frac{1}{\epsilon}+2\alpha\right)I_{4d}\leq\begin{pmatrix}X&0\\ 0&\tilde{X}\end{pmatrix}\leq(\alpha+2\epsilon\alpha^{2})\begin{pmatrix}I_{2d}&-I_{2d}\\ -I_{2d}&I_{2d}\end{pmatrix}.\]
### Comparison of viscosity solutions
Before stating our main theorem, we list the necessary assumptions.
**Assumption 3.1**.:
1. _Assume that the Hamiltonian_ \(h\) _is continuous in all variables and satisfies the following Lipschitz continuity assumption:_ \[|h(\theta,u,p,X^{11},f,g,X^{12},X^{22})-h(\theta,\tilde{u},\tilde{p},\tilde{X}^{11},\tilde{f},\tilde{g},X^{12},\tilde{X}^{22})|\leq L_{h}|u-\tilde{u}|+L_{h}\left(1+|y|+\int|x|^{2}\mu(dx)\right)\left(||f-\tilde{f}||_{\mathfrak{q}}+||g-\tilde{g}||_{\mathfrak{q}}+|p-\tilde{p}|+|X^{11}-\tilde{X}^{11}|+|X^{22}-\tilde{X}^{22}|\right) \tag{3.5}\] _for some constant_ \(L_{h}\)_._
2. _There exists a modulus of continuity_ \(\omega_{h}\) _so that for all_ \(\epsilon>0\) _and_ \(X,\tilde{X}\in\mathcal{S}_{2d}\) _satisfying_ \[-\frac{3}{\epsilon}I_{4d}\leq\begin{pmatrix}X&0\\ 0&\tilde{X}\end{pmatrix}\leq\frac{3}{\epsilon}\begin{pmatrix}I_{2d}&-I_{2d}\\ -I_{2d}&I_{2d}\end{pmatrix}, \tag{3.6}\] _and for all_ \(\theta^{*},\tilde{\theta}^{*}\in\mathring{\Theta}\) _and_ \(r\in\mathbb{R}\)_, we have the inequality_ \[h\left(\theta^{*},r,\frac{y^{*}-\tilde{y}^{*}}{\epsilon},(X)^{11},\frac{m(\mu^{*}-\tilde{\mu}^{*})}{\epsilon}+\frac{D_{\mu}\Psi(\mu^{*})}{2\epsilon},\frac{D_{x\mu}\Psi(\mu^{*})}{2\epsilon},(X)^{12},(X)^{22}\right)\] \[\leq h\left(\tilde{\theta}^{*},r,\frac{y^{*}-\tilde{y}^{*}}{\epsilon},-(\tilde{X})^{11},\frac{m(\mu^{*}-\tilde{\mu}^{*})}{\epsilon}+\frac{D_{\mu}\Psi(\mu^{*})}{2\epsilon},\frac{D_{x\mu}\Psi(\mu^{*})}{2\epsilon},-(\tilde{X})^{12},-(\tilde{X})^{22}\right)\] \[\quad+\omega_{h}\left(\frac{d_{F}^{2}(\theta^{*},\tilde{\theta}^{*})}{\epsilon}+d_{F}(\theta^{*},\tilde{\theta}^{*})\right)\left(1+|y|+\int|x|^{2}\mu(dx)\right) \tag{3.7}\] _where_ \(\Psi\) _is defined by (_3.4_) and its derivatives computed in (_7.15_)._
3. _For all_ \((\theta,u,p,f,g)\in\Theta\times\mathbb{R}\times\mathbb{R}^{d}\times B^{d}_{\mathfrak{q}}\times B^{d\times d}_{\mathfrak{q}}\)_, the mapping_ \[\begin{pmatrix}X^{11}&X^{12}\\ X^{12}&X^{22}\end{pmatrix}\in\mathcal{S}_{2d}\mapsto h(\theta,u,p,X^{11},f,g,X^{12},X^{22}) \tag{3.8}\] _is increasing with respect to the order of symmetric matrices._
We now use the version of Ishii's lemma in Theorem 3.1 to prove the comparison of viscosity solutions, which is the main result of the paper. The proof of this theorem is provided in Section 7.
**Theorem 3.2**.: _Assume that \(u\), \(-v:\Theta\mapsto\mathbb{R}\) are bounded from above and \(d_{F}\)-Lipschitz continuous functions in \((y,\mu)\) and uniformly continuous in \(t\), and \(u\) (resp. \(v\)) is a viscosity subsolution (resp. supersolution) of the equation_
\[-\partial_{t}U(\theta)-h(\theta,U(\theta),D_{y}U(\theta),D_{yy}^{2}U(\theta), D_{\mu}U(\theta,\cdot),D_{x\mu}U(\theta,\cdot),\mathcal{D}^{2}_{y\mu}U(\theta), \mathcal{H}U(\theta))=0\]
_for \(\theta=(t,y,\mu)\in\mathring{\Theta}\)._
_Then, under Assumption 3.1, \(u(T,\cdot)\leq v(T,\cdot)\) implies that \(u(t,\cdot)\leq v(t,\cdot)\) for all \(t\in[0,T]\)._
**Remark 3.2**.:
1. _In order to avoid technicalities and concentrate solely on the issues associated with the absence of a Hilbert space structure in the Wasserstein space, we demonstrate our comparison result under the assumption of a Lipschitz continuous generator \(h\). We hypothesize that by incorporating the pertinent penalizations it is possible to relax this uniform continuity assumption._
2. _Assumption (_3.7_) is the counterpart of the assumption in_ _[_25_, Inequality (_3.14_)]__. The derivatives_ \(D_{\mu}\Psi(\mu^{*})\) _and_ \(D_{x\mu}\Psi(\mu^{*})\) _admit explicit Fourier decompositions, computed in Lemma_ 5.2_. For the Hamilton-Jacobi-Bellman equation we study below, this assumption requires integrability and regularity of the Fourier transform of the data of the problem._
## 4. Application to stochastic control with partial observation
As an application, we show that the value function of the stochastic control problem with partial observation is the unique viscosity solution of a parabolic equation. Proofs are postponed to Section 8.
Suppose that we have two independent Brownian motions \(V,W\) of dimensions \(d_{1}\) and \(d_{2}\), and a compact control set \(A\subset\mathbb{R}^{d}\). Take coefficients \(b:\mathbb{R}^{d}\times A\to\mathbb{R}^{d}\), \(\sigma:\mathbb{R}^{d}\times A\to\mathbb{R}^{d_{1}\times d}\), and \(\tilde{\sigma}:A\to\mathbb{R}^{d_{2}\times d}\). We consider the following stochastic differential equations
\[dX^{t,\mu,\alpha}_{s} =b(X^{t,\mu,\alpha}_{s},\alpha_{s})\,ds+\sigma(X^{t,\mu,\alpha}_{ s},\alpha_{s})\,dV_{s}+\tilde{\sigma}(\alpha_{s})\,dW_{s},\quad t\leq s\leq T,\] \[X^{t,\mu,\alpha}_{t} =\xi,\]
where \(\xi\) is independent of \(V,W\), has distribution \(\mu\), and \(\alpha_{t}\in A\) is an admissible control adapted only to \(W\). Since \(\xi\) is independent of \(V,W\), it can be easily checked that the distribution of \((X^{t,\mu,\alpha}_{s},\alpha_{s})\) is independent of the choice of \(\xi\).
Let \(m_{t}\) denote the conditional law \(\mathcal{L}(X_{t}\,|\,\mathcal{F}^{W}_{t})\). Then \(m_{t}\) satisfies the equation
\[dm^{t,\mu,\alpha}_{s}(h) =m^{t,\mu,\alpha}_{s}(L^{\alpha_{s}}h)\,ds+m^{t,\mu,\alpha}_{s}(M ^{\alpha_{s}}h)\,dW_{s},\quad\ t\leq s\leq T \tag{4.1}\] \[m^{t,\mu,\alpha}_{t} =\mu,\]
where \(h:\mathbb{R}^{d}\to\mathbb{R}\) is any \(C^{2}\) test function and
\[L^{a}h(\cdot):= b(\cdot,a)^{\top}Dh(\cdot)+\frac{1}{2}Tr\left((\sigma\sigma^{ \top}(\cdot,a)+\tilde{\sigma}\tilde{\sigma}^{\top}(a))D^{2}h(\cdot)\right),\] \[M^{a}h(\cdot):= \tilde{\sigma}(a)^{\top}Dh(\cdot).\]
Take \(\mathcal{A}:=\{\alpha=(\alpha_{s})_{0\leq s\leq T}:\alpha_{s}\in A\text{ is }\mathcal{F}^{W}_{s}\text{ measurable for all }s\in[0,T]\}\) to be the set of all admissible controls. Given a running cost \(f:\mathbb{R}^{d}\times A\to\mathbb{R}\), and a terminal cost \(g:\mathbb{R}^{d}\to\mathbb{R}\), we define the cost of control \(\alpha\in\mathcal{A}\),
\[J(t,\mu,\alpha):=\mathbb{E}\left[\int_{t}^{T}f(X^{t,\mu,\alpha}_{s},\alpha_{s })\,ds+g(X^{t,\mu,\alpha}_{T})\right].\]
We aim at solving the following optimization problem
\[v(t,\mu)=\inf_{\alpha\in\mathcal{A}}J(t,\mu,\alpha).\]
**Remark 4.1**.: _(i) This is a simple version of the filtering problem. Here we assume the controller observes the process \((Y_{s})=(W_{s})_{0\leq s\leq T}\). If \(f\) and \(g\) are also functions of the observation process \((Y_{s})\), we can still show that the value function \(v\) is the unique viscosity solution of some second-order PDE. Let us take the value function_
\[v(t,y,\mu):=\inf_{\alpha\in\mathcal{A}}\mathbb{E}\left[\int_{t}^{T}f(X^{t,\mu, \alpha}_{s},Y_{s},\alpha_{s})\,ds+g(X^{t,\mu,\alpha}_{T},Y_{T})\right],\]
_and the generator_
\[K(a,y,\mu,X^{11},p,q,X^{12},X^{22}):= \int f(x,y,a)+b(x,a)p(x)+\frac{1}{2}Tr\left(q(x)\sigma\sigma^{ \top}(x,a)\right)\,\mu(dx)\] \[+\frac{1}{2}Tr(X^{11})+Tr(\tilde{\sigma}(a)\mathbf{1}^{\top}X^{12 })+\frac{1}{2}Tr(\tilde{\sigma}\tilde{\sigma}^{\top}(a)X^{22}).\]
_By a similar argument, with the notation \(\theta=(t,y,\mu)\) it can be checked that the value function \(v\) is the unique viscosity solution of_
\[-\partial_{t}v(\theta) =\inf_{a\in A}K\left(a,y,\mu,D^{2}_{yy}v(\theta),D_{\mu}v(\theta),D _{x\mu}v(\theta),\mathcal{D}^{2}_{y\mu}(\theta),\mathcal{H}v(\theta)\right),\] \[v(T,y,\mu) =\int g(x,y)\,\mu(dx).\]
_This equation fits in the form of Theorem 3.2, but for simplicity of notation, we prove the case where the coefficients are not functions of \(y\)._
_(ii) In [8], the authors characterize the asymptotic behavior of a learning problem by a second-order PDE on Wasserstein space, which fits in the form of (1.1) and satisfies Assumption 3.1._
_(iii) We can also allow \(b,f,\sigma,\tilde{\sigma}\) to be \(\mu\)-dependent, with additional assumptions on their dependence on \(\mu\)._
_(iv) We conjecture that our uniqueness result can be applied to the first-order equations of McKean-Vlasov control studied in [47], where the control \(\alpha\) is allowed to be a feedback control depending on \(X\). In this case, similarly to [47], the admissibility condition on controls must require regularity of the map \(x\mapsto b(x,\alpha(x))\)._
**Assumption 4.1**.:
1. _The functions_ \(b,\sigma,\tilde{\sigma},f,g\) _are bounded and continuous over their domains._
2. \(b(\cdot,a),\sigma(\cdot,a)\) _are_ \(\lambda\) _times differentiable with bounded derivatives, uniformly in_ \(a\)_, i.e.,_ \[\sup_{a\in A,k=0,\ldots,\lambda}\lVert b^{(k)}(\cdot,a)\rVert_{\infty}+\sup_{a\in A,k=0,\ldots,\lambda}\lVert\sigma^{(k)}(\cdot,a)\rVert_{\infty}<\infty,\] _and also_ \[\sup_{a\in A,i,j=0,\ldots,d}||x^{i}b^{j}(x,a)||_{\lambda}+||b^{j}(x,a)||_{\lambda}+||f(\cdot,a)||_{\lambda}<\infty.\]
3. _Derivatives of cost functions_ \(f,g\) _are of exponential decay, i.e., there exist some positive constants_ \(K,c\) _such that_ \(\sup_{a\in A}|f^{(k)}(x,a)|+|g^{(k)}(x)|\leq Ke^{-c|x|}\) _for_ \(k=0,\ldots,\lambda\)_._
**Lemma 4.1**.: _Under Assumption 4.1, the solution to (4.1) is pathwise continuous and it satisfies_
\[\mathbb{E}[W_{2}(m^{t,\mu,\alpha}_{s},\mu)^{2}]+\mathbb{E}\left[\lVert m^{t, \mu,\alpha}_{s}-\mu\rVert_{\lambda}^{2}\right]\leq L(s-t).\]
**Proposition 4.1**.: _Under Assumption 4.1, the value function \(v:[0,T]\times\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\) is bounded and Lipschitz continuous with respect to \(\rho_{F}\), uniformly continuous in time and satisfies the dynamic programming principle_
\[v(t,\mu) =\inf_{\alpha\in\mathcal{A}}\inf_{\tau\in\mathcal{T}^{T}_{t}} \mathbb{E}\left[\int_{t}^{\tau}f(X^{t,\mu,\alpha}_{s},\alpha_{s})\,ds+v(\tau, m^{t,\mu,\alpha}_{\tau})\right]\] \[=\inf_{\alpha\in\mathcal{A}}\sup_{\tau\in\mathcal{T}^{T}_{t}} \mathbb{E}\left[\int_{t}^{\tau}f(X^{t,\mu,\alpha}_{s},\alpha_{s})\,ds+v(\tau, m^{t,\mu,\alpha}_{\tau})\right],\]
_where \(\mathcal{T}^{T}_{t}\) is the set of \([t,T]\)-valued stopping times \(\tau\) with respect to the filtration \((\mathcal{F}^{W}_{s})_{s\in[t,T]}\)._
Another crucial ingredient to show the viscosity property of the value function is Ito's formula for measure-valued processes. Our argument relies on the Ito-Wentzell formula, see
e.g. [45], and the fact that the second-order generator of \(m_{t}\) is given by the Wasserstein Hessian \(\mathcal{H}\), and hence we require weaker regularity.
**Proposition 4.2** (Ito's formula for measure valued SDE).: _Suppose \(\psi:[0,T]\times\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\) is partial \(C^{2}\)-regular. Then we have that_
\[d\psi(s,m_{s})= \partial_{t}\psi(s,m_{s})\,ds+\int b(x,\alpha_{s})D_{\mu}\psi(s,m_ {s})(x)\,m_{s}(dx)\,ds\] \[+\frac{1}{2}\int Tr\left(D_{x\mu}\psi(s,m_{s})(x)\sigma\sigma^{ \top}(x,\alpha_{s})\right)m_{s}(dx)\,ds\] \[+\frac{1}{2}Tr(\mathcal{H}\psi(s,m_{s})\tilde{\sigma}\tilde{ \sigma}^{\top}(\alpha_{s}))\,ds+D_{\mu}\psi(s,m_{s})([m_{s}])\tilde{\sigma}( \alpha_{s})\,dW_{s}.\]
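For instance, applying Proposition 4.2 to the linear functional \(\psi(s,\nu)=\nu(h)\) with \(h\in C^{2}_{b}(\mathbb{R}^{d})\), for which \(D_{\mu}\psi(s,\nu)(x)=Dh(x)\), \(D_{x\mu}\psi(s,\nu)(x)=D^{2}h(x)\) and \(\mathcal{H}\psi(s,\nu)=\nu(D^{2}h)\), the right-hand side collapses to
\[m_{s}\big(b(\cdot,\alpha_{s})^{\top}Dh\big)\,ds+\frac{1}{2}m_{s}\Big(Tr\big((\sigma\sigma^{\top}(\cdot,\alpha_{s})+\tilde{\sigma}\tilde{\sigma}^{\top}(\alpha_{s}))D^{2}h\big)\Big)\,ds+m_{s}\big(\tilde{\sigma}(\alpha_{s})^{\top}Dh\big)\,dW_{s}=m_{s}(L^{\alpha_{s}}h)\,ds+m_{s}(M^{\alpha_{s}}h)\,dW_{s},\]
which is exactly equation (4.1).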
Let us define for \((a,\mu,p,q,M)\in A\times\mathcal{P}_{2}(\mathbb{R}^{d})\times B^{d}_{\mathfrak{ q}}\times B^{d\times d}_{\mathfrak{q}}\times\mathcal{S}^{d}\)
\[K(a,\mu,p,q,M):= \int f(x,a)+b(x,a)p(x)+\frac{1}{2}Tr\left(q(x)\sigma\sigma^{\top }(x,a)\right)\,\mu(dx)\] \[+\frac{1}{2}Tr(\tilde{\sigma}\tilde{\sigma}^{\top}(a)M).\]
**Theorem 4.1** (Viscosity solution).: _Under Assumption 4.1, the value function is the unique viscosity solution of the equation_
\[-\partial_{t}v(t,\mu)= \inf_{a\in A}K(a,\mu,D_{\mu}v(t,\mu),D_{x\mu}v(t,\mu),\mathcal{H}v (t,\mu))\] \[v(T,\mu)= \mu(g).\]
## 5. Auxiliary Results
### Differentiability of the metrics
We will use the following lemma to compute derivatives in the Wasserstein space.
**Lemma 5.1**.: _Let \(u:\mathcal{P}_{2}(\mathbb{R}^{d})\mapsto\mathbb{R}\) be first-order differentiable, i.e., \(u\in C^{1}\). Define \(u_{0}:\mathcal{P}_{2}(\mathbb{R}^{d})\mapsto\mathbb{R}\) by \(u_{0}(\mu):=u(\mathcal{S}_{0}(\mu))=u((I_{d}-m(\mu))_{\sharp}\mu)\). Then, \(u_{0}\) is also differentiable, and for all \(x\in\mathbb{R}^{d}\) we have_
\[D_{\mu}u_{0}(\mu,x)=D_{\mu}u(\mathcal{S}_{0}(\mu),x-m(\mu))-D_{\mu}u(\mathcal{S}_{0}(\mu),[\mathcal{S}_{0}(\mu)]).\]
_Therefore, the derivative of \(\mu\mapsto F_{k}(\mathcal{S}_{0}(\mu))\) is_
\[D_{\mu}(F_{k}(\mathcal{S}_{0}(\mu)))(x)=ike^{ik\cdot m(\mu)}\left(\int f_{k}(z )\mu(dz)-f_{k}(x)\right)=ik(F_{k}(\mathcal{S}_{0}(\mu))-f_{k}(x-m(\mu))).\]
Proof.: We fix \(\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), \(h\) a continuous and bounded function and define \(\mu^{\epsilon}:=(I_{d}+\epsilon h)_{\sharp}\mu\). Then, we have the equality of laws
\[\mathcal{S}_{0}(\mu^{\epsilon}) =(I_{d}-m(\mu)-\epsilon h([\mu]))_{\sharp}\mu^{\epsilon}\] \[=(I_{d}+\epsilon h(\cdot)-m(\mu)-\epsilon h([\mu]))_{\sharp}\mu\] \[=(I_{d}+\epsilon(h(\cdot+m(\mu))-h([\mu]))_{\sharp}\mathcal{S}_{ 0}(\mu).\]
By direct computation we have the following limit
\[\lim_{\epsilon\to 0}\frac{u_{0}(\mu^{\epsilon})-u_{0}(\mu)}{\epsilon}=\lim_{ \epsilon\to 0}\frac{u\left((I_{d}+\epsilon(h(\cdot+m(\mu))-h([\mu])))_{\sharp} \mathcal{S}_{0}(\mu)\right)-u(\mathcal{S}_{0}(\mu))}{\epsilon}\]
\[=\int D_{\mu}^{\top}u(\mathcal{S}_{0}(\mu),x)(h(x+m(\mu))-h([\mu]))\,\mathcal{S}_{0}(\mu)(dx)\] \[=\int D_{\mu}^{\top}u(\mathcal{S}_{0}(\mu),x-m(\mu))h(x)\,\mu(dx)-D_{\mu}^{\top}u(\mathcal{S}_{0}(\mu),[\mathcal{S}_{0}(\mu)])h([\mu])\] \[=\int\big(D_{\mu}u(\mathcal{S}_{0}(\mu),x-m(\mu))-D_{\mu}u(\mathcal{S}_{0}(\mu),[\mathcal{S}_{0}(\mu)])\big)^{\top}h(x)\,\mu(dx)\]
which concludes the proof.
Denote the canonical basis of \(\mathbb{R}^{d}\) by \((e_{1},\ldots,e_{d})\). Then one can easily show that \(\mu\mapsto V(\mu)\) is \(L\)-differentiable and for all \(i,j\)
\[D_{\mu}V^{i,j}(\mu,x)=(x^{j}-m^{j}(\mu))e_{i}+(x^{i}-m^{i}(\mu))e_{j},\]
where \(V^{i,j}\) stands for the \((i,j)\)-entry of the matrix \(V\). With these preparations, we compute the derivatives of \(\rho_{F}^{2}(\mu,\nu)\). For a function \(f:\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\), we define its partial Hessian w.r.t. \((\mu,\nu)\) via
\[\mathcal{H}_{\mu,\nu}f(\mu,\nu)=\left(\frac{d^{2}}{dx_{i}dx_{j}}\Big{|}_{x_{1} =x_{2}=0}f((I_{d}+x_{1})_{\sharp}\mu,(I_{d}+x_{2})_{\sharp}\nu)\right)_{1\leq i,j\leq 2}\in\mathbb{R}^{2d\times 2d}.\]
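The formula for \(D_{\mu}V^{i,j}\) can be verified along push-forward directions: for any bounded Borel vector field \(\phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\), using that \(\int(x-m(\mu))\,\mu(dx)=0\),
\[\frac{d}{d\epsilon}V^{i,j}\big((I_{d}+\epsilon\phi)_{\sharp}\mu\big)\Big|_{\epsilon=0}=\int\Big((x^{j}-m^{j}(\mu))\phi^{i}(x)+(x^{i}-m^{i}(\mu))\phi^{j}(x)\Big)\,\mu(dx)=\int\big(D_{\mu}V^{i,j}(\mu,x)\big)^{\top}\phi(x)\,\mu(dx).\]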
**Lemma 5.2**.: _Let \(\mu,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\). Then \((\mu,\nu)\mapsto\rho_{F}^{2}(\mu,\nu)\) is twice-differentiable, and_
\[D_{\mu}\rho_{F}^{2}(\mu,\nu,x)= 2(m(\mu)-m(\nu))+4(V(\mu)-V(\nu))(x-m(\mu))\] \[+2\int\frac{Re\left(ik\left(F_{k}(\mathcal{S}_{0}(\mu))-f_{k}(x-m (\mu))\right)\left(F_{k}(\mathcal{S}_{0}(\mu))-F_{k}(\mathcal{S}_{0}(\nu)) \right)^{*}\right)}{(1+|k|^{2})^{\lambda}}dk\] \[D_{\mu x}\rho_{F}^{2}(\mu,\nu,x)= 4(V(\mu)-V(\nu))\] \[-2\int\frac{kk^{\top}Re\left(f_{k}(x-m(\mu))(F_{k}(\mathcal{S}_{0 }(\mu))-F_{k}(\mathcal{S}_{0}(\nu)))^{*}\right)}{(1+|k|^{2})^{\lambda}}dk\] \[\mathcal{H}_{\mu,\nu}\rho_{F}^{2}(\mu,\nu)= 2\begin{pmatrix}I_{d}&-I_{d}\\ -I_{d}&I_{d}\end{pmatrix}.\]
_Furthermore, \(D_{\mu}\rho_{F}^{2},D_{x\mu}\rho_{F}^{2}\) are continuous in \(\mu\) in any of the equivalent topologies considered._
Proof.: The first two equations directly follow from Lemma 5.1. The last one is due to the fact that for \(c\in\mathbb{R}^{d}\)
\[\rho_{F}^{2}((I_{d}+c)_{\sharp}\mu,\nu)-\rho_{F}^{2}(\mu,\nu)=\rho_{F}^{2}(\mu,(I_{d}-c)_{\sharp}\nu)-\rho_{F}^{2}(\mu,\nu)=|m(\mu)-m(\nu)+c|^{2}-|m(\mu)-m(\nu)|^{2},\]
and
\[D_{\mu}\rho_{F}^{2}(\mu,\nu,[\mu])=\frac{d}{dc}\rho_{F}^{2}((I_{d}+c)_{\sharp }\mu,\nu)|_{c=0},\]
\[\mathcal{H}_{\mu}\rho_{F}^{2}(\mu,\nu)=\frac{d^{2}}{dc^{2}}\rho_{F}^{2}((I_{d }+c)_{\sharp}\mu,\nu)|_{c=0}.\]
Since \(\rho_{F},W_{2}\) induce the same topology on \(\mathcal{P}_{2}(\mathbb{R}^{d})\), we prove the continuity of \(\mu\mapsto(D_{\mu}\rho_{F}^{2}(\mu,\nu,x),D_{x\mu}\rho_{F}^{2}(\mu,\nu,x))\) with respect to the \(W_{2}\) metric. Note that \(\mu\mapsto(m(\mu),V(\mu),\mathcal{S}_{0}(\mu))\) is clearly continuous and the Fourier transforms and basis \(F_{k}(\nu),f_{k}(x)\) are uniformly bounded for all \(\nu\in\mathcal{P}_{2}(\mathbb{R}^{d}),k,x\in\mathbb{R}^{d}\). Our claim simply follows from the continuity of \(\mu\mapsto F_{k}(\mu)\) and the dominated convergence theorem.
In the proof of the main theorems, we will use the following auxiliary function to reduce functions on Wasserstein space to the ones on \(\mathbb{R}^{d}\). For \(\mu,\eta,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\), denote
\[\mathcal{L}(\mu,\eta,\nu):= 2Tr(V^{\top}(\mathcal{S}_{0}(\mu))(V(\mathcal{S}_{0}(\eta))-V( \mathcal{S}_{0}(\nu))))\] \[+2\int\frac{Re(F_{k}(\mathcal{S}_{0}(\mu))(F_{k}(\mathcal{S}_{0}( \eta))-F_{k}(\mathcal{S}_{0}(\nu)))^{*})}{(1+|k|^{2})^{\lambda}}dk.\]
Here are some properties of \(\mathcal{L}\) whose proof is almost the same as in Lemma 5.2 and thus omitted.
**Lemma 5.3**.: _Let \(\mu,\eta,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\). Then \(\mu\mapsto\mathcal{L}(\mu,\eta,\nu)\) is twice-differentiable, and_
\[D_{\mu}\mathcal{L}(\mu,\eta,\nu,x)= 2\sum_{i,j}(V^{i,j}(\mathcal{S}_{0}(\eta))-V^{i,j}(\mathcal{S}_{ 0}(\nu)))\big{(}(x^{j}-m^{j}(\mu))e_{i}+(x^{i}-m^{i}(\mu))e_{j}\big{)}\] \[+2\int\frac{Re(ik(F_{k}(\mathcal{S}_{0}(\mu))-f_{k}(x-m(\mu)))(F_ {k}(\mathcal{S}_{0}(\eta))-F_{k}(\mathcal{S}_{0}(\nu)))^{*})}{(1+|k|^{2})^{ \lambda}}dk,\] \[D_{x\mu}\mathcal{L}(\mu,\eta,\nu,x)= 2\sum_{i,j}(V^{i,j}(\mathcal{S}_{0}(\eta))-V^{i,j}(\mathcal{S}_{ 0}(\nu)))\big{(}e_{i}e_{j}^{\top}+e_{j}e_{i}^{\top}\big{)}\] \[-2\int\frac{kk^{\top}Re\left(f_{k}(x-m(\mu))(F_{k}(\mathcal{S}_{0} (\eta))-F_{k}(\mathcal{S}_{0}(\nu)))^{*}\right)}{(1+|k|^{2})^{\lambda}}dk.\]
_Furthermore, \(\mu\mapsto(D_{\mu}\mathcal{L}(\mu,\eta,\nu,x),D_{x\mu}\mathcal{L}(\mu,\eta,\nu,x))\) is continuous w.r.t. any of the equivalent topologies considered._
### Variational principle
Thanks to [1, Lemma 2] and [22, Lemma 4.4], \(\rho_{\sigma}\) is a gauge-type function on the complete metric space \((\mathcal{P}_{2}(\mathbb{R}^{d}),W_{2}^{(\sigma)})\). Recall the definition of \(d_{\sigma}\) in (2.9). Thanks to [22, Lemma 4.3 and Lemma 4.4], the mapping
\[((\theta_{1},\tilde{\theta}_{1}),(\theta_{2},\tilde{\theta}_{2}))\in(\Theta^{ 2})^{2}\mapsto d_{\sigma}^{2}(\theta_{1},\theta_{2})+d_{\sigma}^{2}(\tilde{ \theta}_{1},\tilde{\theta}_{2})\]
is a gauge-type function on the product space \(\Theta^{2}\), where \(\mathcal{P}_{2}(\mathbb{R}^{d})\) is endowed with the metric \(W_{2}^{(\sigma)}\). Being topologically equivalent to \(W_{2}\) and \(\rho_{F}\), these metrics admit the same set of semi-continuous functions. Thus, we have the following version of the Borwein-Preiss variational principle, whose proof is the same as in [22, Theorem 4.5] up to the addition of the \(y\) variable and the doubling of all variables. The upper bounds on the derivatives can be easily derived from [22, (4.6),(4.7)] and Lemma 5.1.
**Lemma 5.4**.: _Fix \(\kappa>0\) and let \(G:\Theta^{2}\mapsto\mathbb{R}\) be an upper semicontinuous function. Assume that_
\[\sup_{(\theta,\tilde{\theta})\in\Theta^{2}}G(\theta,\tilde{\theta})<\infty.\]
_and let \((\theta^{0},\tilde{\theta}^{0})\in\Theta^{2}\) so that_
\[\sup_{(\theta,\tilde{\theta})\in\Theta^{2}}G(\theta,\tilde{\theta})-\kappa<G( \theta^{0},\tilde{\theta}^{0}).\]
_Then, for every \(\delta>0\), there exists \((\theta^{*},\tilde{\theta}^{*})\) and \(\{(\theta^{j},\tilde{\theta}^{j}):j\geq 1\}\) such that denoting_
\[\phi_{\delta}(\theta):=\sum_{j=0}^{\infty}\frac{d_{\sigma}^{2}(\theta,\theta^{ j})}{2^{j}}\text{ and }\tilde{\phi}_{\delta}(\tilde{\theta}):=\sum_{j=0}^{\infty}\frac{d_{\sigma}^{2}(\tilde{ \theta},\tilde{\theta}^{j})}{2^{j}}\]
_we have_
\[\phi_{\delta}(\theta):=\sum_{j=0}^{\infty}\frac{d_{\sigma}^{2}(\tilde{\theta}, \tilde{\theta}^{j})}{2^{j}}\]
_and \(\tilde{\theta}_{\delta}(\tilde{\theta}):=\sum_{j=0}^{\infty}\frac{d_{\sigma}^{2 }(\tilde{\theta},\tilde{\theta}^{j})}{2^{j}}\)._
Proof.: We first prove the claim.
**Lemma 5.5**.: _Let \(\mu,\eta,\nu\in\mathcal{P}_{2}(\mathbb{R}^{d})\). Then \(\mu\mapsto\mathcal{L}(\mu,\eta,\nu)\) is twice-differentiable, and_
\[D_{\mu}\mathcal{L}(\mu,\eta,\nu,x)= 2\sum_{i,j}(V^{i,j}(\mathcal{S}_{0}(\eta))-V^{i,j}(\mathcal{S}_ {0}(\nu)))\big{(}(x^{j}-m^{j}(\mu))e_{i}+(x^{i}-m^{i}(\mu))e_{j}\big{)}\] \[+2\int\frac{Re(ik(F_{k}(\mathcal{S}_{0}(\eta))-f_{k}(x-m(\mu)))(F_ {k}(\mathcal{S}_{0}(\eta))-F_{k}(\mathcal{S}_{0}(\nu)))^{*})}{(1+|k|^{2})^{ \lambda}}dk,\] \[D_{x\mu}\mathcal{L}(\mu,\eta,\nu,x)= 2\sum_{i,j}(V^{i,j}(\mathcal{S}_{0}(\eta))-V^{i,j}(\mathcal{S}_ {0}(\nu)))\big{(}e_{i}e_{j}^{\top}+e_{j}e_{i}^{\top}\big{)}\] \[-2\int\frac{kk^{\top}Re\left(f_{k}(x-m(\mu))(F_{k}(\mathcal{S}_ {0}(\eta))-F_{k}(\mathcal{S}_{0}(\nu)))^{*}\right)}{(1+|k|^{2})^{\lambda}}dk.\]
_Furthermore, \(\mu\mapsto(D_{\mu}\mathcal{L}(\mu,\eta\,\nu,x),D_{x\mu}\mathcal{L}(\mu,\eta, \nu,x))\) is continuous w.r.t. any of the equivalent topology considered._
### Variational principle
Thanks to [1, Lemma 2] and [22, Lemma 4.4], \(\rho_{\sigma}\) is a gauge-type function on the complete metric space \((\mathcal{P}_{2}(\mathbb{R}^{d}),W_{2}^{(\sigma)})\). Recall the definition of \(d_{\sigma}\) in (2.9). Thanks to [22, Lemma 4.3 and Lemma 4.4], the mapping
\[((\theta_{1},\tilde{\theta}_{1}),(\theta_{2},\tilde{\theta}_{2}))\in(\Theta^{2}) ^{2}\mapsto d_{\sigma}^{2}(\theta_{1},\theta_{2})+d_{\sigma}^{2}(\tilde{\theta}_ {1},\tilde{\theta}_{2})\]
is a gauge-type function on the product space \(\Theta^{2}\) where \(\mathcal{P}_{2}(\mathbb{R}^{d})\) is endowed with the metric \(W_{2}^{(\sigma)}\). Being topologically equivalent to \(W_{2}\) and \(\rho_{F}\), these metric admit the same set of semi-continuous functions. Thus, we have the following version of the Borwein-Preiss variational principle whose proof is the same as in [22, Theorem 4.5] up to the addition of the \(y\) variable and the doubling of all variables. The upper bound of the derivatives can be easily derived from [22, (4.6),(4.7)] and Lemma 5.1.
**Lemma 5.4**.: _Fix \(\kappa>0\) and let \(G:\Theta^{2}\mapsto\mathbb{R}\) be an upper semicontinuous function. Assume that_
\[\sup_{(\theta,\tilde{\theta})\in\Theta^{2}}G(\theta,\tilde{\theta})<\infty.\]
_and let \((\theta^{0},\tilde{\theta}^{0})\in\Theta^{2}\) so that_
\[\sup_{(\theta,\tilde{\theta})\in\Theta^{2}}G(\theta,\tilde{\theta})-\kappa<G( \theta^{0},\tilde{\theta}^{0}).\]
_Then, for every \(\delta>0\), there exists \((\theta^{*},\tilde{\theta}^{*})\) and \(\{(\theta^{j},\tilde{\theta}^{j}):j\geq 1\}\) such that denoting_
\[\phi_{\delta}(\theta):=\sum_{j=0}^{\infty}\frac{d_{\sigma}^{2}(\theta,\theta^{j})}{2^{j}}\text{ and }\tilde{\phi}_{\delta}(\tilde{\theta}):=\sum_{j=0}^{\infty}\frac{d_{\sigma}^{2}(\tilde{\theta},\tilde{\theta}^{j})}{2^{j}}\]
_we have_
* \(d_{\sigma}^{2}(\theta^{*},\theta^{j})+d_{\sigma}^{2}(\tilde{\theta}^{*},\tilde{\theta}^{j})\leq\frac{\kappa}{\delta 2^{j}}\) _for_ \(j\geq 0\)_;_
* \(G(\theta^{0},\tilde{\theta}^{0})\leq G(\theta^{*},\tilde{\theta}^{*})-\delta( \phi_{\delta}(\theta^{*})+\tilde{\phi}_{\delta}(\tilde{\theta}^{*}))\)_;_
* \((\theta,\tilde{\theta})\in\Theta^{2}\mapsto G(\theta,\tilde{\theta})-\delta( \phi_{\delta}(\theta)+\tilde{\phi}_{\delta}(\tilde{\theta}))\) _admits a strict global maximum at_ \((\theta^{*},\tilde{\theta}^{*})\)_._
_Additionally \(d_{\sigma}\) satisfies the following derivative bounds_
\[|D_{\mu}d_{\sigma}^{2}(\theta,\tilde{\theta},x)| \leq C_{d}\left(\sigma+\sigma^{-1}\left(|x|^{2}+|m(\mu)|^{2}+\int |x|^{2}\,\mathcal{S}_{0}(\mu)(dx)\right)\right)+2|m(\mu)-m(\tilde{\mu})|,\] \[|D_{x\mu}d_{\sigma}^{2}(\theta,\tilde{\theta},x)| \leq C_{d}(1+\sigma^{-2}(|x|^{2}+|m(\mu)|^{2})),\]
_for a constant \(C_{d}\) that only depends on the dimension \(d\)._
## 6. Proof of Ishii's Lemma
Proof of Theorem 3.1.: _Step 1: Reducing to finite dimensional Ishii's lemma._ Define the functions
\[u_{1}(t,y,\mu) :=u(t,y,\mu)-\frac{\alpha}{2}\Psi(\mu),\] \[v_{1}(t,y,\mu) :=v(t,y,\mu)+\frac{\alpha}{2}\tilde{\Psi}(\mu).\]
Thanks to the inequalities
\[|F_{k}(\mathcal{S}_{0}(\mu))-F_{k}(\mathcal{S}_{0}(\tilde{\mu})) |^{2}+|F_{k}(\mathcal{S}_{0}(\mu^{*}))-F_{k}(\mathcal{S}_{0}(\tilde{\mu}^{*}) )|^{2}\] \[\qquad\leq 2(|F_{k}(\mathcal{S}_{0}(\mu))-F_{k}(\mathcal{S}_{0}( \mu^{*}))|^{2}+|F_{k}(\mathcal{S}_{0}(\tilde{\mu}))-F_{k}(\mathcal{S}_{0}( \tilde{\mu}^{*}))|^{2})\] \[\qquad+2Re((F_{k}(\mathcal{S}_{0}(\mu))-F_{k}(\mathcal{S}_{0}( \tilde{\mu})))(F_{k}(\mathcal{S}_{0}(\mu^{*}))-F_{k}(\mathcal{S}_{0}(\tilde{ \mu}^{*}))^{*})) \tag{6.1}\]
and
\[Tr((M-N)^{\top}(M-N))+Tr((M^{*}-N^{*})^{\top}(M^{*}-N^{*}))\] \[\qquad\leq 2(Tr((M-M^{*})^{\top}(M-M^{*}))+Tr((N-N^{*})^{\top}(N -N^{*})))\] \[\qquad+2Tr((M-N)^{\top}(M^{*}-N^{*}))) \tag{6.2}\]
valid for any symmetric matrices \(M=V(\mu),N=V(\tilde{\mu}),M^{*}=V(\mu^{*}),N^{*}=V(\tilde{\mu}^{*})\), we have the inequality
\[2\rho_{F}^{2}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\mu^{*}))+2 \rho_{F}^{2}(\mathcal{S}_{0}(\tilde{\mu}),\mathcal{S}_{0}(\tilde{\mu}^{*}))+ \mathcal{L}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\mu^{*}),\mathcal{S}_{0}( \tilde{\mu}^{*}))-\mathcal{L}(\mathcal{S}_{0}(\tilde{\mu}),\mathcal{S}_{0}( \mu^{*}),\mathcal{S}_{0}(\tilde{\mu}^{*}))\] \[\geq\rho_{F}^{2}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\tilde{\mu} ))+\rho_{F}^{2}(\mathcal{S}_{0}(\mu^{*}),\mathcal{S}_{0}(\tilde{\mu}^{*})).\]
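Here (6.1) and (6.2) are both instances of the elementary identity \(|u|^{2}+|v|^{2}=|u-v|^{2}+2Re(uv^{*})\) combined with \(|a-b|^{2}\leq 2|a|^{2}+2|b|^{2}\): for (6.1) one takes
\[u=F_{k}(\mathcal{S}_{0}(\mu))-F_{k}(\mathcal{S}_{0}(\tilde{\mu})),\quad v=F_{k}(\mathcal{S}_{0}(\mu^{*}))-F_{k}(\mathcal{S}_{0}(\tilde{\mu}^{*})),\quad u-v=\big{(}F_{k}(\mathcal{S}_{0}(\mu))-F_{k}(\mathcal{S}_{0}(\mu^{*}))\big{)}-\big{(}F_{k}(\mathcal{S}_{0}(\tilde{\mu}))-F_{k}(\mathcal{S}_{0}(\tilde{\mu}^{*}))\big{)},\]
and (6.2) is the same computation for the Frobenius inner product \(Tr(M^{\top}N)\). In particular, both become equalities for \((\mu,\tilde{\mu})=(\mu^{*},\tilde{\mu}^{*})\), as observed below.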
Thus, we have the inequality,
\[u_{1}(t,y,\mu)-v_{1}(\tilde{t},\tilde{y},\tilde{\mu})-\frac{ \alpha}{2}\left(|y-\tilde{y}|^{2}+|m(\mu)-m(\tilde{\mu})|^{2}+|t-\tilde{t}|^{2}\right)\] \[=u(t,y,\mu)-v(\tilde{t},\tilde{y},\tilde{\mu})-\frac{\alpha}{2} \left(|y-\tilde{y}|^{2}+|m(\mu)-m(\tilde{\mu})|^{2}+|t-\tilde{t}|^{2}\right)\] \[-\frac{\alpha}{2}\left(2\rho_{F}^{2}(\mathcal{S}_{0}(\mu), \mathcal{S}_{0}(\mu^{*}))+2\rho_{F}^{2}(\mathcal{S}_{0}(\tilde{\mu}),\mathcal{S} _{0}(\tilde{\mu}^{*}))\right)\] \[-\frac{\alpha}{2}\left(\mathcal{L}(\mathcal{S}_{0}(\mu),\mathcal{ S}_{0}(\mu^{*}),\mathcal{S}_{0}(\tilde{\mu}^{*}))-\mathcal{L}(\mathcal{S}_{0}( \tilde{\mu}),\mathcal{S}_{0}(\mu^{*}),\mathcal{S}_{0}(\tilde{\mu}^{*}))\right)\] \[\leq u(t,y,\mu)-v(\tilde{t},\tilde{y},\tilde{\mu})-\frac{\alpha}{2 }\left(|y-\tilde{y}|^{2}+|m(\mu)-m(\tilde{\mu})|^{2}+|t-\tilde{t}|^{2}\right)\] \[-\frac{\alpha}{2}(\rho_{F}^{2}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0 }(\tilde{\mu}))+\rho_{F}^{2}(\mathcal{S}_{0}(\mu^{*}),\mathcal{S}_{0}(\tilde{ \mu}^{*})))\]
\[=u(\theta)-v(\tilde{\theta})-\frac{\alpha}{2}d_{F}^{2}(\theta,\tilde{ \theta})-\frac{\alpha}{2}\rho_{F}^{2}(\mathcal{S}_{0}(\mu^{*}),\mathcal{S}_{0}( \tilde{\mu}^{*})).\]
Additionally, for \((\mu,\tilde{\mu},M,N)=(\mu^{*},\tilde{\mu}^{*},M^{*},N^{*})\) the inequalities (6.1)-(6.2) are equalities. Thus, the function
\[u_{1}(t,y,\mu)-v_{1}(\tilde{t},\tilde{y},\tilde{\mu})-\frac{\alpha}{2}\left(|y -\tilde{y}|^{2}+|m(\mu)-m(\tilde{\mu})|^{2}+|t-\tilde{t}|^{2}\right) \tag{6.3}\]
admits the same strict global maximum at \(((t^{*},y^{*},\mu^{*}),(\tilde{t}^{*},\tilde{y}^{*},\tilde{\mu}^{*}))\) as
\[u(\theta)-v(\tilde{\theta})-\frac{\alpha}{2}d_{F}^{2}(\theta,\tilde{\theta}).\]
Define the mappings
\[(t,y,z)\in[0,T]\times(\mathbb{R}^{d})^{2}\mapsto u_{2}(t,y,z)=\sup_{\mu\in \mathcal{P}_{2,0}(\mathbb{R}^{d})}u_{1}(t,y,(I_{d}+z)\sharp\mu) \tag{6.4}\]
\[(t,y,z)\in[0,T]\times(\mathbb{R}^{d})^{2}\mapsto v_{2}(t,y,z)=\inf_{\mu\in \mathcal{P}_{2,0}(\mathbb{R}^{d})}v_{1}(t,y,(I_{d}+z)\sharp\mu) \tag{6.5}\]
where we recall that \(\mathcal{P}_{2,0}(\mathbb{R}^{d})\subset\mathcal{P}_{2}(\mathbb{R}^{d})\) is the set of measures with \(0\) mean and finite variance. By the maximality of \(u_{1}(t,y,\mu)-v_{1}(\tilde{t},\tilde{y},\tilde{\mu})-\frac{\alpha}{2}\left(| y-\tilde{y}|^{2}+|m(\mu)-m(\tilde{\mu})|^{2}+|t-\tilde{t}|^{2}\right)\) at \(((t^{*},y^{*},\mu^{*}),(\tilde{t}^{*},\tilde{y}^{*},\tilde{\mu}^{*}))\), the function
\[u_{2}(t,y,z)-v_{2}(\tilde{t},\tilde{y},\tilde{z})-\frac{\alpha}{2}\left(|y- \tilde{y}|^{2}+|z-\tilde{z}|^{2}+|t-\tilde{t}|^{2}\right) \tag{6.6}\]
admits a global maximum at \(((t^{*},y^{*},m(\mu^{*})),(\tilde{t}^{*},\tilde{y}^{*},m(\tilde{\mu}^{*})))\). Denoting \(u_{2}^{*}\) the upper semicontinuous envelope of \(u_{2}\) and \(v_{2,*}\) the lower semicontinuous envelope of \(v_{2}\), this maximality implies that
\[u_{2}^{*}(t,y,z)-v_{2,*}(\tilde{t},\tilde{y},\tilde{z})-\frac{\alpha}{2}\left( |y-\tilde{y}|^{2}+|z-\tilde{z}|^{2}+|t-\tilde{t}|^{2}\right) \tag{6.7}\]
admits a global maximum at \(((t^{*},y^{*},m(\mu^{*})),(\tilde{t}^{*},\tilde{y}^{*},m(\tilde{\mu}^{*})))\) and
\[u_{2}^{*}(t^{*},y^{*},m(\mu^{*}))-v_{2,*}(\tilde{t}^{*},\tilde{y}^{*},m(\tilde {\mu}^{*}))=u_{2}(t^{*},y^{*},m(\mu^{*}))-v_{2}(\tilde{t}^{*},\tilde{y}^{*}, m(\tilde{\mu}^{*})).\]
_Step 2: Applying finite dimensional Ishii's lemma._ By Ishii's Lemma in [25, Theorem 3.2] applied to (6.7), for \(\epsilon>0\), there exists a sequence indexed by \(n\)
\[(t_{n}^{*},\tilde{t}_{n}^{*},y_{n}^{*},\tilde{y}_{n}^{*},z_{n}^{*},\tilde{z}_{n}^{*},q_{n}^{*},\tilde{q}_{n}^{*},p_{n}^{*},\tilde{p}_{n}^{*},X_{n}^{*},\tilde{X}_{n}^{*})\in(\mathbb{R})^{2}\times(\mathbb{R}^{d})^{4}\times\mathbb{R}^{2}\times(\mathbb{R}^{2d})^{2}\times(\mathcal{S}_{2d})^{2}\]
so that
\[u_{2}^{*}(t,y,z)-v_{2,*}(\tilde{t},\tilde{y},\tilde{z})-q_{n}^{*}(t-t_{n}^{*})-\tilde{q}_{n}^{*}(\tilde{t}-\tilde{t}_{n}^{*})-(p_{n}^{*})^{\top}\begin{pmatrix}y-y_{n}^{*}\\ z-z_{n}^{*}\end{pmatrix}-(\tilde{p}_{n}^{*})^{\top}\begin{pmatrix}\tilde{y}-\tilde{y}_{n}^{*}\\ \tilde{z}-\tilde{z}_{n}^{*}\end{pmatrix}\] \[-\frac{1}{2}\begin{pmatrix}y-y_{n}^{*}\\ z-z_{n}^{*}\end{pmatrix}^{\top}X_{n}^{*}\begin{pmatrix}y-y_{n}^{*}\\ z-z_{n}^{*}\end{pmatrix}-\frac{1}{2}\begin{pmatrix}\tilde{y}-\tilde{y}_{n}^{*}\\ \tilde{z}-\tilde{z}_{n}^{*}\end{pmatrix}^{\top}\tilde{X}_{n}^{*}\begin{pmatrix}\tilde{y}-\tilde{y}_{n}^{*}\\ \tilde{z}-\tilde{z}_{n}^{*}\end{pmatrix} \tag{6.8}\]
admits a local maximum at \(((t_{n}^{*},y_{n}^{*},z_{n}^{*}),(\tilde{t}_{n}^{*},\tilde{y}_{n}^{*},\tilde{z}_ {n}^{*}))\), and we have the following bounds and limits as \(n\to\infty\),
\[((t_{n}^{*},y_{n}^{*},z_{n}^{*}),(\tilde{t}_{n}^{*},\tilde{y}_{n}^{*},\tilde{z}_ {n}^{*})) \to\left(((t^{*},y^{*},m(\mu^{*})),(\tilde{t}^{*},\tilde{y}^{*},m( \tilde{\mu}^{*})))\right), \tag{6.9}\] \[(u_{2}^{*}(t_{n}^{*},y_{n}^{*},z_{n}^{*}),v_{2,*}(\tilde{t}_{n}^{* },\tilde{y}_{n}^{*},\tilde{z}_{n}^{*})) \to(u_{2}^{*}(t^{*},y^{*},m(\mu^{*})),v_{2,*}(\tilde{t}^{*},\tilde{y} ^{*},m(\tilde{\mu}^{*}))),\] \[=(u_{2}(t^{*},y^{*},m(\mu^{*})),v_{2}(\tilde{t}^{*},\tilde{y}^{* },m(\tilde{\mu}^{*})))\] \[(q_{n}^{*},q_{n}^{*}+\tilde{q}_{n}^{*},p_{n}^{*},p_{n}^{*}+\tilde{ p}_{n}^{*},X_{n}^{*},\tilde{X}_{n}^{*}) \to\left(\alpha(t^{*}-\tilde{t}^{*}),0,\alpha\begin{pmatrix}y^{*}- \tilde{y}^{*}\\ m(\mu^{*}-\tilde{\mu}^{*})\end{pmatrix},0,X^{*},\tilde{X}^{*}\right),\]
\[-\left(\frac{1}{\epsilon}+2\alpha\right)I_{4d}\leq\begin{pmatrix}X^{*}&0 \\ 0&\tilde{X}^{*}\end{pmatrix}\leq(\alpha+2\epsilon\alpha^{2})\begin{pmatrix}I_{2d}& -I_{2d}\\ -I_{2d}&I_{2d}\end{pmatrix}. \tag{6.10}\]
_Step 3: Application of the variational principle._ Note that the set \(\{(t_{n}^{*},y_{n}^{*},z_{n}^{*}):n\in\mathbb{N}\}\cup\{(\tilde{t}_{n}^{*}, \tilde{y}_{n}^{*},\tilde{z}_{n}^{*}):n\in\mathbb{N}\}\) is bounded. Thus, taking
\[R>\sup_{n}\sqrt{|t_{n}^{*}|^{2}+|y_{n}^{*}|^{2}+|z_{n}^{*}|}+\sqrt{|\tilde{t}_ {n}^{*}|^{2}+|\tilde{y}_{n}^{*}|^{2}+|\tilde{z}_{n}^{*}|}\]
large enough, thanks to the continuity (3.2) which is a uniform continuity on bounded sets, and the optimality (6.3), there exists a continuous modulus of continuity \(\omega_{R}\) so that for all \((t_{0},y_{0},z_{0},\tilde{t}_{0},\tilde{y}_{0},\tilde{z}_{0})\) with
\[\sqrt{|t_{0}|^{2}+|y_{0}|^{2}+|z_{0}|}+\sqrt{|\tilde{t}_{0}|^{2}+|\tilde{y}_{0 }|^{2}+|\tilde{z}_{0}|}\leq R\]
we have
\[u_{1}(t_{0},y_{0},(I_{d}+z_{0})\sharp\mathcal{S}_{0}(\mu^{*}))-v _{1}(\tilde{t}_{0},\tilde{y}_{0},(I_{d}+\tilde{z}_{0})\sharp\mathcal{S}_{0}( \tilde{\mu}^{*}))\] \[-\frac{\alpha}{2}\left(|y_{0}-\tilde{y}_{0}|^{2}+|z_{0}-\tilde{z} _{0}|^{2}+|t_{0}-\tilde{t}_{0}|^{2}\right)\] \[\geq u_{1}(t^{*},y^{*},(I_{d}+m(\mu^{*}))\sharp\mathcal{S}_{0}( \mu^{*}))-v_{1}(\tilde{t}^{*},\tilde{y}^{*},(I_{d}+m(\tilde{z}^{*}))\sharp \mathcal{S}_{0}(\tilde{\mu}^{*}))\] \[-\frac{\alpha}{2}\left(|y^{*}-\tilde{y}^{*}|^{2}+|m(\mu^{*})-m( \tilde{\mu}^{*})|^{2}+|t^{*}-\tilde{t}^{*}|^{2}\right)\] \[-\omega_{R}(|t^{*}-t_{0}|+|y^{*}-y_{0}|+|m(\mu^{*})-z_{0}|+| \tilde{t}^{*}-\tilde{t}_{0}|+|\tilde{y}^{*}-\tilde{y}_{0}|+|m(\tilde{\mu}^{*} )-\tilde{z}_{0}|)\] \[=\sup_{\theta,\tilde{\theta}\in\Theta}u_{1}(t,y,\mu)-v_{1}( \tilde{t},\tilde{y},\tilde{\mu})-\frac{\alpha}{2}\left(|y-\tilde{y}|^{2}+|m( \mu)-m(\tilde{\mu})|^{2}+|t-\tilde{t}|^{2}\right)\] \[-\omega_{R}(|t^{*}-t_{0}|+|y^{*}-y_{0}|+|m(\mu^{*})-z_{0}|+| \tilde{t}^{*}-\tilde{t}_{0}|+|\tilde{y}^{*}-\tilde{y}_{0}|+|m(\tilde{\mu}^{*} )-\tilde{z}_{0}|)\] \[\geq\sup_{\mu,\tilde{\mu}\in\mathcal{P}_{2,0}(\mathbb{R}^{d})}u _{1}(t_{0},y_{0},(I_{d}+z_{0})\sharp\mu)-v_{1}(\tilde{t}_{0},\tilde{y}_{0},(I _{d}+\tilde{z}_{0})\sharp\tilde{\mu})\] \[-\frac{\alpha}{2}\left(|y_{0}-\tilde{y}_{0}|^{2}+|z_{0}-\tilde{z} _{0}|^{2}+|t_{0}-\tilde{t}_{0}|^{2}\right)\] \[-\omega_{R}(|t^{*}-t_{0}|+|y^{*}-y_{0}|+|m(\mu^{*})-z_{0}|+| \tilde{t}^{*}-\tilde{t}_{0}|+|\tilde{y}^{*}-\tilde{y}_{0}|+|m(\tilde{\mu}^{*} )-\tilde{z}_{0}|)\]
which by continuity implies that for all \((t_{0},y_{0},z_{0},\tilde{t}_{0},\tilde{y}_{0},\tilde{z}_{0})\) with
\[\sqrt{|t_{0}|^{2}+|y_{0}|^{2}+|z_{0}|}+\sqrt{|\tilde{t}_{0}|^{2}+|\tilde{y}_{0 }|^{2}+|\tilde{z}_{0}|}\leq R,\]
we have
\[u_{1}(t_{0},y_{0},(I_{d}+z_{0})\sharp\mathcal{S}_{0}(\mu^{*}))-v _{1}(\tilde{t}_{0},\tilde{y}_{0},(I_{d}+\tilde{z}_{0})\sharp\mathcal{S}_{0}( \tilde{\mu}^{*}))\] \[+\omega_{R}(|y^{*}-y_{0}|+|m(\mu^{*})-z_{0}|+|\tilde{t}^{*}- \tilde{t}_{0}|+|\tilde{y}^{*}-\tilde{y}_{0}|+|m(\tilde{\mu}^{*})-\tilde{z}_{0 }|+|\tilde{t}^{*}-\tilde{t}_{0}|)\] \[\geq u_{2}^{*}(t_{0},y_{0},z_{0})-v_{2,*}(\tilde{t}_{0},\tilde{y}_ {0},\tilde{z}_{0}).\]
Thus, denoting
\[(\theta^{0,n},\tilde{\theta}^{0,n}) :=((t^{0,n},y^{0,n},\mu^{0,n}),(\tilde{t}^{0,n},\tilde{y}^{0,n}, \tilde{\mu}^{0,n}))\] \[:=((t_{n}^{*},y_{n}^{*},(I_{d}+z_{n}^{*})\sharp\mathcal{S}_{0}( \mu^{*})),(\tilde{t}_{n}^{*},\tilde{y}_{n}^{*},(I_{d}+\tilde{z}_{n}^{*})\sharp \mathcal{S}_{0}(\tilde{\mu}^{*}))) \tag{6.11}\] \[C_{n} :=\omega_{R}(|y^{*}-y_{n}^{*}|+|m(\mu^{*})-z_{n}^{*}|+|\tilde{t}^{ *}-\tilde{t}_{n}^{*}|+|\tilde{y}^{*}-\tilde{y}_{n}^{*}|+|m(\tilde{\mu}^{*})- \tilde{z}_{n}^{*}|+|\tilde{t}^{*}-\tilde{t}_{n}^{*}|)\]
which goes to \(0\) thanks to (6.9), the function
\[G_{n}(\theta,\tilde{\theta}) :=u_{1}(t,y,\mu)-v_{1}(\tilde{t},\tilde{y},\tilde{\mu})-q_{n}^{*}(t -t_{n}^{*})-\tilde{q}_{n}^{*}(\tilde{t}-\tilde{t}_{n}^{*})\] \[-(p_{n}^{*})^{\top}\begin{pmatrix}y-y_{n}^{*}\\ m(\mu)-z_{n}^{*}\end{pmatrix}-(\tilde{p}_{n}^{*})^{\top}\begin{pmatrix}\tilde{y}- \tilde{y}_{n}^{*}\\ m(\tilde{\mu})-\tilde{z}_{n}^{*}\end{pmatrix}\] \[-\frac{1}{2}\begin{pmatrix}y-y_{n}^{*}\\ m(\mu)-z_{n}^{*}\end{pmatrix}^{\top}X_{n}^{*}\begin{pmatrix}y-y_{n}^{*}\\ m(\mu)-z_{n}^{*}\end{pmatrix}-\frac{1}{2}\begin{pmatrix}\tilde{y}-\tilde{y}_{ n}^{*}\\ m(\tilde{\mu})-\tilde{z}_{n}^{*}\end{pmatrix}^{\top}\tilde{X}_{n}^{*}\begin{pmatrix} \tilde{y}-\tilde{y}_{n}^{*}\\ m(\tilde{\mu})-\tilde{z}_{n}^{*}\end{pmatrix}\]
satisfies
\[G_{n}(\theta^{0,n},\tilde{\theta}^{0,n}) =u_{1}(t_{n}^{*},y_{n}^{*},\mu^{0,n})-v_{1}(\tilde{t}_{n}^{*}, \tilde{y}_{n}^{*},\tilde{\mu}^{0,n})\] \[\geq u_{2}^{*}(t_{n}^{*},y_{n}^{*},z_{n}^{*})-v_{2*}(\tilde{t}_{n }^{*},\tilde{y}_{n}^{*},\tilde{z}_{n}^{*})-C_{n}.\]
Thus, thanks to the maximality condition (6.8) and the definitions of \(u_{2}-v_{2}\), we have
\[G_{n}(\theta^{0,n},\tilde{\theta}^{0,n}) =u_{1}(t_{n}^{*},y_{n}^{*},\mu^{0,n})-v_{1}(\tilde{t}_{n}^{*}, \tilde{y}_{n}^{*},\tilde{\mu}^{0,n})\] \[\geq\sup_{\theta,\tilde{\theta}\in\Theta}G_{n}(\theta,\tilde{ \theta})-C_{n},\]
and we can apply the variational principle in Lemma 5.4 to get the existence of
\[(\theta^{j,n},\tilde{\theta}^{j,n}):=((t^{j,n},y^{j,n},\mu^{j,n}),(\tilde{t}^{j,n},\tilde{y}^{j,n},\tilde{\mu}^{j,n}))\text{ for }j\geq 1\]
and \((\theta^{*,n},\tilde{\theta}^{*,n}):=((t^{*,n},y^{*,n},\mu^{*,n}),(\tilde{t}^{*,n},\tilde{y}^{*,n},\tilde{\mu}^{*,n}))\) so that
\[d_{\sigma}^{2}(\theta^{*,n},\theta^{j,n})+d_{\sigma}^{2}(\tilde {\theta}^{*,n},\tilde{\theta}^{j,n})\leq\frac{2\sqrt{C_{n}}}{2^{j}}\text{ for }j\geq 0 \tag{6.12}\] \[G_{n}(\theta^{0,n},\tilde{\theta}^{0,n})\leq G_{n}(\theta^{*,n},\tilde{\theta}^{*,n})-\sqrt{C_{n}}(\phi_{2,n}(\theta^{*,n})+\tilde{\phi}_{2,n }(\tilde{\theta}^{*,n}))\text{ and }\] \[(\theta,\tilde{\theta})\mapsto G_{n}(\theta,\tilde{\theta})- \sqrt{C_{n}}(\phi_{2,n}(\theta)+\tilde{\phi}_{2,n}(\tilde{\theta})) \tag{6.13}\]
admits a strict local maximum at \((\theta^{*,n},\tilde{\theta}^{*,n})\) with
\[\phi_{2,n}(\theta) :=\sum_{j\geq 0}\frac{d_{\sigma}^{2}(\theta,\theta^{j,n})}{2^{j}},\] \[\tilde{\phi}_{2,n}(\theta) :=\sum_{j\geq 0}\frac{d_{\sigma}^{2}(\theta,\tilde{\theta}^{j,n})}{2^ {j}}.\]
Denoting
\[\Phi_{n}(\theta):= \frac{\alpha\Psi(\mu)}{2}+\sqrt{C_{n}}\phi_{2,n}(\theta)\] \[+q_{n}^{*}(t-t_{n}^{*})+(p_{n}^{*})^{\top}\begin{pmatrix}y-y_{n}^{ *}\\ m(\mu)-z_{n}^{*}\end{pmatrix}+\frac{1}{2}\begin{pmatrix}y-y_{n}^{*}\\ m(\mu)-z_{n}^{*}\end{pmatrix}^{\top}X_{n}^{*}\begin{pmatrix}y-y_{n}^{*}\\ m(\mu)-z_{n}^{*}\end{pmatrix}\] \[\tilde{\Phi}_{n}(\theta):= \frac{\alpha\tilde{\Psi}(\mu)}{2}+\sqrt{C_{n}}\tilde{\phi}_{2,n}(\theta)\] \[+\tilde{q}_{n}^{*}(t-\tilde{t}_{n}^{*})+(\tilde{p}_{n}^{*})^{\top }\begin{pmatrix}y-\tilde{y}_{n}^{*}\\ m(\mu)-\tilde{z}_{n}^{*}\end{pmatrix}+\frac{1}{2}\begin{pmatrix}y-\tilde{y}_{n}^{ *}\\ m(\mu)-\tilde{z}_{n}^{*}\end{pmatrix}^{\top}\tilde{X}_{n}^{*}\begin{pmatrix}y- \tilde{y}_{n}^{*}\\ m(\mu)-\tilde{z}_{n}^{*}\end{pmatrix}\]
we obtain that
\[u(\theta)-v(\tilde{\theta})-\Phi_{n}(\theta)-\tilde{\Phi}_{n}(\tilde{\theta})\]
admits a strict maximum at \((\theta^{*,n},\tilde{\theta}^{*,n})=((t^{*,n},y^{*,n},\mu^{*,n}),(\tilde{t}^{*,n}, \tilde{y}^{*,n},\tilde{\mu}^{*,n}))\).
_Step 4: Taking the limit in \(n\)._ Thanks to (6.11) and (6.9), we have
\[d_{F}(\theta^{0,n},\theta^{*}) =d_{\sigma}(\theta^{0,n},\theta^{*})=\sqrt{|t^{*}-t_{n}^{*}|^{2}+| y^{*}-y_{n}^{*}|^{2}+|z_{n}^{*}-m(\mu^{*})|^{2}}\to 0\] \[d_{F}(\tilde{\theta}^{0,n},\tilde{\theta}^{*}) =d_{\sigma}(\tilde{\theta}^{0,n},\tilde{\theta}^{*})=\sqrt{| \tilde{t}^{*}-\tilde{t}_{n}^{*}|^{2}+|\tilde{y}^{*}-\tilde{y}_{n}^{*}|^{2}+| \tilde{z}_{n}^{*}-m(\tilde{\mu}^{*})|^{2}}\to 0.\]
This limit combined with (6.12) for \(j=0\) implies that
\[d_{\sigma}(\theta^{*,n},\theta^{*})+d_{\sigma}(\tilde{\theta}^{*,n},\tilde{ \theta}^{*})\to 0. \tag{6.14}\]
Due to [22, Lemma 4.2], (6.14) implies that \((\mu^{*,n},\tilde{\mu}^{*,n})\to(\mu^{*},\tilde{\mu}^{*})\) in the Wasserstein-2 distance. By [51, Theorem 6.9], the first two moments of \((\mu^{*,n},\tilde{\mu}^{*,n})\) converge to the first two moments of \((\mu^{*},\tilde{\mu}^{*})\); in particular, these second moments are bounded.
By direct computation, and the definition of \(d_{\sigma}\), we have
\[|D_{y}\phi_{2,n}(\theta^{*,n})| \leq C\sum_{j}\frac{d_{\sigma}(\theta^{*,n},\theta^{j,n})}{2^{j}},\,|\partial_{t}\phi_{2,n}(\theta^{*,n})|\leq C\sum_{j}\frac{d_{\sigma}( \theta^{*,n},\theta^{j,n})}{2^{j}},\] \[|D_{\mu}\phi_{2,n}(\theta^{*,n},x)| \leq\sum_{j\geq 0}\frac{|m(\mu^{*,n})-m(\mu^{j,n})|}{2^{j}}\] \[+C_{d}\left(\sigma+\frac{1}{\sigma}(|x|^{2}+|m(\mu^{*,n})|^{2})+ \frac{1}{\sigma}\int|x|^{2}\,\mathcal{S}_{0}(\mu^{*,n})(dx)\right)\] \[\leq C_{d,\sigma}\left(C_{n}^{1/4}+1+|x|^{2}\right)\] \[|D_{x\mu}\phi_{2,n}(\theta^{*,n},x)| \leq C_{d}\left(1+\frac{1}{\sigma^{2}}(|x|^{2}+|m(\mu^{*,n})|^{2}) \right)\leq C_{d,\sigma}\left(1+|x|^{2}\right)\] \[\mathcal{H}\phi_{2,n}(\theta) =\sum_{j\geq 0}\frac{\mathcal{H}\rho_{\sigma}^{2}(\mu,\mu^{j,n})}{2^ {j}}=4I_{d}=D_{yy}^{2}\phi_{2,n}(\theta),\,D_{y\mu}^{2}\phi_{2,n}(\theta,x)=0.\]
Thus, as \(n\to\infty\), all derivatives of
\[\sqrt{C_{n}}(\phi_{2,n}(\theta)+\tilde{\phi}_{2,n}(\tilde{\theta}))\]
evaluated at \((\theta^{*,n},\tilde{\theta}^{*,n})\) go to \(0\) in their respective metrics. Additionally, thanks to Lemma 5.2, Lemma 5.3, and (6.14), we have that
\[D_{\mu}\Psi(\mu^{*,n}) \to D_{\mu}\Psi(\mu^{*}),\,D_{x\mu}\Psi(\mu^{*,n})\to D_{x\mu} \Psi(\mu^{*})\] \[D_{\mu}\tilde{\Psi}(\tilde{\mu}^{*,n}) \to D_{\mu}\tilde{\Psi}(\tilde{\mu}^{*}),\,D_{x\mu}\tilde{\Psi}( \tilde{\mu}^{*,n})\to D_{x\mu}\tilde{\Psi}(\tilde{\mu}^{*})\]
Thus, sending \(n\to\infty\) concludes the proof.
## 7. Proof of the comparison result
Proof of Theorem 3.2.: The proof is based on adapting the ideas of [31, Theorem 3.59], [40, Theorem 2] to the Wasserstein space, which is a metric space and not a Banach space. Similarly to [29, Remark 3.9], by passing from \(U\) to \(e^{2L_{h}t}U\), without loss of generality, we can assume that for \(u\geq v\), we have the strict decreasing property
\[L_{h}(u-v)\leq h(\theta,v,p,X^{11},f,g,X^{12},X^{22})-h(\theta,u,p,X^{11},f,g,X^ {12},X^{22}) \tag{7.1}\]
and \(h\) satisfies (3.5) with \(3L_{h}\). Denote \(\chi(\theta):=e^{-\tilde{L}t}\left(1+|y|^{2}+\left(\int|x|^{2}\mu(dx)\right)^{2}\right)\) so that
\[\partial_{t}\chi(\theta) =-\tilde{L}\chi(\theta),\,D_{y}\chi(\theta)=2e^{-\tilde{L}t}y,\,D _{yy}^{2}\chi(\theta)=2e^{-\tilde{L}t}I_{d}\] \[D_{\mu}\chi(\theta,x) =4e^{-\tilde{L}t}\int|z|^{2}\mu(dz)x,\,D_{x\mu}\chi(\theta,x)=4e^{ -\tilde{L}t}\int|z|^{2}\mu(dz)I_{d},\,\mathcal{D}_{y\mu}^{2}\chi(\theta)=0\] \[\mathcal{H}\chi(\theta) =e^{-\tilde{L}t}\left(8m(\mu)m^{\top}(\mu)+4\int|x|^{2}\mu(dx)I_{ d}\right).\]
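For instance, the formulas for \(D_{\mu}\chi\) and \(D_{x\mu}\chi\) can be checked through the linear functional derivative (with the usual identification \(D_{\mu}=\partial_{x}\frac{\delta}{\delta\mu}\)): for \(\psi(\mu):=\left(\int|z|^{2}\mu(dz)\right)^{2}\),
\[\frac{\delta\psi}{\delta\mu}(\mu,x)=2\left(\int|z|^{2}\mu(dz)\right)|x|^{2},\quad D_{\mu}\psi(\mu,x)=4\left(\int|z|^{2}\mu(dz)\right)x,\quad D_{x\mu}\psi(\mu,x)=4\left(\int|z|^{2}\mu(dz)\right)I_{d},\]
and multiplying by \(e^{-\tilde{L}t}\) yields the expressions above.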
The conclusion of the theorem being \(\sup_{\theta\in\Theta}(u-v)(\theta)=0\), to obtain a contradiction, we assume that
\[\sup_{\theta\in\Theta}(u-v)(\theta)>0.\]
Thus, there exists \(\iota>0\) so that there exists \(r>0\) such that
\[\sup_{\theta\in\Theta}(u_{\iota}-v)(\theta)=:2r>0\]
where \(u_{\iota}(\theta):=u(\theta)-\iota\chi(\theta).\) We now show there exists \(\tilde{L}\) large enough so that \(u_{\iota}\) is a viscosity subsolution of the same equation. Let \(\phi\) be a partial \(C^{2}\) regular test function so that \(u_{\iota}-\phi\) has a local maximum at \(\theta\in\Theta\). This is equivalent to \(u-\phi_{\iota}\) having a local maximum at the same point, where \(\phi_{\iota}:=\phi+\iota\chi\). Thus, the viscosity property of \(u\) yields
\[-\partial_{t}\phi_{\iota}(\theta)-h(\theta,u(\theta),D_{y}\phi_{\iota}( \theta),D_{yy}^{2}\phi_{\iota}(\theta),D_{\mu}\phi_{\iota}(\theta),D_{x\mu} \phi_{\iota}(\theta),\mathcal{D}_{y\mu}^{2}\phi_{\iota}(\theta),\mathcal{H} \phi_{\iota}(\theta))\leq 0.\]
Thanks to the equality \(\phi=\phi_{\iota}-\iota\chi\) and (7.1), we can estimate
\[-\partial_{t}\phi(\theta)-h(\theta,u_{\iota}(\theta),D_{y}\phi( \theta),D_{yy}^{2}\phi(\theta),D_{\mu}\phi(\theta),D_{x\mu}\phi(\theta), \mathcal{D}_{y\mu}^{2}\phi(\theta),\mathcal{H}\phi(\theta))\] \[\leq-\partial_{t}\phi_{\iota}(\theta)-\iota(\tilde{L}+L_{h})\chi(\theta)\] \[\quad-h(\theta,u(\theta),D_{y}\phi(\theta),D_{yy}^{2}\phi( \theta),D_{\mu}\phi(\theta),D_{x\mu}\phi(\theta),\mathcal{D}_{y\mu}^{2}\phi( \theta),\mathcal{H}\phi(\theta))\] \[\leq-\partial_{t}\phi_{\iota}(\theta)-h(\theta,u(\theta),D_{y} \phi_{\iota}(\theta),D_{yy}^{2}\phi_{\iota}(\theta),D_{\mu}\phi_{\iota}(\theta ),D_{x\mu}\phi_{\iota}(\theta),\mathcal{D}_{y\mu}^{2}\phi_{\iota}(\theta), \mathcal{H}\phi_{\iota}(\theta))\] \[\quad+3L_{h}\iota\left(|D_{y}\chi(\theta)|+|D_{yy}^{2}\chi(\theta )|+||D_{\mu}\chi(\theta)||_{\mathfrak{q}}+||D_{x\mu}\chi(\theta)||_{\mathfrak{ q}}+|\mathcal{D}_{y\mu}^{2}\chi(\theta)|+|\mathcal{H}\chi(\theta)|\right)\] \[\quad-\iota(\tilde{L}+L_{h})\chi(\theta)\] \[\leq 3L_{h}\iota\left(|D_{y}\chi(\theta)|+|D_{yy}^{2}\chi(\theta)|+ ||D_{\mu}\chi(\theta)||_{\mathfrak{q}}+||D_{x\mu}\chi(\theta)||_{\mathfrak{q}}+ |\mathcal{D}_{y\mu}^{2}\chi(\theta)|+|\mathcal{H}\chi(\theta)|\right)\] \[\quad-\iota(\tilde{L}+L_{h})\chi(\theta)\]
where we used the viscosity property of \(u\) to obtain the last inequality. Using the derivatives of \(\chi\), the definition of \(\|\cdot\|_{\mathfrak{q}}\) and Young's inequality we obtain
\[-\partial_{t}\phi(\theta)-h(\theta,u_{\iota}(\theta),D_{y}\phi( \theta),D_{yy}^{2}\phi(\theta),D_{\mu}\phi(\theta),D_{x\mu}\phi(\theta), \mathcal{D}_{y\mu}^{2}\phi(\theta),\mathcal{H}\phi(\theta))\] \[\leq e^{-\tilde{L}t}\iota 6L_{h}\left(|y|+|I_{d}|+2\int|x|^{2}\mu(dx) \left(\sup_{x}\frac{|x|}{1+|x|^{2}}+2|I_{d}|\right)+4|m(\mu)|^{2}\right)\] \[\quad-\iota e^{-\tilde{L}t}(\tilde{L}+L_{h})\left(1+|y|^{2}+ \left(\int|x|^{2}\mu(dx)\right)^{2}\right)\] \[\leq e^{-\tilde{L}t}\iota\left(6L_{h}\sqrt{d}+6L_{h}|y|+12L_{h}(3 +2\sqrt{d})\int|x|^{2}\mu(dx)\right)\]
\[-\iota e^{-\tilde{L}t}(\tilde{L}+L_{h})\left(1+|y|^{2}+\left(\int|x|^{2}\mu(dx) \right)^{2}\right)\] \[\leq e^{-\tilde{L}t}\iota\left(6L_{h}\sqrt{d}+18L_{h}+\frac{L_{h} |y|^{2}}{2}+\frac{(12(3+2\sqrt{d}))^{2}}{2}L_{h}+\frac{L_{h}}{2}\left(\int|x|^{2 }\mu(dx)\right)^{2}\right)\] \[\leq e^{-\tilde{L}t}\iota\left(6L_{h}\sqrt{d}+18L_{h}+\frac{(12(3 +2\sqrt{d}))^{2}}{2}L_{h}-\tilde{L}+L_{h}\right)\]
for any choice of \(\tilde{L}\geq 0\). Thus, taking \(\tilde{L}\) large enough, \(u_{\iota}\) is a viscosity subsolution of
\[-\partial_{t}u_{\iota}(\theta)-h(\theta,u_{\iota}(\theta),D_{y}u_{\iota}( \theta),D_{yy}^{2}u_{\iota}(\theta),D_{\mu}u_{\iota}(\theta),D_{x\mu}u_{\iota }(\theta),\mathcal{D}_{y\mu}^{2}u_{\iota}(\theta),\mathcal{H}_{\iota}(\theta ))\leq 0\]
which also satisfies
\[u_{\iota}(T,\cdot)\leq u(T,\cdot)\leq v(T,\cdot).\]
_Step 1: Introducing doubling of variables with \(d_{F}\):_ We fix \(\epsilon>0\), \(\iota\in[0,\iota_{0}]\) and denote
\[G_{\iota}(\theta,\tilde{\theta})=u_{\iota}(\theta)-v(\tilde{\theta})-\frac{d_ {F}^{2}(\theta,\tilde{\theta})}{2\epsilon}\]
and introduce the optimization problem
\[\sup_{(\theta,\tilde{\theta})\in\Theta^{2}}G_{\iota}(\theta,\tilde{\theta}). \tag{7.2}\]
Due to the boundedness from above of \(u,-v\), (7.2) is finite. Fix \(\kappa>0\), since
\[2r=\sup_{\theta\in\Theta}(u_{\iota}-v)(\theta)\leq\sup_{(\theta,\tilde{\theta })\in\Theta^{2}}G_{\iota}(\theta,\tilde{\theta}),\]
we can find \((\theta^{0},\tilde{\theta}^{0})\in\Theta^{2}\) so that
\[\sup_{(\theta,\tilde{\theta})\in\Theta^{2}}G_{\iota}(\theta,\tilde{\theta})- \kappa<G_{\iota}(\theta^{0},\tilde{\theta}^{0})\text{ and }r\leq G_{\iota}(\theta^{0},\tilde{\theta}^{0}). \tag{7.3}\]
Using the variational principle in Lemma 5.4 with the gauge-type function \(d_{\sigma}\), for \(\delta=\sqrt{\kappa}\), there exists
\[(\theta^{*},\tilde{\theta}^{*})=((t^{*},y^{*},\mu^{*}),(\tilde{t}^{*},\tilde{ y}^{*},\tilde{\mu}^{*}))\]
and
\[\{(\theta^{j},\tilde{\theta}^{j}):j\geq 1\}=\{((t^{j},y^{j},\mu^{j}),( \tilde{t}^{j},\tilde{y}^{j},\tilde{\mu}^{j})):j\geq 1\}\]
depending on \(\epsilon,\kappa\) such that denoting
\[\phi(\theta):=\sum_{j=0}^{\infty}\frac{d_{\sigma}^{2}(\theta,\theta^{j})}{2^{j }}\text{ and }\tilde{\phi}(\tilde{\theta}):=\sum_{j=0}^{\infty}\frac{d_{\sigma}^{2}( \tilde{\theta},\tilde{\theta}^{j})}{2^{j}}\]
we have
\[d_{\sigma}^{2}(\theta^{*},\theta^{j})+d_{\sigma}^{2}(\tilde{ \theta}^{*},\tilde{\theta}^{j}) \leq\frac{\sqrt{\kappa}}{2^{j}}\text{ for }j\geq 0 \tag{7.4}\] \[G_{\iota}(\theta^{0},\tilde{\theta}^{0}) \leq G_{\iota}(\theta^{*},\tilde{\theta}^{*})-\sqrt{\kappa}( \phi(\theta^{*})+\tilde{\phi}(\tilde{\theta}^{*}))\] (7.5) \[\text{ and }(\theta,\tilde{\theta})\in\Theta^{2} \mapsto G_{\iota}(\theta,\tilde{\theta})-\sqrt{\kappa}(\phi( \theta)+\tilde{\phi}(\tilde{\theta})) \tag{7.6}\]
admits a strict global maximum at \((\theta^{*},\tilde{\theta}^{*}).\) Note that by a direct computation we have that
\[D_{y}\phi(\theta)=2\sum_{j=0}^{\infty}\frac{y-y^{j}}{2^{j}},\,D_{x} \tilde{\phi}(\tilde{\theta})=2\sum_{j=0}^{\infty}\frac{\tilde{y}-\tilde{y}^{j}}{ 2^{j}}, \tag{7.7}\] \[\mathcal{D}_{yu}\phi(\theta)=\mathcal{D}_{yu}\tilde{\phi}(\tilde{ \theta})=0,\,D_{yy}^{2}\phi(\theta)=D_{yy}^{2}\tilde{\phi}(\tilde{\theta})=4I_ {d},\,\mathcal{H}\phi(\theta)=\mathcal{H}\tilde{\phi}(\tilde{\theta})=4I_{d}. \tag{7.8}\]
_Step 2: A priori estimates on the optimizers:_ Define the modulus of continuity of \(v\) in time by
\[\omega_{v}(t):=\sup\{|v(s,y,\mu)-v(\tilde{s},y,\mu)|:|s-\tilde{s}|\leq t,y\in \mathbb{R}^{d},\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})\}\]
which satisfies for all \(t,s\geq 0\), the inequality
\[\omega_{v}(t+s)\leq\omega_{v}(s)+\omega_{v}(t).\]
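Indeed, if \(|r-\tilde{r}|\leq t+s\), one can pick \(\hat{r}\) between \(r\) and \(\tilde{r}\) with \(|r-\hat{r}|\leq t\) and \(|\hat{r}-\tilde{r}|\leq s\), so that
\[|v(r,y,\mu)-v(\tilde{r},y,\mu)|\leq|v(r,y,\mu)-v(\hat{r},y,\mu)|+|v(\hat{r},y,\mu)-v(\tilde{r},y,\mu)|\leq\omega_{v}(t)+\omega_{v}(s).\]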
Also denote by \(L_{v,F}\) the Lipschitz constant of \(v\) in \((y,\mu)\) w.r.t. the \(d_{F}\) metric. For \(\epsilon,\kappa>0\), define \(t_{\epsilon,\kappa}\) as the largest positive root of the equation
\[t_{\epsilon,\kappa}^{2}=2\kappa+2\omega_{v}(\sqrt{\epsilon}t_{\epsilon,\kappa }).\]
Similar to [38, Theorem], thanks to the subadditivity of \(\omega_{v}\), we have
\[t_{\epsilon,\kappa}\to 0\text{ as }\epsilon,\kappa\downarrow 0.\]
We now show that for all \(\epsilon,\kappa>0\), we have
\[\frac{\sqrt{|y^{*}-\tilde{y}^{*}|^{2}+\rho_{F}^{2}(\mu^{*},\tilde{\mu}^{*})}} {\epsilon}\leq L_{v,F}+\sqrt{2L_{v,F}^{2}+2\frac{\kappa}{\epsilon}}\text{ and }\frac{|t^{*}-\tilde{t}^{*}|}{\sqrt{\epsilon}}\leq t_{\epsilon,\kappa}. \tag{7.9}\]
By definition of \(G\) and (7.3)-(7.6) we have the inequalities
\[\kappa+u(\theta^{*})-\iota\chi(\theta^{*})-v(\tilde{\theta}^{*})- \frac{d_{F}^{2}(\theta^{*},\tilde{\theta}^{*})}{2\epsilon}=\kappa+G_{\iota}( \theta^{*},\tilde{\theta}^{*})\] \[\geq\kappa+G_{\iota}(\theta^{0},\tilde{\theta}^{0})+\sqrt{\kappa }(\phi(\theta^{*})+\tilde{\phi}(\tilde{\theta}^{*}))\geq\sup_{(\theta,\tilde{ \theta})\in\Theta^{2}}G_{\iota}(\theta,\tilde{\theta})+\sqrt{\kappa}(\phi( \theta^{*})+\tilde{\phi}(\tilde{\theta}^{*}))\] \[\geq G_{\iota}(\theta^{*},(t^{*},\tilde{y}^{*},\tilde{\mu}^{*}))+ \sqrt{\kappa}(\phi(\theta^{*})+\tilde{\phi}(\tilde{\theta}^{*}))\] \[\geq u(\theta^{*})-\iota\chi(\theta^{*})-v(t^{*},\tilde{y}^{*}, \tilde{\mu}^{*})+\sqrt{\kappa}(\phi(\theta^{*})+\tilde{\phi}(\tilde{\theta}^{ *})).\]
Thus, given the Lipschitz continuity of \(v\), we obtain the inequality
\[\frac{d_{F}^{2}(\theta^{*},\tilde{\theta}^{*})}{2\epsilon}+\sqrt {\kappa}(\phi(\theta^{*})+\tilde{\phi}(\tilde{\theta}^{*})) \leq\kappa+v(\theta^{*})-v(t^{*},\tilde{y}^{*},\tilde{\mu}^{*})\] \[\leq\kappa+L_{v,F}\sqrt{|y^{*}-\tilde{y}^{*}|^{2}+\rho_{F}^{2}( \mu^{*},\tilde{\mu}^{*})}\]
This inequality implies that \(b=\frac{\sqrt{|y^{*}-\tilde{y}^{*}|^{2}+\rho_{F}^{2}(\mu^{*},\tilde{\mu}^{*})} }{\epsilon}\) satisfies
\[(b-L_{v,F})^{2}\leq 2L_{v,F}^{2}+2\frac{\kappa}{\epsilon}\]
which implies that
\[\frac{\sqrt{|y^{*}-\tilde{y}^{*}|^{2}+\rho_{F}^{2}(\mu^{*},\tilde{\mu}^{*})}}{ \epsilon}\leq L_{v,F}+\sqrt{2L_{v,F}^{2}+2\frac{\kappa}{\epsilon}}.\]
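Indeed, dropping the nonnegative term \(\sqrt{\kappa}(\phi(\theta^{*})+\tilde{\phi}(\tilde{\theta}^{*}))\) and bounding \(d_{F}^{2}(\theta^{*},\tilde{\theta}^{*})\) from below by \(|y^{*}-\tilde{y}^{*}|^{2}+\rho_{F}^{2}(\mu^{*},\tilde{\mu}^{*})=(\epsilon b)^{2}\) (the step implicitly used above), we get \(\frac{\epsilon b^{2}}{2}\leq\kappa+L_{v,F}\,\epsilon b\); completing the square then gives
\[(b-L_{v,F})^{2}\leq L_{v,F}^{2}+\frac{2\kappa}{\epsilon}\leq 2L_{v,F}^{2}+\frac{2\kappa}{\epsilon}.\]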
Similarly, (7.3)-(7.6) lead to
\[\frac{d_{F}^{2}(\theta^{*},\tilde{\theta}^{*})}{2\epsilon}+\sqrt{ \kappa}(\phi(\theta^{*})+\tilde{\phi}(\tilde{\theta}^{*})) \leq\kappa+v(\theta^{*})-v(\tilde{t}^{*},y^{*},\mu^{*})\] \[\leq\kappa+\omega_{v}(|t^{*}-\tilde{t}^{*}|).\]
Thus, \(a=\frac{|t^{*}-\tilde{t}^{*}|}{\sqrt{\epsilon}}\) satisfies
\[a^{2}\leq 2\kappa+2\omega_{v}(\sqrt{\epsilon}a)\]
which shows that
\[\frac{|t^{*}-\tilde{t}^{*}|}{\sqrt{\epsilon}}\leq t_{\epsilon,\kappa}\]
since \(t_{\epsilon,\kappa}\) is the largest root of this equation. We also define \(t_{\epsilon}\) as the largest positive root of
\[t_{\epsilon}^{2}=2\omega_{v}(\sqrt{\epsilon}t_{\epsilon}).\]
Note that \(t_{\epsilon}>0\) if \(\omega_{v}\) is not a constant function. Indeed, if \(t_{\epsilon}=0\), then for all \(t>0\), we have
\[t^{2}\geq 2\omega_{v}(\sqrt{\epsilon}t).\]
By subadditivity, this implies that
\[0\leq\frac{\omega_{v}(\sqrt{\epsilon}(t+s))-\omega_{v}(\sqrt{\epsilon}s)}{t} \leq\frac{\omega_{v}(\sqrt{\epsilon}t)-\omega_{v}(0)}{t}\leq\frac{t}{2}.\]
Sending \(t\to 0\) we have that \(\omega_{v}\) is a constant function.
For the remaining of the proof, we assume that \(\omega_{v}\) is not constant and for each \(\epsilon>0\), we choose \(\kappa_{\epsilon}\) small enough so that \(t_{\epsilon,\kappa}\leq 2t_{\epsilon}\) and \(\kappa_{\epsilon}\leq\epsilon\) which implies that for all \(\epsilon>0\) and \(\kappa\in(0,\kappa_{\epsilon}]\) we have
\[\frac{\sqrt{|y^{*}-\tilde{y}^{*}|^{2}+\rho_{F}^{2}(\mu^{*},\tilde{\mu}^{*})}} {\epsilon}\leq L_{v,F}+\sqrt{2(L_{v,F}^{2}+2)}\text{ and }\frac{|t^{*}-\tilde{t}^{*}|}{\sqrt{ \epsilon}}\leq 2t_{\epsilon}. \tag{7.10}\]
We use (7.3)-(7.6) one more time to obtain
\[\kappa+u(\theta^{*})-\iota\chi(\theta^{*})-v(\tilde{\theta}^{*})- \frac{d_{F}^{2}(\theta^{*},\tilde{\theta}^{*})}{2\epsilon}\geq G_{\iota}((t^{ *},0,\delta_{0}),\tilde{\theta}^{*})+\sqrt{\kappa}(\phi(\theta^{*})+\tilde{ \phi}(\tilde{\theta}^{*}))\] \[\geq u(t^{*},0,\delta_{0})-\iota\chi(t^{*},0,\delta_{0})-v( \tilde{t}^{*},\tilde{y}^{*},\tilde{\mu}^{*})+\sqrt{\kappa}(\phi(\theta^{*})+ \tilde{\phi}(\tilde{\theta}^{*})).\]
Thus, we obtain an a priori bound on \(\theta^{*}\),
\[\frac{\kappa+|u(\theta^{*})-u(t^{*},0,\delta_{0})|}{\iota} \geq\chi(\theta^{*})-\chi(t^{*},0,\delta_{0}) \tag{7.11}\] \[=e^{-\tilde{L}t^{*}}\left(|y^{*}|^{2}+\left(\int|x|^{2}\mu^{*}(dx )\right)^{2}\right).\]
_Step 3: Using Ishii's Lemma:_ Define the functions
\[\mathcal{L}(\mu,\eta,\nu) :=2\int\frac{Re(F_{k}(\mathcal{S}_{0}(\mu))(F_{k}(\mathcal{S}_{0} (\eta))-F_{k}(\mathcal{S}_{0}(\nu)))^{*})}{(1+|k|^{2})^{\lambda}}dk\] \[+2Tr(V^{\top}(\mathcal{S}_{0}(\mu))(V(\mathcal{S}_{0}(\eta))-V( \mathcal{S}_{0}(\nu))))\] \[\Psi(\mu) :=2\rho_{F}^{2}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\mu^{*}))+ \mathcal{L}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\mu^{*}),\mathcal{S}_{0}( \tilde{\mu}^{*}))\]
\[\tilde{\Psi}(\mu) :=2\rho_{F}^{2}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\tilde{\mu}^{*} ))-\mathcal{L}(\mathcal{S}_{0}(\mu),\mathcal{S}_{0}(\mu^{*}),\mathcal{S}_{0}( \tilde{\mu}^{*}))\] \[u_{1}(t,x,\mu) :=u_{\iota}(t,x,\mu)-\sqrt{\kappa}\phi(t,x,\mu)\] \[v_{1}(t,x,\mu) :=v(t,x,\mu)+\sqrt{\kappa}\tilde{\phi}(t,x,\mu)\]
We now show that one can apply Ishii's Lemma in Theorem 3.1. Since \(\rho_{F}\) and \(\rho_{\sigma}\) are topologically equivalent, \(u_{1},-v_{1}\) are upper semicontinuous in both topologies and the property (3.2) holds for \(u_{1},v_{1}\). It is now sufficient to show that one can choose \(\epsilon>0\) so that \(t^{*}<T\) and \(\tilde{t}^{*}<T\). Note that \(t^{*},\tilde{t}^{*}\) are bounded. Denote by \(t,\tilde{t}\) an accumulation point as \(\epsilon\downarrow 0\), \(0<\kappa<\kappa_{\epsilon}\). Thanks to (7.10), we have the equality \(t=\tilde{t}\). The non-negativity of \(\phi,\tilde{\phi}\), (7.3), and (7.5) implies that
\[u_{\iota}(\theta^{*})\geq v(\tilde{\theta}^{*})+r. \tag{7.12}\]
Thus, thanks to the values of \(u_{\iota},v\) at \(T,\) we have
\[\omega_{u_{\iota}}(|t^{*}-T|)+\omega_{v}(|\tilde{t}^{*}-T|)+L_{v, F}d_{F}((T,\tilde{y}^{*},\tilde{\mu}^{*}),(T,y^{*},\mu^{*}))\] \[\geq v(T,y^{*},\mu^{*})-u_{\iota}(T,y^{*},\mu^{*})+r\geq r.\]
Note that for \(0<\kappa<\kappa_{\epsilon}\) as \(\epsilon\downarrow 0\), we have \(d_{F}((T,\tilde{y}^{*},\tilde{\mu}^{*}),(T,y^{*},\mu^{*}))\to 0\) thanks to (7.10). Thus, if \(\tilde{t}=t=T\), we obtain \(0\geq r\) which would conclude the proof. Thus, for the remainder of the proof we assume that \(\tilde{t}=t<T\) and we choose \(\epsilon>0\), so that for all \(0<\kappa<\kappa_{\epsilon}\), \(\max\{t^{*},\tilde{t}^{*}\}<T\).
We can now use Theorem 3.1 to obtain that there exist \(X^{*},\tilde{X}^{*}\in\mathcal{S}_{2d}\) so that
\[\Big{(}\frac{t^{*}-\tilde{t}^{*}}{\epsilon},\frac{y^{*}-\tilde{y }^{*}}{\epsilon},(X^{*})^{11},\frac{2(m(\mu^{*})-m(\tilde{\mu}^{*}))+D_{\mu} \Psi(\mu^{*})}{2\epsilon},\] \[\frac{D_{x\mu}\Psi(\mu^{*})}{2\epsilon},(X^{*})^{12},(X^{*})^{22} \Big{)}\in\bar{J}^{2,+}u_{1}(\theta^{*})\] \[\Big{(}\frac{t^{*}-\tilde{t}^{*}}{\epsilon},\frac{y^{*}-\tilde{y }^{*}}{\epsilon},-(\tilde{X}^{*})^{11},\frac{2(m(\mu^{*})-m(\tilde{\mu}^{*}))- D_{\mu}\tilde{\Psi}(\tilde{\mu}^{*})}{2\epsilon},\] \[-\frac{D_{x\mu}\tilde{\Psi}(\tilde{\mu}^{*})}{2\epsilon},-( \tilde{X}^{*})^{12},-(\tilde{X}^{*})^{22}\Big{)}\in\bar{J}^{2,-}v_{1}(\tilde{ \theta}^{*})\] \[\text{and }-\frac{3}{\epsilon}I_{4d}\leq\begin{pmatrix}X^{*}&0\\ 0&\tilde{X}^{*}\end{pmatrix}\leq\frac{3}{\epsilon}\begin{pmatrix}I_{2d}&-I_{2d }\\ -I_{2d}&I_{2d}\end{pmatrix}. \tag{7.13}\]
Given the smoothness of \(\phi,\tilde{\phi}\), the viscosity properties of the functions \(u_{\iota},v\) and the continuity of \(h\) lead to
\[0\leq h\left(\theta^{*},u_{\iota}(\theta^{*}),\frac{y^{*}-\tilde {y}^{*}}{\epsilon}+\sqrt{\kappa}D_{y}\phi(\theta^{*}),(X^{*})^{11}+4\sqrt{ \kappa}I_{d},\right.\] \[\qquad\qquad\left.\frac{m(\mu^{*})-m(\tilde{\mu}^{*})}{\epsilon} +\frac{D_{\mu}\Psi(\mu^{*})}{2\epsilon}+\sqrt{\kappa}D_{\mu}\phi(\theta^{*}),\right.\] \[\qquad\qquad\left.\frac{D_{x\mu}\Psi(\mu^{*})}{2\epsilon}+\sqrt{ \kappa}D_{x\mu}\phi(\theta^{*}),(X^{*})^{12},(X^{*})^{22}+4\sqrt{\kappa}I_{d}\right)\] \[-h\left(\tilde{\theta}^{*},v(\tilde{\theta}^{*}),\frac{y^{*}- \tilde{y}^{*}}{\epsilon}-\sqrt{\kappa}D_{y}\tilde{\phi}(\tilde{\theta}^{*}),-( \tilde{X}^{*})^{11}-4\sqrt{\kappa}I_{d},\right.\] \[\qquad\qquad\qquad\left.\frac{m(\mu^{*})-m(\tilde{\mu}^{*})}{ \epsilon}-\frac{D_{\mu}\tilde{\Psi}(\tilde{\mu}^{*})}{2\epsilon}-\sqrt{ \kappa}D_{\mu}\tilde{\phi}(\tilde{\theta}^{*}),\right.\]
\[-\frac{D_{x\mu}\tilde{\Psi}(\tilde{\mu}^{*})}{2\epsilon}-\sqrt{\kappa}D_{x\mu} \tilde{\phi}(\tilde{\theta}^{*}),-(\tilde{X}^{*})^{12},-(\tilde{X}^{*})^{22}-4 \sqrt{\kappa}I_{d}\right).\]
For fixed \(\epsilon>0\), the family
\[\left(\frac{\begin{pmatrix}y^{*}-\tilde{y}^{*}\\ m(\mu^{*})-m(\tilde{\mu}^{*})\end{pmatrix}}{\epsilon},X^{*},\tilde{X}^{*} \right)=\left(\frac{\begin{pmatrix}y^{*}-\tilde{y}^{*}\\ m(\mu^{*})-m(\tilde{\mu}^{*})\end{pmatrix}}{\epsilon},X^{*},\tilde{X}^{*} \right)_{\kappa}\]
indexed by \(\{\kappa:\,\kappa_{\epsilon}\geq\kappa>0\}\) is bounded thanks to (7.10), (7.13) and the inequality \(t_{\epsilon,\kappa}\leq 2t_{\epsilon}\). Similarly to above, for fixed \(\epsilon>0\), \(\{t^{*},\tilde{t}^{*}\}\) is also bounded and as \(\kappa\downarrow 0\) admits an accumulation point. Let \((t_{1,\epsilon},\tilde{t}_{1,\epsilon},p,X,\tilde{X})\) be an accumulation point of this family as \(\kappa\downarrow 0\). Taking \(\epsilon>0\) small enough, we can ensure that \(t_{1,\epsilon}<T\), \(\tilde{t}_{1,\epsilon}<T\), and \((p,X,\tilde{X})\) satisfies
\[-\frac{3}{\epsilon}I_{4d}\leq\begin{pmatrix}X&0\\ 0&\tilde{X}\end{pmatrix}\leq\frac{3}{\epsilon}\begin{pmatrix}I_{2d}&-I_{2d}\\ -I_{2d}&I_{2d}\end{pmatrix}\text{ and }|p|\leq C \tag{7.14}\]
thanks to (7.10).
Additionally, thanks to (7.4) applied for \(j=0\) and (7.10) we have that for \(\kappa>0\) small enough
\[d_{\sigma}^{2}(\theta^{*},\theta^{0})+d_{\sigma}^{2}(\tilde{\theta}^{*}, \tilde{\theta}^{0})+d_{F}(\theta^{*},\tilde{\theta}^{*})\leq 2t_{\epsilon} \sqrt{\epsilon}.\]
Denote by \(p_{1},p_{2}\) the first and last \(d\) coordinates of \(p\), and
\[\Delta:= \left|p_{1}-\frac{y^{*}-\tilde{y}^{*}}{\epsilon}\right|+\left|p_{2}-\frac{m(\mu^{*})-m(\tilde{\mu}^{*})}{\epsilon}\right|+|X-X^{*}|\] \[+\sqrt{\kappa}\left(|D_{y}\phi(\theta^{*})|+||D_{\mu}\phi(\theta^{*},\cdot)||_{q}+||D_{x\mu}\phi(\theta^{*},\cdot)||_{q}+4|I_{d}|\right)\] \[\tilde{\Delta}:= \left|p_{1}-\frac{y^{*}-\tilde{y}^{*}}{\epsilon}\right|+\left|p_{2}-\frac{m(\mu^{*})-m(\tilde{\mu}^{*})}{\epsilon}\right|+|\tilde{X}-\tilde{X}^{*}|\] \[+\sqrt{\kappa}\left(|D_{y}\tilde{\phi}(\tilde{\theta}^{*})|+||D_{\mu}\tilde{\phi}(\tilde{\theta}^{*},\cdot)||_{q}+||D_{x\mu}\tilde{\phi}(\tilde{\theta}^{*},\cdot)||_{q}+4|I_{d}|\right)\]
\[I(x):=\frac{D_{\mu}\Psi(\mu^{*},x)+D_{\mu}\tilde{\Psi}(\tilde{\mu}^{*},x)}{2\epsilon}\]
so that by the definition of \(p,X,\tilde{X}\), (7.4), and the expressions of the derivatives of \(\phi,\tilde{\phi}\), \(\Delta+\tilde{\Delta}\to 0\) as \(\kappa\downarrow 0\). By Lemmas 5.2-5.3, we have
\[\frac{D_{\mu}\Psi(\mu,x)}{2\epsilon} =\frac{2}{\epsilon}\sum_{i,j}(V^{i,j}(\mathcal{S}_{0}(\mu))-V^{i, j}(\mathcal{S}_{0}(\mu^{*})))\big{(}(x^{j}-m^{j}(\mu))e_{i}+(x^{i}-m^{i}(\mu))e_{j} \big{)}\] \[+\frac{2}{\epsilon}\int\frac{Re\left(ik\left(F_{k}(\mathcal{S}_{0 }(\mu))-f_{k}(x-m(\mu))\right)(F_{k}(\mathcal{S}_{0}(\mu))-F_{k}(\mathcal{S}_{ 0}(\mu^{*})))^{*}\right)}{(1+|k|^{2})^{\lambda}}dk\] \[+\frac{1}{\epsilon}\int\frac{Re(ik(F_{k}(\mathcal{S}_{0}(\mu))-f _{k}(x-m(\mu)))(F_{k}(\mathcal{S}_{0}(\mu^{*}))-F_{k}(\mathcal{S}_{0}(\tilde{ \mu}^{*})))^{*})}{(1+|k|^{2})^{\lambda}}dk\] \[+\frac{1}{\epsilon}\sum_{i,j}\big{(}(x^{j}-m^{j}(\mu))e_{i}+(x^{ i}-m^{i}(\mu))e_{j}\big{)}(V^{i,j}(\mathcal{S}_{0}(\mu^{*}))-V^{i,j}(\mathcal{S}_{ 0}(\tilde{\mu}^{*})))\] \[\frac{D_{\mu}\tilde{\Psi}(\tilde{\mu},x)}{2\epsilon} =\frac{2}{\epsilon}\sum_{i,j}(V^{i,j}(\mathcal{S}_{0}(\tilde{\mu} ))-V^{i,j}(\mathcal{S}_{0}(\tilde{\mu}^{*})))\big{(}(x^{j}-m^{j}(\tilde{\mu})) e_{i}+(x^{i}-m^{i}(\tilde{\mu}))e_{j}\big{)}\] \[+\frac{2}{\epsilon}\int\frac{Re\left(ik\left(F_{k}(\mathcal{S}_{0 }(\tilde{\mu}))-f_{k}(x-m(\tilde{\mu}))\right)(F_{k}(\mathcal{S}_{0}(\tilde{ \mu}))-F_{k}(\mathcal{S}_{0}(\tilde{\mu}^{*})))^{*}\right)}{(1+|k|^{2})^{ \lambda}}dk\] \[-\frac{1}{\epsilon}\int\frac{Re(ik(F_{k}(\mathcal{S}_{0}(\tilde{ \mu}))-f_{k}(x-m(\tilde{\mu})))(F_{k}(\mathcal{S}_{0}(\mu^{*}))-F_{k}(\mathcal{ S}_{0}(\tilde{\mu}^{*})))^{*})}{(1+|k|^{2})^{\lambda}}dk\] \[-\frac{1}{\epsilon}\sum_{i,j}\big{(}(x^{j}-m^{j}(\tilde{\mu}))e_{ i}+(x^{i}-m^{i}(\tilde{\mu}))e_{j}\big{)}(V^{i,j}(\mathcal{S}_{0}(\mu^{*}))-V^{i,j}( \mathcal{S}_{0}(\tilde{\mu}^{*})))\]
Thus, given the symmetry of \(V\) and the cancellation with the choices \(\mu=\mu^{*}\) and \(\tilde{\mu}=\tilde{\mu}^{*}\), we obtain
\[\frac{D_{\mu}\Psi(\mu^{*},x)}{2\epsilon}=\frac{2}{\epsilon}\Big{(}V( \mathcal{S}_{0}(\mu^{*}))-V(\mathcal{S}_{0}(\tilde{\mu}^{*}))\Big{)}(x-m(\mu^{ *})) \tag{7.15}\] \[\qquad+\frac{1}{\epsilon}\int\frac{Re(ik(F_{k}(\mathcal{S}_{0}( \mu^{*}))-f_{k}(x-m(\mu^{*})))(F_{k}(\mathcal{S}_{0}(\mu^{*}))-F_{k}(\mathcal{ S}_{0}(\tilde{\mu}^{*})))^{*})}{(1+|k|^{2})^{\lambda}}dk\] \[\frac{D_{\mu}\tilde{\Psi}(\tilde{\mu}^{*},x)}{2\epsilon}=-\frac{ 2}{\epsilon}\Big{(}V(\mathcal{S}_{0}(\mu^{*}))-V(\mathcal{S}_{0}(\tilde{\mu}^{ *}))\Big{)}(x-m(\tilde{\mu}^{*}))\] \[\qquad-\frac{1}{\epsilon}\int\frac{Re(ik(F_{k}(\mathcal{S}_{0}( \tilde{\mu}^{*}))-f_{k}(x-m(\tilde{\mu}^{*})))(F_{k}(\mathcal{S}_{0}(\mu^{*}))- F_{k}(\mathcal{S}_{0}(\tilde{\mu}^{*})))^{*})}{(1+|k|^{2})^{\lambda}}dk\]
and
\[I(x) =\frac{1}{\epsilon}\int\frac{Re(ik(f_{k}(x-m(\tilde{\mu}^{*}))-f_{ k}(x-m(\mu^{*})))(F_{k}(\mathcal{S}_{0}(\mu^{*}))-F_{k}(\mathcal{S}_{0}(\tilde{ \mu}^{*})))^{*})}{(1+|k|^{2})^{\lambda}}dk\] \[+\frac{2}{\epsilon}\Big{(}V(\mathcal{S}_{0}(\mu^{*}))-V(\mathcal{ S}_{0}(\tilde{\mu}^{*}))\Big{)}(m(\tilde{\mu}^{*}-\mu^{*})).\]
Using the \(|k|\)-Lipschitz continuity of \(x\mapsto f_{k}(x)\) and the Cauchy-Schwarz inequality, we obtain
\[|I(x)|\leq \frac{|m(\tilde{\mu}^{*}-\mu^{*})|}{\epsilon}\int|k|^{2}|F_{k}( \mathcal{S}_{0}(\mu^{*}))-F_{k}(\mathcal{S}_{0}(\tilde{\mu}^{*}))|\frac{dk}{(1+| k|^{2})^{\lambda}}+\frac{Cd_{F}^{2}(\theta^{*},\tilde{\theta}^{*})}{\epsilon}\]
\[L_{h}r \leq\omega_{h}\left(\frac{d_{F}^{2}(\theta^{*},\tilde{\theta}^{*})}{ \epsilon}+d_{F}(\theta^{*},\tilde{\theta}^{*})\right)\left(1+\sqrt{\frac{\sup|u|+ \kappa+1}{\iota}}\right)\] \[+C\left(1+\epsilon+\sqrt{\frac{\sup|u|+\kappa+1}{\iota}}\right)( \Delta+\tilde{\Delta}+4Ct_{\epsilon}^{2})\] \[\leq C\left(1+\epsilon+\sqrt{\frac{\sup|u|+\kappa+1}{\iota}} \right)(\omega_{h}\left(4t_{\epsilon}^{2}+2t_{\epsilon}\sqrt{\epsilon}\right)+ \Delta+\tilde{\Delta}+4Ct_{\epsilon}^{2})\]
for all \(\kappa\in(0,\kappa_{\epsilon}]\). Taking \(\epsilon>0\), \(\kappa\in(0,\kappa_{\epsilon}]\) small enough, we obtain a contradiction with \(r>0\).
## 8. Proofs in Section 4
Proof of Lemma 4.1.: Let us take
\[Y_{s}^{t,\alpha}=\int_{t}^{s}\tilde{\sigma}(\alpha_{u})\,dW_{u},\quad\tilde{X}_{s }^{t,\mu,\alpha}=X_{s}^{t,\mu,\alpha}-Y_{s}^{t,\alpha},\quad\tilde{m}_{s}^{t, \mu,\alpha}=\mathcal{L}(\tilde{X}_{s}^{t,\mu,\alpha}\,|\,\mathcal{F}_{s}^{W}).\]
Then it can be easily checked that
\[d\tilde{X}_{s}^{t,\mu,\alpha}=b(\tilde{X}_{s}^{t,\mu,\alpha}+Y_{s}^{t,\alpha}, \alpha_{s})\,ds+\sigma(\tilde{X}_{s}^{t,\mu,\alpha}+Y_{s}^{t,\alpha},\alpha_{s })\,dV_{s},\]
and also \(m_{s}^{t,\mu,\alpha}=(I_{d}+Y_{s}^{t,\alpha})_{\sharp}\tilde{m}_{s}\)\(a.s\).
Note that \(Y_{s}^{t,\alpha}\) is known given \(\mathcal{F}_{s}^{W}\); hence, by standard SDE estimates for \(\tilde{X}_{s}\),
\[W_{2}(\tilde{m}_{s}^{t,\mu,\alpha},\mu)^{2}\leq L(s-t)\quad a.s.\]
Making use of the triangle inequality for the Wasserstein metric and the fact that
\[W_{2}(m_{s}^{t,\mu,\alpha},\tilde{m}_{s}^{t,\mu,\alpha})^{2}\leq|Y_{s}^{t, \alpha}|^{2}\quad a.s.,\]
we get
\[W_{2}(m_{s}^{t,\mu,\alpha},\mu)^{2}\leq L(s-t+|Y_{s}^{t,\alpha}|^{2})\quad a.s.\]
Therefore \(s\mapsto m_{s}^{t,\mu,\alpha}(\omega)\) is continuous for almost every \(\omega\) and
\[\mathbb{E}[W_{2}(m_{s}^{t,\mu,\alpha},\mu)^{2}]\leq L(s-t).\]
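Here the expectation bound uses that \(\tilde{\sigma}\) is bounded: by Itô's isometry,
\[\mathbb{E}\big{[}|Y_{s}^{t,\alpha}|^{2}\big{]}=\mathbb{E}\left[\int_{t}^{s}Tr\big{(}\tilde{\sigma}\tilde{\sigma}^{\top}(\alpha_{u})\big{)}\,du\right]\leq C(s-t),\]
so taking expectations in the almost sure bound above and enlarging \(L\) gives the stated estimate.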
Finally, thanks to the Sobolev embedding, we know that any \(f\in\mathbb{H}_{\lambda}\) is Lipschitz, and hence \(\|\nu-\eta\|_{\lambda}\leq W_{1}(\nu,\eta)\leq W_{2}(\nu,\eta)\) for any \(\nu,\eta\in\mathcal{P}_{2}(\mathbb{R}^{d})\). It follows that \(\mathbb{E}[\|m_{s}^{t,\mu,\alpha}-\mu\|_{\lambda}^{2}]\leq L(s-t)\).
Proof of Proposition 4.1.: Note that \((t,\mu)\mapsto J(t,\mu,\alpha)\) being \(L\)-Lipschitz uniformly for all admissible controls \(\alpha\) implies the \(L\)-Lipschitz continuity of \((t,\mu)\mapsto v(t,\mu)\). Also as commented in [3], with the continuity of value functions the verification of dynamic programming principle is standard. Therefore, it suffices to show the Lipschitz property of \((t,\mu)\mapsto J(t,\mu,\alpha)\). In the rest of the proof, let us fix a control \(\alpha\), and suppress \(\alpha\) in the notation \(X_{s}^{t,x,\alpha}\), where \(X_{s}^{t,x,\alpha}\) denotes the solution starting from \(\mu=\delta_{x}\).
_Step I_. Denoting \(G(x):=\mathbb{E}[g(X_{u}^{t,x})]\), we prove that \(G\in\mathbb{H}_{\lambda}\). Due to our assumption, \(g\) is of exponential decay, and hence we have the estimate
\[\mathbb{E}[g(X_{u}^{t,x})] \leq K\mathbb{E}\left[e^{-c|X_{u}^{t,x}|}\right]=K\int_{0}^{ \infty}e^{-cr}\,\mathbb{P}(|X_{u}^{t,x}|\in dr)\] \[=\frac{K}{c}\int_{0}^{\infty}\int_{0}^{\infty}\mathds{1}_{\{z\geq r \}}e^{-cz}\,dz\mathbb{P}(|X_{u}^{t,x}|\in dr)\] \[=\frac{K}{c}\int_{0}^{\infty}e^{-cz}\mathbb{P}(|X_{u}^{t,x}|\leq z )\,dz\]
where we use Fubini's theorem in the last equality. For \(z\leq|x|/2\), we have that
\[\mathbb{P}(|X_{u}^{t,x}|\leq z)\leq\mathbb{P}(|X_{u}^{t,x}-x|\geq|x|-z)\leq \frac{\mathbb{E}[|X_{u}^{t,x}-x|^{\alpha}]}{(|x|/2)^{\alpha}}.\]
Since \(b,\sigma\) are uniformly bounded, we have the boundedness of \(\mathbb{E}[|X_{u}^{t,x}-x|^{\alpha}]\) for any \(\alpha\geq 2\), and hence for \(|x|\geq 1\)
\[\mathbb{E}[g(X_{u}^{t,x})]\leq\frac{K}{c}\int_{|x|/2}^{\infty}e^{-cz}\,dz+\frac{ K}{c}\int_{0}^{|x|/2}\frac{C}{(|x|/2)^{\alpha}}\,dz\leq C\left(e^{-c|x|}+\frac{1}{|x| ^{\alpha-1}}\right), \tag{8.1}\]
where \(C\) is a constant depending on \(\alpha\), \(\|b\|_{\infty}\), and \(\|\sigma\|_{\infty}\). Choosing \(\alpha\geq d+1\), it shows that \(x\mapsto\mathbb{E}[g(X_{u}^{t,x})]\) is \(L^{2}\)-integrable.
According to [37, Chapter 3], \(x\mapsto X_{u}^{t,x}\) is \(\lambda\)-times differentiable, and we denote its \(k\)-th order derivative by \(X_{u}^{t,x,(k)}\). Let us now estimate the first-order derivative of \(G\). We actually have that
\[|G^{(1)}(x)|=\Big{|}\mathbb{E}\left[g^{(1)}(X_{u}^{t,x})X_{u}^{t,x,(1)}\right] \Big{|}\leq\sqrt{\mathbb{E}\left[\left|g^{(1)}(X_{u}^{t,x})\right|^{2}\right] }\sqrt{\mathbb{E}\left[\left|X_{u}^{t,x,(1)}\right|^{2}\right]}.\]
According to our assumptions, \(x\mapsto g^{(1)}(x)\) is of exponential decay, and therefore \(x\mapsto\sqrt{\mathbb{E}\left[\left|g^{(1)}(X_{u}^{t,x})\right|^{2}\right]}\) is also \(L^{2}\)-integrable due to (8.1) for a suitable choice of \(\alpha\). It suffices to show that the \(L^{2}\)-norm of \(X_{u}^{t,x,(1)}\) is uniformly bounded. According to [37, Chapter 3], \(X_{u}^{t,x,(1)}\) satisfies the SDE
\[dX_{u}^{t,x,(1)}= b^{(1)}(X_{u}^{t,x},\alpha_{u})X_{u}^{t,x,(1)}\,du+\sigma^{(1)}(X _{u}^{t,x},\alpha_{u})X_{u}^{t,x,(1)}\,dV_{u}\] \[X_{t}^{t,x,(1)}= \mathbf{1},\]
where \(\mathbf{1}\) represents a \(d\)-dimensional vector with \(1\) at each entry. Since \(\|b^{(1)}\|_{\infty}+\|\sigma^{(1)}\|_{\infty}<+\infty\), the SDE above is well-posed and has a unique solution. By standard estimate and the BDG inequality, it can be easily seen that for any \(\beta\geq 2\)
\[\mathbb{E}[|X_{u}^{t,x,(1)}|^{\beta}]\leq C\left(1+\int_{t}^{u}\mathbb{E}[|X_ {u}^{t,x,(1)}|^{\beta}]\,du\right). \tag{8.2}\]
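In more detail, one standard route to (8.2) is to write the above SDE in integral form and apply the BDG and Jensen inequalities on \([t,u]\): for \(\beta\geq 2\),
\[\mathbb{E}\big{[}|X_{u}^{t,x,(1)}|^{\beta}\big{]}\leq C_{\beta}\left(1+\mathbb{E}\Big{[}\Big{(}\int_{t}^{u}\big{|}b^{(1)}(X_{r}^{t,x},\alpha_{r})X_{r}^{t,x,(1)}\big{|}\,dr\Big{)}^{\beta}\Big{]}+\mathbb{E}\Big{[}\Big{(}\int_{t}^{u}\big{|}\sigma^{(1)}(X_{r}^{t,x},\alpha_{r})X_{r}^{t,x,(1)}\big{|}^{2}\,dr\Big{)}^{\beta/2}\Big{]}\right)\leq C\left(1+\int_{t}^{u}\mathbb{E}\big{[}|X_{r}^{t,x,(1)}|^{\beta}\big{]}\,dr\right),\]
where \(C\) depends only on \(\beta\), \(T\), \(\|b^{(1)}\|_{\infty}\), and \(\|\sigma^{(1)}\|_{\infty}\).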
Invoking Gronwall's inequality, we prove that \(\mathbb{E}[|X_{u}^{t,x,(1)}|^{\beta}]\) is bounded uniformly in \(x\) for any fixed \(\beta\geq 2\).
Induction: by chain rule we get that
\[|G^{(2)}(x)|=\Big{|}\mathbb{E}\left[g^{(2)}(X_{u}^{t,x})|X_{u}^{t,x,(1)}|^{2} +g^{(1)}(X_{u}^{t,x})X_{u}^{t,x,(2)}\right]\Big{|}\,.\]
Thanks to (8.1) and (8.2) for the suitable choice of \(\alpha,\beta\), the function \(x\mapsto\mathbb{E}\left[g^{(2)}(X_{u}^{t,x})|X_{u}^{t,x,(1)}|^{2}\right]\) is \(L^{2}\)-integrable. By the same reasoning, it suffices to show that \(\mathbb{E}[|X_{u}^{t,x,(2)}|^{2}]\) is uniformly bounded. To see this point, from the SDE
\[dX_{u}^{t,x,(2)}= \left(b^{(1)}(X_{u}^{t,x},\alpha_{u})X_{u}^{t,x,(2)}+b^{(2)}(X_{u }^{t,x},\alpha_{u})|X_{u}^{t,x,(1)}|^{2}\right)\,du\] \[+\left(\sigma^{(1)}(X_{u}^{t,x},\alpha_{u})X_{u}^{t,x,(2)}+ \sigma^{(2)}(X_{u}^{t,x},\alpha_{u})|X_{u}^{t,x,(1)}|^{2}\right)\,dV_{u},\] \[X_{t}^{t,x,(2)}= 0,\]
we can get an integral inequality for \(\mathbb{E}[|X_{u}^{t,x,(2)}|^{\beta}]\) as in (8.2) where the constant \(C\) depends on the moments of \(X_{u}^{t,x,(1)}\) and the sup-norm of \(b^{(2)},\sigma^{(2)}\). By Gronwall's inequality
again, we get a uniform bound for \(\mathbb{E}[|X_{u}^{t,x,(2)}|^{\beta}]\) and hence \(x\mapsto G^{(2)}(x)\) is \(L^{2}\)-integrable. Repeating the argument, we prove that \(G\in\mathbb{H}_{\lambda}\).
_Step II._ Similarly, under our assumptions it can be checked that \(x\mapsto\mathbb{E}[f(X_{s}^{t,x,\alpha},\alpha_{s})]\in\mathbb{H}_{\lambda}\) for all \(s\in[t,T]\), and hence \(x\mapsto J(t,x,\alpha)\in\mathbb{H}_{\lambda}\), where
\[J(t,x,\alpha):=\mathbb{E}\left[\int_{t}^{T}f(X_{s}^{t,x,\alpha},\alpha_{s})\, ds+g(X_{T}^{t,x,\alpha})\right].\]
Therefore we obtain the Lipschitz continuity in measure
\[J(t,\mu,\alpha)-J(t,\nu,\alpha)=\int J(t,x,\alpha)\,(\mu-\nu)(dx)\leq L\|\mu- \nu\|_{\lambda}.\]
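The equality above relies on the fact that the coefficients \(b,\sigma,\tilde{\sigma}\) do not depend on the measure argument, so that, provided the admissible control does not depend on the initial position (our reading of the setting), the cost is linear in the initial law:
\[J(t,\mu,\alpha)=\int J(t,x,\alpha)\,\mu(dx),\]
obtained by conditioning on the initial point and applying Fubini's theorem.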
Regarding the regularity in time variable, due to Lemma 4.1, it can be seen that
\[|J(t,\mu,\alpha)-J(s,\mu,\alpha)| \leq\mathbb{E}\left[\big{|}J(s,m_{s}^{t,\mu,\alpha},\alpha)-J(s, \mu,\alpha)\big{|}\right]\] \[\leq L\mathbb{E}\left[\|m_{s}^{t,\mu,\alpha}-\mu\|_{\lambda} \right]\leq L\sqrt{s-t}.\]
Proof of Proposition 4.2.: The proof of this result is divided into three steps. For simplicity of notation, we suppress \((t,\mu,\alpha)\) in notation \(X^{t,\mu,\alpha}\).
_Step I:_ Let us define two auxiliary processes
\[Y_{t}=\int_{0}^{t}\tilde{\sigma}(\alpha_{s})\,dW_{s},\quad\tilde{X}_{t}=X_{t}- Y_{t}.\]
Then \(\tilde{X}_{t}\) satisfies the dynamics
\[d\tilde{X}_{t}=b(\tilde{X}_{t}+Y_{t},\alpha_{t})dt+\sigma(\tilde{X}_{t}+Y_{t},\alpha_{t})\,dV_{t}.\]
Denote the conditional law of \(\tilde{X}_{t}\) given \(F_{t}^{W}\) by \(\tilde{m}_{t}\). Then it can be seen that for almost every \(\omega\), we have that
\[d\tilde{m}_{t}(\omega)(h)=\tilde{m}_{t}(\omega)\left(Dh(\cdot)^{\top}b(\cdot+Y _{t}(\omega),\alpha_{t}(\omega))+\frac{1}{2}Tr(\mathcal{H}h(\cdot)\sigma\sigma ^{\top}(\cdot+Y_{t}(\omega),\alpha_{t}(\omega)))\right)dt,\]
and hence, for almost every \(\omega\), \(t\mapsto\tilde{m}_{t}(\omega)\) is the measure flow of some SDE. Also, we have \(m_{t}(\omega)=(I_{d}+Y_{t}(\omega))_{\sharp}\tilde{m}_{t}\).
_Step II:_ Define \(\tilde{m}_{s}^{y}(\omega)=(id+y)_{\sharp}\tilde{m}_{s}\). Then we have that for each \(y\in\mathbb{R}^{d}\)
\[\psi(t,\tilde{m}_{t}^{y}(\omega))= \psi(0,\tilde{m}_{0}^{y}(\omega))+\int_{0}^{t}\partial_{t}\psi(s, \tilde{m}_{s}^{y}(\omega))\,ds \tag{8.3}\] \[+\int_{0}^{t}\,ds\int D_{\mu}\psi(s,\tilde{m}_{s}^{y}(\omega))(x )\cdot b(x+Y_{t}(\omega)-y,\alpha_{t}(\omega))\,\tilde{m}_{s}^{y}(\omega)(dx)\] \[+\frac{1}{2}\int_{0}^{t}\,ds\int Tr(D_{x\mu}\psi(s,\tilde{m}_{s}^{ y}(\omega))(x)\cdot\sigma\sigma^{\top}(x+Y_{s}(\omega)-y,\alpha_{s}(\omega)))\, \tilde{m}_{s}^{y}(\omega)(dx).\]
_Step III:_ For any \(y\in\mathbb{R}^{d}\), we define a process \(\Psi_{t}(y):=\psi(t,\tilde{m}_{t}^{y})\). Note that \(\psi(t,m_{t})=\Psi_{t}(Y_{t})\), and for each \(\omega\), \(y\mapsto\Psi_{t}(y)\) is second-order differentiable, and hence
\[D\Psi_{t}(Y_{t})=\lim_{h\to 0}\frac{\psi(t,(id+h)_{\sharp}m_{t})-\psi(t,m_{t})} {h}=D_{\mu}\psi(t,m_{t})[m_{t}],\]
\[D^{2}\Psi_{t}(Y_{t})=\mathcal{H}\psi(t,m_{t}).\]
Now we invoke the Ito-Wentzell formula,
\[d\Psi_{t}(Y_{t})=d\psi(t,m_{t})+D\Psi_{t}(Y_{t})\cdot\tilde{\sigma}(\alpha_{t}) \,dW_{t}+\frac{1}{2}Tr(D^{2}\Psi_{t}(Y_{t})\cdot\tilde{\sigma}\tilde{\sigma}^{ \top}(\alpha_{t}))\,dt.\]
Using (8.3), we conclude the result.
Proof of Theorem 4.1.: _Step I: Subsolution property._ Suppose \(\psi\) is partial \(C^{2}\) regular such that \(v-\psi\) attains a local maximum at a point \((t,\mu)\in[0,T)\times\mathcal{P}_{2}(\mathbb{R}^{d})\) and \(v(t,\mu)=\psi(t,\mu)\). It suffices to show that for any fixed \(a\in A\),
\[-\partial_{t}\psi(t,\mu)\leq K(a,\mu,D_{\mu}\psi(t,\mu),D_{x\mu}\psi(t,\mu), \mathcal{H}\psi(t,\mu)). \tag{8.4}\]
Suppose \(\delta>0\) is a fixed constant such that \(v(s,\nu)\leq\psi(s,\nu)\) for all \((s,\nu)\) satisfying \(|s-t|+\rho_{F}(\nu,\mu)\leq\delta\). Take \(\alpha\equiv a\in\mathcal{A}\), \(\tau_{\delta}:=\inf\{s\geq t:\rho_{F}(m_{s}^{t,\mu,\alpha},\mu)\geq\delta\}\), and a sequence of stopping times \(\tau_{n}:=\min\{t+1/n,\tau_{\delta}\}\) for all \(n\geq 0\). Then for any \(n\geq 1/\delta\), thanks to Proposition 4.1, we get
\[\psi(t,\mu)\leq\mathbb{E}\left[\int_{t}^{\tau_{n}}f(X_{s}^{t,\mu,\alpha},a)\, ds+\psi(\tau_{n},m_{\tau_{n}}^{t,\mu,\alpha})\right].\]
For the simplicity of notation, write \(m_{s}\) for \(m_{s}^{t,\mu,\alpha}\). Note that \(\int_{t}^{s}D_{\mu}\psi(u,m_{u})([m_{u}])\,dW_{u}\) is a martingale for \(s\leq\tau_{n}\) due to Lemma 4.1 and the fact that \(D_{\mu}\psi(u,m_{u})(\cdot)\in B_{\mathfrak{q}}^{d}\). Applying Lemma 4.2 to the term \(\psi(\tau_{n},m_{\tau_{n}})\), we obtain
\[0\leq n\mathbb{E}\left[\int_{t}^{\tau_{n}}\partial_{t}\psi(s,m_{s})+K(a,m_{s},D_{\mu}\psi(s,m_{s}),D_{x\mu}\psi(s,m_{s}),\mathcal{H}\psi(s,m_{s}))\,ds\right].\]
Note that \(n(\tau_{n}-t)\to 1\) \(a.s.\) as \(n\to\infty\) and \(\rho(m_{s},\mu)\to 0\) \(a.s.\) as \(s\to t\) thanks to the pathwise continuity of \(m_{s}\). Then according to the continuity of \(\psi,K\), the mean value theorem, and the dominated convergence theorem, we conclude (8.4).
_Step II: Supersolution Property._ Take \(\psi\in C^{2}([0,T)\times\mathcal{P}_{2}(\mathbb{R}^{d});\mathbb{R})\) such that \(v-\psi\) attains a local minimum at a point \((t,\mu)\in[0,T)\times\mathcal{P}_{2}(\mathbb{R}^{d})\) and \(v(t,\mu)=\psi(t,\mu)\). Towards a contradiction, suppose that for some \(\epsilon>0\)
\[-\partial_{t}\psi(t,\mu)\leq\inf_{a\in A}K(a,\mu,D_{\mu}\psi(t,\mu),D_{x\mu} \psi(t,\mu),\mathcal{H}\psi(t,\mu))-2\epsilon.\]
Due to the continuity in \(\mu\) and the local minimality at \((t,\mu)\), there exists some \(\delta>0\) such that for all \((s,\nu)\) with \(|s-t|+\rho_{F}(\mu,\nu)<\delta\)
\[-\partial_{t}\psi(s,\nu)\leq\inf_{a\in A}K(a,\nu,D_{\mu}\psi(s,\nu),D_{x\mu} \psi(s,\nu),\mathcal{H}\psi(s,\nu))-\epsilon, \tag{8.5}\]
and
\[v(s,\nu)\geq\psi(s,\nu). \tag{8.6}\]
For any \(n\in\mathbb{N}\) with \(1/n<\delta\), thanks to Proposition 4.1, we choose a control \(\alpha^{n}\) and a stopping time
\[\tau_{n}:=\min\left\{t+1/n,\,\inf\{s\geq t:\,\rho(\mu,m_{s}^{t,\mu,\alpha^{n}} )\geq\delta-1/n\}\right\}\]
satisfying
\[v(t,\mu)\geq\mathbb{E}\left[\int_{t}^{\tau_{n}}f(X_{s}^{t,\mu,\alpha^{n}})\, ds+v(\tau^{n},m_{\tau_{n}}^{t,\mu,\alpha^{n}})\right]-\frac{1}{n^{2}}.\]
Using (8.6) and Ito's formula Lemma 4.2, we get that
\[0\geq \mathbb{E}\left[-\psi(t,\mu)+\int_{t}^{\tau_{n}}f(X_{s}^{t,\mu,\alpha^{n}})\,ds+\psi(\tau_{n},m_{\tau_{n}}^{n})\right]-\frac{1}{n^{2}}\] \[= \mathbb{E}\left[\int_{t}^{\tau_{n}}\partial_{t}\psi(s,m_{s}^{n})+K(\alpha_{s}^{n},m_{s}^{n},D_{\mu}\psi(s,m_{s}^{n}),D_{x\mu}\psi(s,m_{s}^{n}),\mathcal{H}\psi(s,m_{s}^{n}))\,ds\right]-\frac{1}{n^{2}},\]
where \(m_{s}^{n}\) denotes \(m_{s}^{t,\mu,\alpha^{n}}\). Now it follows from the inequality (8.5) that
\[0\geq\mathbb{E}\left[\int_{t}^{\tau_{n}}\epsilon\,ds\right]-\frac{1}{n^{2}},\]
which contradicts the fact that \(\lim_{n\to\infty}\mathbb{E}[n(\tau_{n}-t)]=1\).
_Step III: Uniqueness._ Given the Lipschitz continuity proven in Proposition 4.1, the uniqueness is a consequence of Theorem 3.2 if we can prove that the generator \(K\) satisfies the assumption of this theorem. To check the inequality (3.5), we estimate
\[|\inf_{a\in A}K(a,\mu,f,g,X^{22})-\inf_{a\in A}K(a,\mu,\tilde{f}, \tilde{g},\tilde{X}^{22})|\] \[\leq\sup_{a\in A}|K(a,\mu,f,g,X^{22})-K(a,\mu,\tilde{f},\tilde{g},\tilde{X}^{22})|\] \[\leq C\left(\int|f(x)-\tilde{f}(x)|+|g(x)-\tilde{g}(x)|\,\mu(dx) +|X^{22}-\tilde{X}^{22}|\right)\] \[\leq C\left((||f(x)-\tilde{f}(x)||_{\mathfrak{q}}+||g(x)-\tilde{ g}(x)||_{\mathfrak{q}})(1+Tr(V(\mu))+|m(\mu)|^{2})+|X^{22}-\tilde{X}^{22}|\right)\]
for a constant that only depends on the supremum of \(f,b,\sigma,\tilde{\sigma}\).
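In the last step, reading \(\|\cdot\|_{\mathfrak{q}}\) as a sup norm weighted by \(1+|x|^{2}\) and \(V(\mu)\) as the covariance matrix of \(\mu\) (our reading of the notation), one uses the moment identity
\[\int(1+|x|^{2})\,\mu(dx)=1+Tr(V(\mu))+|m(\mu)|^{2}.\]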
We now verify the assumption (3.7) for the Hamiltonian \(K\) by estimating
\[\inf_{a}K\left(a,\mu,\frac{m(\mu-\tilde{\mu})}{\epsilon}+\frac{D_ {\mu}\Psi(\mu)}{2\epsilon},\frac{D_{x\mu}\Psi(\mu)}{2\epsilon},X^{22}\right)\] \[\qquad-\inf_{a}K\left(a,\tilde{\mu},\frac{m(\mu-\tilde{\mu})}{ \epsilon}+\frac{D_{\mu}\Psi(\mu)}{2\epsilon},\frac{D_{x\mu}\Psi(\mu)}{2 \epsilon},-\tilde{X}^{22}\right)\] \[\leq\sup_{a}\left\{\Big{|}\int f(x,a)(\mu-\tilde{\mu})(dx) \Big{|}+\frac{1}{2}Tr(\tilde{\sigma}\tilde{\sigma}^{\top}(a)(X^{22}+\tilde{X}^ {22}))\right\}\] \[+\sup_{a}\left\{\Big{|}\int b^{\top}(x,a)\frac{m(\mu-\tilde{\mu} )}{\epsilon}(\mu-\tilde{\mu})(dx)\Big{|}+\Big{|}\int b(x,a)\frac{D_{\mu}\Psi( \mu)}{2\epsilon}(\mu-\tilde{\mu})(dx)\Big{|}\right\}\] \[+\frac{1}{2}\sup_{a}\left\{\Big{|}\int Tr\left(\sigma\sigma^{ \top}(x,a)\frac{D_{x\mu}\Psi(\mu)}{2\epsilon}\right)\,(\mu-\tilde{\mu})(dx) \Big{|}\right\}.\]
Denote
\[I_{1} :=\sup_{a}\left\{\Big{|}\int\left(f(x,a)+b^{\top}(x,a)\frac{m(\mu -\tilde{\mu})}{\epsilon}\right)(\mu-\tilde{\mu})(dx)\Big{|}+\frac{1}{2}Tr( \tilde{\sigma}\tilde{\sigma}^{\top}(a)(X^{22}-\tilde{X}^{22}))\right\}\] \[I_{2} :=\sup_{a}\left\{\Big{|}\int b^{\top}(x,a)\frac{D_{\mu}\Psi(\mu)}{ 2\epsilon}(\mu-\tilde{\mu})(dx)\Big{|}\right\}\] \[I_{3} :=\sup_{a}\left\{\Big{|}\int Tr\left(\sigma\sigma^{\top}(x,a) \frac{D_{x\mu}\Psi(\mu)}{2\epsilon}\right)\,(\mu-\tilde{\mu})(dx)\Big{|}\right\}\]
The right-hand side of (3.6) shows that \(X+\tilde{X}\leq 0\) in the order of symmetric matrices. Thus, \(X^{22}+\tilde{X}^{22}\leq 0\). By Lemma 2.1, we have
\[I_{1} \leq C\sup_{a\in A}\left\{||f(\cdot,a)||_{\lambda}\rho_{F}(\mu, \tilde{\mu})+\frac{\rho_{F}(\mu,\tilde{\mu})}{\epsilon}\sup_{j}||b^{j}(\cdot,a) ||_{\lambda}\rho_{F}(\mu,\tilde{\mu})\right\}\] \[\leq C\left(\rho_{F}(\mu,\tilde{\mu})+\frac{\rho_{F}^{2}(\mu, \tilde{\mu})}{\epsilon}\right).\]
To estimate \(I_{2}\), for given \(a\in A\), define the functions
\[d_{1}(x,a) :=2b^{\top}(x,a)\Big{(}V(\mathcal{S}_{0}(\mu))-V(\mathcal{S}_{0} (\tilde{\mu}))\Big{)}(x-m(\mu))\] \[d_{2}(x,a) :=b^{\top}(x,a)\left(\int\frac{Re(ikF_{k}(\mathcal{S}_{0}(\mu))( F_{k}(\mathcal{S}_{0}(\mu))-F_{k}(\mathcal{S}_{0}(\tilde{\mu})))^{*})}{(1+|k|^{2})^{ \lambda}}dk\right)\] \[d_{3}(x,a) :=-b^{\top}(x,a)\int\frac{Re(ikf_{k}(x-m(\mu))(F_{k}(\mathcal{S}_ {0}(\mu))-F_{k}(\mathcal{S}_{0}(\tilde{\mu})))^{*})}{(1+|k|^{2})^{\lambda}}dk\] \[d_{4}(x) :=-\int\frac{Re(ikf_{k}(x)(F_{k}(\mathcal{S}_{0}(\mu))-F_{k}( \mathcal{S}_{0}(\tilde{\mu})))^{*})}{(1+|k|^{2})^{\lambda}}dk\]
so that
\[\int b^{\top}(x,a)\frac{D_{\mu}\Psi(\mu)}{2\epsilon}(\mu-\tilde{\mu})(dx)= \frac{1}{\epsilon}\int(d_{1}(x,a)+d_{2}(x,a)+d_{3}(x,a))(\mu-\tilde{\mu})(dx)\]
We need to estimate \(\lambda\) norms of \((d_{1}(\cdot,a),d_{2}(\cdot,a),d_{3}(\cdot,a))\). To bound the first two terms we use the inequalities
\[|V(\mathcal{S}_{0}(\mu))-V(\mathcal{S}_{0}(\tilde{\mu}))| \leq\rho_{F}(\mu,\tilde{\mu})\] \[\int\frac{Re(ikF_{k}(\mathcal{S}_{0}(\mu))(F_{k}(\mathcal{S}_{0}( \mu))-F_{k}(\mathcal{S}_{0}(\tilde{\mu}))))}{(1+|k|^{2})^{\lambda}}dk \leq\rho_{F}(\mu,\tilde{\mu})\sqrt{\int\frac{|k|^{2}}{(1+|k|^{2} )^{\lambda}}dk}\]
to get that
\[\frac{1}{\epsilon}\Big{|}\int(d_{1}(x,a)+d_{2}(x,a))(\mu-\tilde{\mu})(dx)\Big{|}\leq\frac{\rho_{F}(\mu,\tilde{\mu})}{\epsilon}\sup_{a\in A}\{||d_{1}(\cdot,a)||_{\lambda}+||d_{2}(\cdot,a)||_{\lambda}\}\] \[\leq\frac{(1+|m(\mu)|)\rho_{F}^{2}(\mu,\tilde{\mu})}{\epsilon}\sup_{i,j,a}\{||x^{i}b^{j}(x,a)||_{\lambda}+||b^{j}(x,a)||_{\lambda}\}\left(1+\sqrt{\int\frac{|k|^{2}}{(1+|k|^{2})^{\lambda}}dk}\right)\] \[\leq\frac{C(1+|m(\mu)|)\rho_{F}^{2}(\mu,\tilde{\mu})}{\epsilon}.\]
Regarding the last term, by Lemma 2.1 we also have
\[\frac{1}{\epsilon}\Big{|}\int d_{3}(x,a)(\mu-\tilde{\mu})(dx)\Big{|}\leq\frac{ C}{\epsilon}||d_{3}(\cdot,a)||_{\lambda}\rho_{F}(\mu,\tilde{\mu})\]
and we need to estimate \(||d_{3}(\cdot,a)||_{\lambda}.\) Since \(\lambda>3\), the convexity of \(x\mapsto(1+|x|^{2})^{\lambda/2}\) leads to
\[2^{-\lambda}(1+|k|^{2})^{\lambda/2}\leq\left(1+\Big{|}\frac{k-l+l}{2}\Big{|}^{ 2}\right)^{\lambda/2}\leq\frac{1}{2}\left((1+|k-l|^{2})^{\lambda/2}+(1+|l|^{2} )^{\lambda/2}\right)\]
for all \(k,l\in\mathbb{R}^{d}.\) Using the convolution theorem we have
\[||d_{3}(\cdot,a)||_{\lambda}^{2}=\int\Big{|}\int(1+|k|^{2})^{\lambda /2}\mathcal{F}b(k-l,a)e^{-il^{\top}m(\mu)}\mathcal{F}d_{4}(l)\,dl\Big{|}^{2}dk\] \[\leq 2^{\lambda-1}\int\Big{|}\int\left((1+|k-l|^{2})^{\lambda/2}+ (1+|l|^{2})^{\lambda/2}\right)|\mathcal{F}b(k-l,a)||\mathcal{F}d_{4}(l)|\,dl \Big{|}^{2}dk\] \[\leq 2^{\lambda}\int\Big{|}\int(1+|k-l|^{2})^{\lambda/2}| \mathcal{F}b(k-l,a)||\mathcal{F}d_{4}(l)|dl\Big{|}^{2}dk\] \[+2^{\lambda}\int\Big{|}\int(1+|l|^{2})^{\lambda/2}|\mathcal{F}b( k-l,a)||\mathcal{F}d_{4}(l)|dl\Big{|}^{2}dk.\]
Thanks to Fourier inversion theorem, we have the explicit formula of \(\mathcal{F}d_{4}(l),\)
\[|\mathcal{F}d_{4}(l)|=\frac{|l||F_{l}(\mathcal{S}_{0}(\mu))-F_{l}(\mathcal{S }_{0}(\tilde{\mu}))|}{(1+|l|^{2})^{\lambda}}.\]
Thus we obtain that
\[||d_{3}(\cdot,a)||_{\lambda}^{2} \leq 2^{\lambda}\int\int\frac{(1+|k-l|^{2})^{\lambda}|\mathcal{F} b(k-l,a)|^{2}}{(1+|l|^{2})^{\lambda-1}}dldk\int\frac{|l|^{2}|F_{l}(\mathcal{S}_{0}( \mu))-F_{l}(\mathcal{S}_{0}(\tilde{\mu}))|^{2}}{(1+|l|^{2})^{2\lambda-\lambda+ 1}}dl\] \[+2^{\lambda}\int\Big{|}\int|\mathcal{F}b(k-l,a)|\frac{||l||F_{l}( \mathcal{S}_{0}(\mu))-F_{l}(\mathcal{S}_{0}(\tilde{\mu}))|}{(1+|l|^{2})^{ \lambda/2}}dl\Big{|}^{2}dk\] \[\leq 2^{\lambda}||b(\cdot,a)||_{\lambda}^{2}\rho_{F}^{2}(\mu, \tilde{\mu})\int\frac{1}{(1+|l|^{2})^{\lambda-1}}dl\] \[+2^{\lambda}\int\Big{|}\int\frac{|\mathcal{F}b(k-l,a)|}{(1+|l|^{ 2})^{\frac{d}{4}+\frac{1}{2}}}\frac{|F_{l}(\mathcal{S}_{0}(\mu))-F_{l}( \mathcal{S}_{0}(\tilde{\mu}))|}{(1+|l|^{2})^{\frac{\lambda}{10+d}}}\frac{1}{ (1+|l|^{2})^{\frac{\lambda}{2}-\frac{1}{2}-\frac{d}{4}-\frac{1}{2}-\frac{ \lambda}{10+d}}}dl\Big{|}^{2}dk.\]
Denote
\[C_{d}=\left(\int\frac{1}{(1+|l|^{2})^{\left(\frac{\lambda}{2}- \frac{1}{2}-\frac{d}{4}-\frac{1}{2}-\frac{\lambda}{10+d}\right)\frac{20+2d}{8+ d}}}dl\right)^{\frac{8+d}{20+2d}}. \tag{8.7}\]
Given the value of \(\lambda=d+7,\) the exponent is
\[\left(\frac{\lambda}{2}-\frac{1}{2}-\frac{d}{4}-\frac{1}{2}- \frac{\lambda}{10+d}\right)\frac{20+2d}{8+d}\] \[=d+7-\frac{d+4}{4}\frac{20+2d}{8+d}=d-\frac{d+4}{d+8}\frac{d}{2}+ 7-5\frac{d+4}{8+d}>\frac{d}{2}+2\]
and \(C_{d}<\infty\). We now apply the Hölder inequality to obtain that
\[||d_{3}(\cdot,a)||_{\lambda}^{2} \leq C||b(\cdot,a)||_{\lambda}^{2}\rho_{F}^{2}(\mu,\tilde{\mu})\] \[+C_{d}\int\int\frac{|\mathcal{F}b(k-l,a)|^{2}}{(1+|l|^{2})^{\frac {d}{2}+1}}dldk\left(\int\frac{|F_{l}(\mathcal{S}_{0}(\mu))-F_{l}(\mathcal{S}_{ 0}(\tilde{\mu}))|^{10+d}}{(1+|l|^{2})^{\lambda}}dl\right)^{\frac{1}{10+d}}.\]
Given the bound \(|F_{l}(\mathcal{S}_{0}(\mu))-F_{l}(\mathcal{S}_{0}(\tilde{\mu}))|<2\), we have that
\[||d_{3}(\cdot,a)||_{\lambda}^{2}\leq C\left(||b(\cdot,a)||_{\lambda}^{2}\rho_ {F}^{2}(\mu,\tilde{\mu})+C_{d}||b(\cdot,a)||_{2}^{2}\rho_{F}^{\frac{2}{10+d}} (\mu,\tilde{\mu})\right).\]
Given our assumption on \(b\), this implies
\[\sup_{a\in A}||d_{3}(\cdot,a)||_{\lambda}\leq C\rho_{F}^{\frac{1}{10+d}}\sqrt{1+ \rho_{F}^{\frac{18+2d}{10+d}}\left(\mu,\tilde{\mu}\right)}\]
and
\[I_{2}\leq C\rho_{F}^{\frac{1}{10+d}}\sqrt{1+\rho_{F}^{\frac{18+2d}{10+d}}(\mu, \tilde{\mu})}+\frac{C(1+|m(\mu)|)\rho_{F}^{2}(\mu,\tilde{\mu})}{\epsilon}.\]
To estimate \(I_{3}\) we proceed similarly to the estimation of \(I_{2}\). However, in this case, the term \(d_{4}\) is
\[-\int\frac{Re(ikk^{\top}f_{k}(x)(F_{k}(\mathcal{S}_{0}(\mu))-F_{k}(\mathcal{S }_{0}(\tilde{\mu})))^{*})}{(1+|k|^{2})^{\lambda}}dk.\]
Thus, in the estimate of \(||d_{3}(\cdot,a)||_{\lambda}^{2}\), we have
\[|\mathcal{F}d_{4}(l)|=\frac{|l|^{2}|F_{l}(\mathcal{S}_{0}(\mu))-F_{l}( \mathcal{S}_{0}(\tilde{\mu}))|}{(1+|l|^{2})^{\lambda}}\]
instead of
\[\frac{|l||F_{l}(\mathcal{S}_{0}(\mu))-F_{l}(\mathcal{S}_{0}(\tilde{\mu}))|}{ (1+|l|^{2})^{\lambda}}\]
and \(C_{d}\) in (8.7) has to be defined with an exponent
\[\left(\frac{\lambda}{2}-\frac{1}{2}-\frac{d}{4}-1-\frac{\lambda}{10+d}\right) \frac{20+2d}{8+d}.\]
This exponent is still strictly larger than \(\frac{d}{2}\), \(C_{d}<\infty\), and
\[I_{3}\leq C\rho_{F}^{\frac{1}{10+d}}\sqrt{1+\rho_{F}^{\frac{18+2d}{10+d}} \left(\mu,\tilde{\mu}\right)}+\frac{C(1+|m(\mu)|)\rho_{F}^{2}(\mu,\tilde{\mu}) }{\epsilon}\]
which is (3.7).
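As a quick sanity check of the two exponent computations above (a sketch that is not part of the proof), the following SymPy snippet verifies, for \(\lambda=d+7\), that the exponent in (8.7) exceeds \(d/2+2\) and that the modified exponent used for \(I_{3}\) exceeds \(d/2\).

```python
# Symbolic check (not part of the proof) of the exponent bounds for lambda = d + 7.
import sympy as sp

d = sp.symbols('d', positive=True)
lam = d + 7

# Exponent in the definition (8.7) of C_d.
e1 = (lam/2 - sp.Rational(1, 2) - d/4 - sp.Rational(1, 2) - lam/(10 + d)) * (20 + 2*d) / (8 + d)
# Modified exponent appearing in the estimate of I_3.
e2 = (lam/2 - sp.Rational(1, 2) - d/4 - 1 - lam/(10 + d)) * (20 + 2*d) / (8 + d)

# Both differences simplify to manifestly positive rational functions of d.
print(sp.simplify(e1 - (d/2 + 2)))   # positive: equals 2*(d + 10)/(d + 8)
print(sp.simplify(e2 - d/2))         # positive: equals (3*d + 26)/(d + 8)
```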
|
2309.16982 | Superconducting Properties of La$_2$(Cu$_{1-x}$Ni$_x$)$_5$As$_3$O$_2$: A
$\rm μ$SR Study | We report the results of muon spin rotation and relaxation ($\rm \mu$SR)
measurements on the recently discovered layered Cu-based superconducting
material La$_{2}($Cu$_{1-x}$Ni$_{x}$)$_{5}$As$_{3}$O$_{2}$ ($x =$ 0.40, 0.45).
Transverse-field $\rm \mu$SR experiments on both samples show that the
temperature dependence of superfluid density is best described by a two-band
model. The absolute values of zero-temperature magnetic penetration depth
$\lambda_{\rm ab}(0)$ were found to be 427(1.7) nm and 422(1.5) nm for $x =$
0.40 and 0.45, respectively. Both compounds are located between the
unconventional and the standard BCS superconductors in the Uemura plot. No
evidence of time-reversal symmetry (TRS) breaking in the superconducting state
is suggested by zero-field $\rm \mu$SR measurements. | Qiong Wu, Kaiwen Chen, Zihao Zhu, Cheng Tan, Yanxing Yang, Xin Li, Toni Shiroka, Xu Chen, Jiangang Guo, Xiaolong Chen, Lei Shu | 2023-09-29T04:58:45Z | http://arxiv.org/abs/2309.16982v1 | Superconducting Properties of La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\): A \(\mu\)SR Study
###### Abstract
We report the results of muon spin rotation and relaxation (\(\mu\)SR) measurements on the recently discovered layered Cu-based superconducting material La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x=0.40\), 0.45). Transverse-field \(\mu\)SR experiments on both samples show that the temperature dependence of superfluid density is best described by a two-band model. The absolute values of zero-temperature magnetic penetration depth \(\lambda_{\rm ab}(0)\) were found to be 427(1.7) nm and 422(1.5) nm for \(x=0.40\) and 0.45, respectively. Both compounds are located between the unconventional and the standard BCS superconductors in the Uemura plot. No evidence of time-reversal symmetry (TRS) breaking in the superconducting state is suggested by zero-field \(\mu\)SR measurements.
+
Footnote †: Corresponding Author: [email protected]
## I Introduction
The relation between magnetism and superconductivity is one of the most prominent issues in condensed matter physics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Since the discovery of cuprate high transition temperature (\(T_{\rm c}\)) superconductors, considerable efforts have been made to investigate the role of in-plane impurities in them. It is now well established that, in copper oxide superconductors, nonmagnetic Zn ions suppress \(T_{c}\) even more strongly than magnetic Ni ions [11; 12; 13; 14]. Such behavior is in sharp contrast to that of conventional BCS superconductors, in which magnetic impurities can act as pairing-breaking agents, rapidly suppressing superconductivity [15; 16]. Another interesting behavior of unconventional superconducting systems such as the heavy-fermion, high \(T_{\rm c}\) cuprate, and iron-pnictide superconductors is the dome shape of the chemical doping dependence of \(T_{\rm c}\)[17; 18; 19; 20].
Recently, the first Cu-As superconductor was discovered in the layered La\({}_{2}\)Cu\({}_{5}\)As\({}_{3}\)O\({}_{2}\) with \(T_{\rm c}=0.63\) K [21]. When Cu\({}^{2+}\) is replaced by Ni\({}^{2+}\), also in La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\), the \(T_{\rm c}\) exhibits a dome-like structure. Remarkably, while the superconductivity in cuprate- and iron-based superconductors is completely suppressed when the substitution ratio of Cu or Fe exceeds 20% [22; 23], superconductivity in La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) persists until the substitution ratio exceeds 60% [21]. In this case, the robustness of superconductivity reveals the unexpected effect of impurities on inducing and enhancing superconductivity. Hence, La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) provides a broader platform for studying the doping effect in the superconducting phase diagram.
Specific heat measurements have revealed that the optimally doped La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x=0.40\)) sample shows a sharp superconducting transition at \(T_{\rm c}=\) 2.05 K, with a dimensionless jump \(C_{\rm e}/\gamma_{\rm s}T_{\rm c}=1.42\)[21], consistent with the BCS weak-coupling limit (1.43). In the superconducting state, the temperature dependence of the specific heat coefficient is described by a fully-gapped model \(C_{\rm e}/T\propto e^{-\Delta/k_{\rm B}T}\) after subtracting the upturn of \(C_{\rm e}/T\) below \(T<0.5\) K, which is attributed to the Schottky effect. These results suggest that La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x=0.40\)) is a conventional BCS superconductor with a fully developed energy gap. However, the fit yields \(2\Delta/k_{\rm B}T_{\rm c}=2.58\)[21], much smaller than the BCS weak coupling limit.
\(\mu\)SR experiments have been widely utilized to probe superconductivity in type-II superconductors at the microscopic level [24], and they are free from the influence of the Schottky effect. Transverse-field (TF) \(\mu\)SR measures the absolute value of the magnetic penetration depth \(\lambda\), which is related to the density of superconducting carriers. The temperature dependence of \(\lambda\) is sensitive to the lowest-lying superconducting excitations and provides information on the symmetry of superconducting pairing [25; 26; 27; 28; 29; 30]. In addition, zero-field (ZF) \(\mu\)SR is a powerful method to detect small spontaneous internal magnetic fields due to the possible breaking of time-reversal symmetry (TRS) at the superconducting transition. These can be as small as 10 \(\mu\)T, corresponding to about \(10^{-2}\) of Bohr magneton \(\mu_{\rm B}\)[25; 31; 32; 33; 34].
Here, in order to study the doping effect on superconductivity, we perform \(\mu\)SR measurements on the polycrystalline samples of La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) for \(x=0.40\) (optimal doping) and \(x=0.45\) (overdoped). The temperature dependence of superfluid density determined from TF-\(\mu\)SR is best described by a two-band superconductivity model. \(d\)-wave superconductivity possibly exists in one of the bands, with the fraction of \(d\)-wave smaller in the overdoped sample than that in the optimally doped one. The superconducting energy gap is larger for optimal doping, suggesting that the coupling strength decreases with the increase of doping. Both compounds are located between the unconventional and the stan
dard BCS superconductors in the Uemura plot. Meanwhile, no evidence of TRS breaking is suggested by ZF-\(\mu\)SR measurements.
## II Experimental details
Solid-state reactions were used to produce polycrystalline samples of La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x\) = 0.40, 0.45) [21]. The X-ray diffraction (XRD) studies and the density functional theory (DFT) calculations of the band structure of La\({}_{2}\)Cu\({}_{5}\)As\({}_{3}\)O\({}_{2}\) indicate that this newly synthesized sample is a layered superconducting compound, with the superconducting atomic layers consisting of the \([\mathrm{Cu}_{5}\mathrm{As}_{3}]^{2-}\) cage-like structure [21].
\(\mu\)SR experiments were carried out using nearly 100% spin-polarized positive muons (\(\mu^{+}\)) on the M15 beam line at TRIUMF, Vancouver, Canada for \(x=0.40\), and the DOLLY spectrometer of the S\(\mu\)S muon source at Paul Scherrer Institute (PSI), Switzerland for \(x=0.45\), respectively. The samples were mounted on a silver sample holder at TRIUMF and a copper sample holder at PSI, respectively. Only a very limited number of muons stopped in the extremely thin copper sample holder. In TF-\(\mu\)SR measurements, where the external field is applied perpendicular to the initial muon spin polarization, muons are implanted one at a time into a sample which is cooled (from above \(T_{c}\)) in an external magnetic field. Muon spins precess around the local field at the implantation site, and the functional form of the muon spin polarization depends on the field distribution of the vortex state, including the magnetic penetration depth, the vortex core radius, and the structure of the flux-line lattice. For ZF-\(\mu\)SR measurements, the ambient magnetic field was actively compensated to better than 1 \(\mu\)T. \(\mu\)SR data were analyzed using the musrfit software package [35].
## III Results
### Transverse-field \(\mu\)SR experiments
The \(\mu\)SR asymmetry spectrum usually consists of a signal from muons that stop in the sample and a slowly relaxing background signal from muons that miss the sample and stop, for example, in the sample holder. Fig. 1 (a) (b) show the typical TF-\(\mu\)SR muon spin precession signals at an applied field of \(\mu_{0}H=\) 30 mT in the normal (red squares) and superconducting states (blue circles) for La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x\) = 0.40 and 0.45) after subtracting the background signal. The superconducting volume fractions are estimated to be 60\(\%\) and 70\(\%\) from the TF \(\mu\)SR asymmetry at long times for x = 0.40 and 0.45, respectively. Fig. 1 (c) (d) show the Fourier transformations (FFT) of the total TF-\(\mu\)SR asymmetry. It can be inferred from the figure that vortex states are formed in both samples at low temperatures [36]. Fig. 1 (e) (f) show the details of the FFT spectra. As shown in the figure, the magnetic fields in the superconducting state and the normal state are relatively close to each other. This is common in anisotropic powder superconducting samples [37; 38].
The TF-\(\mu\)SR time spectra after subtracting the background signal can be well fitted by the function
\[A(t)=A_{0}\exp(-\frac{1}{2}\sigma_{\mathrm{TF}}^{2}t^{2})\cos(\gamma_{\mu}B_{ s}t+\varphi_{s}), \tag{1}\]
where A\({}_{0}\) is the initial asymmetry of the muon spin in the sample. The Gaussian relaxation rate \(\sigma_{\rm TF}\) due to the nuclear dipolar fields in the normal state is enhanced in the superconducting state due to the field broadening generated by the emergence of the flux-line lattice (FLL). \(\gamma_{\mu}=8.516\times 10^{8}\) s\({}^{-1}\)T\({}^{-1}\) is the gyromagnetic ratio of the muon, and \(B_{\rm s}\) is the magnetic field at muon stopping sites.
Figure 1: (a)-(b) Representative TF-\(\mu\)SR asymmetry spectra \(A(t)\) for La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x\) = 0.40 (panel (a)) and 0.45 (panel (b))) in the normal (red squares) and superconducting (blue circles) states with an external magnetic field of \(\mu_{0}\mathrm{H}=\) 30 mT after subtracting the background signal. Solid curves: fits to the data using Eq. (1). (c)-(f) Fourier transformation of the total TF-\(\mu\)SR time spectra. (e)-(f) show the details of the FFT spectra.
Figure 2 shows the temperature dependence of \(\sigma_{\rm TF}\) obtained from the fits using Eq. (1) for La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))As\({}_{3}\)O\({}_{2}\) (\(x=\) 0.40 and 0.45). There are noticeable upturns in \(\sigma_{\rm TF}\) that develop below \(T_{c}=2.2\) K and 1.8 K, respectively. The internal field distribution in the vortex state is the convolution of the FLL field distribution and the nuclear dipolar field distribution of the host material:
\[\sigma_{\rm TF}=\begin{cases}\sqrt{\sigma_{\rm FLL}^{2}+\sigma_{\rm dip}^{2}}& (T\leqslant T_{c})\\ \sigma_{\rm dip}&(T>T_{c})\end{cases}. \tag{2}\]
Above \(T_{\rm c}\), \(\sigma_{\rm TF}=\sigma_{\rm dip}\) generated from the nuclear dipolar fields is roughly independent of temperature. Therefore, it was fixed to the average value (0.1179(11) \(\mu\)s\({}^{-1}\) for \(x=0.45\) and 0.127(2) \(\mu\)s\({}^{-1}\) for \(x=0.40\)). The value of \(\sigma_{\rm dip}\) is smaller for larger \(x\), consistent with the fact that the nuclear magnetic moment of nickel is smaller than that of copper [39].
In the inset of Fig. 2, the normalized FLL relaxation rate \(\sigma_{\rm FLL}(T)/\sigma_{\rm FLL}(0)\) is plotted vs. the reduced temperature \(T/T_{\rm c}\) for \(x=\) 0.40 and 0.45. The data can be fitted with the phenomenological two-fluid model [40; 41]
\[\sigma_{\rm FLL}(T)=\sigma_{\rm FLL}(0)[1-(T/T_{\rm c})^{N}]\quad(T\leqslant T _{\rm c}). \tag{3}\]
The fitting parameters are listed in Table 1. There are several common predictions for the value of \(N\). For traditional BCS superconductors, \(N\sim 4\). However, both values of \(N\) for \(x=\) 0.40 and 0.45 are much lower than 4. The dirty-limit \(d\)-wave model predicts a value of \(N=2\)[42; 43]. However, in the La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) system, the upper critical field \(H_{c2}(0)\) is estimated to be 3 T [21], giving an estimated value of the Ginzburg-Landau coherence length \(\xi(0)=[\Phi_{0}/2\pi H_{c2}(0)]^{1/2}=10.47\) nm, where \(\Phi_{0}=2.07\times 10^{-15}\) T m\({}^{2}\) represents the magnetic flux quantum. Given the metallic behavior, the mean free path can be estimated by \(l_{e}=\hbar k_{F}/\rho_{0}ne^{2}\)[44], with residual resistivity \(\rho_{0}=\)0.34 m\(\Omega\)cm [21], yielding \(l_{e}\) = 15 nm for \(x=0.40\). Thus, La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) may be a relatively clean superconductor with \(\xi/l_{e}\approx 1\). The clean-limit \(d\)-wave model predicts \(N=1\)[45]. As a result, our system may include both \(s\) and \(d\) waves.
For powder superconductor samples with \(0.13/\kappa^{2}\ll H/H_{c2}\ll 1\) (\(\kappa=\lambda/\xi\) is the Ginzburg-Laudau parameter, \(H\) is the applied field), the Gaussian depolarization rate \(\sigma_{\rm FLL}\) is directly related to the magnetic penetration depth \(\lambda_{\rm eff}\) by [46; 47]
\[\frac{\sigma_{\rm FLL}}{\gamma_{\mu}}=\frac{0.172(1-b)[1+1.21(1-\sqrt{b})^{3}] \Phi_{0}}{2\pi\lambda_{\rm eff}^{2}}, \tag{4}\]
where \(b\) is the reduced applied field \(b=H/H_{c2}=0.01\ll 1\). Therefore, the absolute values of effective magnetic penetration depth \(\lambda_{\rm eff}(0)\) can be obtained and listed in Table 1. In addition, for layered superconductors, the in-plane magnetic penetration depth \(\lambda_{\rm ab}(0)\) is also estimated by the relation \(\lambda_{\rm eff}(0)=3^{1/4}\lambda_{ab}(0)\)[27; 48; 49].
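As a concrete illustration of Eqs. (2) and (4), the short Python sketch below converts a measured Gaussian rate into \(\lambda_{\rm eff}\) and \(\lambda_{\rm ab}\). The constants follow the text, while the example \(\sigma_{\rm TF}\) value is purely illustrative and not a fit result.

```python
# Minimal sketch of Eqs. (2) and (4): extract the FLL relaxation rate from the
# measured Gaussian rate and convert it to a magnetic penetration depth.
import numpy as np

GAMMA_MU = 8.516e8          # muon gyromagnetic ratio, s^-1 T^-1
PHI_0 = 2.07e-15            # magnetic flux quantum, T m^2

def sigma_fll(sigma_tf, sigma_dip):
    """Eq. (2): subtract the nuclear dipolar contribution in quadrature."""
    return np.sqrt(sigma_tf**2 - sigma_dip**2)

def lambda_eff(sigma_fll_value, b):
    """Eq. (4): effective penetration depth (m) from sigma_FLL (s^-1) at reduced field b."""
    prefactor = 0.172 * (1 - b) * (1 + 1.21 * (1 - np.sqrt(b))**3)
    return np.sqrt(prefactor * PHI_0 * GAMMA_MU / (2 * np.pi * sigma_fll_value))

# Example with illustrative numbers (rates converted from mus^-1 to s^-1).
s_fll = sigma_fll(sigma_tf=0.20e6, sigma_dip=0.1179e6)
lam_eff = lambda_eff(s_fll, b=0.01)
lam_ab = lam_eff / 3**0.25          # lambda_eff(0) = 3^{1/4} lambda_ab(0)
print(f"lambda_eff = {lam_eff*1e9:.0f} nm, lambda_ab = {lam_ab*1e9:.0f} nm")
```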
The effective magnetic penetration depth \(\lambda_{\rm eff}\) for isotropic superconductors is related to the density \(n_{\rm s}\) of superconducting carriers and \(m^{*}\) by the general London equation [40]
\[\lambda_{\rm eff}^{2}=\frac{m^{*}}{\mu_{0}e^{2}n_{\rm s}}, \tag{5}\]
where \(\mu_{0}\) is the magnetic constant, \(m^{*}\) is the effective electron mass, and \(e\) is the elementary charge. Therefore, the FLL relaxation rate \(\sigma_{\rm FLL}\) is directly related to the superfluid density through \(\sigma_{\rm FLL}\propto\lambda_{\rm eff}^{-2}\propto n_{\rm s}\). Then we can use microscopic models to investigate the gap symmetry of La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) in more detail, as shown in Fig. 3.
The single-gap model is defined within the local London approximation [50]:
\[\frac{\sigma_{\rm FLL}(T)}{\sigma_{\rm FLL}(0)}=1+\frac{1}{\pi}\int_{0}^{2\pi} \int_{\Delta(T,\varphi)}^{\infty}\frac{\partial f(E)}{\partial E}\frac{E\;{\rm d }E\;{\rm d}\varphi}{\sqrt{E^{2}-\Delta^{2}(T,\varphi)}}, \tag{6}\]
Figure 2: Temperature dependencies of TF-\(\mu\)SR Gaussian relaxation rate \(\sigma_{\rm TF}\) in La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x=0.40\) and 0.45) for \(\mu_{0}H=30\) mT. The arrows indicate \(T_{c}\) from resistivity measurements [21]. Inset: the normalized superfluid density \(\sigma_{\rm FLL}(T)/\sigma_{\rm FLL}(0)\) from Eq. (2) vs. the reduced temperature \(T/T_{c}\). Solid curves: phenomenological two-fluid model fits (see text).
\[\Delta(T,\varphi)=\Delta_{0}\delta\left(T/T_{\rm c}\right)g(\varphi), \tag{7}\]
\[\delta\left(T/T_{\rm c}\right)=\tanh\left\{1.82\left[1.018\left(T_{\rm c}/T-1 \right)\right]^{0.51}\right\}, \tag{8}\]
where \(\sigma_{\rm FLL}(0)\) is the FLL relaxation rate at zero temperature, \(f(E)\) is the Fermi-Dirac distribution function, \(\varphi\) is the angle along the Fermi surface, and \(\Delta_{0}\) is the maximum superconducting gap value at \(T=0\). \(g\left(\varphi\right)\) in Eq. (7) describes the angular dependence of the superconducting gap. Here \(g\left(\varphi\right)=1\) and \(|\cos(2\varphi)|\) refers to the \(s\)-wave and \(d\)-wave model, respectively.
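A minimal numerical sketch of Eqs. (6)-(8) is given below. Temperatures and gaps are expressed in kelvin (i.e. \(\Delta/k_{\rm B}\)), and the example gap ratio is the fitted \(s\)-wave value for \(x=0.40\) from Table 2; the two-gap model introduced below as Eq. (9) is simply a weighted sum of two such terms.

```python
# Numerical sketch of the single-gap superfluid-density model, Eqs. (6)-(8),
# with g(phi) = 1 (s-wave) or |cos 2 phi| (d-wave). Purely illustrative.
import numpy as np
from scipy import integrate

def gap(T, Tc, delta0, phi, wave="s"):
    """Eqs. (7)-(8): angle- and temperature-dependent gap Delta(T, phi), in K."""
    g = 1.0 if wave == "s" else abs(np.cos(2.0 * phi))
    return delta0 * np.tanh(1.82 * (1.018 * (Tc / T - 1.0)) ** 0.51) * g

def superfluid_density(T, Tc, delta0, wave="s"):
    """Eq. (6): normalized sigma_FLL(T)/sigma_FLL(0)."""
    if T >= Tc:
        return 0.0

    def inner(phi):
        d = gap(T, Tc, delta0, phi, wave)
        # substitute E = sqrt(d^2 + u^2), so that E dE / sqrt(E^2 - d^2) = du
        def minus_df_dE(u):
            E = np.sqrt(d * d + u * u)
            return 1.0 / (4.0 * T * np.cosh(E / (2.0 * T)) ** 2)
        val, _ = integrate.quad(minus_df_dE, 0.0, 50.0 * T)
        return val

    angular, _ = integrate.quad(inner, 0.0, 2.0 * np.pi, limit=200)
    return 1.0 - angular / np.pi

Tc, delta0 = 2.05, 0.5 * 3.69 * 2.05      # 2 Delta / k_B Tc = 3.69 for x = 0.40
for T in (0.1, 1.0, 1.8):
    print(T, superfluid_density(T, Tc, delta0, wave="s"))
```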
The fitting parameters are listed in Table 2. It is clear from Fig. 3 that the \(d\)-wave model (the dashed yellow lines) does not fit the data, also evidenced by the largest \(\chi_{\rm red}^{2}\) in Table 2. The solid red curves in Fig. 3 representing the \(s\)-wave model seem to fit the data well, giving reduced chi-squares \(\chi_{\rm red}^{2}\) of 1.74 and 2.19 for \(x\) = 0.40 and 0.45, respectively. Thus our results preliminarily suggest \(s\)-wave pairing for both the overdoped and optimally doped samples of La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x\) = 0.45, 0.40). \(\Delta(0)\) is larger for the optimal doping sample, suggesting the coupling strength decreases with the increase of doping concentration.
However, we notice that the superfluid density of \(x\) = 0.40 has a minor upturn at low temperatures. This may be due to nodal or multiband superconductivity. As a result, in addition to single-gap functions, we also employ the phenomenological two-gap \(\alpha\) model with a weighting factor \(f_{\Delta_{1}}\)[51, 52, 53],
\[\frac{\sigma_{\rm FLL}(T)}{\sigma_{\rm FLL}(0)}=f_{\Delta_{1}}\,\frac{\sigma_ {\rm FLL,\Delta_{1}}(T)}{\sigma_{\rm FLL,\Delta_{1}}(0)}+(1-f_{\Delta_{1}}) \frac{\sigma_{\rm FLL,\Delta_{2}}(T)}{\sigma_{\rm FLL,\Delta_{2}}(0)}, \tag{9}\]
where \(\sigma_{\rm FLL,\Delta_{i}}(T)/\sigma_{\rm FLL,\Delta_{i}}(0)\) \((i=1,\ 2)\) is the superfluid density contribution of gap \(i\).
While the smallest \(\chi_{\rm red}^{2}\) in both compounds suggests that the \(s+s\) model may be the best model, the resulting values of 2\(\Delta_{2}/k_{B}T_{c}\) are unreasonably small. The \(s+d\) model is also better than the \(s\)-wave model according to the \(\chi_{\rm red}^{2}\) values. Moreover, the gap-to-\(T_{c}\) ratios 2\(\Delta_{1,2}/k_{B}T_{c}\) determined from the \(s+d\) model are also close to the BCS theoretical predictions (2\(\Delta_{s}/k_{B}T_{c}\) = 3.43(3) for \(s\)-wave, and 2\(\Delta_{d}/k_{B}T_{c}\) = 4.15(3) for \(d\)-wave, respectively) for \(x\) = 0.45. Both the \(s+s\) and \(s+d\) models can account for the low-temperature upturn of the superfluid density in \(x\) = 0.40. However, we also notice that the relaxation rate of \(x\) = 0.45 does not show an obvious upturn down to the current lowest temperature of 0.27 K. Moreover, the obtained \(f_{\Delta_{d}}\) values for both compounds are extremely small. More studies are needed to determine whether \(d\)-wave superconductivity exists in one of the bands.
\begin{table}
\begin{tabular}{c c c c c c c} Ni doping & Model & \(f_{\Delta_{1}}\) & 2\(\Delta_{1}/k_{B}T_{c}\) & 2\(\Delta_{2}/k_{B}T_{c}\) & \(\lambda_{ab}(0)\) (nm) & \(\chi_{\rm red}^{2}\) \\ \hline & \(s\) & 1 & 3.69(17) & & 427.4(1.8) & 1.74 \\ & \(d\) & 1 & 6.68(37) & & 411(3) & 4.46 \\
0.40 & \(s+s\) & 0.95(3) & 3.86(17) & 0.20(13) & 419(5) & 1.31 \\ & \(s+d\) & 0.70(12) & 4.03(21) & 5.62(7) & 422(3) & 1.50 \\ \hline & \(s\) & 1 & 3.34(7) & & 421.5(1.7) & 2.19 \\ & \(d\) & 1 & 5.09(26) & & 392(5) & 12.9 \\
0.45 & \(s+s\) & 0.988(154) & 3.42(2) & 0.262(2) & 420(31) & 2.06 \\ & \(s+d\) & 0.976(114) & 3.43(3) & 4.15(3) & 422(5) & 2.09 \\ \end{tabular}
\end{table}
Table 2: Parameters from the fits to the single-gap and two-gap models from TF-\(\mu\)SR data. The weighting factor of phenomenological two-gap \(\alpha\) model \(f_{\Delta_{1}}\), zero-temperature superconducting gap \(\Delta(0)\), zero-temperature magnetic penetration depth \(\lambda_{ab}(0)\), and reduced \(\chi_{\rm red}^{2}\) from fits of Eq. (6) and Eq. (9).
### Uemura Plot
An Uemura plot [54; 55; 56] is shown in Fig. 4, including elemental superconductors (such as Nb, Al, Sn, and Zn), cuprates, alkali-doped C\({}_{60}\) (K\({}_{3}\)C\({}_{60}\) and Rb\({}_{3}\)C\({}_{60}\)), heavy-fermion superconductors, and La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\). In this classification, unconventional superconductors usually lie in the orange area for \(1/100\leqslant T_{\rm c}/T_{\rm F}\leqslant 1/10\), conventional BCS superconductors fall in the blue region for \(T_{\rm c}/T_{\rm F}\leqslant 1/1000\), where \(T_{\rm F}\) is the Fermi temperature [40; 55].
For the quasi-2D systems, \(T_{\rm F}\) can be estimated by the following relation [40; 58],
\[k_{\rm B}T_{\rm F}=\frac{\pi\hbar^{2}n_{\rm 2D}^{s}}{m^{*}}, \tag{10}\]
where \(n_{\rm 2D}^{s}\) is the two dimensional superfluid density within the superconducting planes derived from the in-plane superfluid density via \(n_{\rm 2D}^{s}=n_{\rm ab}^{s}d\), where \(d\) represents the interplanar distance. According to the general London equation Eq. (5), we can get the in-plane superfluid density through \(n_{\rm ab}^{s}=\frac{m^{*}}{\mu_{0}e^{2}\lambda_{\rm ab}^{s}}\).
Correspondingly,
\[T_{\rm F}=\frac{\pi\hbar^{2}\lambda_{\rm ab}^{-2}d}{k_{\rm B}\mu_{0}e^{2}}, \tag{11}\]
therefore, \(T_{\rm F}\) of La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x\) = 0.45 and 0.40) are estimated, and the results are listed in Table 3. As shown in Fig. 4, La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x\) = 0.40 and 0.45) fall in the crossover between the unconventional superconducting area and the conventional BCS region. The fact that \(T_{\rm c}/T_{\rm F}\) is larger in \(x\) = 0.40 than 0.45 may suggest a stronger pairing interaction in \(x=0.40\).
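The following sketch evaluates Eq. (11); note that the interplanar distance \(d\) is not quoted in the text above, so the value used here is only a placeholder to be replaced by the crystallographic spacing.

```python
# Sketch of the Fermi-temperature estimate of Eq. (11) for a quasi-2D
# superconductor. The interplanar distance d_inter is NOT given in this text;
# the value below is a placeholder, so the printed numbers are illustrative.
import numpy as np

HBAR = 1.0546e-34      # J s
K_B = 1.3807e-23       # J/K
MU_0 = 4e-7 * np.pi    # vacuum permeability, T m/A
E_CHARGE = 1.6022e-19  # C

def fermi_temperature(lambda_ab_m, d_inter_m):
    """Eq. (11): T_F = pi hbar^2 lambda_ab^-2 d / (k_B mu_0 e^2), in Kelvin."""
    return np.pi * HBAR**2 * d_inter_m / (K_B * MU_0 * E_CHARGE**2 * lambda_ab_m**2)

for x, lam_nm in ((0.40, 427.0), (0.45, 422.0)):
    tf = fermi_temperature(lam_nm * 1e-9, d_inter_m=0.9e-9)   # placeholder spacing
    print(f"x = {x}: T_F ~ {tf:.0f} K (for the assumed interlayer spacing)")
```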
### ZF-\(\mu\)Sr
To further investigate the superconductivity in La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\), ZF-\(\mu\)SR experiments were performed. Fig. 5 (a) shows the time evolution of the decay positron count asymmetry, which is proportional to the muon spin polarization, at temperatures above and below \(T_{c}\) in \(x\) = 0.45. There is no noticeable difference between the superconducting and the normal state, suggesting the absence of a spontaneous magnetic field below \(T_{\rm c}\). Therefore, TRS is conserved below the superconducting transition. Over the entire temperature range, the ZF-\(\mu\)SR spectra can be well described by the following function,
\[\frac{A\left(t\right)}{A(0)}=(1-f_{\rm bg})G_{\rm KT}(t)+f_{\rm bg}G_{\rm bg}(t), \tag{12}\]
Figure 4: The Uemura classification scheme [54; 55; 57]. The superconducting transition temperature \(T_{\rm c}\) versus the effective Fermi temperature, \(T_{\rm F}\), where the position of La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x\) = 0.45, 0.40) is shown by the red stars. The unconventional superconductors fall within a band indicated by the red and orange lines. The BCS superconductors are in the lower right region with blue marks.
Figure 5: (a) ZF-\(\mu\)SR time spectra at representative temperatures for polycrystalline samples of La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x\) = 0.45). Curves: Fits to the data by Eq. (12). The background signal from muons that stop in the copper sample holder has not been subtracted. (b) Temperature dependence of the Gaussian KT relaxation rate \(\sigma_{\rm ZF}\) for La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x\) = 0.45). The red dash line represents \(T_{c}\) determined from the transport measurements. The gray dashed line shows an average of \(\sigma_{\rm ZF}\) = 0.1336 \(\mu\)s\({}^{-1}\).
where the two relaxing terms are the signals of the sample and the background. \(A(0)\) is the initial total asymmetry at time \(t=0\), and \(f_{\rm{bg}}\) is the proportion of the background signal, which has the same value as the one in the TF-\(\mu\)SR experiments. \(G_{\rm{KT}}(t)\) is the static Gaussian Kubo-Toyabe (KT) function [31]
\[G_{\rm{KT}}(\sigma_{\rm{ZF}},t)=\frac{1}{3}+\frac{2}{3}\left(1-\sigma_{\rm{ZF} }^{2}t^{2}\right)\exp\left(-\frac{1}{2}\sigma_{\rm{ZF}}^{2}t^{2}\right), \tag{13}\]
where \(\sigma_{\rm{ZF}}\) is the Gaussian KT relaxation rate which corresponds to the relaxation due to static, randomly oriented local fields associated with the nuclear moments at the muon site. Fig. 5 (b) shows that there is no significant change in relaxation rate \(\sigma_{\rm{ZF}}\) down to base temperature \(T=0.27\) K. The average value of \(\sigma_{\rm{ZF}}\) is 0.1336 \(\mu s^{-1}\), consistent with the typical nuclear dipolar moments of Ni and Cu [59; 60; 61].
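For completeness, a minimal sketch of the ZF-\(\mu\)SR model of Eqs. (12)-(13) is given below; the background term and the numerical values in the example are illustrative assumptions rather than fit results.

```python
# Minimal sketch of the ZF-muSR model of Eqs. (12)-(13): a static Gaussian
# Kubo-Toyabe term for the sample plus a background term G_bg(t).
import numpy as np

def gaussian_kubo_toyabe(t, sigma_zf):
    """Eq. (13): static Gaussian Kubo-Toyabe function G_KT(t)."""
    x = (sigma_zf * t) ** 2
    return 1.0 / 3.0 + (2.0 / 3.0) * (1.0 - x) * np.exp(-0.5 * x)

def zf_asymmetry(t, a0, f_bg, sigma_zf, g_bg=lambda t: np.ones_like(t)):
    """Eq. (12): A(t) = A(0) [(1 - f_bg) G_KT(t) + f_bg G_bg(t)]."""
    return a0 * ((1.0 - f_bg) * gaussian_kubo_toyabe(t, sigma_zf) + f_bg * g_bg(t))

t = np.linspace(0.0, 10.0, 6)                                # microseconds
print(zf_asymmetry(t, a0=0.25, f_bg=0.3, sigma_zf=0.1336))   # sigma_ZF in mus^-1
```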
## IV Discussion
Low-temperature investigations are crucial for determining the superconducting pairing mechanism. Our TF-\(\mu\)SR measurements were performed down to 20 mK, and the temperature dependence of superfluid density suggests that the two-band model best fits the data for both measured samples: one band is dominated by the \(s\) wave, and the other contributes such a small proportion that we cannot confirm whether it is \(s\) wave or \(d\) wave under the current data conditions. The two-band superconductivity scenario is also supported by the density functional theory (DFT) calculations showing that two bands crossing the Fermi energy contribute to the Fermi surface in La\({}_{2}\)Cu\({}_{5}\)As\({}_{3}\)O\({}_{2}\)[21]. Furthermore, the temperature dependence of the upper critical field \(H_{c2}(T)\) of La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) exhibits an upward curvature [21], significantly different from the Werthamer-Helfand-Hohenberg relation [62], which may also suggest multi-band superconductivity [63].
BCS superconductivity and Bose-Einstein condensation (BEC) are two asymptotic limits of a fermionic superfluid. Systems with a small \(T_{\rm{c}}/T_{\rm{F}}<\) 0.001 are usually considered to be BCS-like, while large \(T_{\rm{c}}/T_{\rm{F}}\) values are expected only in the BEC-like picture and are considered to be a hallmark feature of unconventional superconductivity [40; 54; 64]. As shown in Fig. 4, La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) falls in the crossover area between the BCS and exotic superconductor regions, like many other superconductors such as the TRS-breaking superconductors La\({}_{7}\)Ir\({}_{3}\) and LaNiC\({}_{2}\)[65; 66; 57] and the multi-band iron-based superconductor BaFe\({}_{2}\)(As\({}_{1-x}\)P\({}_{x}\))\({}_{2}\)[66]. In the Uemura plot classification scheme, these superconductors may not be traditional electron-phonon coupled BCS superconductors but more like exotic unconventional superconductors.
More experimental evidence is required to investigate whether \(d\)-wave superconductivity exists in one of the bands. Table 2 shows that the proportion of \(d\)-wave decreases as Ni concentration increases if it does exist. Such a behavior is rare but was observed in layered cuprates La\({}_{2-x}\)Ce\({}_{x}\)CuO\({}_{4-y}\) and Pr\({}_{2-x}\)Ce\({}_{x}\)CuO\({}_{4-y}\), where a transition from \(d\)- to \(s\)-wave pairing occurs near the optimal doping [67]. Also in LaFeAs\({}_{1-x}\)P\({}_{x}\)O [68], the superconducting order parameter evolves from nodal to nodeless as the doping concentration exceeds 50%. Moreover, a crossover from a nodal to nodeless superconducting energy gap was also suggested in skutterudite PrPt\({}_{4}\)Ge\({}_{12}\) through Ce substitution [69; 70], although the possibility of a transition from multi-band to single-band superconductivity can not be excluded. In addition, a molecular pairing scenario [45] was proposed to explain the transition from nodal to nodeless superconductivity in Yb-substituted CeCoIn\({}_{5}\)[71]. The Yb doping increases the chemical potential and drives a Lifshitz transition of the nodal Fermi surface, forming a fully gapped molecular superfluid of composite pairs. For La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\), the Fermi pocket around the \(\Gamma\) point is relatively small according to the DFT calculations [21], and detailed electronic structure study is required to investigate whether the molecular pairing scenario can be applied.
## V Conclusions
In summary, we performed ZF and TF -\(\mu\)SR measurements on the recently discovered layered superconductor La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) (\(x=0.40\), 0.45). The preservation of TRS is suggested by the ZF-\(\mu\)SR measurements. In combination with the \(H_{c2}(T)\) data and DFT calculations [21], the temperature dependence of the superfluid density of La\({}_{2}\)(Cu\({}_{1-x}\)Ni\({}_{x}\))\({}_{5}\)As\({}_{3}\)O\({}_{2}\) measured by TF-\(\mu\)SR is best described by the two-band model with the dominant \(s\) wave superconducting energy gap larger for optimal doping, suggesting the coupling strength decreases with the increase of doping concentration. Both samples are classified between unusual superconductors and traditional BCS superconductors in the Uemura plot. More experimental evidence is required to investigate whether \(d\)-wave superconductivity exists in one of the bands.
###### Acknowledgements.
This work is based on experiments performed at the Swiss Muon Source S\(\mu\)S, Paul Scherrer Institute, Villigen, Switzerland, and TRIUMF, Vancouver, Canada. The research performed in this work was supported by the National Key Research and Development Program of China, No. 2022YFA1402203, the National Natural Science Foundations of China, No. 12174065 and No.51922105, and the Shanghai Municipal Science and Technology (Major Project Grant No. 2019SHZDZX01 and No. 20ZR1405300).
|
2301.13014 | Attribute-Guided Multi-Level Attention Network for Fine-Grained Fashion
Retrieval | Fine-grained fashion retrieval searches for items that share a similar
attribute with the query image. Most existing methods use a pre-trained feature
extractor (e.g., ResNet 50) to capture image representations. However, a
pre-trained feature backbone is typically trained for image classification and
object detection, which are fundamentally different tasks from fine-grained
fashion retrieval. Therefore, existing methods suffer from a feature gap
problem when directly using the pre-trained backbone for fine-tuning. To solve
this problem, we introduce an attribute-guided multi-level attention network
(AG-MAN). Specifically, we first enhance the pre-trained feature extractor to
capture multi-level image embedding, thereby enriching the low-level features
within these representations. Then, we propose a classification scheme where
images with the same attribute, albeit with different values, are categorized
into the same class. This can further alleviate the feature gap problem by
perturbing object-centric feature learning. Moreover, we propose an improved
attribute-guided attention module for extracting more accurate
attribute-specific representations. Our model consistently outperforms existing
attention based methods when assessed on the FashionAI (62.8788% in MAP),
DeepFashion (8.9804% in MAP), and Zappos50k datasets (93.32% in Prediction
accuracy). Especially, ours improves the most typical ASENet_V2 model by 2.12%,
0.31%, and 0.78% points in FashionAI, DeepFashion, and Zappos50k datasets,
respectively. The source code is available in
https://github.com/Dr-LingXiao/AG-MAN. | Ling Xiao, Toshihiko Yamasaki | 2022-12-27T05:28:38Z | http://arxiv.org/abs/2301.13014v2 | # Attribute-Guided Multi-Level Attention Network
###### Abstract
This paper proposes an attribute-guided multi-level attention network (AG-MLAN) to learn fine-grained fashion similarity. AG-MLAN is able to make a more accurate attribute positioning and capture more discriminative features under the guidance of the specified attribute. Specifically, the AG-MLAN contains two branches, branch 1 aims to force the model to recognize different attributes, while branch 2 aims to learn multiple attribute-specific embedding spaces for measuring the fine-grained similarity. We first improve the Convolutional Neural Network (CNN) backbone to extract hierarchical feature representations, then the extracted feature representations are passed into branch 1 for attribute classification and branch 2 for multi-level feature extraction. In branch 2, we first propose a multi-level attention module to extract a more discriminative representation under the guidance of a specific attribute. Then, we adopt a masked embedding module to learn attribute-aware embedding. Finally, the AG-MLAN is trained with a weighted loss of the classification loss in branch 1 and the triplet loss of the masked embedding features in branch 2 to further improve the accuracy in attribute location. Extensive experiments on the DeepFashion, FashionAI, and Zappos50k datasets show the effectiveness of AG-MLAN for fine-grained fashion similarity learning and its potential for attribute-guided retrieval tasks. The proposed AG-MLAN outperforms the state-of-the-art methods in the fine-grained fashion similarity retrieval task.
## 1 Introduction
Similarity retrieval is an important research topic in the field of fashion [5, 6, 8, 16, 19, 35], especially for in-shop clothes retrieval [2, 19] and cross-domain fashion retrieval [13, 15, 22]. Currently, the mainstream approaches in fine-grained fashion retrieval aim to learn a general embedding space to compute the similarity among fashion items, and the triplet loss is always adopted to train the model [10, 38]. However, the majority of methods are proposed to learn the overall similarity. In this paper, we aim for fine-grained fashion similarity learning, such as neck-line design, pant length, lapel design, and so on. Fine-grained fashion retrieval pays attention to the specific regions and features in the image under specific attribute guidance [30, 31, 36, 20]. Specifically, the fine-grained similarity is also important in fashion copyright protection scenario [21], which can find items with plagiarized designs. Hence, learning fine-grained similarity is necessary.
Veit _et al._[30] first learned an overall embedding space, and then employed a fixed mask to select relevant embedding dimensions w.r.t. the specified attribute. The fine-grained similarity is measured in terms of the masked embedding feature. Ma _et al._[20] went further in this direction, they proposed to learn multiple attribute-specific embedding spaces and measure the fine-grained similarity in the corresponding space. Wan _et al._[31] proposed an attention-based attribute-guided fashion similarity learning network for fine-grained fashion retrieval as well. Yan _et al._[36] employed an iterative learning manner to make up for the shortcomings of the traditional attention mechanism, resulting in a more accurate attribute positioning. However, there is still much space to improve the accuracy of fine-grained fashion similarity learning.
In this work, we propose an attribute-guided multi-level attention network (AG-MLAN) to learn fine-grained fashion similarity. Specifically, we propose two branches. Branch 1 forces the model to recognize different attributes with the extracted hierarchical feature representations. While branch 2 is a multi-level attention network that aims to learn multiple attribute-specific embedding spaces for measuring fine-grained similarity. In branch 2, we first introduce a multi-level attention module to extract attribute-specific representations. Then, a fixed mask is applied to each attribute-specific embedding space to further improve the positing ability of the feature. The AG-MLAN learns multiple attribute-specific embeddings in an end-to
end manner and is trained with a weighted loss of the classification loss and the triplet loss of the masked embedding features.
In summary, this paper presents the following contributions:
1. We improve the CNN backbone to extract hierarchical feature representations and propose a classification branch to force the model to recognize different attributes.
2. A multi-level attention module is introduced to extract attribute-specific embedding that can better locate the related regions with a given attribute.
3. We propose to add a masked embedding module to further improve the positioning accuracy of the extracted attribute-specific embedding.
4. We propose an AG-MLAN model for fine-grained fashion similarity learning. AG-MLAN outperforms the state-of-the-art methods in fine-grained fashion retrieval. It can make a more accurate attribute positioning and capture more discriminative features under the guidance of the specified attribute.
## 2 Related work
### Fashion retrieval
Many fashion retrieval methods that focus on the overall similarity of fashion images have been proposed in recent years [14, 15, 38, 6, 10, 25]. For example, in the cross-domain retrieval task, Ji _et al._[15] utilized the rich label information on the e-commerce website to assist the attention mechanism to locate the clothing in the complex scene and then complete the in-shop retrieval according to the extracted clothing features. In the fashion compatibility task, given the fashion items in an outfit, Han _et al._[10] trained a bi-directional long short-term memory (Bi-LSTM) model to sequentially predict the next item conditioned on previous ones to learn their compatibility relationships. The above-mentioned work mainly focuses on the similarity of the whole image. Later, the emergence of the attention mechanism improves the positioning ability of the model and is widely used in computer vision [34, 28, 24, 3] and recommendation system [27, 18, 40]. Wang _et al._[33] employed a self-learning visual attention model to extract clothing features without external landmark labeling. However, this attention mechanism only focuses on the overall similarity of clothing and can not meet the needs of fine-grained retrieval.
### Attention mechanism
Recently attention mechanism has become a popular technique and showed superior effectiveness in various research areas, such as computer vision [32, 34, 39, 12, 17] and natural language processing [29, 7, 26]. To some extent, attention can be regarded as a tool to bias the allocation of the input information. As fashion images always present with complex backgrounds, pose variations, _etc._, using a fixed region for a specific attribute for all images is not optimal, thus attention mechanism is also common in the fashion domain [15, 2, 9, 1, 33, 2]. For instance, Ak _et al._[2] used the prior knowledge of clothes structure to locate the specific parts of clothes. However, their approach can only be used for upper-body clothes thus limiting its generalization. Wang _et al._[33] proposed to learn channel attention implemented by a fully convolutional network. Ji _et al._[15] utilized attributes to facilitate attention modeling, but they use all attributes of fashion items. In Ma _et al._[20], they proposed two attribute-aware attention modules, which utilize a specific attribute as the extra input in addition to a given image. The proposed attention modules capture the attribute-related patterns under the guidance of the specified attribute. Wan _et al._[31] proposed two attribute-aware attention modules as well. Yan _et al._[36] employed an iterative learning manner to the channel attention module for a more accurate attribute positioning. Different from the existing methods, we introduce a multi-level attention module and build coarse to fine attention for a more accurate attribute positioning and more discriminative embedding space under the guidance of the specified attribute.
## 3 Proposed method
### Overview
Given an image \(I\) and a specific attribute \(a\), we propose to learn an attribute-specific feature vector \(f(I,a)\in\mathbb{R}^{c}\) which reflects the characteristics of the corresponding attribute in the image. \(c\) is the number of channels. Therefore, for two fashion images \(I\) and \(I^{{}^{\prime}}\), the cosine similarity between \(f(I,a)\) and \(f(I^{{}^{\prime}},a)\) are used to represent the fine-grained similarity. Moreover, the fine-grained similarity for multiple attributes can be computed by summing up the similarity scores on the individual attributes. Note that the attribute-specific feature vector resides in the corresponding attribute-specific embedding space. If there are \(n\) attributes, \(n\) attribute-specific embedding spaces are learned jointly.
Figure 1 illustrates the structure of our proposed network. We first improve the CNN backbone to extract hierarchical features. The extracted hierarchical feature representation is then passed into branches 1 and 2, respectively. Branch 1 forces the model to recognize different attributes by making an attribute classification, while branch 2 aims to learn multiple attribute-specific embedding spaces for measuring fine-grained similarity. In branch 2, it contains a multi-level attention module and a masked embedding module. In what follows, we first detail the hierarchical feature extraction and branch 1, followed by the main body in branch 2, including the description of the multi-level attention module and masked embedding module. Finally, the
model learning process is given.
### Hierarchical feature extraction and branch 1
To represent the image, we employ a CNN model pre-trained on ImageNet [4] as a backbone network, e.g., ResNet50 [11]. In branch 1, to keep more of the low-level information, we first add the outputs of blocks 2 and 3 of ResNet50; the addition result is then passed into block 4 of ResNet50 to extract the hierarchical feature representation for attribute classification. Note that each image has one label. Different from the feature passed into branch 1, to keep the multi-level spatial features of the image, we concatenate the addition result and the feature extracted by block 3 as the input feature representation of branch 2. So the image is represented by \(I\in\mathbb{R}^{c\times h\times w}\), where \(h\times w\) is the size of the feature map, and \(c\) indicates the number of channels.
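A minimal PyTorch sketch of this hierarchical feature extraction is given below. Since the text does not specify how the block-2 and block-3 feature maps are shape-matched before addition, the \(1\times 1\) strided projection used here is an assumption, as are the pooling and classifier details.

```python
# Sketch of the hierarchical feature extraction and branch 1 head described above.
# The 1x1 strided projection that aligns block-2 and block-3 shapes is assumed.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class HierarchicalBackbone(nn.Module):
    def __init__(self, num_attributes: int):
        super().__init__()
        r = resnet50(weights=None)   # pass weights="IMAGENET1K_V1" for the ImageNet-pretrained backbone
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool, r.layer1)
        self.block2, self.block3, self.block4 = r.layer2, r.layer3, r.layer4
        # assumed projection: 512 x 28 x 28 (block 2) -> 1024 x 14 x 14 (block 3 shape)
        self.proj = nn.Conv2d(512, 1024, kernel_size=1, stride=2)
        # branch 1: attribute classification head on top of block 4
        self.classifier = nn.Linear(2048, num_attributes)

    def forward(self, x):
        f2 = self.block2(self.stem(x))          # (B, 512, 28, 28)
        f3 = self.block3(f2)                    # (B, 1024, 14, 14)
        fused = self.proj(f2) + f3              # addition of blocks 2 and 3
        logits = self.classifier(self.block4(fused).mean(dim=(2, 3)))   # branch 1
        branch2_feat = torch.cat([fused, f3], dim=1)   # (B, 2048, 14, 14), branch 2 input
        return logits, branch2_feat

model = HierarchicalBackbone(num_attributes=8)
logits, feat = model(torch.randn(2, 3, 224, 224))
print(logits.shape, feat.shape)
```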
### Multi-level attention
In branch 2, our multi-level attention module contains three parts, named attribute-aware spatial attention (ASA), spatial attention (SA), and attribute-aware channel attention (ACA), respectively. For the attribute, we represent it with a one-hot vector \(a\in\{0,1\}^{n}\), where \(n\in N\) indicates the number of different attributes.
**ASA** Fine-grained fashion retrieval needs to learn the feature representations of certain related regions. For instance, in order to extract the attribute-specific feature of the neckline design attribute, the region around the neck is much more important than the others. Besides, as fashion images always show up in large variations, e.g., various poses and scales, using a fixed region with respect to a specific attribute for all images is not optimal. Hence, we propose attribute-aware spatial attention that adaptively attends to certain regions of the input image under the guidance of a specific attribute. Specifically, we first transform the image representation \(I\in\mathbb{R}^{c\times h\times w}\) and the attribute to make their dimensionality the same. For the image, we employ a convolutional layer followed by batch normalization and a nonlinear tanh activation function. Formally, the mapped image \(p(I)\in\mathbb{R}^{c^{{}^{\prime}}\times h\times w}\) is given by
\[p(I)=\tanh(Conv_{c^{{}^{\prime}}}(I)), \tag{1}\]
where \(Conv_{c^{{}^{\prime}}}\) is a convolutional layer that contains \(c^{{}^{\prime}}\)\(1\times 1\) convolution kernels.
For the attribute, we first project it into a \(c^{{}^{\prime}}\)-dimensional vector through an attribute embedding, implemented by an embedding layer and a fully connected (FC) layer, then perform self-attention. Hence, the mapped attribute \(p(a)\in\mathbb{R}^{c^{{}^{\prime}}}\) is
\[p(a)=\sigma(W_{a}a)*W_{a}a, \tag{2}\]
where \(W_{a}\in\mathbb{R}^{c^{{}^{\prime}}\times n}\) denotes the transformation matrix of the embedding and FC layers. \(\sigma\) denotes a sigmoid function. Finally, spatial attention is given by
\[I_{s}=I*W_{1}s(p(I)\cdot p(a)), \tag{3}\]
where \(s\) indicates a softmax function and \(W_{1}\) is a learned weight matrix.
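The following PyTorch sketch illustrates one possible implementation of Eqs. (1)-(3); initialization details and the scalar form used for the learned weight \(W_{1}\) are assumptions not fixed by the text.

```python
# Sketch of the attribute-aware spatial attention (ASA) module, Eqs. (1)-(3).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASA(nn.Module):
    def __init__(self, c: int, c_prime: int, num_attributes: int):
        super().__init__()
        self.img_proj = nn.Sequential(                 # Eq. (1): 1x1 conv + BN + tanh
            nn.Conv2d(c, c_prime, kernel_size=1), nn.BatchNorm2d(c_prime), nn.Tanh())
        self.attr_embed = nn.Sequential(               # Eq. (2): embedding + FC
            nn.Embedding(num_attributes, c_prime), nn.Linear(c_prime, c_prime))
        self.w1 = nn.Parameter(torch.ones(1))          # learned weight W_1 (scalar here)

    def forward(self, feat, attr_idx):
        # feat: (B, c, h, w); attr_idx: (B,) integer attribute index
        p_img = self.img_proj(feat)                                   # (B, c', h, w)
        a = self.attr_embed(attr_idx)                                 # (B, c')
        p_attr = torch.sigmoid(a) * a                                 # self-gating, Eq. (2)
        score = (p_img * p_attr[:, :, None, None]).sum(dim=1)        # (B, h, w)
        attn = F.softmax(score.flatten(1), dim=1).view_as(score)     # spatial softmax
        return feat * (self.w1 * attn).unsqueeze(1)                  # Eq. (3)

asa = ASA(c=2048, c_prime=512, num_attributes=8)
print(asa(torch.randn(2, 2048, 14, 14), torch.tensor([0, 3])).shape)
```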
**SA** Given a feature map \(I_{s}\), the output feature maps of our SA layer (\(I_{s^{{}^{\prime}}}\)) could be computed as:
\[I_{s^{{}^{\prime}}}=\sigma(Conv_{1}([Avg(I_{s}),Max(I_{s})]))*I_{s}, \tag{4}\]
where \(Conv_{1}\) is a convolutional layer that only contains one \(1\times 1\) convolution kernel, \(Avg(I_{s})\) and \(Max(I_{s})\) are the output of the average channel pooling layer and max channel pooling layer.
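A corresponding sketch of Eq. (4) is shown below, following the text's choice of a single \(1\times 1\) convolution kernel over the concatenated channel-wise average- and max-pooled maps.

```python
# Sketch of the spatial attention (SA) layer of Eq. (4).
import torch
import torch.nn as nn

class SA(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=1)   # Conv_1 in Eq. (4)

    def forward(self, x):                            # x: (B, c, h, w)
        avg = x.mean(dim=1, keepdim=True)            # Avg(I_s): (B, 1, h, w)
        mx = x.amax(dim=1, keepdim=True)             # Max(I_s): (B, 1, h, w)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate                              # I_{s'}

print(SA()(torch.randn(2, 2048, 14, 14)).shape)
```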
**ACA** Although ASA and SA modules adaptively focus on the specific regions in the image, the same regions may still be related to multiple attributes. For example, attributes of collar design and collar color are all associated with the region around the collar. Hence, we further employ attribute-aware channel attention over the spatially attended feature vector \(I_{s^{{}^{\prime}}}\). The attribute-aware channel attention is designed as an element-wise gating function which selects the relevant dimensions of the spatially attended feature with respect to the given attribute. Concretely, we first employ an embedding layer and an FC layer to embed an attribute into an embedding vector with the same dimensionality of \(I_{s^{{}^{\prime}}}\), that is:
\[q(a)=\sigma(W_{a^{{}^{\prime}}}a)*(W_{a^{{}^{\prime}}}a), \tag{5}\]
where \(W_{a^{{}^{\prime}}}\in\mathbb{R}^{c^{{}^{\prime}}\times n}\) denotes transformation matrix of the embedding and FC layers. Then the attribute \(q(a)\) and the spatially attended feature \(I_{s^{{}^{\prime}}}\) are fused by simple concatenation and further fed into the subsequent two FC layers to obtain the attribute-aware channel attention weights. The final output becomes
\[I_{c}=I_{s^{{}^{\prime}}}*W_{2}\sigma(W_{c_{2}}(r(W_{c_{1}}([q(a),I_{s^{{}^{ \prime}}}])))), \tag{6}\]
where \(W_{c_{1}}\in\mathbb{R}^{c^{{}^{\prime}}\times n}\) and \(W_{c_{2}}\in\mathbb{R}^{c^{{}^{\prime}}\times n}\) denote two transformation matrix of two FC layers, respectively. \([,]\) denotes a concatenation operation, \(r\) is a ReLU function, and \(W_{2}\) is a learned weight matrix.
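A sketch of Eqs. (5)-(6) is given below. The text treats \(I_{s^{\prime}}\) as a feature vector at this stage, so the spatially attended map is global-average-pooled first; that pooling step, like the scalar form of \(W_{2}\), is an assumption.

```python
# Sketch of the attribute-aware channel attention (ACA) module, Eqs. (5)-(6).
import torch
import torch.nn as nn

class ACA(nn.Module):
    def __init__(self, c: int, num_attributes: int):
        super().__init__()
        self.attr_embed = nn.Sequential(nn.Embedding(num_attributes, c), nn.Linear(c, c))
        self.fc1 = nn.Linear(2 * c, c)          # W_{c_1}, applied to the concatenation
        self.fc2 = nn.Linear(c, c)              # W_{c_2}
        self.w2 = nn.Parameter(torch.ones(1))   # learned weight W_2 (scalar here)

    def forward(self, feat_map, attr_idx):
        v = feat_map.mean(dim=(2, 3))           # assumed global average pooling -> (B, c)
        a = self.attr_embed(attr_idx)           # (B, c)
        q = torch.sigmoid(a) * a                # Eq. (5)
        gate = torch.sigmoid(self.fc2(torch.relu(self.fc1(torch.cat([q, v], dim=1)))))
        return v * (self.w2 * gate)             # Eq. (6): channel-gated attribute feature

aca = ACA(c=512, num_attributes=8)
print(aca(torch.randn(2, 512, 14, 14), torch.tensor([1, 5])).shape)
```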
### Masked embedding
After obtaining \(I_{c}\), we introduce masks \(m\) over the embedding with \(m\in\mathbb{R}^{c^{{}^{\prime}}\times n}\). We define a set of parameters \(\beta\) of the same dimension as \(m\) such that \(m=r(\beta)\), with \(r\) denoting a ReLU so that \(r(\beta)=\max\{0,\beta\}\). As such, we denote \(m_{a}\) to be the selection of the \(a\)-th mask column
of dimension \(c^{{}^{\prime}}\) (in pseudocode \(m_{a}=m[:,a]\)). The mask plays the role of an element-wise gating function selecting the relevant dimensions of the embedding required to attend to a particular concept. The role of the masking operation is visually sketched in Figure 2. Thus the masked representations of the triplet input \((I,I^{p},I^{n}|a)\) are \(I_{c}\times m_{a}\), \(I_{c}^{p}\times m_{a}\), and \(I_{c}^{n}\times m_{a}\), denoted as \((I_{m},I_{m}^{p},I_{m}^{n}|a)\).
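The masking step can be sketched as follows, with \(m=\mathrm{ReLU}(\beta)\) stored as a learnable \(c^{\prime}\times n\) parameter and one column selected per attribute.

```python
# Sketch of the masked-embedding step: a non-negative per-attribute mask
# m = ReLU(beta) gates the attribute-aware embedding element-wise.
import torch
import torch.nn as nn

class AttributeMask(nn.Module):
    def __init__(self, c_prime: int, num_attributes: int):
        super().__init__()
        self.beta = nn.Parameter(torch.randn(c_prime, num_attributes))

    def forward(self, embedding, attr_idx):
        # embedding: (B, c'); attr_idx: (B,)
        m = torch.relu(self.beta)            # m = ReLU(beta), non-negative gate
        m_a = m[:, attr_idx].t()             # select one mask column per sample: (B, c')
        return embedding * m_a               # masked representation I_m

mask = AttributeMask(c_prime=512, num_attributes=8)
print(mask(torch.randn(4, 512), torch.tensor([0, 2, 2, 7])).shape)
```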
### Model Learning
In branch 1, we don't consider multi-label issues. Each image is assigned one label. The attribute classification loss is computed with
\[L_{c}=-w_{a}[y_{a}\cdot\log\sigma(x_{a})+(1-y_{a})\cdot\log(1- \sigma(x_{a}))]. \tag{7}\]
In branch 2, we require that the similarity between the masked embeddings of images with the same specific attribute value be larger than that for images with different values. Concretely, the triplet ranking loss is defined as
\[L_{triplet} = \max\{0,m-D(I_{m},I_{m}^{p})+D(I_{m},I_{m}^{n})\}, \tag{8}\]
where \(m\) represents the margin, empirically set to be 0.2, \(D(I_{m},I_{m}^{p})\) denotes the cosine similarity between \(f(I,a)\) and \(f(I^{p},a)\).
We calculate a weighted loss of the attribute classification loss and the triplet ranking loss. Thus the final loss is given by
\[L=w_{0}L_{c}+w_{1}L_{triplet}, \tag{9}\]
where \(w_{0-1}\) are learned weight parameters.
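A compact sketch of the training objective is given below; the per-attribute weights \(w_{a}\) in Eq. (7) are omitted for brevity, and treating \(w_{0}\), \(w_{1}\) as unconstrained learnable parameters is an assumption.

```python
# Sketch of the combined objective, Eqs. (7)-(9): attribute classification loss
# on branch 1 plus a cosine-similarity triplet loss on the masked embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedLoss(nn.Module):
    def __init__(self, margin: float = 0.2):
        super().__init__()
        self.margin = margin
        self.w = nn.Parameter(torch.ones(2))        # learned weights w0, w1
        self.cls_loss = nn.BCEWithLogitsLoss()      # Eq. (7) with one-hot targets

    def forward(self, logits, target_onehot, anchor, positive, negative):
        l_cls = self.cls_loss(logits, target_onehot)
        sim_pos = F.cosine_similarity(anchor, positive, dim=1)
        sim_neg = F.cosine_similarity(anchor, negative, dim=1)
        l_tri = torch.clamp(self.margin - sim_pos + sim_neg, min=0).mean()   # Eq. (8)
        return self.w[0] * l_cls + self.w[1] * l_tri                         # Eq. (9)

loss_fn = CombinedLoss()
B, C, n_attr = 4, 512, 8
logits = torch.randn(B, n_attr)
target = F.one_hot(torch.tensor([0, 2, 5, 7]), n_attr).float()
anc, pos, neg = (torch.randn(B, C) for _ in range(3))
print(loss_fn(logits, target, anc, pos, neg).item())
```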
## 4 Experimental results
### Experimental settings
To verify the viability of the proposed model for fine-grained fashion similarity computation, we evaluate it on the following two tasks. (1) Attribute-specific fashion retrieval: Given a fashion image and a specified attribute, its goal is to search for fashion images of the same attribute value as the given image. (2) Triplet relation prediction: Given a triplet of \(\{I,I^{{}^{\prime}},I^{{}^{\prime\prime}}\}\) and a specified attribute, the task is asked to predict whether the relevance between \(I\) and \(I^{{}^{\prime}}\) is larger than that between \(I\) and \(I^{{}^{\prime\prime}}\) in terms of the given attribute. For task (1), FashionAI [20], and DeepFashion [19] datasets are adopted. We report the Mean Average Precision (MAP), a popular performance metric in many retrieval-related tasks [20]. For task (2), Zappos50k dataset [37] is used. We utilize prediction accuracy as the metric. Note that, branch 1 of our model is removed when evaluating this task to evaluate the effectiveness of
Figure 1: The architecture of the proposed model.
Figure 2: The masking operation selects relevant embedding dimensions, given an attribute index. Masking can be seen as a soft gating function to attend to a particular attribute.
Figure 3: Top-10 retrieval examples when cloth and attribute are given on the FashionAI dataset
our multi-level attention module compared with the attention modules proposed in other methods. Details of all the used datasets are given in the supplementary material.
### Results
#### 4.2.1 Attribute-specific fashion retrieval
Tables 1 and 2 summarize the performance of different models on the FashionAI and DeepFashion datasets respectively. The performance of the compared models on each attribute type is reported. As we can see, our model achieves better performance for all attribute types than the state-of-the-art methods. Note that, for design-related attributes, our model consistently outperforms the other counterparts. Even for the more complicated length-related attributes, our model shows a big advantage compared with existing methods.
Figures 3 and 4 present the top-10 retrieval examples for a given clothing image and attribute on the FashionAI and DeepFashion datasets, respectively. Note that the FashionAI dataset contains complex backgrounds and its attributes all concern local regions, whereas the DeepFashion dataset has clean backgrounds and most of its attributes concern global regions. As we can see, most of the top-10 retrieval examples are in accordance with human perception, which demonstrates the effectiveness of the proposed AG-MLAN model.
To gain further insight into our proposed network, we visualize the learned attribute-aware spatial attention. As shown in Figures 5 and 6, the learned attention map gives relatively high responses on regions relevant to the specified attribute and low responses on irrelevant regions, showing that the attention is able to identify which regions matter for a specific attribute. Compared with the state-of-the-art methods, our model can better locate the related regions for both design-related and length-related attributes, which further demonstrates the effectiveness of attribute-aware spatial attention. Moreover, attention maps for length-related attributes are more complicated than those for design-related attributes, since multiple regions show a high response for the former. Nevertheless, our model can better locate the start and end of a fashion item to infer its length.
Figure 4: Top-10 retrieval examples when cloth and attribute are given on the DeepFashion dataset.
Figure 5: Visualization of the attribute-aware spatial attention with the guidance of a specified attribute (above the attention image) on FashionAI.
Figure 6: Visualization of the attribute-aware spatial attention with the guidance of a specified attribute (above the attention image) on DeepFashion.
#### 4.2.2 Triplet relation prediction
Table 3 shows the results on the Zappos50k dataset. Among the five embedding learning models, our proposed model outperforms the other methods. Since branch 1 of our AG-MLAN model is removed in this setting, the result verifies the effectiveness of our multi-level attention module for triplet relation prediction compared with other attention methods. Some visualization results are given in the supplementary material.
## 5 Conclusions
To solve the problem of fine-grained similarity retrieval in the fashion field, this paper proposes an attribute-guided multi-level attention network (AG-MLAN) to learn fine-grained fashion similarity. AG-MLAN is able to achieve more accurate attribute localization and capture more discriminative features under the guidance of the specified attribute. Concretely, we first modify ResNet50 to extract hierarchical feature representations. The extracted features are then passed into two branches: branch 1 forces the model to recognize different attributes by performing attribute classification, while branch 2 learns multiple attribute-specific embedding spaces for measuring fine-grained similarity. In branch 2, we first propose a multi-level attention module to extract a more discriminative attribute-specific representation. The extracted representation is then passed into a masked embedding module to further improve its localization ability. Finally, AG-MLAN is trained with a weighted combination of the classification loss in branch 1 and the triplet loss on the masked embedding features in branch 2 to improve the accuracy of attribute localization. Extensive experiments on the DeepFashion, FashionAI, and Zappos50k datasets show the superior performance of our AG-MLAN for fine-grained fashion similarity learning compared with state-of-the-art methods. The proposed AG-MLAN also has a high potential to be adopted in other attribute-guided recognition tasks.
|
2302.03757 | Sum-of-squares bounds on correlation functions in a minimal model of
turbulence | We suggest a new computer-assisted approach to the development of turbulence
theory. It allows one to impose lower and upper bounds on correlation functions
using sum-of-squares polynomials. We demonstrate it on the minimal cascade
model of two resonantly interacting modes, when one is pumped and the other
dissipates. We show how to present correlation functions of interest as part of
a sum-of-squares polynomial using the stationarity of the statistics. That
allows us to find how the moments of the mode amplitudes depend on the degree
of non-equilibrium (analog of the Reynolds number), which reveals some
properties of marginal statistical distributions. By combining scaling
dependence with the results of direct numerical simulations, we obtain the
probability densities of both modes in a highly intermittent inverse cascade.
We also show that the relative phase between modes tends to $\pi/2$ and
$-\pi/2$ in the direct and inverse cascades as the Reynolds number tends to
infinity, and derive bounds on the phase variance. Our approach combines
computer-aided analytical proofs with a numerical algorithm applied to
high-degree polynomials. | Vladimir Parfenyev, Evgeny Mogilevskiy, Gregory Falkovich | 2022-12-20T16:52:43Z | http://arxiv.org/abs/2302.03757v1 | # Sum-of-squares bounds on correlation functions in a minimal model of turbulence
###### Abstract
We suggest a new computer-assisted approach to the development of turbulence theory. It allows one to impose lower and upper bounds on correlation functions using sum-of-squares polynomials. We demonstrate it on the minimal cascade model of two resonantly interacting modes, when one is pumped and the other dissipates. We show how to present correlation functions of interest as part of a sum-of-squares polynomial using the stationarity of the statistics. That allows us to find how the moments of the mode amplitudes depend on the degree of non-equilibrium (analog of the Reynolds number), which reveals some properties of marginal statistical distributions. By combining scaling dependence with the results of direct numerical simulations, we obtain the probability densities of both modes in a highly intermittent inverse cascade. We also show that the relative phase between modes tends to \(\pi/2\) and \(-\pi/2\) in the direct and inverse cascades as the Reynolds number tends to infinity, and derive bounds on the phase variance. Our approach combines computer-aided analytical proofs with a numerical algorithm applied to high-degree polynomials.
## I Introduction
Many systems in nature receive and dissipate energy on very different scales, having conservative dynamics in between, and the energy is transferred across the scales through a turbulent cascade. Among the best-known examples are isotropic fluid turbulence [1] and surface waves in the ocean [2]. The complexity of such systems makes them difficult to describe in detail, so it makes sense to consider simpler dynamical models to deepen our understanding of the statistical properties and energy transfer in such highly non-equilibrium systems [3; 4; 5].
A minimal model, which still captures the basic properties of turbulent cascades, is a system of two resonantly interacting oscillators whose natural frequencies differ by a factor of two [6]. This system allows studying direct and inverse cascades with the energy flux directed towards either higher or lower frequencies. The system has one non-dimensional governing parameter \(\chi\), which plays the role of the Reynolds number. In what follows, we are mostly interested in the regime when this parameter is large, and the probability distribution tends to be singular. The analytical studies in this limit resulted in the steady-state probability density for the direct cascade, while constructing the probability density for the inverse cascade turns out to be tricky [6].
Here we apply a complementary approach to study the mode statistics in this system. We exploit the polynomial nature of dynamic equations (common for practically all turbulent systems), which allows us to impose inequalities on the correlation functions. The main idea is to find a non-negative polynomial expression \(\phi(\mathbf{x})-L+F(\mathbf{x})\geq 0\) that combines the correlation function \(\phi(\mathbf{x})\) to be bounded, the constant value of the bound \(L\), and an auxiliary polynomial function \(F(\mathbf{x})\) that has zero mean value \(\langle F(\mathbf{x})\rangle=0\) in a statistically steady state, which entails the inequality \(\langle\phi(\mathbf{x})\rangle\geq L\)[7; 8]. The essence of the approach is to construct the function \(F(\mathbf{x})\) so that the value of the lower bound \(L\) is as large as possible. Although testing a polynomial expression \(\phi(\mathbf{x})-L+F(\mathbf{x})\) for non-negativity is an NP-hard algorithmic task, there exist numerical procedures [9; 10; 11; 12; 13] that solve the problem under the stricter requirement that the proposed expression be a sum-of-squares (SoS) of other polynomials. Moreover, these procedures allow one to maximize the value \(L\) of the bound using some ansatz for the auxiliary function \(F(\mathbf{x})\). If the ansatz is simple enough, the bound can be found analytically, while a computer algorithm can be used in advance to suggest the optimal form of the SoS polynomial. The upper bound \(\langle\phi(\mathbf{x})\rangle\leq U\) can be constructed in a similar way. Recent examples of the application of SoS programming to study dynamical systems can be found in Refs. [7; 8; 14; 15; 16; 17].
The rest of the paper is organized as follows. In Section II, we remind the general theory behind SoS optimization and then apply the method for the two-mode system. In Section III, we present the results obtained
by SoS programming for the direct cascade and compare them with the analytics and direct numerical simulations (DNS). It turns out that the upper and lower bounds for the moments of pumped and dissipating modes are close to each other and differ by less than a percent in the limit of a large Reynolds number. This allows us to determine not only their scaling dependence but also numerical values with accuracy comparable to DNS. The method can also be used to study correlations between modes, and we show that the relative phase between the modes tends to \(\pi/2\) and determine upper and lower bounds for its root-mean-square fluctuations.
After the validation, in Section IV, we use the method for the inverse cascade where the probability density is unknown a priori. Analytically, we obtained only lower bounds for the correlation functions, but they demonstrate scaling consistent with the results of DNS. A numerical algorithmic analysis of the high moments of the dissipating mode leads to scaling \(\langle n_{1}^{k}\rangle\propto\chi^{k-1}\), which is a fingerprint of intermittency, where \(n_{1}\) is the mode intensity and \(\chi\) is the Reynolds number. Based on this observation, we were able to shed light on the structure of the distribution function of this mode - the DNS results fall on the universal curve for different values of \(\chi\), which has a form close to a power law with an exponential cutoff. As for the pumped mode, its statistics are close to Gaussian, and we analytically found the lower bound for its intensity \(\langle n_{2}\rangle\geq 5\chi/16\) that is close to DNS. We also show that the relative phase between modes tends to \(-\pi/2\) in the limit \(\chi\gg 1\) and estimate the rate of this transition. Finally, we summarize and discuss our findings in Section V.
## II Theoretical framework
This section briefly explains how sum-of-squares optimization can impose inequalities on correlation functions in stochastic systems with polynomial dynamics. The presentation follows Ref. [8], where a more detailed discussion can be found.
Let us consider a stochastic dynamical system
\[\dot{x_{i}}=f_{i}(\mathbf{x})+\sigma_{ij}(\mathbf{x})\xi_{j}(t),\quad\mathbf{x}\in\mathbb{ R}^{n},\ \mathbf{\xi}\in\mathbb{R}^{m}, \tag{1}\]
where \(f_{i}(\mathbf{x})\) and \(\sigma_{ij}(\mathbf{x})\) are polynomial, \(\xi_{i}(t)\) is a Gaussian noise with zero mean \(\langle\xi_{i}(t)\rangle=0\) and the variance \(\langle\xi_{i}(t)\xi_{j}(t^{\prime})\rangle=\delta_{ij}\delta(t-t^{\prime})\), and here and below we sum over repeated indices. We assume that the system has reached a statistical steady-state, and then the probability density function \(\rho(\mathbf{x})\) satisfies the stationary Fokker-Planck equation
\[\frac{1}{2}\partial_{i}[\sigma_{ij}\partial_{k}(\sigma_{kj}\rho)]-\partial_{i }(f_{i}\rho)=0. \tag{2}\]
We also assume that the solution to this equation is unknown. We wish to prove a constant lower bound \(\langle\phi(\mathbf{x})\rangle\geq L\) for some polynomial correlation function \(\phi(\mathbf{x})\), where the angle brackets mean the averaging over the probability density \(\rho(\mathbf{x})\).
For this purpose, we consider an auxiliary function \(Q(\mathbf{x})\), which does not grow fast when \(|\mathbf{x}|\to\infty\), so that all the moments considered below are finite. Performing integration by parts and neglecting boundary terms, one can show that \(\langle\frac{1}{2}\sigma_{kj}\partial_{k}(\sigma_{ij}\partial_{i}Q)+f_{i} \partial_{i}Q\rangle=0\). The idea is to properly design \(Q(\mathbf{x})\) so that
\[\left\langle\frac{1}{2}\sigma_{kj}\partial_{k}(\sigma_{ij}\partial_{i}Q)+f_{i }\partial_{i}Q+\phi-L\right\rangle\geq 0. \tag{3}\]
Evaluation of the expectation in expression (3) requires knowing \(\rho(\mathbf{x})\), but it is sufficient for the inequality to hold point-wise for all \(\mathbf{x}\).
Checking a polynomial expression for non-negativity is an NP-hard algorithmic task; therefore, to reduce the computational complexity, it can be replaced with a semidefinite programming (SDP) problem of checking the stronger condition that expression (3) belongs to the set of polynomials that are sums of squares (SoS). Finally, to find the maximum value of the constant \(L\), we arrive at the following optimization problem
\[\max_{Q(\mathbf{x})}L:\ \frac{1}{2}\sigma_{kj}\partial_{k}(\sigma_{ij}\partial_{i}Q) +f_{i}\partial_{i}Q+\phi-L\in SoS. \tag{4}\]
One specifies an ansatz for the auxiliary function \(Q(\mathbf{x})\) with undetermined coefficients and then solves the problem using software packages such as SOSTOOLS [11], YALMIP [12] or SumOfSquares.jl [13] together with an appropriate SDP solver [18; 19; 20; 21]. The upper bound \(\langle\phi(\mathbf{x})\rangle\leq U\) can be obtained in a similar way by considering the optimization problem
\[\min_{Q(\mathbf{x})}U:\ -\frac{1}{2}\sigma_{kj}\partial_{k}(\sigma_{ij}\partial_{i}Q) -f_{i}\partial_{i}Q-\phi+U\in SoS. \tag{5}\]
If the ansatz is simple enough, the bounds \(L\) and \(U\) can be found analytically, with the computer algorithm suggesting the optimal form of the SoS polynomials. A more complex ansatz leads to sharper bounds; however, it requires more computational effort, and the numerical algorithm may fail to find an optimal solution.
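To illustrate the construction (3)-(4) on the simplest possible case, the following SymPy sketch applies it to a toy one-dimensional Ornstein-Uhlenbeck process (not considered in this paper), for which the exact value \(\langle x^{2}\rangle=1/2\) is recovered as both the lower and upper bound; the ansatz \(Q=x^{2}/2\) is an illustrative choice.

```python
import sympy as sp

# Toy example: dx = -x dt + dW, for which <x^2> = 1/2 exactly.
# Bound phi = x^2 using the ansatz Q = x^2 / 2; the generator term is
# (1/2) sigma^2 Q'' + f Q'.
x, L = sp.symbols('x L', real=True)
f, sigma = -x, 1
Q = x**2 / 2
phi = x**2

generator = sp.Rational(1, 2) * sigma**2 * sp.diff(Q, x, 2) + f * sp.diff(Q, x)
expr = sp.expand(generator + phi - L)   # must be a non-negative polynomial in x
print(expr)                             # -> 1/2 - L, non-negative iff L <= 1/2
# Hence <x^2> >= 1/2; repeating with -Q gives <x^2> <= 1/2, so the bounds coincide.
```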
## III Direct cascade
The direct cascade for the two-mode system is governed by the following non-dimensional system of equations [6]
\[\dot{b}_{1}=-\frac{2ib_{1}^{*}b_{2}}{\sqrt{\chi}}+\xi(t), \tag{6}\] \[\dot{b}_{2}=-\frac{ib_{1}^{2}}{\sqrt{\chi}}-b_{2}, \tag{7}\]
where \(b_{1}\) and \(b_{2}\) are complex envelopes of low- and high-frequency modes, respectively, \(\xi(t)\) is a complex Gaussian random force with zero mean \(\langle\xi(t)\rangle=0\) and the
variance \(\langle\xi(t)\xi^{*}(t^{\prime})\rangle=\delta(t-t^{\prime})\), and the asterisk means complex conjugate. In other words, real and imaginary parts of \(\xi(t)\) are independent white noises having zero mean values and intensities \(1/2\). The parameter \(\chi\) is the only control parameter in the system, and it quantifies the dissipation relative to interaction strength. In the statistical steady-state, the energy input rate equals the energy flux from the first mode to the second and the dissipation rate. For the amplitude of the dissipating mode and the flux, one finds \(\langle n_{2}\rangle\equiv\langle|b_{2}|^{2}\rangle=1/4\), \(\langle J\rangle\equiv\langle ib_{1}^{2}b_{2}^{*}+c.c.\rangle/\sqrt{\chi}=-1/2\).
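The steady-state identity \(\langle n_{2}\rangle=1/4\) can be checked with a few lines of direct simulation. The sketch below integrates Eqs. (6)-(7) with a simple Euler-Maruyama scheme; the integrator, time step, trajectory length and seed are illustrative assumptions and not the DNS setup used for the figures.

```python
import numpy as np

def dns_direct_cascade(chi=100.0, dt=1e-3, n_steps=1_000_000, seed=0):
    """Euler-Maruyama sketch of Eqs. (6)-(7).  Re and Im parts of xi are
    independent white noises of intensity 1/2, so each step receives a complex
    increment with standard deviation sqrt(dt/2) per component."""
    rng = np.random.default_rng(seed)
    b1, b2 = 0.1 + 0.0j, 0.0 + 0.0j
    n2_sum = 0.0
    for _ in range(n_steps):
        noise = np.sqrt(dt / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
        db1 = -2j * np.conj(b1) * b2 / np.sqrt(chi) * dt + noise
        db2 = (-1j * b1**2 / np.sqrt(chi) - b2) * dt
        b1, b2 = b1 + db1, b2 + db2
        n2_sum += abs(b2)**2
    return n2_sum / n_steps   # approaches <n2> = 1/4 once transients are negligible

print(dns_direct_cascade())
```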
In the limit \(\chi\ll 1\), the interaction between modes is strong, and the energy transfer is fast, so one may expect that the occupation numbers are close to energy equipartition \(\langle|b_{1}|^{2}\rangle\equiv\langle n_{1}\rangle=2\langle n_{2}\rangle\) corresponding to thermal equilibrium [6]. The DNS shows that the marginal distributions of the mode amplitudes are indeed close to the Gaussian statistics with the corresponding occupation numbers. Still, the phase between the modes \(\theta=\arg(b_{1}^{2}b_{2}^{*})\) is unevenly distributed, which indicates the presence of correlations between the modes [6].
In the opposite limit of weak interaction and strong noise, \(\chi\gg 1\), one expects that the driven mode needs much higher amplitude to provide for the flux: \(\langle n_{1}\rangle\gg\langle n_{2}\rangle\). In other words, the mode statistics are far from thermal equipartition. In this case, the asymptotic analytical solution for the whole distribution function was argued to be singular [6]:
\[\mathcal{P}(b_{1},b_{2})=\frac{2^{3/2}}{\pi^{3/2}\chi^{1/2}}\exp\left(-\frac{2 }{\chi}|b_{1}|^{4}\right)\delta\left(b_{2}+\frac{ib_{1}^{2}}{\sqrt{\chi}} \right). \tag{8}\]
We will use this result later for comparison with the results of our approach, which provides mutual validation.
Let us now apply the SoS optimization method described in the previous section. To proceed, we need to specify an ansatz for the auxiliary function \(Q\). We have found that the numerical procedure works better if \(Q\) is a polynomial with respect to \(n_{1},\ n_{2},\ J\) and takes the monomials \(n_{1}^{i}n_{2}^{j}J^{k}\) with the total degree of mode amplitudes less than \(d\geq 2i+2j+3k\). A more general ansatz does not improve estimates for the correlation functions, but the algorithm is less stable due to the expansion of the optimization space. Let us emphasize that for small values of \(d\), the bounds for the correlation functions can be obtained analytically, the computer algorithm operates for higher values of \(d\) and improves the result.
We begin to present our results with the intensity of the pumped mode. For \(d=4\) we obtain analytically [22]
\[\frac{\chi^{1/2}}{2\sqrt{2}}\leq\langle n_{1}\rangle\leq\frac{1}{2}\left(3- \sqrt{3}+\sqrt{12-6\sqrt{3}+\chi}\right), \tag{9}\]
and in the limit \(\chi\gg 1\), this implies \(\chi^{1/2}/2\sqrt{2}\leq\langle n_{1}\rangle\leq\sqrt{\chi}/2\). Inequality (9) means, in particular, that the intensity of the pumped mode is much greater than the intensity of the dissipated mode, and \(\langle n_{1}\rangle/\langle n_{2}\rangle\propto\sqrt{\chi}\). As the parameter \(d\) increases, the numerically found upper and lower bounds for \(d=10\) approach very closely the asymptotic \(\langle n_{1}\rangle=\sqrt{\chi/2\pi}\) following from expression (8), see Fig. 1a. In the opposite case \(\chi\ll 1\), the upper bound \(\langle n_{1}\rangle\leq 3-\sqrt{3}\) does not depend on \(\chi\) in qualitative agreement with DNS. The lower bound is not tight, and increasing the parameter \(d\) does not qualitatively change its behavior.
Similar results are also obtained for higher moments of \(n_{1}\). For the second moment and \(d=4\) we analytically find [22]
\[\chi^{2}r_{l}(\chi)\leq\langle n_{1}^{2}\rangle\leq\frac{\chi}{4}+\chi^{2}r_{u }(\chi), \tag{10}\]
where \(r_{l}(\chi)\) is the largest real root of the equation \(1-\chi^{2}r+8\chi^{3}r^{2}-16\chi^{4}r^{3}=0\) and \(r_{u}(\chi)\) is the smallest positive real root of the equation \(1+16\chi+(4\chi+152\chi^{2})r+(348\chi^{3}-32\chi^{4})r^{2}-216\chi^{5}r^{3}+16 \chi r^{4}=0\). In the limit \(\chi\gg 1\), the upper and lower bounds coincide with each other and therefore \(\langle n_{1}^{2}\rangle\rightarrow\chi/4\), while in the opposite case \(\chi\ll 1\), one obtains \(2^{-4/3}\chi^{2/3}\leq\langle n_{1}^{2}\rangle\lesssim 1.88\), where the upper bound reflects scaling consistent with DNS, see Fig. 1b. For even higher moments of \(n_{1}\), the values
Figure 1: Average values of mode intensities (a), their second (b), and higher moments (c) for the direct cascade. Solid and dashed lines show numerical (\(d=10\)) and analytical (\(d=4\)) results for the upper (red) and lower (blue) bounds. The square markers present results of DNS, and dash-dotted lines correspond to asymptotics following from relation (8). Expressions for the asymptotic dependence are shown in the panels.
in the limit \(\chi\gg 1\) can be determined numerically with good accuracy since the upper and lower bounds tend to each other as \(d\) increases. Analyzing the values of the moments, one can conclude that the statistics of the mode \(b_{1}\) are essentially non-Gaussian in agreement with expression (8), see Fig. 1c. This kind of analysis can help one to guess the marginal distribution function if it is not known a priori.
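The analytic \(d=4\) bounds of Eq. (10) are straightforward to evaluate numerically. The short NumPy sketch below transcribes the two root-finding problems stated after Eq. (10); it is an illustration added here, not part of the original computations, and the tolerance used to identify real roots is an arbitrary choice.

```python
import numpy as np

def second_moment_bounds(chi):
    """Evaluate the d = 4 bounds of Eq. (10): r_l is the largest real root of
    1 - chi^2 r + 8 chi^3 r^2 - 16 chi^4 r^3 = 0, and r_u is the smallest positive
    real root of the quartic given below Eq. (10)."""
    def real_roots(coeffs):
        roots = np.roots(coeffs)
        return [r.real for r in roots if abs(r.imag) < 1e-6 * max(1.0, abs(r))]

    r_l = max(real_roots([-16 * chi**4, 8 * chi**3, -chi**2, 1]))
    r_u = min(r for r in real_roots([16 * chi, -216 * chi**5,
                                     348 * chi**3 - 32 * chi**4,
                                     4 * chi + 152 * chi**2, 1 + 16 * chi]) if r > 0)
    return chi**2 * r_l, chi / 4 + chi**2 * r_u   # lower and upper bounds on <n1^2>

print(second_moment_bounds(100.0))   # both bounds tend to chi/4 as chi grows
```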
Next, we turn to the dissipating mode. Its mean intensity is determined exactly from the energy balance condition, \(\langle n_{2}\rangle=1/4\), and the upper and lower bounds for \(\langle n_{2}^{2}\rangle\) are shown in Fig. 1b. In the limit \(\chi\gg 1\), reasonable bounds are obtained only for relatively large values of \(d\geq 6\), so we do not present analytical results. In the opposite case \(\chi\ll 1\), the bounds demonstrate scaling consistent with DNS, but they are not close to each other. Analytically we obtain \(1/16\leq\langle n_{2}^{2}\rangle\leq 3/16\) for \(d=4\) and \(\chi\ll 1\)[22]. The analysis of higher moments for \(\chi\gg 1\) is presented in Fig. 1c, and the results are in agreement with the marginal distribution \(P(b_{2})=\frac{2^{1/2}}{\pi^{3/2}|b_{2}|}e^{-2|b_{2}|^{2}}\) following from expression (8).
The SoS programming method can also be used to analyze correlations between modes. To estimate the relative phase between the modes, we rewrite the initial equations (6)-(7) in terms of the real variables \(\rho_{1}=|b_{1}|\), \(\rho_{2}=|b_{2}|\), \(\theta=\arg(b_{1}^{2}b_{2}^{*})\):
\[\dot{\rho_{1}}=-\frac{2\rho_{1}\rho_{2}\sin\theta}{\sqrt{\chi}}+ \frac{1}{4\rho_{1}}+\frac{\zeta_{1}(t)}{\sqrt{2}}\,, \tag{11}\] \[\dot{\rho_{2}}=\frac{\rho_{1}^{2}\sin\theta}{\sqrt{\chi}}-\rho_{ 2},\] (12) \[\dot{\theta}=\frac{\rho_{1}^{2}-4\rho_{2}^{2}}{\rho_{2}\sqrt{ \chi}}\cos\theta+\frac{\sqrt{2}\zeta_{2}(t)}{\rho_{1}}, \tag{13}\]
where the overall phase drops out, and \(\zeta_{i}(t)\) is a real Gaussian noise with zero mean and the pair correlation function \(\langle\zeta_{i}(t)\zeta_{j}(t^{\prime})\rangle=\delta_{ij}\delta(t-t^{\prime})\)[22]. Although the right-hand sides of these equations are not polynomial, for any functions \(Q\) and \(\phi\) that are polynomial with respect to \(\rho_{1}\), \(\rho_{2}\), \(\sin\theta\), the expressions on the left-hand sides of Eqs. (4), (5) are polynomials divided by \(\rho_{1}^{2}\rho_{2}>0\). Thus, to apply the algorithm, it is sufficient to multiply the expressions in the optimization problems (4), (5) by \(\rho_{1}^{2}\rho_{2}>0\), which returns them to the class of polynomial functions; see details in Ref. [22]. Now the ansatz for the function \(Q\) is a polynomial in \(\rho_{1}\), \(\rho_{2}\), \(\sin\theta\) of degree \(d\). We have also found that the numerical procedure gives better results if we solve the optimization problem with the additional constraint \(1-\sin^{2}\theta\geq 0\)[22].
Fig. 2 shows the upper and lower bounds on \(\langle\cos^{2}\theta\rangle\). The value of \(\langle\cos^{2}\theta\rangle\to 0\) as \(\chi\rightarrow\infty\), which means that the phase \(\theta\rightarrow\pi/2\) (the point \(\theta=-\pi/2\) is not suitable because \(\langle J\rangle\equiv\langle-2\rho_{1}^{2}\rho_{2}\sin\theta\rangle/\sqrt{ \chi}=-1/2<0\)). The power-law fit \(\langle\cos^{2}\theta\rangle\propto\chi^{-q}\) results in \(q=1/3\) and \(q=2/5\) for the upper and lower bounds, respectively. We could not determine the bounds analytically since the algorithm gives reasonable estimates only for large values of \(d\). The overall picture is that the distribution function of the phase between the modes has a peak at \(\theta=\pi/2\), and its width decreases in a power-law manner with the parameter \(\chi\). These results complement Ref. [6], in which the narrowing of the phase distribution was not quantified.
## IV Inverse cascade
Now we turn to the inverse cascade where the high-frequency mode is pumped, and the low-frequency mode dissipates
\[\dot{b}_{1}=-\frac{2ib_{1}^{*}b_{2}}{\sqrt{\chi}}-b_{1}, \tag{14}\] \[\dot{b}_{2}=-\frac{ib_{1}^{2}}{\sqrt{\chi}}+\xi(t). \tag{15}\]
In the statistical steady-state, from the energy balance consideration, we find \(\langle n_{1}\rangle\equiv\langle|b_{1}|^{2}\rangle=1\) and \(\langle J\rangle\equiv\langle ib_{1}^{2}b_{2}^{*}+c.c.\rangle/\sqrt{\chi}=1\), which is true for any value of \(\chi\). The energy flux \(\langle J\rangle\) is positive that corresponds to the transfer of energy down the frequencies. As in the case of a direct cascade, in the limit \(\chi\ll 1\), one can expect the occupation numbers to be close to the energy equipartition \(\langle n_{1}\rangle=2\langle n_{2}\rangle\), although the statistics is not expected to be close to the product of two Gaussians corresponding to thermal equilibrium. In the opposite case \(\chi\gg 1\), the system is far from thermal equilibrium, and the pumped mode is expected to have larger intensity, \(\langle n_{2}\rangle\gg\langle n_{1}\rangle\). All these expectations were confirmed by DNS [6].
We found that we can get better bounds for correlation functions if we rewrite dynamic equations (14)-(15) in
Figure 2: Relative phase between modes for the direct cascade: numerically obtained upper (red) and lower (blue) bounds (circles, triangles, and diamonds are for \(d=10,~{}12,~{}14\)); the square markers present results of DNS.
terms of real variables \(\rho_{1}=|b_{1}|\), \(\rho_{2}=|b_{2}|\), \(\theta=\arg(b_{1}^{2}b_{2}^{*})\):
\[\dot{\rho_{1}}=-\frac{2\rho_{1}\rho_{2}\sin\theta}{\sqrt{\chi}}- \rho_{1}, \tag{16}\] \[\dot{\rho_{2}}=\frac{\rho_{1}^{2}\sin\theta}{\sqrt{\chi}}+\frac{1 }{4\rho_{2}}+\frac{\zeta_{1}(t)}{\sqrt{2}},\] (17) \[\dot{\theta}=\frac{\rho_{1}^{2}-4\rho_{2}^{2}}{\rho_{2}\sqrt{\chi }}\cos\theta+\frac{\zeta_{2}(t)}{\sqrt{2}\rho_{2}}, \tag{18}\]
where the overall phase drops out [22]. We also found that in contrast to the direct cascade, it is useful to extend the ansatz for the function \(Q\) by adding the term \(\log\rho_{1}\). The term \(\log\rho_{2}\) does not have a noticeable effect on the results. The left-hand sides of Eqs. (4), (5) should be multiplied by \(\rho_{2}^{2}>0\) so that they become polynomial and we can apply the SDP algorithm to solve the optimization problem [22].
In contrast to the direct cascade, we were able to obtain only lower bounds for the correlation functions in the case of the inverse cascade. The algorithm does not find a feasible solution for the upper bounds, even for a relatively large parameter value \(d=10\). Fortunately, the lower bounds are quite informative precisely in the turbulent limit of large \(\chi\) which we focus on. In particular, for the intensity of the pumped mode, we analytically find [22]
\[\langle n_{2}\rangle\geq\frac{\chi+1}{4}, \tag{19}\]
and both asymptotics at \(\chi\ll 1\) and \(\chi\gg 1\) give a scaling that agrees qualitatively with DNS, see Fig. 3a. Physically, inequality (19) means that in the limit \(\chi\gg 1\), the intensity of the pumped mode is much greater than the intensity of the dissipating mode, \(\langle n_{2}\rangle/\langle n_{1}\rangle\geq\chi/4\). We thus have shown that deviation from the equipartition is much stronger in the inverse cascade than in the direct one (where the ratio is \(\propto\sqrt{\chi}\)).
Similarly, we find for the amplitude fourth moment [22]
\[\langle n_{2}^{2}\rangle\geq\frac{(\chi+1)^{2}}{16},\quad\langle n_{1}^{2} \rangle\geq\frac{\chi+4}{3}. \tag{20}\]
The first condition is trivial and follows from the positive variance of the pumped mode intensity. The second condition means that in the limit \(\chi\gg 1\), the ratio \(\langle n_{1}^{2}\rangle/\langle n_{1}\rangle^{2}\geq\chi/3\), i.e., the statistics of the dissipated mode is intermittent. This agrees with the analysis carried out in Ref. [6], where it was shown that the \(\rho_{1}\) dynamics is a sequence of burst events, see also Fig. 4a. Between bursts, the amplitude \(\rho_{1}\) is close to zero, and during short bursts with a duration of order unity, \(\rho_{1}\) reaches large values \(\sim\sqrt{\chi}\). The time interval between bursts is \(\sim\chi\), and the correlation functions of \(\rho_{1}\) saturate on bursts. In Fig. 3b, we compare the obtained inequalities with DNS. All asymptotics demonstrate correct scaling with the parameter \(\chi\). Note that analytical inequalities are improved by the numerical algorithm when we increase the parameter \(d\).
Fig. 3c shows the upper bound \(\langle\cos^{2}\theta\rangle\leq U\), which is equivalent to the lower bound \(\langle\sin^{2}\theta\rangle\geq 1-U\). In the limit \(\chi\gg 1\), the value of \(\langle\cos^{2}\theta\rangle\to 0\), which means that the phase \(\theta\rightarrow-\pi/2\), since \(\langle J\rangle=\langle-2\rho_{1}^{2}\rho_{2}\sin\theta\rangle/\sqrt{\chi}=1\). The power-law fit \(\langle\cos^{2}\theta\rangle\propto\chi^{-q}\) results in \(q=1/2\), although the fitting interval is short and, with increasing \(\chi\), we have to increase \(d\) so that the algorithm finds a feasible solution. Compared to the direct cascade, the phase fluctuations in the inverse cascade are smaller for the same value of \(\chi\). Based on this observation, we can simplify equations (16)-(18) by assuming that the relative phase is locked at \(\theta=-\pi/2\). Then, we obtain
\[\dot{\rho_{1}}=\frac{2\rho_{1}\rho_{2}}{\sqrt{\chi}}-\rho_{1}, \tag{21}\] \[\dot{\rho_{2}}=-\frac{\rho_{1}^{2}}{\sqrt{\chi}}+\frac{1}{4\rho_{ 2}}+\frac{\zeta(t)}{\sqrt{2}}, \tag{22}\]
and these equations can be used to improve the above bounds in the limit \(\chi\gg 1\). In particular, for the intensity of the pumped mode, we analytically obtain \(\langle n_{2}\rangle\geq 5\chi/16\)[22], which is closer to DNS results, see Fig. 3a.
The reduction in the number of degrees of freedom allows us to use larger values of parameter \(d\) in the limit
Figure 3: Average values of mode intensities (a), their second moments (b), and the relative phase (c) for the inverse cascade. Solid and dashed lines show numerical (\(d=10\)) and analytical results for upper and lower bounds. The square markers present results of DNS, and dash-dotted lines correspond to asymptotic bounds in the limit \(\chi\gg 1\). Expressions for the analytic results are shown in the panels.
\(\chi\gg 1\) since the size of the optimization space is also reduced. Moreover, we were able to obtain upper bounds (\(d=32\)) for correlation functions \(\langle n_{2}\rangle\) and \(\langle n_{1}^{2}\rangle\), which are close to the lower bounds, see Figs. 3a and 3b. To estimate the higher moments of both modes, we will further use the values of lower bounds obtained for \(d=32\). In this way, for the pumped mode, we found the usual scaling \(\langle n_{2}^{k}\rangle\propto\langle n_{2}\rangle^{k}\)[22]. In the intervals between bursts, the amplitude \(\rho_{1}\) is close to zero, and, according to Eq. (15), one can expect that the statistics of the pumped mode is close to Gaussian. The value \(\langle n_{2}\rangle\) can be estimated as a diffusion displacement during the time between bursts, \(\langle n_{2}\rangle\sim\chi\), in agreement with the results reported earlier. The DNS data for the probability density function confirm this qualitative analysis, although the agreement is not perfect, see Fig. 4b.
For the dissipating mode, we found that \(\langle n_{1}^{k}\rangle\propto\chi^{k-1}\)[22], and such scaling is a fingerprint of the intermittency discussed above. This suggests that in the limit \(\chi\gg 1\), the distribution function of the dissipating mode has the following form:
\[P(n_{1}/\chi)=\frac{1}{\chi}F(n_{1}/\chi). \tag{23}\]
DNS confirms this hypothesis and Fig. 4c shows the reduced probability density function \(F(x)\), which is proportional to \(1/x\) for \(x\ll 1\) and has an exponential cutoff for \(x\gtrsim 1\).
A simplified model for the dynamics of the dissipating mode explains the exponent \(-1\) of the power law and resolves the singularity at \(x\to 0\). During the burst, the amplitude \(\rho_{2}\) of the pumped mode quickly diminishes (see Fig. 4a), therefore the second term in Eq. (14) dominates and the subsequent dynamics of the dissipating mode corresponds to the exponential decay, \(\dot{x}=-2x\). For a given \(x\), the probability density is determined by the time spent in the vicinity of \(x\) and thus proportional to \(F(x)\propto 1/|\dot{x}|\propto 1/x\). The exponential dynamics is valid until the amplitude of the pumped mode is below \(\sim\sqrt{\chi}\). In this regime, the pumped mode grows due to diffusion and reaches the level of \(\sqrt{\chi}\) in time of the order of \(\chi\), so the dependence \(F(x)\propto 1/x\) holds for \(x\gtrsim e^{-2\chi}\). This value should be taken as the lower limit of integration in the expression for the total probability to resolve the singularity. Since the minimum value of \(x\) depends on the parameter \(\chi\), the height of the first bin on the histograms is proportional to \(\chi\). Except for this, the shape of the curve \(F(n_{1}/\chi)\) is universal, see Fig. 4c.
When calculating the positive moments of \(x\), the singularity disappears and the lower limit of integration can be replaced by zero. From the condition \(\langle n_{1}\rangle=1\) we obtain \(\int_{0}^{\infty}xF(x)dx=1\), and so the function has the form \(F(x)\simeq ax^{-1}e^{-ax}\). Numerical fitting leads to \(a\approx 1.7\), and the corresponding curve is shown in Fig. 4c by a dashed line.
## V Conclusion
So what have we learned about the statistics of the far-from-equilibrium state of the system using SoS programming? We obtained computer-aided analytical and numerical bounds for correlation functions in the system of two interacting modes in the regimes corresponding to both direct and inverse energy cascades. The bounds revealed the scaling of mode intensities and their higher moments with the Reynolds number \(\chi\), which shows how far the system is from the energy equipartition corresponding to thermal equilibrium. By combining the scaling dependence with DNS, we collapsed the probability densities of the dissipating and pumped modes onto universal curves in the highly intermittent regime of the inverse energy cascade and determined their shapes. Analyzing cross-mode correlations, we showed that the relative phase between the modes tends to \(\pi/2\) and \(-\pi/2\) in the direct and inverse energy cascades, respectively, and estimated the rates of these transitions.
The method applied does not work well in the opposite limit of low Reynolds number \(\chi\ll 1\). This can be explained by the fact that the term produced by the white noise forcing in the optimization problems (4) has a relatively small amplitude and, therefore, the expression that is analyzed practically coincides with the case of a deterministic problem, which has a trivial solution
Figure 4: (a) DNS fragment of the mode dynamics for the inverse cascade demonstrate intermittency of the dissipated mode, \(\log_{10}\chi=2.4\); (b) Probability density for the pumped mode in the inverse cascade. The dashed line corresponds to the exponential distribution, \(P(x)=e^{-x}\); (c) Reduced probability density \(F(n_{1}/\chi)\) for the dissipating mode in the inverse cascade, see Eq. (23). The inset shows the same data in the log-log scale, and the dashed line corresponds to the numerical fit \(F(x)=1.7x^{-1}e^{-1.7x}\). The bin size is \(0.05\) for the main panel and \(0.001\) for the inset.
\(b_{1}=b_{2}=0\), and for this reason the lower bound does not exist. Methods for overcoming this issue are discussed in Ref. [8], but we did not use them since the regime \(\chi\ll 1\) is less interesting for us from the physical point of view.
Let us also note that the direct and inverse cascades are very different, although the equations at first glance may seem similar. In the direct cascade, the dissipating mode is excited by the pumped mode in an additive way. For the inverse cascade, this process is multiplicative, and general experience suggests that this regime is more complicated because of higher temporal intermittency. Our analysis supports this intuition: in the inverse cascade we were able to obtain only lower bounds for the correlation functions, while for the direct cascade we obtained both lower and upper bounds, and they are so close that we can determine the numerical values of the correlation functions with an accuracy comparable to DNS.
It would be promising to extend the present study to other shell models of turbulence with longer chains of resonantly interacting modes [23; 5]. The main difficulty is that for the long chains the extreme modes in the correlation function of interest will be coupled to the modes adjacent to them, and therefore the dynamic equations for the subset of modes involved in the correlation function are not closed. A similar problem arose when applying the sum-of-squares method to the Galerkin expansion of the Navier-Stokes equation in Ref. [7]. Although the generalization is not straightforward, the development of a complementary approach for this analysis significantly advances this area of research.
###### Acknowledgements.
The work was supported by the Excellence Center at WIS, grant 662962 of the Simons Foundation, grant 075-15-2022-1099 of the Russian Ministry of Science and Higher Education, grants 823937 and 873028 of the EU Horizon 2020 programme, and the BSF grants 2018033 and 2020765. V.P. is grateful to the Weizmann Institute, where most of the work was done, for its hospitality. E.M. thanks the Benoziyo Endowment Fund for the Advancement of Science for funding his visit to the Weizmann Institute of Science. DNS were performed on the cluster of the Landau Institute.
|
2308.16467 | A Posteriori Analysis and Adaptive Algorithms for Blended Type
Atomistic-to-Continuum Coupling with Higher-Order Finite Elements | The efficient and accurate simulation of material systems with defects using
atomistic- to-continuum (a/c) coupling methods is a topic of considerable
interest in the field of computational materials science. To achieve the
desired balance between accuracy and computational efficiency, the use of a
posteriori analysis and adaptive algorithms is critical. In this work, we
present a rigorous a posteriori error analysis for three typical blended a/c
coupling methods: the blended energy-based quasi-continuum (BQCE) method, the
blended force-based quasi-continuum (BQCF) method, and the atomistic/continuum
blending with ghost force correction (BGFC) method. We employ first and
second-order finite element methods (and potentially higher-order methods) to
discretize the Cauchy-Born model in the continuum region. The resulting error
estimator provides both an upper bound on the true approximation error and a
lower bound up to a theory-based truncation indicator, ensuring its reliability
and efficiency. Moreover, we propose an a posteriori analysis for the energy
error. We have designed and implemented a corresponding adaptive mesh
refinement algorithm for two typical examples of crystalline defects. In both
numerical experiments, we observe optimal convergence rates with respect to
degrees of freedom when compared to a priori error estimates. | Yangshuai Wang | 2023-08-31T05:21:45Z | http://arxiv.org/abs/2308.16467v1 | A posteriori analysis and adaptive algorithms for blended type atomistic-to-continuum coupling with higher-order finite elements
###### Abstract.
The efficient and accurate simulation of material systems with defects using atomistic-to-continuum (a/c) coupling methods is a topic of considerable interest in the field of computational materials science. To achieve the desired balance between accuracy and computational efficiency, the use of _a posteriori_ analysis and adaptive algorithms is critical. In this work, we present a rigorous _a posteriori_ error analysis for three typical blended a/c coupling methods: the blended energy-based quasi-continuum (BQCE) method, the blended force-based quasi-continuum (BQCF) method, and the atomistic/continuum blending with ghost force correction (BGFC) method. We employ first and second-order finite element methods (and potentially higher-order methods) to discretize the Cauchy-Born model in the continuum region. The resulting error estimator provides both an upper bound on the true approximation error and a lower bound up to a theory-based truncation indicator, ensuring its reliability and efficiency. Moreover, we propose an _a posteriori_ analysis for the energy error. We have designed and implemented a corresponding adaptive mesh refinement algorithm for two typical examples of crystalline defects. In both numerical experiments, we observe optimal convergence rates with respect to degrees of freedom when compared to _a priori_ error estimates.
## 1. Introduction
Atomistic-to-continuum (a/c) coupling methods are a type of concurrent multi-scale scheme [37, 15] that have received significant attention from both the engineering and mathematical communities over the past two decades [27, 23, 36, 20, 26, 38]. The fundamental principle behind a/c coupling methods involves applying a more accurate atomistic model in a localized defected region, while introducing the continuum model (such as the Cauchy-Born rule) for areas far from the defect cores.
A comprehensive overview of various a/c coupling methods for defect simulation can be found in [15, 23], while [20] provides a rigorous analysis of these methods. In this work, we mainly focus on the blended types of a/c coupling methods, including the BQCE (blended energy-based quasi-continuum) method [22, 16], the BQCF (blended force-based quasi-continuum) method [19, 9], and the recently developed BGFC (atomistic/continuum blending with ghost force correction) method [33, 13].
One of the fundamental challenges in a/c coupling methods is to determine the optimal assignment of atomistic and continuum regions, as well as the mesh structure, in order to achieve a (quasi-)optimal balance between accuracy and efficiency. _A priori_ choices, although feasible, often result in sub-optimal distribution of computational resources and are only suitable for simple setups such as single point defects. Therefore, _a posteriori_ analysis and corresponding adaptive algorithms are essential for the efficient implementation and simulation of a/c coupling methods for crystalline defects. However, the development of _a posteriori_ analysis and adaptive algorithms for a/c coupling methods in two or three dimensions is relatively recent, and it serves as a cornerstone for the application of these methods in real-world material systems.
The first approach to a/c coupling methods, from a heuristic engineering standpoint, was the goal-oriented _a posteriori_ error estimate. Various applications of this approach to different model problems can be found in [25, 35, 39, 2, 1]. However, the reliability of the error estimator cannot be guaranteed. On the other hand, the residual-based _a posteriori_ error estimate provides a quantitative estimate of the error in a certain energy-norm. This approach was first analyzed in [28, 32] for consistent a/c coupling methods in one dimension and extended to two dimensions for the GRAC method with nearest neighbor interaction in [40], and then to the case of finite range interactions in [18]. However, these studies were limited by the computational cost incurred in evaluating the modeling residual. To address this issue, the authors of [44] proposed a theory-based approximation for the residual-based _a posteriori_ error estimator that significantly reduces the computational effort. Although the error estimators presented in these studies have been shown to be reliable, their derivation and construction are considered ad hoc and complicated, as discussed in [41]. Recently, a residual force-based _a posteriori_ error estimator was developed for blended a/c coupling methods (BQCE, BQCF, and BGFC), which is relatively intuitive and has been shown to be reliable [45]. However, it is important to note that the aforementioned studies utilized \(\mathcal{P}_{1}\) finite elements to discretize the Cauchy-Born approximation, and adaptive computations with higher-order finite elements and the _a posteriori_ analysis for the energy error have not been explored in the literature to the best of our knowledge.
The aim of this study is to present an improved residual-based _a posteriori_ error estimation method for blended atomistic/continuum (a/c) coupling methods with higher-order finite elements. The proposed error estimator is developed by solving an auxiliary Poisson equation, where the interpolated residual forces serve as the source term. This construction is based on the Riesz representation of the residual in the dual norm and builds upon our prior work on quantum mechanics/molecular mechanics (QM/MM) coupling methods [42]. The resulting error estimator provides both upper and lower bounds for the true approximation error, subject to a controllable truncation indicator. An adaptive algorithm is developed that allows for real-time adjustments to the atomistic, blending, and continuum mesh regions, enabling the optimal balance between these regions to be achieved. We also examine the truncation error and define a theory-based indicator to control the computational domain size. Finally, an error estimator for the energy error is presented. Our tests include different types of crystalline defects that have not been previously considered in the literature on adaptive a/c coupling methods. The numerical results demonstrate that our adaptive algorithm achieves optimal convergence rates consistent with the _a priori_ error estimate.
The scope of this paper is primarily focused on three specific blended a/c coupling methods for Bravais lattices with point defects in two dimensions, with the aim of presenting clear ideas and adaptive strategies. However, the analysis and methodology proposed can be extended to other consistent multi-scale coupling methods, multilattice crystals, and problems in three dimensions. The author plans to investigate the generalization of these approaches in future research and provides a discussion of potential future developments in Section 5.
### Outline
This paper is structured as follows: Section 2 introduces the atomistic and continuum models, along with three typical blended atomistic/continuum (a/c) coupling schemes, including the BQCE, BQCF, and BGFC methods, which are the focus of this work. In Section 3, we review the _a priori_ error estimate for the blended a/c coupling methods with higher order finite elements, which has been previously established in the literature [16, 34], providing a solid foundation for our analysis. We then present the rigorous residual-based _a posteriori_ error estimates (Theorem 3.4), which offer both upper and lower bounds on the true approximation error, subject to a controlled truncation indicator. Section 4.2 presents our approach to assigning local error contributions and proposing an adaptive algorithm for the blended a/c coupling methods,
based on the theoretical results established in the previous section. We report on numerical results obtained using this adaptive algorithm for crystalline defects that have not been previously considered in the literature, and provide a thorough discussion and explanation. Our conclusions and suggestions for future research are presented in Section 5. Additional supporting materials are provided in the Appendices.
### Notations
We use the symbol \(\langle\cdot,\cdot\rangle\) to denote an abstract duality pairing between a Banach space and its dual space. The symbol \(|\cdot|\) normally denotes the Euclidean or Frobenius norm, while \(\|\cdot\|\) denotes an operator norm. For the sake of brevity of notation, we will denote \(A\backslash\{a\}\) by \(A\backslash a\), and \(\{b-a\ |\ b\in A\}\) by \(A-a\). For \(E\in C^{2}(X)\), the first and second variations are denoted by \(\langle\delta E(u),v\rangle\) and \(\langle\delta^{2}E(u)v,w\rangle\) for \(u,v,w\in X\). For a finite set \(A\), we will use \(\#A\) to denote the cardinality of \(A\). For second order tensors \(\mathsf{A}\) and \(\mathsf{B}\), we denote \(\mathsf{A}:\mathsf{B}=\sum_{i,j}\mathsf{A}_{ij}\mathsf{B}_{ij}\). The closed ball with radius \(r\) and center \(x\) is denoted by \(B_{r}(x)\), or \(B_{r}\) if the center is the origin. To further simplify notation we will often write \(\lesssim\) to mean \(\leq C\) as well as \(\widetilde{\sim}\) to mean both \(\lesssim\) and \(\gtrsim\). We use the standard definitions and notations \(L^{p}\), \(W^{k,p}\), \(H^{k}\) for Lebesgue and Sobolev spaces. In addition we define the homogeneous Sobolev spaces \(\dot{H}^{k}(\Omega):=\big{\{}f\in H^{k}_{\rm loc}(\Omega)\,|\,\nabla^{k}f\in L ^{2}(\Omega)\big{\}}\).
## 2. Atomistic-to-Continuum (A/C) Coupling Model
In this section, we present the atomistic model and its three blended a/c coupling approximations, namely the energy-based blended quasi-continuum (BQCE) method, the force-based blended quais-continuum (BQCF) method and the blended ghost force correction (BGFC) method. For the sake of simplicity, we mainly consider the point defects in two dimensions. The extension to three dimensions and more complex defects such as dislocations and cracks will require substantial additional efforts [3, 4], which will be explored in our future work. We keep the presentation of this section as concise as possible since much of the details can be found in various earlier works [20, 16, 34, 22, 13].
### Atomistic model
Let \(\Lambda^{\rm hom}=\mathsf{AZ}^{2}\) with some non-singular matrix \(\mathsf{A}\in\mathbb{R}^{2\times 2}\) be a perfect single lattice possessing no defects and \(\Lambda\subset\mathbb{R}^{2}\) be the corresponding single lattice with some point defects. Without loss of generality, for the case of a single defect, we assume that it is contained within a ball \(B_{R_{\rm DEF}}\) for \(R_{\rm DEF}>0\); that is, \(\Lambda\cap B_{R_{\rm DEF}}\) is finite and \(\Lambda\setminus B_{R_{\rm DEF}}=\Lambda^{\rm hom}\setminus B_{R_{\rm DEF}}\). The case with multiple local defects can be similarly defined [44].
The deformed configuration of the infinite lattice \(\Lambda\) is a map \(y\in\mathscr{U}\) with \(\mathscr{U}:=\{v:\Lambda\to\mathbb{R}^{2}\}\), and it can be decomposed as
\[y(\ell)=\ell+u(\ell),\qquad\forall\ \ell\in\Lambda. \tag{2.1}\]
We define the interaction neighborhood for each \(\ell\in\Lambda\) by \(\mathcal{N}_{\ell}:=\{\ell^{\prime}\in\Lambda\ |\ 0<|\ell^{\prime}-\ell|\leq r_{\rm cut}\}\) with a given cut-off radius \(r_{\rm cut}\). We also denote the interaction range \(\mathcal{R}_{\ell}:=\{\ell^{\prime}-\ell\ |\ \ell^{\prime}\in\mathcal{N}_{\ell}\}\). For each atom \(\ell\in\Lambda\), we define the finite difference stencil for \(u:\Lambda\to\mathbb{R}^{d}\)
\[Du(\ell):=\{D_{\rho}u(\ell)\}_{\rho\in\mathcal{R}_{\ell}}:=\{u(\ell+\rho)-u( \ell)\}_{\rho\in\mathcal{R}_{\ell}}.\]
For \(u\in\mathscr{U}\), the nodal interpolant of \(u\) with respect to the background mesh is denoted as \(I_{a}u\)[34, 12, 16]. Identifying \(u=I_{a}u\), we can define the piecewise constant gradient \(\nabla u=\nabla I_{a}u:\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\), and therefore the corresponding functional space of finite-energy displacements is defined by
\[\mathscr{U}^{1,2}(\Lambda):=\big{\{}u:\Lambda\to\mathbb{R}^{d}\ \big{|}\ \| \nabla u\|_{L^{2}(\mathbb{R}^{d})}<\infty\big{\}}. \tag{2.2}\]
We also define the following subspace of compact displacements
\[\mathscr{U}^{\rm c}(\Lambda):=\big{\{}u:\Lambda\to\mathbb{R}^{d}\ \big{|}\ \exists\ R>0\ \text{s.t.}\ u=\text{const in}\ \Lambda\setminus B_{R}\big{\}}. \tag{2.3}\]
The site potential is a collection of mappings \(V_{\ell}:(\mathbb{R}^{3})^{\mathcal{R}_{\ell}}\to\mathbb{R}\), which represents the energy distributed to each atomic site in \(\Lambda\). We refer to [12, Section 2] for the detailed discussions on the assumptions of general site potentials. In this work, we will use the well-known EAM (Embedded Atom Method) model [7] throughout the numerical experiments (cf. Section 4.2).
Furthermore, we assume that \(V_{\ell}(\mathbf{0})=0\) for all \(\ell\in\Lambda\). As a matter of fact, if \(V(\mathbf{0})\neq 0\), then it can be replaced with \(V_{\ell}(Du)\equiv V_{\ell}(Du)-V_{\ell}(\mathbf{0})\); that is, \(V\) should be understood as a site energy difference. We can formally define the energy-difference functional
\[\mathcal{E}^{\rm a}(u)=\sum_{\ell\in\Lambda}V_{\ell}\big{(}Du(\ell)\big{)}. \tag{2.4}\]
It was shown in [12] that \(\mathcal{E}^{\rm a}(u)\) is well-defined on the space \(\mathscr{U}^{1,2}(\Lambda)\).
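For readers unfamiliar with this notation, the following Python sketch assembles the finite-difference stencils and the site-energy sum (2.4) on a toy square lattice (\(\mathsf{A}=\mathsf{I}\)), with a harmonic pair potential standing in for the EAM site energy used in the actual experiments; the cutoff, potential, and lattice choice are illustrative assumptions only.

```python
import numpy as np

def lattice_sites(N=10):
    """Toy square lattice (A = identity) on a finite window of (2N+1)^2 sites."""
    return np.array([[i, j] for i in range(-N, N + 1) for j in range(-N, N + 1)], float)

def atomistic_energy(sites, u, r_cut=1.5, pair=lambda r: (r - 1.0) ** 2):
    """Sketch of E^a(u) = sum_l V_l(Du(l)) with a toy pair potential.  The reference
    value pair(|rho|) is subtracted so that V_l(0) = 0 (energy-difference convention).
    u is an (n, 2) array of displacements."""
    E = 0.0
    for i, x in enumerate(sites):
        for j, y in enumerate(sites):
            rho = y - x
            if 0 < np.linalg.norm(rho) <= r_cut:            # l' in the neighborhood N_l
                D_rho = rho + u[j] - u[i]                    # deformed finite difference
                E += 0.5 * (pair(np.linalg.norm(D_rho)) - pair(np.linalg.norm(rho)))
    return E

sites = lattice_sites()
print(atomistic_energy(sites, np.zeros_like(sites)))   # 0.0 for the reference lattice
```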
We can now rigorously formulate the variational problem for the equilibrium state as
\[u^{\rm a}\in\arg\min\big{\{}\mathcal{E}^{\rm a}(u),\ u\in\mathscr{U}^{1,2}( \Lambda)\big{\}}, \tag{2.5}\]
where "\(\arg\min\)" is understood as the set of local minimizers.
In order to conduct an analysis of the approximation error, a stronger stability condition is necessary. However, this condition can only be rigorously justified for certain simple physical models. Therefore, we present it here as an assumption.
**Assumption 2.1**.: \(\exists\ \gamma>0\)_, such that \(\big{\langle}\delta^{2}\mathcal{E}^{\rm a}(u^{\rm a})v,v\big{\rangle}\geq \gamma\|v\|_{\mathscr{U}^{1,2}}^{2}\qquad\forall\ v\in\mathscr{U}^{1,2}( \Lambda).\)_
### A/C coupling methods
Atomistic-to-continuum (a/c) coupling methods are a class of concurrent multiscale methods that hybridize atomistic and continuum models, aiming to achieve an optimal balance between accuracy and computational complexity. We refer to [15, 21, 24] for an extensive overview and benchmarks. In this work, we mainly focus on the blended types of a/c coupling methods since they are of practical interest and outperform most alternative approaches [34].
#### 2.2.1. Domain decomposition and solution space
Due to the infinite nature of the space in which (2.5) is defined, it is impractical to solve it computationally. A common approach to obtain a feasible approximation is to impose a clamped boundary condition. Therefore, we limit the infinite lattice \(\Lambda\) to a finite computational domain \(\Omega\) with a radius of \(R_{\Omega}\).
Next, the computational domain \(\Omega=\Omega^{\rm a}\cup\Omega^{\rm b}\cup\Omega^{\rm c}\subset\mathbb{R}^{2}\) can be decomposed into three regions: the _atomistic region_ \(\Omega^{\rm a}\) with radius \(R^{\rm a}\), the _blending region_ \(\Omega^{\rm b}\) with width \(L^{\rm b}\), and the _continuum region_ \(\Omega^{\rm c}\). We define the set of core atoms \(\Lambda^{\rm a}:=\Lambda\cap\Omega^{\rm a}\) and the set of blended atoms \(\Lambda^{\rm b}:=\Lambda\cap\Omega^{\rm b}\). Let \(\mathcal{T}^{\rm ab}\) be the _canonical_ background mesh induced by \(\Lambda^{\rm ab}:=\Lambda^{\rm a}\cup\Lambda^{\rm b}\), and \(\mathcal{T}^{\rm c}_{h}\) be a shape-regular tetrahedral partition of the continuum region. We denote \(\mathcal{T}_{h}=\mathcal{T}^{\rm ab}\bigcup\mathcal{T}^{\rm c}_{h}\) as the tetrahedral partition. The subscript \(h\) represents the _coarse-graining_ in the continuum region by applying the finite element methods. See Figure 2.1 for an illustration of \(\mathcal{T}_{h}\). The set of nodes of \(\mathcal{T}_{h}\) is denoted by \(\mathcal{X}_{h}\). Let \(\text{DoF}:=\#\mathcal{X}_{h}\) and \(\Omega_{h}=\bigcup_{T\in\mathcal{T}_{h}}T\).
We consider a blending function \(\beta\in C^{2,1}(\mathbb{R}^{2})\) satisfying \(\beta=0\) in \(\Omega^{\rm a}\), \(\beta=1\) in \(\Omega^{\rm c}\) and \(\text{supp}(\nabla\beta)\subset\Omega^{\rm b}\). The choice of \(\beta\) heavily influences the error estimates of the blended a/c coupling methods; we refer to [16, 20, 22] for a detailed discussion. In this work, following the construction in [22, 34], \(\beta\) is obtained in a preprocessing step by approximately minimizing \(\|\nabla^{2}\beta\|_{L^{2}}\).
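For illustration only, a simple closed-form radial blending function satisfying the stated constraints is sketched below; it is a convenient alternative for quick experiments and is not the \(\|\nabla^{2}\beta\|_{L^{2}}\)-minimizing function used in this work.

```python
import numpy as np

def radial_blending(x, R_a, R_c):
    """C^2 radial blending: beta = 0 for |x| <= R_a, beta = 1 for |x| >= R_c, with a
    quintic transition s(t) = 6t^5 - 15t^4 + 10t^3 whose first and second derivatives
    vanish at both ends of the blending annulus R_a <= |x| <= R_c."""
    r = np.linalg.norm(x, axis=-1)
    t = np.clip((r - R_a) / (R_c - R_a), 0.0, 1.0)
    return 6 * t**5 - 15 * t**4 + 10 * t**3
```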
If we use \(\mathcal{P}_{1}\) finite element to discretize the Cauchy-Born continuum model, the space of _coarse-grained_ displacements is then given by
\[\mathscr{U}_{h}^{(1)}:=\big{\{}u_{h}\in C(\mathbb{R}^{d};\mathbb{R}^{d})\ \big{|}\ u_{h}\in\mathcal{P}_{1}(\mathcal{T}_{h}),\,u_{h}=0\text{ in }\mathbb{R}^{d}\setminus\Omega_{h}\big{\}}. \tag{2.6}\]
To achieve the optimal rates of convergence in terms of the number of degrees of freedom (DoF) among all a/c coupling methods employing Cauchy-Born model in the continuum region [13, Theorem 2.1], we can apply \(\mathcal{P}_{2}\) finite element method in the continuum region. Hence, we decompose \(\mathcal{T}_{h}=\mathcal{T}_{h}^{(1)}\cup\mathcal{T}_{h}^{(2)}\), where
\[\mathcal{T}_{h}^{(1)}=\big{\{}T\in\mathcal{T}_{h}\ \big{|}\ \beta|_{T}<1\big{\}},\]
and replace \(\mathscr{U}_{h}\) with the approximate space
\[\mathscr{U}_{h}^{(2)}:=\big{\{}u_{h}\in C(\mathbb{R}^{d};\mathbb{R}^{d})\ \big{|}\ u_{h}\in\mathcal{P}_{1}(\mathcal{T}_{h}^{(1)}),\,u_{h}\in\mathcal{P} _{2}(\mathcal{T}_{h}^{(2)}),\,u_{h}=0\text{ in }\mathbb{R}^{d}\setminus\Omega_{h}\ \big{\}}. \tag{2.7}\]
That is, we retain the \(\mathcal{P}_{1}\) discretization in the fully refined atomistic and blending region, but employ \(\mathcal{P}_{2}\) finite elements in the continuum region.
#### 2.2.2. Cauchy-Born approximation
To reduce the number of degrees of freedom while preserving considerable accuracy, we first define a continuum approximation by the Cauchy-Born rule, which is a typical choice in the multiscale context [11, 31]. The Cauchy-Born energy density functional \(W:\mathbb{R}^{d\times d}\to\mathbb{R}\) reads
\[W(\mathsf{F}):=\det(\mathsf{A}^{-1})\cdot V(\mathsf{F}\mathsf{R}),\]
where \(V\) is the homogeneous site energy potential on \(\Lambda^{\text{hom}}\).
We then define the pure Cauchy-Born finite element functional [16],
\[\mathcal{E}_{h}^{\text{cb}}(u_{h}):=\int_{\Omega_{h}}Q_{h}\big{[}W(\nabla u_{ h})\big{]}\,\mathrm{d}x, \tag{2.8}\]
where \(Q_{h}\) is the quadrature operator. In particular, it is the \(\mathcal{P}_{0}\) midpoint interpolation operator when the \(\mathcal{P}_{1}\) finite element method is used in the continuum region, i.e., \(u_{h}\in\mathscr{U}_{h}^{(1)}\). When \(u_{h}\in\mathscr{U}_{h}^{(2)}\), for stability analysis [34, Section 4.1], the operator \(Q_{h}\) must be adjusted to provide a third-order quadrature scheme (e.g., the face midpoint trapezoidal rule) so that \(\nabla u_{h}\otimes\nabla u_{h}\in\mathscr{U}_{h}^{(2)}\) can be integrated exactly.

Figure 2.1. Computational domain, finite element grid and atomistic region as used in the construction of blended a/c coupling methods. The colors of the spheres in the right sub-figure represent the value of the blending function (red for \(\beta=0\) and blue for \(\beta=1\)).
#### 2.2.3. BQCE method
We define the BQCE energy functional as
\[\mathcal{E}_{h}^{\mathrm{bqce}}(u_{h}):=\sum_{\ell\in\Lambda\cap\Omega_{h}}\big{(}1-\beta(\ell)\big{)}V_{\ell}\big{(}Du_{h}(\ell)\big{)}+\int_{\Omega_{h}}Q_{h}\big{[}\beta(x)W(\nabla u_{h})\big{]}\,\mathrm{d}x. \tag{2.9}\]
The BQCE method is to find
\[u_{h}^{\mathrm{bqce}}\in\arg\min\big{\{}\mathcal{E}_{h}^{\mathrm{bqce}}(u_{h}),\ u_{h}\in\mathscr{U}_{h}^{(1)}\big{\}}. \tag{2.10}\]
According to previous studies [34, 8], using higher-order finite elements in the blending region does not result in an improved BQCE method as the blending error dominates. Therefore, we do not take into account the \(\mathcal{P}_{2}\) finite element when considering the BQCE method.
#### 2.2.4. BQCF method
While the BQCE method (2.9) blends atomistic and continuum energies, the BQCF method [16, 17] blends atomistic and continuum forces. It is worthwhile mentioning that there is no energy functional for this force-based method.
Recall that \(\beta\in C^{2,1}(\mathbb{R}^{2})\) is a blending function, and if we employ the \(\mathcal{P}_{1}\) finite element method, then the BQCF operator is the nonlinear map \(\mathcal{F}_{h}^{\mathrm{bqcf}}:\mathscr{U}_{h}^{(1)}\to(\mathscr{U}_{h}^{(1)})^{*}\), defined by
\[\langle\mathcal{F}_{h}^{\mathrm{bqcf}}(u_{h}),v_{h}\rangle:=\langle\delta\mathcal{E}^{\mathrm{a}}(u_{h}),(1-\beta)v_{h}\rangle+\langle\delta\mathcal{E}_{h}^{\mathrm{cb}}(u_{h}),\beta v_{h}\rangle. \tag{2.11}\]
In the P1-BQCF method we solve the following variational nonlinear system
\[\langle\mathcal{F}_{h}^{\mathrm{bqcf}}(u_{h}^{\mathrm{bqcf},1}),v_{h}\rangle=0,\quad\forall v_{h}\in\mathscr{U}_{h}^{(1)}. \tag{2.12}\]
If one replaces \(\mathscr{U}_{h}^{(1)}\) by \(\mathscr{U}_{h}^{(2)}\), one obtains the P2-BQCF method as well as its solution \(u_{h}^{\mathrm{bqcf},2}\).
#### 2.2.5. BGFC method
The energy functional of BGFC method is based on a _second renormalization_ of the potential [34] while the _first renormalization_ is itself due to the assumption \(V_{\ell}(\mathbf{0})=0\) made in Section 2.1. For \(\ell\in\Lambda\) and \(u\in\mathscr{U}^{1,2}\), we define
\[V_{\ell}^{\prime\prime}\big{(}Du(\ell)\big{)}:=V_{\ell}\big{(}Du(\ell)\big{)} -\langle\delta V_{\ell}(\mathbf{0}),Du\rangle. \tag{2.13}\]
The corresponding second renormalized Cauchy-Born energy density is
\[W^{\prime\prime}(\mathsf{F}):=W(\mathsf{F})-\langle\delta W(\mathbf{0}), \mathsf{F}\rangle,\quad\text{for $\mathsf{F}\in\mathbb{R}^{d\times d}$}. \tag{2.14}\]
We then obtain the BGFC energy functional
\[\mathcal{E}_{h}^{\mathrm{bgfc}}(u_{h})=\sum_{\ell\in\Lambda\cap\Omega_{h}}(1-\beta(\ell))V_{\ell}^{\prime\prime}\big{(}Du_{h}(\ell)\big{)}+\int_{\Omega_{h}}Q_{h}\big{[}\beta(x)W^{\prime\prime}(\nabla u_{h})\big{]}\,\mathrm{d}x. \tag{2.15}\]
Hence, the P1-BGFC and P2-BGFC methods are given respectively by
\[u_{h}^{\mathrm{bgfc},1} \in\arg\min\big{\{}\mathcal{E}_{h}^{\mathrm{bgfc}}(u_{h}),\ u_{h}\in\mathscr{U}_{h}^{(1)}\big{\}},\] \[u_{h}^{\mathrm{bgfc},2} \in\arg\min\big{\{}\mathcal{E}_{h}^{\mathrm{bgfc}}(u_{h}),\ u_{h}\in\mathscr{U}_{h}^{(2)}\big{\}}. \tag{2.16}\]
According to the discussions in [34, Section 2.7], the BGFC energy functional (2.15) can be written as an equivalent ghost force removal formulation
\[\mathcal{E}_{h}^{\mathrm{bgfc}}(u_{h})=\mathcal{E}_{h}^{\mathrm{bqce}}(u_{h})-\big{\langle}\delta\mathcal{E}_{\mathrm{hom}}^{\mathrm{bqce}}(\mathbf{0}),u_{h}\big{\rangle}, \tag{2.17}\]
where \(\mathcal{E}_{\mathrm{hom}}^{\mathrm{bqce}}(\mathbf{0})\) is the BQCE energy functional at the homogeneous lattice. We make full use of this formulation throughout this paper due to its simplicity of implementation [14, 45].
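As a small illustration of the ghost force removal formulation (2.17), the sketch below replaces the BQCE functional by a toy quadratic energy with a nonzero residual force at the homogeneous state, and approximates the first variation at \(\mathbf{0}\) by finite differences; it is not the assembled implementation used in our experiments.

```python
import numpy as np

def ghost_force_removed_energy(E_bqce, u, h=1e-6):
    """BGFC energy via the ghost force removal formulation (2.17):
    E_bgfc(u) = E_bqce(u) - < dE_bqce(0), u >, with the first variation at the
    homogeneous state approximated here by central finite differences."""
    u = np.asarray(u, dtype=float)
    g0 = np.zeros_like(u)
    for i in range(u.size):
        e = np.zeros_like(u)
        e.flat[i] = h
        g0.flat[i] = (E_bqce(e) - E_bqce(-e)) / (2 * h)
    return E_bqce(u) - g0.ravel() @ u.ravel()

# toy "blended" energy with a nonzero residual (ghost) force at u = 0
E_toy = lambda u: 0.5 * np.sum(u**2) + 0.1 * np.sum(u)
print(ghost_force_removed_energy(E_toy, [0.2, -0.1, 0.4]))  # -> 0.105 (the linear part is removed)
```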
## 3. A Posteriori Error Estimates
### A priori error estimates
In this subsection we recall the _a priori_ error estimates for three typical blended a/c coupling methods (BQCE, BQCF, and BGFC) with both \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) finite element methods. The result is presented in Theorem 3.1 and is essentially derived from previous works such as [34, 13]. However, we adapt the result to the specific setting of this paper.
**Theorem 3.1**.: _Let \(u^{\mathrm{a}}\) be a strongly stable solution of (2.5) satisfying Assumption 2.1. Under additional assumptions on the blending function and the mesh [13, Assumption 1], there exist solutions to the BQCE (2.10), P1-(P2-)BQCF (2.12) and P1-(P2-)BGFC (2.16) methods, such that_
\[\|u^{\mathrm{a}}-u_{h}^{\mathrm{bqce},1}\|_{\mathscr{U}^{1,2}} \lesssim(\mathrm{DoF})^{1/2-1/d}, \|u^{\mathrm{a}}-u_{h}^{\mathrm{bqcf},2}\|_{\mathscr{U}^{1,2}} \lesssim(\mathrm{DoF})^{-1/2-2/d},\] \[\|u^{\mathrm{a}}-u_{h}^{\mathrm{bqcf},1}\|_{\mathscr{U}^{1,2}} \lesssim(\mathrm{DoF})^{-1/2-1/d}, \|u^{\mathrm{a}}-u_{h}^{\mathrm{bgfc},2}\|_{\mathscr{U}^{1,2}} \lesssim(\mathrm{DoF})^{-1/2-2/d}. \tag{3.18}\]
_Moreover, we have the corresponding energy error estimates_
\[\begin{split}\big{|}\mathcal{E}^{\mathrm{a}}(u^{\mathrm{a}})-\mathcal{E}_{h}^{\mathrm{bqce}}(u_{h}^{\mathrm{bqce}})\big{|}&\lesssim(\mathrm{DoF})^{1-4/d},\\ \big{|}\mathcal{E}^{\mathrm{a}}(u^{\mathrm{a}})-\mathcal{E}_{h}^{\mathrm{bgfc}}(u_{h}^{\mathrm{bgfc},1})\big{|}&\lesssim(\mathrm{DoF})^{-1-2/d},\\ \big{|}\mathcal{E}^{\mathrm{a}}(u^{\mathrm{a}})-\mathcal{E}_{h}^{\mathrm{bgfc}}(u_{h}^{\mathrm{bgfc},2})\big{|}&\lesssim(\mathrm{DoF})^{-1-2/d}.\end{split} \tag{3.19}\]
### Residual-based a posteriori error estimate for geometry error
In the context of a/c coupling methods, a key challenge is to effectively determine the atomistic and continuum regions and to devise an appropriate mesh structure that strikes an optimal balance between accuracy and efficiency. To tackle this challenge, the _a posteriori_ analysis and corresponding adaptive algorithms play a critical role in enabling the efficient implementation of a/c coupling methods.
In this section, we present the _a posteriori_ error estimate for the geometry error. The estimate for the energy error will be examined in the subsequent section. Specifically, we consider the solution \(u_{h}\) obtained from either the BQCE, P1-(P2-)BQCF, or P1-(P2-)BGFC model.
\[u_{h}\in\{u_{h}^{\mathrm{bqce}},u_{h}^{\mathrm{bqcf},1},u_{h}^{\mathrm{bgfc},1},u_{h}^{\mathrm{bqcf},2},u_{h}^{\mathrm{bgfc},2}\}.\]
Recalling that \(I_{\mathrm{a}}u_{h}\in\mathscr{U}^{1,2}\), we define the residual \(\mathsf{R}(I_{\mathrm{a}}u_{h})\) as an operator on \(\mathscr{U}^{1,2}\), that is,
\[\mathsf{R}(I_{\mathrm{a}}u_{h})[v]:=\langle\delta\mathcal{E}^{\mathrm{a}}(I_{ \mathrm{a}}u_{h}),v\rangle,\quad\forall v\in\mathscr{U}^{1,2}. \tag{3.20}\]
The following Lemma characterizes the dual norm of the residual and has been presented in our previous work [6]. We include it here for the sake of completeness.
**Lemma 3.2**.: _Let \(u^{\mathrm{a}}\) be a strongly stable solution of the atomistic model (2.5). If either BQCE, BQCF or BGFC is consistent in the sense of (3.18), then there exists a solution \(u_{h}\) of (2.10) or (2.16), and constants \(c,C\) independent of the approximation parameters such that_
\[c\|I_{\mathrm{a}}u_{h}-u^{\mathrm{a}}\|_{\mathscr{U}^{1,2}}\leq\|\mathsf{R}(I _{\mathrm{a}}u_{h})\|_{(\mathscr{U}^{1,2})^{*}}\leq C\|I_{\mathrm{a}}u_{h}-u^{ \mathrm{a}}\|_{\mathscr{U}^{1,2}}, \tag{3.21}\]
_where the residual \(\mathsf{R}(I_{\mathrm{a}}u_{h})\) is defined by (3.20)._
Based on the previous Lemma, the optimal _a posteriori_ error estimator for the geometry error can be derived as follows
\[\eta^{\mathrm{ideal}}(u_{h}):=\|\mathsf{R}(I_{\mathrm{a}}u_{h})\|_{(\mathscr{U }^{1,2})^{*}}.\]
It is highly desirable as it offers both upper and lower bounds for geometry error up to independent constants. However, it cannot be computed in practice, which is a crucial limitation in the context of _a posteriori_ error control and adaptive algorithms design.
#### 3.2.1. Residual stress-based error estimator
In this section, we present an alternative approach to residual estimates that exploits the "implicit stress" of atomistic models, motivated by the _a posteriori_ error estimates of QM/MM coupling methods [6, 42].
Specifically, we consider the equivalent formulation of \(\|\mathsf{R}(I_{\mathrm{a}}u_{h})\|_{(\mathscr{U}^{1,2})^{*}}\) in terms of the solution of a whole-space auxiliary Poisson problem. This allows us to express the residual in terms of the _residual force_ for any \(v\in\mathscr{U}^{1,2}\).
\[\mathsf{R}(I_{\mathrm{a}}u_{h})[v]=\langle\delta\mathcal{E}^{\mathrm{a}}(I_{ \mathrm{a}}u_{h}),v\rangle=\sum_{\ell\in\Lambda}\mathcal{F}^{\mathrm{a}}_{ \ell}(I_{\mathrm{a}}u_{h})\cdot v(\ell), \tag{3.22}\]
where the _residual force_\(\mathcal{F}^{\mathrm{a}}_{\ell}(I_{\mathrm{a}}u_{h}):=\partial_{u(\ell)} \mathcal{E}^{\mathrm{a}}(u)\big{|}_{u=I_{\mathrm{a}}u_{h}}\). With this definition of residual force, we have the following Lemma and the proof is given in the Appendix B.
**Lemma 3.3**.: _Up to a constant shift, there exists a unique \(\phi(I_{\mathrm{a}}u_{h};x)\in\dot{H}^{1}(\mathbb{R}^{2})\) such that_
\[\int_{\mathbb{R}^{2}}\nabla\phi(I_{\mathrm{a}}u_{h};x)\cdot\nabla v\,\, \mathrm{d}x=\int_{\mathbb{R}^{2}}\widehat{\mathcal{F}}^{\mathrm{a}}(I_{ \mathrm{a}}u_{h};x)\cdot v\,\,\mathrm{d}x\qquad\forall v\in H^{1}(\mathbb{R}^ {2}), \tag{3.23}\]
_where_
\[\widehat{\mathcal{F}}^{\mathrm{a}}(I_{\mathrm{a}}u_{h};x):=\sum_{\ell\in \Lambda}c_{\ell}\mathcal{F}^{\mathrm{a}}_{\ell}(I_{\mathrm{a}}u_{h})\zeta_{ \ell}(x)\qquad\text{with}\quad c_{\ell}=\frac{1}{\int_{\mathbb{R}^{2}}\zeta_{ \ell}(x)\,\,\mathrm{d}x} \tag{3.24}\]
_is the rescaled piecewise affine nodal interpolant of the residual forces. Moreover, there exist constants \(c_{1},C_{1}\) such that_
\[c_{1}\|\mathsf{R}(I_{\mathrm{a}}u_{h})\|_{(\mathscr{U}^{1,2})^{*}}\leq\|\nabla \phi(I_{\mathrm{a}}u_{h})\|_{L^{2}(\mathbb{R}^{2})}\leq C_{1}\|\mathsf{R}(I_{ \mathrm{a}}u_{h})\|_{(\mathscr{U}^{1,2})^{*}}. \tag{3.25}\]
The presented Lemma yields several key observations: (1) the solution \(\phi\) acts as a Riesz representation of the residual \(\mathsf{R}(I_{\mathrm{a}}u_{h})\); (2) the resulting equivalence demonstrates that \(|\nabla\phi(I_{\mathrm{a}}u_{h})|_{L^{2}(\mathbb{R}^{2})}\) can serve as an ideal error estimator since it can provide upper and lower bounds to the true approximation error; (3) \(\nabla\phi\) corresponds to the second Piola atomistic stress tensor up to a divergence-free term [41], which will be discussed further in Appendix A. For these reasons, we refer to this error estimator as the residual stress-based error estimator.
#### 3.2.2. Practical error estimator
The estimator \(\|\nabla\phi\|_{L^{2}}\) obtained from Lemma 3.3 is considered an ideal error estimator because of the equivalence (3.25). However, it is not computable due to the implicit nature of equation (3.23). To overcome this limitation, we propose a practical approach by (1) restricting the computational domain to a finite region \(\Omega\) and (2) discretizing the Poisson problem using the canonical triangulation \(\mathcal{T}_{\Omega}\) and the finite element method.
Different from the discretization of Cauchy-Born approximation where \(\mathcal{P}_{2}\) can be applied, it is sufficient to use \(\mathcal{P}_{1}\) to solve (3.23) due to the discrete nature of atomistic stress. For simplicity, suppose that \(\Omega\) is compatible with \(\mathcal{T}_{\Lambda}\), that is, there exists a subset \(\mathcal{T}_{\Omega}\subset\mathcal{T}_{\Lambda}\) such that \(\operatorname{clos}(\Omega)=\cup\mathcal{T}_{\Omega}\). We then define the finite element space utilizing \(\mathcal{P}_{1}\) finite element
\[\mathscr{U}^{\Omega}_{\mathrm{a}}:=\{\phi_{\mathrm{a}}\in\mathcal{P}_{1}( \mathcal{T}_{\Omega}):\phi_{\mathrm{a}}=0\text{ on }\partial\Omega\}.\]
We can now obtain an approximation to the idealised estimator \(\phi\) by solving for \(\phi_{\mathrm{a}}\in\mathscr{U}^{\Omega}_{\mathrm{a}}\) such that
\[\int_{\Omega}\nabla\phi_{\mathrm{a}}(I_{\mathrm{a}}u_{h})\cdot\nabla v_{ \mathrm{a}}\,\mathrm{d}x=\int_{\Omega}\widehat{\mathcal{F}}^{\mathrm{a}}(I_{ \mathrm{a}}u_{h})\cdot v_{\mathrm{a}}\,\mathrm{d}x\qquad\forall v_{\mathrm{a }}\in\mathscr{U}^{\Omega}_{\mathrm{a}}. \tag{3.26}\]
Hence, we define the approximate _a posteriori_ error estimator by
\[\eta(u_{h}):=\|\nabla\phi_{\mathrm{a}}(I_{\mathrm{a}}u_{h})\|_{L^{2}(\Omega)}, \tag{3.27}\]
and the indicator arising from the truncation by
\[\rho_{\rm tr}(u_{h}):=R_{\Omega}\cdot\|\widehat{\mathcal{F}}^{\rm a}(I_{\rm a}u_{ h})\|_{L^{2}(\Omega\setminus B_{R_{\Omega}/2})}+\|\nabla\phi_{\rm a}(I_{\rm a}u_{h}) \|_{L^{2}(\Omega\setminus B_{R_{\Omega}/2})}. \tag{3.28}\]
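To make the computation of \(\eta(u_{h})\) concrete, the following sketch solves a finite-difference analogue of (3.26) on a square grid with homogeneous Dirichlet data and evaluates \(\|\nabla\phi_{\rm a}\|_{L^{2}}\); in the actual implementation the \(\mathcal{P}_{1}\) finite element method on the canonical triangulation \(\mathcal{T}_{\Omega}\) is used instead, and the residual forces come from (3.22). The grid size and the synthetic forcing below are arbitrary.

```python
import numpy as np

def practical_estimator(F_hat, h=1.0):
    """Solve a 5-point finite-difference analogue of (3.26), -Laplace(phi_a) = F_hat,
    with zero Dirichlet data, and return phi_a and eta = ||grad phi_a||_{L^2}."""
    n = F_hat.shape[0]
    idx = np.arange(n * n).reshape(n, n)
    A = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            k = idx[i, j]
            A[k, k] = 4.0 / h**2
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:   # neighbours outside carry zero boundary data
                    A[k, idx[ii, jj]] = -1.0 / h**2
    phi = np.linalg.solve(A, F_hat.ravel()).reshape(n, n)
    gx, gy = np.gradient(phi, h)
    eta = np.sqrt(np.sum(gx**2 + gy**2) * h**2)   # discrete L^2 norm of the gradient
    return phi, eta

rng = np.random.default_rng(0)
F = rng.normal(size=(15, 15)) * np.exp(-np.linspace(-3, 3, 15)[:, None]**2)  # localized synthetic forcing
_, eta = practical_estimator(F)
print("eta(u_h) ~", eta)
```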
From the results in [42], we can obtain the following theorem, demonstrating the equivalence of the _idealised residual estimate_\(\|{\sf R}(I_{\rm a}u_{h})\|_{(\mathscr{U}^{1,2})^{*}}\) and the _approximate estimator_\(\eta(u_{h})\) up to the truncated indicator defined by (3.28). The proof is given in the Appendix B.
**Theorem 3.4**.: _Let the approximate a posteriori error estimator \(\eta(u_{h})\) and the truncation indicator \(\rho_{\rm tr}(u_{h})\) be defined by (3.27) and (3.28), respectively. Under the conditions of Lemma 3.2, there exist positive constants \(c,C\) such that_
\[c\eta(u_{h})\leq\|{\sf R}(I_{\rm a}u_{h})\|_{(\mathscr{U}^{1,2})^{*}}\leq C \big{(}\eta(u_{h})+\rho_{\rm tr}(u_{h})\big{)}. \tag{3.29}\]
It is worthwhile to mention that, compared to our previous work [44], the crucial improvements of the residual stress-based estimator are twofold: (1) it provides both upper and lower bounds for the approximation error; (2) the error resulting from the truncation is also quantified rigorously. Both benefit the design of the corresponding adaptive algorithm given in Section 4.1.
### A posteriori error estimate for energy error
In addition to the geometry error, the total energy is a significant material property in various applications. In this section, we aim to establish a rigorous _a posteriori_ error estimate for the energy error, represented by \(\big{|}\mathcal{E}^{\rm a}(u^{\rm a})-\mathcal{E}^{\beta}_{h}(u_{h})\big{|}\), where \(\mathcal{E}^{\beta}_{h}\) denotes the energy functional of one of the blended type a/c coupling methods. The analysis we present is fundamentally based on [40], but we have modified it to suit the context of this work.
The energy error can be split into two parts
\[\big{|}\mathcal{E}^{\rm a}(u^{\rm a})-\mathcal{E}^{\beta}_{h}(u_{h})\big{|} \leq\big{|}\mathcal{E}^{\rm a}(u^{\rm a})-\mathcal{E}^{\rm a}(I_{\rm a}u_{h}) \big{|}+\big{|}\mathcal{E}^{\rm a}(I_{\rm a}u_{h})-\mathcal{E}^{\beta}_{h}(u_ {h})\big{|}=:E_{1}+\big{|}\mu_{\rm E}(u_{h})\big{|}\]
We first estimate the term \(E_{1}\). As \(\mathcal{E}^{\rm a}\) is twice differentiable along the segment \(\{(1-s)u^{\rm a}+sI_{\rm a}u_{h}\mid s\in(0,1)\}\) with a Lipschitz constant \(M\), one can deduce from (3.29) that
\[|E_{1}| =\Big{|}\int_{0}^{1}\langle\delta\mathcal{E}^{\rm a}\big{(}(1-s)u ^{\rm a}+sI_{\rm a}u_{h}\big{)},u^{\rm a}-u_{h}\rangle\,{\rm d}s\Big{|} \tag{3.30}\] \[\leq M\|I_{\rm a}u_{h}-u^{\rm a}\|^{2}_{\mathscr{U}^{1,2}}\leq C \big{(}\eta(u_{h})^{2}+\rho_{\rm tr}(u_{h})^{2}\big{)}.\]
For the second term \(\mu_{\rm E}(u_{h})\), we utilize an equivalent version of the atomistic energy functional such that it can be easily decomposed into local contributions for both the BQCE and BGFC methods. To that end, let \(u^{\rm a}_{h}:=I_{\rm a}u_{h}\) for simplicity of notation; recalling the definition (2.4), we first rewrite \(\mathcal{E}^{\rm a}\) in an element-wise form
\[\mathcal{E}^{\rm a}(u^{\rm a}_{h})=\sum_{T\in\mathcal{T}_{\Lambda}}\frac{1}{6 }\sum_{\ell\in T\cap\Lambda}V_{\ell}\big{(}Du^{\rm a}_{h}(\ell)\big{)}.\]
For BQCE method, the energy functional can also be written as
\[\mathcal{E}^{\rm bace}_{h}(u_{h}) =\sum_{T\in\mathcal{T}_{\Lambda}}\frac{1}{6}\sum_{\ell\in T\cap \Lambda}\big{(}1-\beta(\ell)\big{)}\cdot V_{\ell}\big{(}Du_{h}(\ell)\big{)}\] \[\quad+\sum_{T\in\mathcal{T}_{\Lambda}}\sum_{T^{\prime}\in\mathcal{ T}_{\Lambda},T^{\prime}\cap T\neq\emptyset}\beta_{T^{\prime}}|T\cap T^{ \prime}|\cdot W(\nabla_{T^{\prime}}u_{h}).\]
Hence, \(\mu_{\rm E}(u_{h})\) for BQCE method reads
\[\mu_{\rm E}(u_{h}) =\mathcal{E}^{\rm a}(u_{h}^{\rm a})-\mathcal{E}_{h}^{\beta}(u_{h}) \tag{3.31}\] \[=\sum_{T\in\mathcal{T}_{\rm e}}\Big{(}\frac{1}{6}\sum_{\ell\in T \cap\Lambda}\beta(\ell)\cdot V_{\ell}\big{(}Du_{h}^{\rm a}(\ell)\big{)}-\beta_ {T}|T|\cdot W(\nabla_{T}u_{h})\Big{)}\] \[\quad+\sum_{T\in\mathcal{T}_{\rm e}}\sum_{T^{\prime}\in\mathcal{T }_{\Lambda}\omega(T^{\prime})\cap\partial T\neq\emptyset}\frac{|T\cap T^{ \prime}|}{2|T^{\prime}|}\Big{(}\frac{1}{3}\sum_{\ell\in T^{\prime}\cap \Lambda}V\big{(}Du_{h}^{\rm a}(\ell)\big{)}-V(\nabla u_{h}\cdot\mathcal{R}) \Big{)}.\]
The energy error estimate for BGFC method can be extended to include the correction term by replacing the potential functions \(V\) and \(W\) in (3.31) with \(V^{\prime\prime}\) and \(W^{\prime\prime}\), respectively, as per the definition in Section 2.2.5.
**Theorem 3.5**.: _Under the conditions of Lemma 3.2, let the approximate a posteriori error estimator \(\eta(u_{h})\) and the truncation indicator \(\rho_{\rm tr}(u_{h})\) be defined by (3.27) and (3.28), respectively. Then there exists a positive constant \(C_{\rm E}\) such that_
\[\big{|}\mathcal{E}^{\rm a}(u^{\rm a})-\mathcal{E}_{h}^{\beta}(u_{h})\big{|} \leq\ C_{\rm E}\big{(}\eta(u_{h})^{2}+\rho_{\rm tr}(u_{h})^{2}\big{)}+|\mu_{ \rm E}(u_{h})|. \tag{3.32}\]
As a result, we define the energy estimator as
\[\eta_{\rm E}(u_{h}):=\eta(u_{h})^{2}+|\mu_{\rm E}(u_{h})|. \tag{3.33}\]
## 4. Adaptive Algorithms and Numerical Experiments
In this section, we first propose the residual-based local error estimators based on the theoretical results derived in the last section and design an adaptive algorithm for the _a posteriori_ error control in Section 4.1. The adaptive computations for several typical examples of crystalline defects are then conducted in Section 4.2.
### Adaptive algorithm
In order to devise and implement the adaptive algorithm, it is necessary to allocate the global residual estimator \(\eta(u_{h})\) into element-wise local contributions. Before introducing this, we discuss the role of the truncation indicator \(\rho_{\rm tr}(u_{h})\). The truncation indicator is not assigned to local contributions, but rather computed directly using (3.28); during the main adaptive process it is checked whether it exceeds a prescribed threshold, and it remains comparatively small whenever the computational domain \(\Omega\) is adequately large. The resulting Algorithm 1 provides additional details.
#### 4.1.1. Local error contributions
To design the adaptive algorithm that controls the extension of atomistic region in addition to the local mesh refinement in continuum region, we derive the local error estimators based on \(\eta(u_{h})\) defined in (3.27) as follows.
For \(T\in\mathcal{T}_{h}\), we define
\[\eta_{T}(u_{h}):=\sum_{T^{\prime}\in\mathcal{T}_{\Omega},T^{\prime}\cap T\neq\emptyset}\frac{|T^{\prime}\cap T|}{|T^{\prime}|}\cdot\frac{\|\nabla\phi_{\rm a}(I_{\rm a}u_{h})\|_{L^{2}(T^{\prime})}^{2}}{\eta(u_{h})}. \tag{4.34}\]
It is straightforward to see \(\sum_{T\in\mathcal{T}_{h}}\eta_{T}(u_{h})=\eta(u_{h})\).
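A toy illustration of the splitting (4.34) is given below; the overlap ratios and the element-wise values of \(\|\nabla\phi_{\rm a}\|_{L^{2}(T^{\prime})}^{2}\) are made up, and serve only to check that the local contributions sum to the global estimator.

```python
import numpy as np

# per coarse element T: pairs (|T' ∩ T| / |T'|, ||grad phi_a||^2_{L^2(T')}) over the
# elements T' of the estimator triangulation intersecting T (values made up)
overlaps = {
    "T1": [(1.0, 0.30), (0.5, 0.08)],
    "T2": [(0.5, 0.08), (1.0, 0.12)],
    "T3": [(1.0, 0.02)],
}
eta = np.sqrt(sum(w * e for parts in overlaps.values() for w, e in parts))        # global estimator
eta_T = {T: sum(w * e for w, e in parts) / eta for T, parts in overlaps.items()}  # local contributions (4.34)
print(eta_T, " sum:", sum(eta_T.values()), " eta:", eta)  # the local contributions sum to eta
```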
The energy correction \(\mu_{\mathrm{E}}(u_{h})\) entering the energy estimator (3.33) is already given in (3.31) as a sum over elements, so we can define the local contributions \(\mu_{\mathrm{E}}(T;u_{h})\) such that \(\sum_{T\in\mathcal{T}_{h}}\mu_{\mathrm{E}}^{2}(T;u_{h})=\mu_{\mathrm{E}}^{2}(u_{h})\). Meanwhile, for the energy-based estimate, we have
\[\eta_{T}^{\mathrm{E}}(u_{h}):=\sum_{T^{\prime}\in\mathcal{T}_{\Omega},T^{ \prime}\cap T\neq\emptyset}\frac{|T^{\prime}\cap T|}{|T^{\prime}|}\cdot\|\nabla \phi_{\mathrm{a}}(I_{\mathrm{a}}u_{h})\|_{L^{2}(T^{\prime})}^{2}+|\mu_{ \mathrm{E}}(T;u_{h})|. \tag{4.35}\]
Notice that the sum of the local estimators is equal to the global estimator.
#### 4.1.2. Adaptive algorithm
Our adaptive procedure follows from the classic one [41, 42, 44], namely
\[\mathrm{Solve}\,\to\,\mathrm{Estimate}\,\to\,\mathrm{Mark}\,\to\,\mathrm{Refine}.\]
The main adaptive algorithm essentially follows [40, Algorithm 3], but we adapt it to the current setting of blended a/c coupling methods; it is summarized below, based on the error estimators derived from the implicit stresses.
1. Prescribe \(\Omega\), \(\mathcal{T}_{h}\), \(N_{\max}\), \(\eta_{\mathrm{tol}}\), \(\tau_{1}\) and \(\tau_{2}\).
2. _Solve_: Solve the BQCE (BQCF or BGFC) solution \(u_{h}\) of (2.10) ((2.12) or (2.16)) on the current mesh \(\mathcal{T}_{h}\).
3. _Estimate_: Compute the local error estimator \(\eta_{T}(u_{h})\) by (4.34) for each \(T\in\mathcal{T}_{h}\), and the truncation indicator \(\rho_{\mathrm{tr}}(u_{h})\) by (3.28). Compute the number of degrees of freedom \(N\) and \(\eta(u_{h}):=\sum_{T}\eta_{T}(u_{h})\). If \(\rho_{\mathrm{tr}}(u_{h})>\tau_{1}\eta(u_{h})\), enlarge the computational domain. Stop if \(N>N_{\max}\) or \(\eta(u_{h})<\eta_{\mathrm{tol}}\).
4. _Mark_: Apply Algorithm 2 to construct the refinement set \(\mathcal{M}\), the number of atomistic layers \(k\) and the ratio \(\alpha\).
5. _Refine:_ Expand the atomistic region outward by \([\alpha k]\) layers and the blending region outward by \(k-[\alpha k]\) layers. Construct \(\mathcal{M}_{h}:=\{T\in\mathcal{T}_{h}:\exists T^{\prime}\in\mathcal{M},T\cap T ^{\prime}=T^{\prime}\}\). Bisect all elements \(T\in\mathcal{M}_{h}\). Go to Step 1.
**Algorithm 1** Adaptive blended a/c coupling based on residual forces
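The overall loop of Algorithm 1 can be sketched schematically as follows; the solver, estimator, marking and refinement routines are stand-ins for the actual BQCE/BQCF/BGFC solver and the residual-stress estimator of Section 3, and the demo values are purely illustrative.

```python
def adaptive_loop(solve, estimate, mark, refine, enlarge_domain,
                  N_max=10_000, eta_tol=1e-3, tau_1=1.0):
    """Skeleton of Algorithm 1: Solve -> Estimate -> Mark -> Refine."""
    mesh = {"dof": 100, "R_a": 3, "R_b": 3}            # initial configuration (placeholder)
    while True:
        u_h = solve(mesh)                               # solve the blended a/c problem
        eta_T, rho_tr = estimate(mesh, u_h)             # local estimators (4.34) and truncation indicator (3.28)
        eta = sum(eta_T.values())
        if rho_tr > tau_1 * eta:                        # truncation error dominates
            mesh = enlarge_domain(mesh)
            continue
        if mesh["dof"] > N_max or eta < eta_tol:        # stopping criteria
            return mesh, u_h, eta
        marked, k, alpha = mark(mesh, eta_T)            # marking step: refinement set + layer counts
        mesh = refine(mesh, marked, k, alpha)           # enlarge regions and bisect marked elements

# tiny stand-ins so that the skeleton can be exercised end-to-end
demo_solve = lambda m: None
demo_estimate = lambda m, u: ({f"T{i}": 0.5 / m["dof"] for i in range(3)}, 1e-6)
demo_mark = lambda m, eT: (list(eT)[:1], 2, 0.5)
demo_refine = lambda m, M, k, a: {**m, "dof": 2 * m["dof"],
                                  "R_a": m["R_a"] + int(a * k),
                                  "R_b": m["R_b"] + k - int(a * k)}
demo_enlarge = lambda m: m
print(adaptive_loop(demo_solve, demo_estimate, demo_mark, demo_refine, demo_enlarge))
```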
We aim to provide a comparison between our main adaptive algorithm (Algorithm 1) and [40, Algorithm 3]. We note that [40, Algorithm 3] focuses on enlarging the atomistic region, while our algorithm extends both the atomistic and blending regions during the adaptive procedure.
Toward that end, we first introduce some notations. Let \(\mathrm{dist}(T,\Lambda_{\mathrm{a}}):=\inf\{|\ell-x_{T}|,\forall\ell\in \Lambda_{\mathrm{a}}\}\) be the distance between \(T\) and \(\Lambda_{\mathrm{a}}\) with \(x_{T}\) the barycenter of \(T\). Similarly we can define \(\mathrm{dist}(T,\Lambda_{\mathrm{c}})\) and \(\mathrm{dist}(T,\Lambda_{\mathrm{a}}\cup\Lambda_{\mathrm{b}})\). The _Mark_ step mainly applies the standard Dorfler strategy [10] to choose a set \(\mathcal{M}\) for model refinement and then we design a strategy demonstrating how to enlarge \(\Lambda_{\mathrm{a}}\) and \(\Lambda_{\mathrm{b}}\) based on the distance to different regions. We summarize it as follows, which is mainly based on the algorithm proposed in our previous work [45].
**Remark 4.6**.: The main contribution of Algorithm 2 is that it introduces a control parameter \(\theta\) inside the mesh refinement step to gain more information from the error estimator belonging to different regions. This allows for a more balanced refinement between the atomistic and blending regions, resulting in a more accurate and efficient adaptive algorithm. The parameter \(\theta\) defines the set \(\mathcal{T}^{\mathrm{ref}}_{\mathrm{b}}(\theta):=\{T\in\mathcal{T}_{\mathrm{b}}:\mathrm{dist}(T,\Lambda_{\mathrm{a}})\leq\theta L^{\mathrm{b}}\}\), where \(L^{\mathrm{b}}\) is the width of the blending region. By adjusting \(\theta\), one can control the number of elements in the blending region to achieve a desired accuracy.
To determine the optimal value of \(\theta\), we use a bisection method to find the value that gives an approximately equal total error contribution between \(\mathcal{T}^{\mathrm{ref}}_{\mathrm{b}}(\theta)\) and \(\mathcal{T}_{\mathrm{b}}\). However, we note that this approach can be further improved by considering anisotropic refinement of the interface, which
can be achieved by evolving the interface instead of just updating the radius of each sub-region. We discuss this and other potential future developments in Section 5.
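For concreteness, a generic implementation of the Dörfler marking used in the _Mark_ step is sketched below; the bulk parameter and the toy estimator values are illustrative and do not correspond to the actual computations.

```python
def doerfler_mark(eta_T, bulk=0.7):
    """Standard Doerfler strategy: mark the smallest set of elements whose
    accumulated estimator exceeds `bulk` times the total."""
    order = sorted(eta_T, key=eta_T.get, reverse=True)
    total, acc, marked = sum(eta_T.values()), 0.0, []
    for T in order:
        marked.append(T)
        acc += eta_T[T]
        if acc >= bulk * total:
            break
    return marked

print(doerfler_mark({"T1": 0.4, "T2": 0.3, "T3": 0.2, "T4": 0.1}))  # -> ['T1', 'T2']
```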
### Numerical experiments
In this section, based on our main adaptive algorithm proposed in Algorithm 1, we conduct the adaptive computations for two typical types of crystalline defects, namely micro-crack and Frenkel defect.
#### 4.2.1. Model setup
Our prototype implementation of the three blended a/c coupling methods uses the 2D triangular lattice \(\Lambda^{\text{hom}}=\mathsf{A}\mathbb{Z}^{2}\) defined by
\[\mathsf{A}=\left(\begin{array}{cc}1&\cos(\pi/3)\\ 0&\sin(\pi/3)\end{array}\right),\]
where \(\Lambda^{\text{hom}}\) is in fact the projection of a BCC crystal along the (111) direction.
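For reference, the homogeneous lattice on a finite index box can be generated directly from the matrix \(\mathsf{A}\); the index range in the snippet below is arbitrary.

```python
import numpy as np

A = np.array([[1.0, np.cos(np.pi / 3)],
              [0.0, np.sin(np.pi / 3)]])
n = 10                                               # index range, chosen arbitrarily
I, J = np.meshgrid(np.arange(-n, n + 1), np.arange(-n, n + 1), indexing="ij")
lattice = (A @ np.stack([I.ravel(), J.ravel()])).T   # sites of A Z^2 in the index box
print(lattice.shape)                                 # (441, 2)
```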
We use the well-known EAM potential as the site potential \(V_{\ell}\) throughout our numerical experiments, where
\[V_{\ell}(y) :=\sum_{\ell^{\prime}\in\mathcal{N}_{\ell}}\phi(|y(\ell)-y(\ell^{ \prime})|)+F\Bigl{(}\sum\nolimits_{\ell^{\prime}\in\mathcal{N}_{\ell}}\psi(|y (\ell)-y(\ell^{\prime})|)\Bigr{)}, \tag{4.38}\] \[=\sum_{\rho\in\mathcal{R}_{\ell}}\phi\bigl{(}|D_{\rho}y(\ell)| \bigr{)}+F\Bigl{(}\sum\nolimits_{\rho\in\mathcal{R}_{\ell}}\psi\bigl{(}|D_{ \rho}y(\ell)|\bigr{)}\Bigr{)},\]
for a pair potential \(\phi\), an electron density function \(\psi\) and an embedding function \(F\). In particular, we choose
\[\phi(r)=\exp(-2a(r-1))-2\exp(-a(r-1)),\quad\psi(r)=\exp(-br)\]
\[F(\tilde{\rho})=C\left[(\tilde{\rho}-\tilde{\rho_{0}})^{2}+(\tilde{\rho}- \tilde{\rho_{0}})^{4}\right]\]
with parameters \(a=4,b=3,C=10\) and \(\tilde{\rho_{0}}=6\exp(0.9b)\), which is the same as in the numerical experiments in [40, 18]. Compared to these previous works, one of the main contributions of this work is that we admit finite-range interactions. We typically choose \(r_{\text{cut}}=2\), that is, the next-nearest-neighbor interactions are taken into consideration in our numerical experiments. We
also note that the specific choice of site potential is not essential and in fact we expect that most common site potentials would lead to similar results.
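A direct evaluation of the site energy (4.38) with the potential functions and parameters listed above reads as follows; the neighbour distances in the example (six nearest and six next-nearest neighbours of the ideal triangular lattice, within \(r_{\text{cut}}=2\)) are chosen only for illustration.

```python
import numpy as np

a, b, C = 4.0, 3.0, 10.0
rho0 = 6.0 * np.exp(0.9 * b)

phi = lambda r: np.exp(-2 * a * (r - 1)) - 2 * np.exp(-a * (r - 1))   # pair potential
psi = lambda r: np.exp(-b * r)                                        # electron density
F = lambda t: C * ((t - rho0)**2 + (t - rho0)**4)                     # embedding function

def site_energy(distances):
    """EAM site energy (4.38) from the distances |D_rho y(l)| to the interacting neighbours."""
    d = np.asarray(distances, dtype=float)
    return np.sum(phi(d)) + F(np.sum(psi(d)))

# illustrative neighbourhood of the ideal triangular lattice with r_cut = 2:
# six nearest neighbours at distance 1 and six next-nearest at sqrt(3)
print(site_energy([1.0] * 6 + [np.sqrt(3.0)] * 6))
```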
We mainly follow the constructions of blended a/c coupling methods in [34], where the blending function is obtained in a preprocessing step by approximately minimising \(\|\nabla^{2}\beta\|_{L^{2}}\), as described in detail in [22]. For BGFC method, we implement the equivalent "ghost force removal formulation" (2.17) instead of the "renormalisation formulation" (2.15) since it is relatively easy to be implemented.
We numerically test the main adaptive algorithm (Algorithm 1) with the _approximated_ error estimator defined by (4.34). To better reveal the effectiveness and the efficiency of our error estimators, we compare the results of our adaptive computations with those using an _a priori_ graded mesh [22, 16]. In all numerical experiments, the radius of computational domain \(\Omega\) is 300 initially. The adaptive processes start with the initial configuration with \(R_{\text{a}}=3\). The adaptive parameters in Algorithm 1 are fixed to be \(\tau_{1}=1.0\) and \(\tau_{2}=0.7\).
#### 4.2.2. Micro-crack
We apply an isotropic stretch S and shear \(\gamma_{II}\) by setting
\[\mathsf{B}=\left(\begin{array}{cc}1&\gamma_{II}\\ 0&1+\text{S}\end{array}\right)\cdot\mathsf{F}_{\mathsf{0}}\]
where \(\mathsf{F}_{\mathsf{0}}\propto\text{I}\) minimizes the Cauchy-Born energy density \(W\), and \(\text{S}=\gamma_{II}=0.03\).
The first defect we consider is the micro-crack, which is a prototypical example of point defects. We note that, although this is not technically a "crack", it serves as an example of a localized defect with an anisotropic shape. To generate this defect, we remove \(k\) atoms from \(\Lambda^{\text{hom}}\). In practice, we set \(k=6\). The geometry of the micro-crack is illustrated in Figure 4.2.
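A possible construction of this configuration is sketched below; the particular choice of which six sites are removed (here, sites on the lattice row through the origin) and the simplification \(\mathsf{F}_{\mathsf{0}}=\mathrm{I}\) are assumptions made purely for illustration.

```python
import numpy as np

S, gamma_II, k = 0.03, 0.03, 6
B = np.array([[1.0, gamma_II],
              [0.0, 1.0 + S]])                              # F_0 taken as the identity here

A = np.array([[1.0, np.cos(np.pi / 3)], [0.0, np.sin(np.pi / 3)]])
n = 30
I, J = np.meshgrid(np.arange(-n, n + 1), np.arange(-n, n + 1), indexing="ij")
lattice = (A @ np.stack([I.ravel(), J.ravel()])).T

# remove the k sites closest to the origin on the lattice row y = 0 ("micro-crack")
order = np.argsort(100.0 * np.abs(lattice[:, 1]) + np.abs(lattice[:, 0]))
defected = np.delete(lattice, order[:k], axis=0)
deformed = defected @ B.T                                   # apply the far-field deformation B
print(len(lattice), "->", len(defected))                    # 3721 -> 3715
```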
We observe in Figure 4.3 that all adaptive computations achieve optimal convergence rate (\(N^{-0.5}\) for BQCE, \(N^{-1.0}\) for BQCF-P1 and BGFC-P1, and \(N^{-1.5}\) for BQCF-P2 and BGFC-P2) compared with the _a priori_ graded mesh given by [34, 13]. The accuracy and robustness of the main adaptive algorithm are also demonstrated in this case.
Figure 4.4 plots the efficiency factors for both the _rigorous_ and _approximated_ error estimators for the blended a/c coupling methods. We observe that the efficiency factors remain moderate during the adaptive computations for all a/c coupling methods, which is a novel improvement compared to the previous works on residual-force based error estimators [45]. This provides numerical evidence of both the reliability and the efficiency of the proposed residual-stress based error estimator.
Figure 4.2. Computational domain, finite element grid and atomistic region as used in the construction of blended a/c coupling methods for micro-crack.
We plot the relationship between \(R_{\rm a}\) and \(R_{\rm b}\) in the adaptive computations for the micro-crack in Figure 4.5, where \(R_{\rm a}\) is the radius of the atomistic region while \(R_{\rm b}\) represents the width of the blending region. Our adaptive algorithm achieves the optimal relationship between \(R_{\rm a}\) and \(R_{\rm b}\) for the micro-crack case, indicating the robustness of the main adaptive algorithm.
As for the energy error, we observe in Figure 4.6 that all adaptive computations achieve the same optimal convergence rate (\(N^{-1.0}\) for BQCE whereas \(N^{-2.0}\) for BGFC-P1 and BGFC-P2) compared with the _a priori_ graded mesh given by [34, 13], which indicates that the error estimator for the energy error is still reliable.
#### 4.2.3. Frenkel defect
In our second numerical experiment, we consider the well-known Frenkel defect to further assess the performance of the main algorithm. We consider the lattice with one vacancy and one interstitial that are close to each other, in this case \(\Lambda=(\Lambda^{\rm hom}\setminus\Lambda_{1}^{\rm def})\cup\{(3/2,0)\}\). This is a very common point defect type, but to the best of our knowledge it has not been considered so far in the literature on adaptive a/c methods. The geometry of the Frenkel defect is illustrated in Figure 4.7. We apply the same isotropic stretch \(\text{S}\) and shear \(\gamma_{II}\) as in the previous example.
Figure 4.4. The efficiency factors of error estimators for different a/c coupling methods with respect to the number of degrees of freedom for micro-crack.
Figure 4.3. The convergence of geometry error for various a/c coupling methods with respect to the number of degrees of freedom for micro-crack.
We observe in Figure 4.8 that all adaptive computations achieve optimal convergence rate (\(N^{-0.5}\) for BQCE, \(N^{-1.0}\) for BQCF-P1 and BGFC-P1, and \(N^{-1.5}\) for BQCF-P2 and BGFC-P2) compared with the _a priori_ graded mesh given by [34, 13]. The accuracy and robustness of the main adaptive algorithm are also demonstrated in this case.
Figure 4.9 plots the efficiency factors for both the _rigorous_ and _approximated_ error estimators for the blended a/c coupling methods. We observe that the efficiency factors remain moderate during the adaptive computations for all a/c coupling methods, which is a novel improvement compared to the previous works on residual-force based error estimators [45]. This provides numerical evidence of both the reliability and the efficiency of the proposed residual-stress based error estimator.
We plot the relationship between \(R_{\text{a}}\) and \(R_{\text{b}}\) in the adaptive computations for the Frenkel defect in Figure 4.10, where \(R_{\text{a}}\) is the radius of the atomistic region while \(R_{\text{b}}\) represents the width of the blending region. Our adaptive algorithm achieves the optimal relationship between \(R_{\text{a}}\) and \(R_{\text{b}}\) for the Frenkel defect case, indicating the robustness of the main adaptive algorithm.
As for the energy error, we observe in Figure 4.11 that all adaptive computations achieve the same optimal convergence rate (\(N^{-1.0}\) for BQCE whereas \(N^{-2.0}\) for BGFC-P1 and BGFC-P2)
Figure 4.5. The relationship between the radius of the atomistic region \(R_{\text{a}}\) and the width of the blending region \(R_{\text{b}}\) in the adaptive computations for micro-crack.
Figure 4.6. The convergences of energy error for both a priori and a posteriori errors with respect to the number of degrees of freedom for micro-crack.
compared with the _a priori_ graded mesh given by [34, 13], which indicates that the error estimator for the energy error is still reliable.
## 5. Conclusion and Outlook
In this work we propose a unified framework of residual-based _a posteriori_ error estimates and design the corresponding adaptive algorithms, which are in principle suitable for general consistent multiscale coupling methods. We prove that the error estimator based on the residual forces provides an upper bound of the true approximation error. As prototypical examples, we present a range of adaptive computations based on this reliable error estimator for the blended atomistic-to-continuum (a/c) coupling methods, including the energy-based blended quasi-continuum (BQCE), the force-based blended quasi-continuum (BQCF) and the recently developed blended ghost force correction (BGFC) methods. We develop coarse-grained techniques for the efficient evaluation of the error estimator and test the framework on different types of crystalline defects, some of which have not been considered in the previous literature on adaptive a/c coupling methods. The various
Figure 4.7. Computational domain, finite element grid and atomistic region as used in the construction of blended a/c coupling methods for Frenkel defect.
Figure 4.8. The convergences of geometry errors with respect to the number of degrees of freedom for Frenkel defect.
numerical results demonstrate that, compared with the _a priori_ error estimate, the adaptive algorithm leads to the same optimal convergence rate of the error with considerable computational savings. We point out that the techniques and strategies developed in this work do not depend on the specific choice of the multiscale coupling scheme, as long as the method is consistent.
Although we believe that the adaptive algorithm proposed in this paper is generally applicable to other common multiscale coupling schemes and more complex crystalline defects, this research still raises a few open problems which deserve further mathematical analysis and algorithmic development.
* _More complex crystalline defects_: The main bottleneck for more complex defects such as dislocations and cracks is the construction and implementation of the corresponding a/c coupling methods. It is very difficult (if not impossible) to give a rigorous _a priori_ analysis for such problems. However, this is precisely where adaptive a/c coupling methods reveal their greatest advantage and power, and we believe that the current research makes it possible to develop efficient and robust adaptive algorithms for those problems of practical interest.
Figure 4.10. The relationship between the radius of the atomistic region \(R_{\mathrm{a}}\) and the width of the blending region \(R_{\mathrm{b}}\) in the adaptive computations for Frenkel defect.
Figure 4.9. The efficiency factors of error estimator for various a/c coupling methods with respect to the number of degrees of freedom for Frenkel defect.
* _Three dimensional model problems_: We expect to extend the current two dimensional setting to three dimensions. The adaptive algorithm proposed in this work faces no fundamental obstacle, but the implementation requires substantial additional effort since highly non-trivial 3D mesh generation and adaptation are required. The recent advance [14] should provide a good reference in this direction.
* _More robust and flexible refinement strategy_: Although most of the steps in our analysis and algorithm are independent of the geometry of the material defect and computational domain, the adaptive algorithm can only adjust the radius of the atomistic region to refine the model. This limitation prevents us from capturing significant anisotropy in the defect core, elastic field, or defect nucleation. To address this issue and consider such generalizations, we need to evolve the interface anisotropically. One approach is to treat this as a free interface problem based on the error distribution, which may lead to a more robust implementation of model adaptivity. Recent advancements in the literature of QM/MM coupling methods provide one such example [43].
Both the theoretical and practical aspects discussed above will be explored in future work.
## Appendix A Derivation and connection of atomistic stress
In this section, motivated by the stress formulation of the a/c coupling methods and corresponding stress based _a posteriori_ error estimators [18, 40], we will make the connection between the mechanical notion of stress and the _a posteriori_ estimator defined through \(\phi\) in (3.23).
The atomistic stress formulation plays a significant role in the analysis of a/c coupling methods. Given displacement \(u\in\mathscr{U}^{1,2}\), the second Piola stress tensor \(\sigma(u)\) of atomistic model satisfies
\[\langle\delta\mathcal{E}^{\mathrm{a}}(u),v\rangle=\int_{\mathbb{R}^{2}}\sigma (u)\cdot\nabla v\,\mathrm{d}x,\quad\forall v\in\mathscr{U}^{1,2}.\]
Next, we derive the concrete formulation of \(\sigma(u)\) in the following. The first variation of the atomistic energy functional (2.4) is given by
\[\langle\delta\mathcal{E}^{\mathrm{a}}(u),v\rangle=\sum_{\ell\in\Lambda}\sum_{ \rho\in\mathcal{R}}V_{\ell,\rho}\big{(}Du(\ell)\big{)}\cdot D_{\rho}v(\ell), \quad\forall v\in\mathscr{U}^{1,2}. \tag{1.39}\]
Figure 4.11. The convergences of energy error for various a/c coupling methods with respect to the number of degrees of freedom for Frenkel defect.
To derive its continuous version from which \(\sigma(u)\) is then obtained, we apply the so-called localization formulation introduced in [31], that is,
\[D_{\rho}\tilde{v}(\ell)=\int_{0}^{1}\nabla_{\rho}\tilde{v}(\ell+t\rho)\,\mathrm{d }t=\int_{\mathbb{R}^{2}}\int_{0}^{1}\zeta(\ell+t\rho-x)\cdot\nabla_{\rho}v(x)\, \mathrm{d}t\,\mathrm{d}x, \tag{1.40}\]
where \(\tilde{v}\) is a (quasi-)interpolation of \(v\). We omit the construction here for the brevity of presentation and refer to [31, 30] for its detail.
Due to the discrete nature of atomistic models as well as the properties of \(\tilde{v}\) [30], we can evaluate (1.39) by replacing the test function \(v\) with \(\tilde{v}\), which then leads to
\[\langle\delta\mathcal{E}^{\mathrm{a}}(u),\tilde{v}\rangle =\sum_{\ell\in\Lambda}\sum_{\rho\in\mathcal{R}}V_{\ell,\rho}\big{(} Du(\ell)\big{)}\cdot D_{\rho}\tilde{v}(\ell)\] \[=\int_{\mathbb{R}^{2}}\sum_{\ell\in\Lambda}\sum_{\rho\in\mathcal{ R}}\rho\otimes V_{\ell,\rho}\big{(}Du(\ell)\big{)}\int_{0}^{1}\zeta(\ell+t\rho-x )\,\mathrm{d}t\cdot\nabla v(x)\,\mathrm{d}x\] \[=:\int_{\mathbb{R}^{2}}\sigma(u)(x)\cdot\nabla v(x)\,\mathrm{d}x. \tag{1.41}\]
Hence, the concrete atomistic stress reads
\[\sigma(u)(x):=\sum_{\ell\in\Lambda}\sum_{\rho\in\mathcal{R}}\rho\otimes V_{ \ell,\rho}\big{(}Du(\ell)\big{)}\cdot\chi_{\ell,\rho}(x) \tag{1.42}\]
with the "smeared bonds" \(\chi_{\ell,\rho}(x):=\int_{0}^{1}\zeta(\ell+t\rho-x)\,\mathrm{d}t\). Recall that \(u_{h}\) is the solution of blended a/c coupling methods, we have \(I_{\mathrm{a}}u_{h}\in\mathscr{U}^{1,2}\). Then, the _a posteriori_ atomistic stress tensor \(\sigma(I_{\mathrm{a}}u_{h})\) is acquired. However, it can not be applied in practical adaptive algorithm since \(\sigma(I_{\mathrm{a}}u_{h})(x)\) is relatively difficult to be evaluated through the "smeared bonds" \(\chi_{\ell,\rho}(x)\).
Moreover, we could rewrite the first variation of atomistic energy functional by considering the residual forces (cf. (3.22)), namely
\[\langle\delta\mathcal{E}^{\mathrm{a}}(I_{\mathrm{a}}u_{h}),\tilde {v}\rangle= \sum_{\ell\in\Lambda}\sum_{\rho\in\mathcal{R}}\Big{(}V_{\ell- \rho,\rho}\big{(}Du_{h}(\ell-\rho)\big{)}-V_{\ell,\rho}\big{(}Du_{h}(\ell) \big{)}\Big{)}\cdot\tilde{v}(\ell)\] \[= \int_{\mathbb{R}^{2}}\widehat{\mathcal{F}}^{\mathrm{a}}(I_{ \mathrm{a}}u_{h})(x)\cdot v(x)\,\mathrm{d}x. \tag{1.43}\]
Combining (1.43) with (1.41), one can obtain the following equation
\[\int_{\mathbb{R}^{2}}\sigma(I_{\mathrm{a}}u_{h})(x)\cdot\nabla v(x)\,\mathrm{d }x=\int_{\mathbb{R}^{2}}\widehat{\mathcal{F}}^{\mathrm{a}}(I_{\mathrm{a}}u_{h })(x)\cdot v(x)\,\mathrm{d}x,\quad\forall v\in\mathscr{U}^{1,2}. \tag{1.44}\]
We note that it is closely related to the equation from which the _a posteriori_ error estimator \(\|\nabla\phi\|_{L^{2}(\mathbb{R}^{2})}\) is obtained. As a matter of fact, according to the Helmholtz-Hodge decomposition [5, 29], \(\sigma(I_{\mathrm{a}}u_{h})\) can be decomposed as a sum of two orthogonal components:
\[\sigma(I_{\mathrm{a}}u_{h})=\nabla\phi+\nabla\times\psi, \tag{1.45}\]
with \(\phi\in\mathscr{U}^{1,2}\), \(\psi\in\mathscr{U}^{1,2}\). \(\nabla\phi\in L^{2}\) is called the "curl-free" component, and \(\nabla\times\psi\in L^{2}\) is divergence-free in the weak sense, i.e., \(\int_{\mathbb{R}^{2}}\nabla\times\psi(x)\cdot\nabla v(x)\,\mathrm{d}x=0\).
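The orthogonal splitting (1.45) can be illustrated numerically; the snippet below performs the decomposition for a vector field sampled on a periodic grid via the Fourier projection onto the wave-vector direction, which is only a sketch and not the whole-space construction used in the analysis.

```python
import numpy as np

def helmholtz_split(sig):
    """Split a 2D vector field (shape (2, n, n), periodic grid) into a curl-free
    (gradient) part and a divergence-free remainder via Fourier projection."""
    n = sig.shape[-1]
    k = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                   # avoid dividing by zero for the mean mode
    sx, sy = np.fft.fft2(sig[0]), np.fft.fft2(sig[1])
    proj = (kx * sx + ky * sy) / k2                  # (k . sigma_hat) / |k|^2
    gx, gy = proj * kx, proj * ky                    # curl-free component in Fourier space
    grad_part = np.real(np.stack([np.fft.ifft2(gx), np.fft.ifft2(gy)]))
    return grad_part, sig - grad_part

rng = np.random.default_rng(1)
sigma = rng.normal(size=(2, 32, 32))
g, d = helmholtz_split(sigma)
print(np.abs(np.sum(g * d)))                         # the two parts are numerically L^2-orthogonal
```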
Combining Theorem 3.4 and Lemma 3.3, we have
\[\eta^{2}(u_{h})\lesssim\|\mathsf{R}(I_{\mathrm{a}}u_{h})\|^{2}_{(\mathscr{U}^{ 1,2})^{*}}\lesssim\|\nabla\phi\|^{2}_{L^{2}}\leq\|\nabla\phi\|^{2}_{L^{2}}+\| \nabla\times\psi\|^{2}_{L^{2}}=\|\sigma(I_{\mathrm{a}}u_{h})\|^{2}_{L^{2}}.\]
Therefore, \(\|\sigma(I_{\mathrm{a}}u_{h})\|_{L^{2}}\) provides an upper bound for the approximation error, which is consistent with the result shown in [41, Theorem 3.7].
In (1.45), we can uniquely define \(\phi\) by \(\Delta\phi=\nabla\cdot\sigma(I_{\mathrm{a}}u_{h})\) (in the weak sense). On the other hand, we can choose an arbitrary divergence-free component \(\nabla\times\psi\) in \(\sigma(I_{\mathrm{a}}u_{h})\) to satisfy (1.44). Therefore, \(\sigma(I_{\mathrm{a}}u_{h})\) in the sense of (1.44) is not unique. Motivated by the stress tensor correction [41, Eq.(70)], we consider the following problem to obtain a uniquely defined atomistic stress tensor,
\[\bar{\psi}\in\arg\min_{\psi\in\mathscr{U}^{1,2}}\big{\{}\|\sigma(I_{\mathrm{a }}u_{h})\|_{L^{2}}=\|\nabla\phi+\nabla\times\psi\|_{L^{2}}\big{\}}. \tag{1.46}\]
A straightforward calculation and the orthogonality of the two components of the Helmholtz-Hodge decomposition lead to \(\nabla\times\bar{\psi}=0\). Hence, we denote the corresponding uniquely defined atomistic stress tensor as
\[\sigma^{0}(I_{\mathrm{a}}u_{h}):=\nabla\phi,\quad\text{for}\;\phi\in\mathscr{U }^{1,2}. \tag{1.47}\]
As a result, we recover the equation (3.23) which is used in the a posteriori estimates in Section 3.2.1.
## Appendix B Proof of Theorem 3.4
First of all, we introduce the definition of truncation operator. To estimate the approximation error introduced by this discretization we particularly need to account for the truncation of the domain. We assume for the sake of technical convenience, that there exists a radius \(R_{\Omega}\) such that \(B_{R_{\Omega}}\subset\Omega\subset B_{2R_{\Omega}}\); that is, \(\Omega\) is approximately a ball. This allows us to define a simple truncation operator, following [12], \(T_{R_{\Omega}}:\dot{H}^{1}(\mathbb{R}^{d})\to H^{1}_{0}(\Omega)\),
\[T_{R_{\Omega}}v(x):=\eta(x)\big{(}v(x)-a_{R_{\Omega}}\big{)},\qquad a_{R_{ \Omega}}=\fint_{B_{R_{\Omega}}\setminus B_{R_{\Omega}/2}}v(x)\,\mathrm{d}x \tag{2.48}\]
where \(\eta\) is a \(C^{1}\) cut-off function; \(\eta=1\) in \(B_{R_{\Omega}/2}\), \(\eta=0\) in \(B_{R_{\Omega}}^{\mathrm{c}}\) and \(|\nabla\eta|\lesssim R_{\Omega}^{-1}\). Following [12], for \(v_{R_{\Omega}}=T_{R_{\Omega}}v\) and \(a_{R_{\Omega}}\) defined by (2.48) we readily obtain the estimates
\[\|v-v_{R_{\Omega}}-a_{R_{\Omega}}\|_{L^{2}(\Omega)} \lesssim R_{\Omega}\|\nabla v-\nabla v_{R_{\Omega}}\|_{L^{2}( \Omega\setminus B_{R_{\Omega}})},\qquad\text{and} \tag{2.50}\] \[\|\nabla v-\nabla v_{R_{\Omega}}\|_{L^{2}(\Omega)} \lesssim\|\nabla v\|_{L^{2}(\Omega\setminus B_{R_{\Omega}/2})}. \tag{2.49}\]
Proof.: First of all, we estimate the term \(\|\nabla\phi-\nabla\phi_{\mathrm{a}}\|_{L^{2}(\mathbb{R}^{2})}\). By (3.23), for \(v\in\dot{H}^{1}\), we have
\[\|\nabla\phi-\nabla\phi_{\mathrm{a}}\|_{L^{2}(\mathbb{R}^{2})} =\sup_{\|\nabla v\|_{L^{2}}=1}\int_{\mathbb{R}^{2}}\big{(}\nabla \phi\cdot\nabla v-\nabla\phi_{\mathrm{a}}\cdot\nabla v\big{)}\,\mathrm{d}x \tag{2.51}\] \[=\sup_{\|\nabla v\|_{L^{2}}=1}\int_{\mathbb{R}^{2}}\big{(}\widehat {\mathcal{F}}^{\mathrm{a}}\cdot v-\nabla\phi_{\mathrm{a}}\cdot\nabla v\big{)} \,\mathrm{d}x.\]
Let \(v_{R}:=T_{R_{\Omega}}v\) be defined by (2.48), then we split the residual into two parts,
\[\int_{\mathbb{R}^{2}}\big{(}\widehat{\mathcal{F}}^{\mathrm{a}} \cdot v-\nabla\phi_{\mathrm{a}}\cdot\nabla v\big{)}\,\mathrm{d}x =\int_{\mathbb{R}^{2}}\widehat{\mathcal{F}}^{\mathrm{a}}\cdot(v-v _{R})\,\mathrm{d}x-\int_{\mathbb{R}^{2}}\nabla\phi_{\mathrm{a}}\cdot(\nabla v- \nabla v_{R})\,\mathrm{d}x \tag{2.52}\] \[=:T_{1}+T_{2}.\]
To estimate the term \(T_{1}\), we have
\[T_{1}=\int_{\mathbb{R}^{2}}\widehat{\mathcal{F}}^{\mathrm{a}}\cdot(v-v_{R})\, \mathrm{d}x\lesssim\ R_{\Omega}\cdot\|\widehat{\mathcal{F}}^{\mathrm{a}}\|_{L^ {2}(\Omega\setminus B_{R_{\Omega}/2})}\cdot\|\nabla v\|_{L^{2}}. \tag{2.53}\]
where the last inequality follows from the estimate (2.50).
Similarly, for the term \(T_{2}\), we can obtain
\[T_{2}=\int_{\mathbb{R}^{2}}\nabla\phi_{\mathrm{a}}\cdot(\nabla v-\nabla v_{R})\, \mathrm{d}x\lesssim\|\nabla\phi_{\mathrm{a}}\|_{L^{2}(\Omega\setminus B_{R_{ \Omega}/2})}\cdot\|\nabla v\|_{L^{2}}. \tag{2.54}\]
Hence, combining (2.51), (2.53) and (2.54), one can acquire
\[\|\nabla\phi-\nabla\phi_{\mathrm{a}}\|_{L^{2}(\mathbb{R}^{2})}\lesssim R_{ \Omega}\cdot\|\widehat{\mathcal{F}}^{\mathrm{a}}\|_{L^{2}(\Omega\setminus B_ {R_{\Omega}/2})}+\|\nabla\phi_{\mathrm{a}}\|_{L^{2}(\Omega\setminus B_{R_{ \Omega}/2})}=:\rho_{\mathrm{tr}}(u_{h}). \tag{2.55}\]
According to Lemma 3.3, we already know that \(\|\mathsf{R}(I_{\mathrm{a}}u_{h})\|_{(\mathscr{U}^{1,2})^{*}}\simeq\|\nabla\phi\|_{L^{2}}\). Moreover, we can deduce that \(\|\nabla\phi\|_{L^{2}}=\|\widehat{\mathcal{F}}^{\mathrm{a}}\|_{(\dot{H}^{1})^{*}}\). Since \(\phi\) and \(\phi_{\mathrm{a}}\) are the solutions of (3.23) and (3.26), respectively, we use Galerkin orthogonality to write
\[\|\nabla\phi\|_{L^{2}}^{2}-\|\nabla\phi_{\mathrm{a}}\|_{L^{2}}^{2}= \langle\nabla\phi-\nabla\phi_{\mathrm{a}},\nabla\phi-\nabla\phi_ {\mathrm{a}}\rangle+2\langle\nabla\phi-\nabla\phi_{\mathrm{a}},\nabla\phi_{ \mathrm{a}}\rangle \tag{2.56}\] \[= \|\nabla\phi-\nabla\phi_{\mathrm{a}}\|_{L^{2}}^{2}.\]
Hence, we have \(\|\nabla\phi\|_{L^{2}}^{2}\geq\|\nabla\phi_{\mathrm{a}}\|_{L^{2}}^{2}\), which immediately indicates that \(\eta(u_{h})\lesssim\|\mathsf{R}(I_{\mathrm{a}}u_{h})\|_{(\mathscr{U}^{1,2})^{*}}\).
To obtain an upper bound for \(\|\nabla\phi\|_{L^{2}}\) we use Cauchy's inequality to estimate
\[\|\nabla\phi\|_{L^{2}}^{2}=\|\nabla\phi_{\mathrm{a}}\|_{L^{2}}^{2}+\|\nabla \phi-\nabla\phi_{\mathrm{a}}\|_{L^{2}}^{2}\leq\big{(}\|\nabla\phi_{\mathrm{a} }\|_{L^{2}}+\|\nabla\phi-\nabla\phi_{\mathrm{a}}\|_{L^{2}}\big{)}^{2}.\]
We deduce \(\|\mathsf{R}(I_{\mathrm{a}}u_{h})\|_{(\mathscr{U}^{1,2})^{*}}\lesssim\eta(u_{ h})+\rho_{\mathrm{tr}}(u_{h})\), which finishes the stated result.
|
2309.13150 | Pixel-wise Smoothing for Certified Robustness against Camera Motion
Perturbations | Deep learning-based visual perception models lack robustness when faced with
camera motion perturbations in practice. The current certification process for
assessing robustness is costly and time-consuming due to the extensive number
of image projections required for Monte Carlo sampling in the 3D camera motion
space. To address these challenges, we present a novel, efficient, and
practical framework for certifying the robustness of 3D-2D projective
transformations against camera motion perturbations. Our approach leverages a
smoothing distribution over the 2D pixel space instead of in the 3D physical
space, eliminating the need for costly camera motion sampling and significantly
enhancing the efficiency of robustness certifications. With the pixel-wise
smoothed classifier, we are able to fully upper bound the projection errors
using a technique of uniform partitioning in camera motion space. Additionally,
we extend our certification framework to a more general scenario where only a
single-frame point cloud is required in the projection oracle. Through
extensive experimentation, we validate the trade-off between effectiveness and
efficiency enabled by our proposed method. Remarkably, our approach achieves
approximately 80% certified accuracy while utilizing only 30% of the projected
image frames. The code is available at
https://github.com/HanjiangHu/pixel-wise-smoothing. | Hanjiang Hu, Zuxin Liu, Linyi Li, Jiacheng Zhu, Ding Zhao | 2023-09-22T19:15:49Z | http://arxiv.org/abs/2309.13150v2 | # Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations
###### Abstract
In recent years, computer vision has made remarkable advancements in autonomous driving and robotics. However, it has been observed that deep learning-based visual perception models lack robustness when faced with camera motion perturbations. The current certification process for assessing robustness is costly and time-consuming due to the extensive number of image projections required for Monte Carlo sampling in the 3D camera motion space. To address these challenges, we present a novel, efficient, and practical framework for certifying the robustness of 3D-2D projective transformations against camera motion perturbations. Our approach leverages a smoothing distribution over the 2D pixel space instead of in the 3D physical space, eliminating the need for costly camera motion sampling and significantly enhancing the efficiency of robustness certifications. With the pixel-wise smoothed classifier, we are able to fully upper bound the projection errors using a technique of uniform partitioning in camera motion space. Additionally, we extend our certification framework to a more general scenario where only a single-frame point cloud is required in the projection oracle. This is achieved by deriving Lipschitz-based approximated partition intervals. Through extensive experimentation, we validate the trade-off between effectiveness and efficiency enabled by our proposed method. Remarkably, our approach achieves approximately 80% certified accuracy while utilizing only 30% of the projected image frames.
## 1 Introduction
Visual perception has been boosted in recent years by leveraging the power of neural networks, with broad applications in robotics and autonomous driving. Despite the success of deep learning based perception, robust perception suffers from external sensing uncertainty in real-world settings, e.g. glaring illumination [27; 24; 26; 25], sensor placement perturbation [22; 63; 39; 36], motion blurring, corruptions [55; 43; 33; 32], etc. Besides, the internal vulnerability of deep neural networks has been well studied for years, and rich literature reveals that deep neural networks can be easily fooled by adversarial examples. The victim perception models predict incorrect results under stealthy perturbation on pixels [15; 57; 62] or 2D semantic transformation, e.g. image rotation, scaling, translation, and other pixel-wise deformations of vector fields [47; 9; 20; 12; 19; 30; 38].
In response to such internal model vulnerability from \(\ell_{p}\)-bounded pixel-wise perturbations, in parallel to many empirical defense methods [42; 59; 41; 60], lots of recent work provide provable robustness guarantees and certifications for any bounded perturbations within certain \(\ell_{p}\) norm threshold [5; 58; 66; 6]. As for the challenging 2D semantic transformations including 2D geometric transformation, deterministic verification [2; 44; 50; 64; 21] and probabilistic certification methods [13; 34; 1; 16] are recently proposed to guarantee such robustness, which is more relevant and important to real-world applications.
However, little existing work focuses on the more commonly seen 3D-2D projective transformation caused by external sensing uncertainty of camera motion perturbations, which commonly exist in real-world applications such as autonomous driving and robotics. The external perturbations may cause severe consequences in safety-critical scenarios if the perception models are fooled by certain translation or rotation changes of the onboard cameras. Recent work CMS [23] studies how camera motions influence the perception models and presents a resolvable robustness certification framework for such motion perturbations through randomized smoothing in the parameterized camera motion space. However, although CMS [23] gives a tight certification as an upper bound due to the formulation of the resolvable projective transformation [34; 16], the high computational cost of "camera shaking" induced by the Monte Carlo sampling in the camera motion space and the requirement of the entire dense point cloud of the object model restrict its practical applications.
To this end, as shown in Figure 1, we introduce a new efficient robustness certification framework with pixel-wise smoothing against camera motion perturbations in a non-resolvable manner. We first construct smoothed classifiers over the pixel level, reducing the cost of redundant image projections in Monte Carlo sampling. Then we propose to use the uniform partitions technique with the consistent camera motion interval to fully cover the projection error based on the pixel-wise smoothed classifier. To further avoid the projection oracle where the whole dense point cloud must be required, we leverage the Lipschitz property of the projection function to approximate the upper bound of the partitioning intervals. This results in the successful certification given the projection oracle with knowing only one-frame point cloud, which is more convenient to obtain in many real-world applications. In addition to the theoretical contributions, we conduct extensive experiments to show the significant trade-off of required projection frames and certified accuracy compared to the resolvable baseline as an upper bound, validating the efficiency and effectiveness of our proposed method. Our contributions are summarized as follows:
* We propose a new efficient robustness certification framework against camera motion perturbations using pixel-wise smoothing based on the uniform partitioning of camera motion, avoiding the substantial expenses associated with projection sampling in camera motion space.
* We further derive a Lipschitz-based approximation for the upper bound of the partitioning interval and extend our theoretical framework to the general case with only the one-frame point cloud known in the projection oracle.
* Comprehensive comparison experiments show that our method is much more efficient on the MetaRoom dataset: it requires only 30% of the projected image frames to achieve 80% of the certified accuracy of the resolvable baseline upper bound.
## 2 Related Work
**Provable Defenses against \(\ell_{p}\)-bounded Attacks**. Compared to empirical defense approaches [42; 59; 41; 54; 60; 46] that train robust models against specific adversarial perturbations, defenses with provable guarantees aim to guarantee accuracy for all perturbations within some \(\ell_{p}\)-bounded attack radius [35; 37]. Complete certifications [31; 10] guarantee finding any existing attack but are NP-complete for deep neural networks [35; 67]. Incomplete certifications are more practical, as they provide relaxed guarantees that may leave some examples uncertified; they can be categorized into deterministic and probabilistic ones [58; 61; 56; 6; 45; 65]. Deterministic certifications adopt linear programming [53; 66] or semi-definite programming [48; 49], but they cannot scale up well to large datasets. Probabilistic certifications [5] based on randomized smoothing show impressive scalability and promising advantages through adversarial training [51] and consistency regularization [29]. Many works show that robustness certification can be boosted by improving robustness against Gaussian noise with denoisers [52; 3].

Figure 1: Overview of the robustness certification using pixel-wise smoothing (green), which avoids inefficient sampling in the camera motion space that would require too many projected frames (red).
**Semantic Transformation Robustness Certification**. The robustness of deep neural networks against real-world semantic transformations, e.g., translation, rotation, and scaling [47; 9; 20; 12; 19; 30; 38], presents challenges due to the complex optimization landscape involved in adversarial attacks. Recent literature aims to provide robustness guarantees against semantic transformations [16; 34; 50; 1; 2], with either function-relaxation-based deterministic guarantees [2; 44; 40; 50; 64; 21] or randomized-smoothing-based high-confidence probabilistic guarantees [13; 34; 1; 4; 16]. However, the robustness against projective transformations induced by sensor movement is rarely studied in the literature, even though we believe it is a common perturbation source in practical applications. Recent work [23] proposes camera motion smoothing to certify such robustness by leveraging a smoothing distribution in the camera motion space. It is known to be tight as a resolvable transformation [34; 4; 16], whose domain overlaps with the support of the smoothing distribution in the camera motion space. However, it requires substantial computational resources for Monte Carlo sampling with image projections, as well as the expensive prior of a dense point cloud as an oracle. These limitations motivate us to study a more efficient robustness certification method.
## 3 Background of Image Projection and Certification Goal
### 3D-2D Projective Transformation
Following the literature in computer vision and graphics [17], image projection can be obtained through the intrinsic matrix \(\mathbf{K}\) of the camera and the extrinsic matrix of camera pose \(\alpha=(\theta,t)\in\mathcal{Z}\) given dense 3D point cloud \(\mathbb{P}\subset\mathbb{R}^{3}\). In this way, each 3D point \(P\in\mathbb{P}\) can be projected to a 2D position on the image plane through the 3D-2D projection function \(\rho:\mathbb{P}\times\mathcal{Z}\to\mathbb{R}^{2}\) and pixel-wise depth function \(D:\mathbb{P}\times\mathcal{Z}\to\mathbb{R}\).
**Definition 3.1** (3D-2D position projection).: For any 3D point \(P=(X,Y,Z)\in\mathbb{P}\subset\mathbb{R}^{3}\) under the camera coordinate with the camera intrinsic matrix \(\mathbf{K}\), based on the camera motion of \(\alpha=(\theta,t)\in\mathcal{Z}\subset\mathbb{R}^{6}\) with rotation matrix \(\mathbf{R}=\exp(\theta^{\wedge})\in SO(3)\) and translation vector \(t\in\mathbb{R}^{3}\), define the projection function \(\rho:\mathbb{P}\times\mathcal{Z}\to\mathbb{R}^{2}\) and the depth function \(D:\mathbb{P}\times\mathcal{Z}\to\mathbb{R}\) as
\[[\rho(P,\alpha),1]^{\top}=\frac{1}{D(P,\alpha)}\mathbf{K}\mathbf{R}^{-1}(P-t),\quad D(P,\alpha)=[0,0,1]\mathbf{R}^{-1}(P-t) \tag{1}\]
As defined in [23], given the \(K\)-channel colored 3D point cloud \(V\in\mathcal{V}\), the 2D colored image \(x\in\mathcal{X}\) can be obtained through 3D-2D projective transformation \(O:\mathcal{V}\times\mathcal{Z}\to\mathcal{X}\), as shown in Figure 2. Based on this, define the 2D projective transformation \(\phi:\mathcal{X}\times\mathcal{Z}\to\mathcal{X}\) given the relative camera motion \(\alpha\in\mathcal{Z}\). Specifically, if only one coordinate of \(\alpha\) is non-zero, then it is denoted as one-axis relative camera motion \(\alpha\in\mathcal{S}\).
**Definition 3.2** (3D-2D \(K\)-channel pixel-wise projection).: Given the position projection function \(\rho:\mathbb{P}\times\mathcal{Z}\to\mathbb{R}^{2}\) and the depth function \(D:\mathbb{P}\times\mathcal{Z}\to\mathbb{R}\) with \(K\)-channel 3D point cloud \(V\in\mathcal{V}:\mathbb{P}\times[0,1]^{K}\), for camera motion \(\alpha\in\mathcal{Z}\), define the 3D-2D pixel projection function as \(O:\mathcal{V}\times\mathcal{Z}\to\mathcal{X}\) to return \(K\)-channel image \(x\in\mathcal{X}\), \(x=O(V,\alpha)\) where
\[x_{k,r,s}=O(V,\alpha)_{k,r,s}=V_{P^{*}_{\alpha},k},\quad\text{where }P^{*}_{\alpha}=\operatorname*{argmin}_{\{P\in\mathbb{P}\mid\lfloor\rho(P,\alpha)\rfloor=(r,s)\}}D(P,\alpha) \tag{2}\]
where \(\lfloor\cdot\rfloor\) is the floor function. Specifically, if \(x=O(V,0)\), given the relative camera pose \(\alpha\in\mathcal{Z}\), define the 2D projective transformation \(\phi:\mathcal{X}\times\mathcal{Z}\to\mathcal{X}\) as \(\phi(x,\alpha)=O(V,\alpha)\).
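To make Definitions 3.1 and 3.2 concrete, below is a minimal NumPy sketch of the position projection \(\rho\), the depth \(D\), and the pixel projection \(O\) implemented as a simple z-buffer; the intrinsic matrix, toy point cloud, and image size are illustrative assumptions and not taken from any dataset used later.

```python
import numpy as np

def rho_and_depth(P, K_intr, R, t):
    """Eq. (1): [rho(P, alpha), 1]^T = K R^{-1} (P - t) / D(P, alpha),
    with D(P, alpha) = [0, 0, 1] R^{-1} (P - t)."""
    q = R.T @ (P - t)                  # R^{-1} = R^T for a rotation matrix
    depth = q[2]
    uv = (K_intr @ q)[:2] / depth      # 2D position on the image plane
    return uv, depth

def pixel_projection(points, colors, K_intr, R, t, H, W):
    """Eq. (2): each pixel keeps the color of the smallest-depth point whose
    floored projection lands on it (a z-buffer over the point cloud)."""
    img = np.zeros((H, W, colors.shape[1]))
    zbuf = np.full((H, W), np.inf)
    for P, c in zip(points, colors):
        uv, d = rho_and_depth(P, K_intr, R, t)
        r, s = int(np.floor(uv[1])), int(np.floor(uv[0]))
        if 0 <= r < H and 0 <= s < W and 0 < d < zbuf[r, s]:
            zbuf[r, s], img[r, s] = d, c
    return img

rng = np.random.default_rng(0)
K_intr = np.array([[50.0, 0.0, 16.0], [0.0, 50.0, 16.0], [0.0, 0.0, 1.0]])    # assumed intrinsics
points = np.c_[rng.uniform(-0.5, 0.5, (500, 2)), rng.uniform(1.0, 3.0, 500)]  # toy point cloud
colors = rng.random((500, 3))
img = pixel_projection(points, colors, K_intr, np.eye(3), np.array([0.0, 0.0, 0.01]), 32, 32)
```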
| Domain symbol | Meaning | Function symbol | Meaning |
| --- | --- | --- | --- |
| \(\mathcal{X}\subset[0,1]^{K}\times\mathbb{Z}^{2}\) | 2D image with \(K\) channels | \(\rho:\mathbb{P}\times\mathcal{Z}(\text{or }\mathcal{S})\to\mathbb{R}^{2}\) | 3D-2D position projection function |
| \(\mathcal{Z}\subset\mathbb{R}^{6}\) | Parameterized camera motion space | \(D:\mathbb{P}\times\mathcal{Z}(\text{or }\mathcal{S})\to\mathbb{R}\) | Pixel-wise depth function |
| \(\mathcal{S}\subset\mathbb{R}\times\{0\}^{5}\) | One-axis relative camera pose | \(\phi:\mathcal{X}\times\mathcal{Z}(\text{or }\mathcal{S})\to\mathcal{X}\) | 2D projective transformation function |
| \(\mathbb{P}\subset\mathbb{R}^{3}\) | 3D points | \(O:\mathcal{V}\times\mathcal{Z}(\text{or }\mathcal{S})\to\mathcal{X}\) | 3D-2D pixel projection function |
| \(\mathcal{V}\subset\mathbb{R}^{3}\times[0,1]^{K}\) | 3D points with \(K\) channels | \(p:\mathcal{Y}\mid\mathcal{X}\to\mathbb{R}\) | Score function in base classifier |
| \(\mathcal{Y}\subset\mathbb{Z}\) | Label space for classification | \(q:\mathcal{Y}\mid\mathcal{X};\varepsilon\to\mathbb{R}\) | Score function in \(\varepsilon\)-smoothed classifier |

Table 1: Table of notations for domain and function symbols
### Threat Model and Certification Goal
We formulate the camera motion perturbation as an adversarial attack on the image classifier \(g:\mathcal{X}\rightarrow\mathcal{Y}\): there exists some camera pose \(\alpha\in\mathcal{Z}\) under which the captured image \(\phi(x,\alpha)\) fools the classifier \(g\) into predicting a wrong object label. Specifically, if the whole dense point cloud \(V\) is known as a prior, we can obtain the image projection \(\phi:\mathcal{X}\times\mathcal{Z}\rightarrow\mathcal{X}\) directly through the 3D-2D projective transformation \(O:\mathcal{V}\times\mathcal{Z}\rightarrow\mathcal{X}\), i.e.
\[x=\phi(x,0)=O(V,0),\phi(x,\alpha)=O(V,\alpha) \tag{3}\]
The certification goal is to provide robustness guarantees for vision models against all camera motion perturbations within a certain radius in the camera pose space through the 3D-2D projective transformation. We formulate the goal as follows: given the 2D projective transformation \(\phi\), find a set \(\mathcal{Z}_{\mathrm{adv}}\subseteq\mathcal{Z}_{\phi}\) for a classifier \(g\) such that,
\[g(x)=g(\phi(x,\alpha)),\forall\alpha\in\mathcal{Z}_{\mathrm{adv}}. \tag{4}\]
## 4 Certifying Camera Motion Perturbation using Pixel-wise Smoothing
### Pixel-wise Smoothed Classifier with Projection
To achieve the certification goal (4), we adopt a stochastic classifier based on randomized smoothing [5] at the pixel level to construct the pixel-wise smoothed classifier, instead of the expensive smoothed classifier that uses a smoothing distribution in the camera motion space [23]. Combining this with the image projection discussed above, we present the definition of the smoothed classifier below, which adds zero-mean pixel-wise Gaussian noise to each projected image and takes the empirical mean as the smoothed prediction.
**Definition 4.1** (\(\varepsilon\)-smoothed classifier with 2D image projection).: Let \(\phi:\mathcal{X}\times\mathcal{Z}\rightarrow\mathcal{X}\) be a 2D projective transformation parameterized with the one-axis relative camera motion \(\alpha\in\mathcal{S}\subset\mathcal{Z}\) and \(\varepsilon\in\mathcal{X}\sim\mathcal{P}_{\varepsilon}\) as the smoothing distribution. Let \(x=\phi(x,0)\) be under the original camera pose and \(h:\mathcal{X}\rightarrow\mathcal{Y}\) be a base classifier \(h(x)=\operatorname*{argmax}_{y\in\mathcal{Y}}p(y\mid x)\). Under the one-axis relative camera pose \(\alpha\in\mathcal{S}\), we define the \(\varepsilon\)-smoothed classifier \(g:\mathcal{X}\rightarrow\mathcal{Y}\) as
\[g(x;\alpha;\varepsilon)=\operatorname*{argmax}_{y\in\mathcal{Y}}q(y\mid\phi(x,\alpha);\varepsilon),\text{where }q(y\mid\phi(x,\alpha);\varepsilon)=\mathbb{E}_{ \varepsilon\sim\mathcal{P}_{\varepsilon}}p(y\mid\phi(x,\alpha)+\varepsilon) \tag{5}\]
_Remark 4.2_.: We remark that the definition of smoothed classifier 4.1 is also applicable to the general 6-DoF translation and rotation of camera motion (\(\alpha\in\mathcal{Z}\)), but we mainly focus on the one-axis relative camera motion (\(\alpha\in\mathcal{S}\subset\mathcal{Z}\)) to make it consistent with the previous work [23].
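As a sketch of how the \(\varepsilon\)-smoothed prediction in Definition 4.1 can be estimated in practice, the following PyTorch snippet averages the base classifier's class probabilities over Monte Carlo draws of pixel-wise Gaussian noise; `base_model`, the noise level, and the sample count are placeholders rather than the exact settings used in the experiments.

```python
import torch

@torch.no_grad()
def smoothed_scores(base_model, projected_image, sigma=0.5, n_samples=1000, batch=100):
    """Estimate q(y | phi(x, alpha); eps) = E_eps[p(y | phi(x, alpha) + eps)]
    for one projected image of shape (K, H, W), with eps ~ N(0, sigma^2 I)."""
    base_model.eval()
    probs_sum, remaining = None, n_samples
    while remaining > 0:
        b = min(batch, remaining)
        noisy = projected_image.unsqueeze(0).repeat(b, 1, 1, 1) \
                + sigma * torch.randn(b, *projected_image.shape)
        probs = base_model(noisy).softmax(dim=-1)          # p(y | x + eps) for each draw
        probs_sum = probs.sum(0) if probs_sum is None else probs_sum + probs.sum(0)
        remaining -= b
    return probs_sum / n_samples                            # empirical mean over noise draws

# The smoothed prediction g(x; alpha; eps) is then the argmax of the returned scores.
```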
Based on the pixel-wise smoothing, we then introduce uniform partitions in the camera motion space to fully cover all the pixel values within the camera motion perturbation radius. Specifically, we derive the upper bound of the partition interval (PwS) and its Lipschitz-based approximation (PwS-L), which is further used to relax the prior from the entire dense point cloud to a one-frame point cloud in the projection oracle (PwS-OF).
### PwS: Certification via Pixel-wise Smoothing with Prior of Entire Point Cloud
Given the entire dense colored point cloud \(V:\mathbb{P}\times[0,1]^{K}\) for the image projection, the projection function \(O(V,\alpha)\) at each pixel \((r,s)\in\mathbb{Z}^{2}\) is a piecewise constant function w.r.t. \(\alpha\), as shown in Figure 3. This is because the projected pixel value is determined by the target 3D point \(P^{*}\in\mathbb{P}\) with the least projected depth on the pixel \((r,s)\) for any camera pose within the motion interval \(\mathbb{U}_{P^{*},r,s}\), which we define as the _consistent camera motion interval_: the set of camera poses for which the same 3D scene point projects to a given pixel.
**Definition 4.3** (Consistent camera motion interval).: Given the 3D points \(\mathbb{P}\subset\mathbb{R}^{3}\) and the projection function \(\rho:\mathbb{P}\times\mathcal{Z}\to\mathbb{R}^{2}\) and the depth function \(D:\mathbb{P}\times\mathcal{Z}\to\mathbb{R}\), for any \(P^{*}\in\mathbb{P}\) projected on \((r,s)\) with the least depth values, define the camera motion set \(\mathbb{U}_{P^{*},r,s}\subset\mathcal{Z}\) as the consistent camera motion interval,
\[\mathbb{U}_{P^{*},r,s}=\{\alpha\mid P^{*}=\operatorname*{argmin}_{\{P\in\mathbb{P}\mid\lfloor\rho(P,\alpha)\rfloor=(r,s)\}}D(P,\alpha)\} \tag{6}\]
Specifically, for a one-axis consistent camera motion interval \(\mathbb{U}_{P,r,s}\subset\mathcal{S}\), when there is no ambiguity, we regard \(\mathbb{U}_{P,r,s}\) as a subset of the one-dimensional real line given by the non-zero coordinate of \(\mathcal{S}\) in the following mathematical notation.
Since all the intervals of the piecewise constant function \(O(V,\alpha)_{r,s}\) correspond to different 3D points projected to pixel \((r,s)\) as the camera motion \(\alpha\) varies within the one-axis camera motion perturbation set \(\mathcal{S}\), we introduce Lemma 4.4, which shows that for any given pixel \((r,s)\), an upper bound \(\Delta^{r,s}\) exists for the _consistent camera motion interval_ \(\mathbb{U}_{P,r,s}\), regardless of \(P\in\mathbb{P}\). This upper bound ensures that all 3D-2D projections fall within the projections of the endpoints of \(\Delta^{r,s}\) for any camera motion that falls within \(\Delta^{r,s}\). In this scenario, we describe the projection function \(O(V,\alpha)_{r,s}\) as being _fully covered_ by this consistent interval upper bound \(\Delta^{r,s}\), as illustrated in Figure 3. The proof of Lemma 4.4 can be found in the Appendix Section B.
**Lemma 4.4** (Upper bound of fully-covered motion interval).: _Given the projection from the entire 3D point cloud \(V\in\mathcal{V}:\mathbb{P}\times[0,1]^{K}\) along a one-axis translation or rotation and the consistent camera motion interval \(\mathbb{U}_{P,r,s}\) for any \(P\in\mathbb{P}\) projected on \((r,s)\), define the interval \(\Delta^{r,s}\) as,_
\[\Delta^{r,s}=\min_{P\in\mathbb{P}}\{\sup\mathbb{U}_{P,r,s}-\inf\mathbb{U}_{P, r,s}\} \tag{7}\]
_then for any projection on \((r,s)\) under camera motion \(u\in\bigcup_{P\in\mathbb{P}}\mathbb{U}_{P,r,s}\), we have \(\forall\ 0\leq\Delta^{*}\leq\Delta^{r,s}\)_
\[O(V,u+\Delta^{*})_{r,s}\in\{O(V,u)_{r,s},O(V,u+\Delta^{r,s})_{r,s}\} \tag{8}\]
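To make Definition 4.3 and Lemma 4.4 concrete, here is a rough NumPy sketch that scans a fine grid of one-axis camera motions, records which 3D point wins the depth test at each pixel, and takes the shortest constant run as an estimate of \(\Delta^{r,s}\); the toy intrinsics, point cloud, and grid resolution are illustrative assumptions rather than the exact procedure used in the paper.

```python
import numpy as np

def project_tz(P, K_intr, tz):
    """rho and depth of Eq. (1) for a pure z-axis translation (R = I, t = [0, 0, tz])."""
    q = P - np.array([0.0, 0.0, tz])
    return (K_intr @ q)[:2] / q[2], q[2]

def min_consistent_interval(points, K_intr, alpha_grid, H, W):
    """For each pixel (r, s), track which 3D point wins the depth test as alpha varies;
    the shortest constant run estimates Delta^{r,s}, and its minimum over pixels gives
    a conservative partition interval in the spirit of Lemma 4.4."""
    winners = np.full((len(alpha_grid), H, W), -1, dtype=int)
    for a_idx, a in enumerate(alpha_grid):
        zbuf = np.full((H, W), np.inf)
        for p_idx, P in enumerate(points):
            uv, d = project_tz(P, K_intr, a)
            r, s = int(np.floor(uv[1])), int(np.floor(uv[0]))
            if 0 <= r < H and 0 <= s < W and 0 < d < zbuf[r, s]:
                zbuf[r, s], winners[a_idx, r, s] = d, p_idx
    step = alpha_grid[1] - alpha_grid[0]
    delta = np.inf
    for r in range(H):
        for s in range(W):
            col = winners[:, r, s]
            runs = np.split(col, np.where(np.diff(col) != 0)[0] + 1)
            lengths = [len(run) for run in runs if run[0] >= 0]
            if lengths:
                delta = min(delta, min(lengths) * step)
    return delta

rng = np.random.default_rng(0)
K_intr = np.array([[50.0, 0.0, 16.0], [0.0, 50.0, 16.0], [0.0, 0.0, 1.0]])     # assumed intrinsics
points = np.c_[rng.uniform(-0.5, 0.5, (300, 2)), rng.uniform(1.0, 3.0, 300)]   # toy point cloud
delta_alpha = min_consistent_interval(points, K_intr, np.linspace(-0.01, 0.01, 401), H=32, W=32)
```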
Based on \(\Delta^{r,s}\) in Lemma 4.4 for each pixel, we adopt uniform partitions over one-axis camera motion space \(\mathcal{S}\subseteq\mathcal{Z}\), resulting in the robustness certification in Theorem 4.5. The key to the proof is to upper bound the projection error [34, 4] in Equation (9), which can be done through the fully-covered image partitions. The full proof of Theorem 4.5 can be found in the Appendix Section B.
**Theorem 4.5** (Certification with fully-covered partitions).: _For the image projection from entire 3D point cloud \(V\in\mathcal{V}:\mathbb{P}\times[0,1]^{K}\), let \(\phi\) be one-axis rotation or translation with parameters in \(\mathcal{S}\subseteq\mathcal{Z}\) and uniformly partitioned \(\{\alpha_{i}\}_{i=1}^{N}\subseteq\mathcal{S}\) with interval \(\Delta_{\alpha}\), where_
\[\Delta_{\alpha}\leq\min_{r,s}\Delta^{r,s}=\min_{r,s}\min_{P\in\mathbb{P}}\{ \sup\mathbb{U}_{P,r,s}-\inf\mathbb{U}_{P,r,s}\}\]
_Let \(y_{A},y_{B}\in\mathcal{Y}\), \(\varepsilon\sim\mathcal{N}(0,\sigma^{2}\mathbb{I}_{d})\) and suppose that for any \(i\), the \(\varepsilon\)-smoothed classifier defined by \(q(y\mid x;\varepsilon):=\mathbb{E}(p(y\mid x+\varepsilon))\) has class probabilities that satisfy with top-2 classes \(y_{A},y_{B}\) as \(q(y_{A}\mid\phi(x,\alpha_{i});\varepsilon)\geq p_{A}^{(i)}\geq p_{B}^{(i)}\geq \max_{y\neq y_{A}}q(y\mid\phi(x,\alpha_{i});\varepsilon)\), then it is guaranteed that \(\forall\alpha\in\mathcal{S}\colon y_{A}=\operatorname*{argmax}_{y}q(y\mid\phi( x,\alpha);\varepsilon)\) if_
\[\max_{1\leq i\leq N-1}\sqrt{\frac{1}{2}\sum_{k,r,s}(O(V,\alpha_{i})_{krs}-O(V, \alpha_{i+1})_{krs})^{2}}<\frac{\sigma}{2}\min_{1\leq i\leq N}\left(\Phi^{-1} \left(p_{A}^{(i)}\right)-\Phi^{-1}\left(p_{B}^{(i)}\right)\right). \tag{9}\]
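A minimal sketch of how the certification condition (9) can be checked numerically, given the images projected at the partition points and per-partition class-probability bounds \(p_{A}^{(i)},p_{B}^{(i)}\) (e.g., estimated by Monte Carlo sampling with confidence intervals); the inputs are assumed to be precomputed NumPy arrays.

```python
import numpy as np
from scipy.stats import norm

def certified_by_condition_9(projected_frames, p_a, p_b, sigma):
    """projected_frames: (N, K, H, W) array of O(V, alpha_i) at the N partition points.
    p_a, p_b: (N,) arrays with a lower bound on the top class and an upper bound on the
    runner-up class of the eps-smoothed classifier at each alpha_i."""
    diffs = projected_frames[1:] - projected_frames[:-1]
    # Left-hand side of (9): largest scaled l2 gap between consecutive partition images.
    lhs = np.sqrt(0.5 * (diffs ** 2).sum(axis=(1, 2, 3))).max()
    # Right-hand side of (9): worst-case certified margin over all partitions.
    rhs = 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b)).min()
    return lhs < rhs
```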
### PwS-L: Certification via Pixel-wise Smoothing with Lipschitz-approximated Partitions
In more general cases, where the prior of the entire point cloud is unknown, the projected pixels within the _consistent camera motion interval_ \(\mathbb{U}_{P,r,s}\) are easier to find than \(\mathbb{U}_{P,r,s}\) itself via Definition 4.3. In this case, we propose to approximate the upper bound of the fully-covered interval \(\Delta^{r,s}\) by \(\Delta^{r,s}_{L}\), leveraging the Lipschitz property of the projection oracle, as shown in Lemma 4.6.
**Lemma 4.6** (Approximated upper bound of fully-covered interval).: _For the projection from the entire 3D point cloud \(V\in\mathcal{V}:\mathbb{P}\times[0,1]^{K}\), given the one-axis position projection function \(\rho\) that is monotonic in the \(l_{\infty}\) norm over the camera motion space, if the Lipschitz-based interval \(\Delta^{r,s}_{L}\) is defined as_
\[\Delta^{r,s}_{L}=\min_{P\in\mathbb{P}}\frac{1}{L_{P}}\mid\rho(P,\sup\mathbb{U} _{P,r,s})-\rho(P,\inf\mathbb{U}_{P,r,s})\mid_{\infty} \tag{10}\]
_where \(L_{P}\) is the Lipschitz constant for projection function \(\rho\) given 3D point \(P\),_
\[L_{P}=\max\{\max_{\alpha\in\mathcal{S}}|\frac{\partial\rho_{1}(P,\alpha)}{ \partial\alpha}|,\max_{\alpha\in\mathcal{S}}|\frac{\partial\rho_{2}(P,\alpha )}{\partial\alpha}|\} \tag{11}\]
_then it holds that \(\Delta^{r,s}_{L}\leq\Delta^{r,s}=\min_{P\in\mathbb{P}}\{\sup\mathbb{U}_{P,r,s }-\inf\mathbb{U}_{P,r,s}\}\)._
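As a rough numerical illustration of Lemma 4.6, the Lipschitz constant \(L_{P}\) of the position projection can be approximated by finite differences on a grid over the one-axis motion range; the pinhole projection below follows the conventions of Definition 3.1, and the intrinsics and grid resolution are assumptions.

```python
import numpy as np

def project_tz(P, K_intr, tz):
    """rho(P, alpha) of Eq. (1) for a pure z-axis translation (R = I, t = [0, 0, tz])."""
    q = P - np.array([0.0, 0.0, tz])
    return (K_intr @ q)[:2] / q[2]

def lipschitz_constant(P, K_intr, alpha_grid):
    """Finite-difference estimate of L_P = max_alpha max(|d rho_1/d alpha|, |d rho_2/d alpha|)."""
    uv = np.array([project_tz(P, K_intr, a) for a in alpha_grid])        # (M, 2) positions
    d_uv = np.abs(np.diff(uv, axis=0)) / np.abs(np.diff(alpha_grid))[:, None]
    return d_uv.max()

K_intr = np.array([[50.0, 0.0, 16.0], [0.0, 50.0, 16.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
alpha_grid = np.linspace(-0.01, 0.01, 201)                                   # +/- 10 mm along z
L_P = lipschitz_constant(np.array([0.2, -0.1, 1.5]), K_intr, alpha_grid)
```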
Based on the monotonicity of the projection over the camera motion and on Theorem 4.5, Theorem 4.7 below holds for more general cases, but with a smaller approximated fully-covered interval \(\Delta^{r,s}_{L}\) and a larger number of uniform partitions \(N\) as a trade-off compared to Theorem 4.5. The full proofs of Lemma 4.6 and Theorem 4.7 can be found in the Appendix Section C.
**Theorem 4.7** (Certification with approximated partitions).: _For the projection from entire 3D point cloud \(V\in\mathcal{V}:\mathbb{P}\times[0,1]^{K}\) through the monotonic position projection function \(\rho\) in \(l_{\infty}\) norm over camera motion space, let \(\phi\) be one-axis rotation or translation with parameters in \(\mathcal{S}\subseteq\mathcal{Z}\) and uniformly partitioned \(\{\alpha_{i}\}_{i=1}^{N}\subseteq\mathcal{S}\) with interval \(\Delta_{\alpha}\), where_
\[\Delta_{\alpha}\leq\min_{r,s}\Delta^{r,s}_{L}=\min_{r,s}\min_{P \in\mathbb{P}}\frac{1}{L_{P}}\mid\rho(P,\sup\mathbb{U}_{P,r,s})-\rho(P,\inf \mathbb{U}_{P,r,s})\mid_{\infty} \tag{12}\] \[L_{P}=\max\{\max_{\alpha\in\mathcal{S}}|\frac{\partial\rho_{1}( P,\alpha)}{\partial\alpha}|,\max_{\alpha\in\mathcal{S}}|\frac{\partial\rho_{2}(P, \alpha)}{\partial\alpha}|\} \tag{13}\]
_Let \(y_{A},y_{B}\in\mathcal{Y}\), \(\varepsilon\sim\mathcal{N}(0,\sigma^{2}\mathbb{I}_{d})\) and suppose that for any \(i\), the \(\varepsilon\)-smoothed classifier defined by \(q(y\mid x;\varepsilon):=\mathbb{E}(p(y\mid x+\varepsilon))\) has class probabilities that satisfy with top-2 classes \(y_{A},y_{B}\) as \(q(y_{A}\mid\phi(x,\alpha_{i});\varepsilon)\geq p_{A}^{(i)}\geq p_{B}^{(i)} \geq\max_{y\neq y_{A}}q(y\mid\phi(x,\alpha_{i});\varepsilon)\),_
_then it is guaranteed that \(\forall\alpha\in\mathcal{S}\colon y_{A}=\operatorname*{argmax}_{y}q(y\mid \phi(x,\alpha);\varepsilon)\) if_
\[\max_{1\leq i\leq N-1}\sqrt{\frac{1}{2}\sum_{k,r,s}(O(V,\alpha_{i})_{krs}-O(V, \alpha_{i+1})_{krs})^{2}}<\frac{\sigma}{2}\min_{1\leq i\leq N}\left(\Phi^{-1} \left(p_{A}^{(i)}\right)-\Phi^{-1}\left(p_{B}^{(i)}\right)\right).\]
### PwS-OF: Certification via Pixel-wise Smoothing with One-Frame Point Cloud as Prior
In this section, we extend our discussion to a more generalized scenario where only a one-frame dense point cloud is known. Such data can be conveniently acquired through various practical methods such as stereo vision [14], depth cameras [28], or monocular depth estimation [11]. Before delving into the Lipschitz properties of the projection oracle based on a single-frame point cloud, we begin by defining the image projection derived from the one-frame point cloud with \(\delta\)-convexity, showing how close the pixels projected from entire 3D points are to the pixels projected from the one-frame point cloud, as shown in Figure 4.
**Definition 4.8** (3D projection from one-frame point cloud with \(\delta\)-convexity).: Given a one-frame point cloud \(\mathbb{P}_{1}\subset\mathbb{R}^{3}\), define the \(K\)-channel image \(x\in\mathcal{X}:[0,1]^{K}\times\mathbb{Z}^{2}\) as
\[x_{k,r,s}=O(V,0)_{k,r,s}=V_{P_{1},k},\text{ where }P_{1}\in\mathbb{P}_{1}\text{ s.t. }\lfloor\rho(P_{1},\alpha)\rfloor=(r,s) \tag{14}\]
Define \(\delta\)-convexity as follows: with the minimal \(\delta>0\), for any new point \(P\notin\mathbb{P}_{1}\) and any \(\alpha\in\mathcal{S}\), there exists \(P^{\prime}\in\mathbb{P}_{1}\) with \(\mid\rho(P,\alpha)-\rho(P^{\prime},\alpha)\mid_{\infty}\leq\delta\) such that \(D(P,\alpha)\geq D(P^{\prime},\alpha)\).
Following the Lipschitz-based approximation of the upper bound of the fully-covered interval in Section 4.3, we further approximate this upper bound based on the \(\delta\)-convexity of the point cloud in Lemma 4.9, followed by the certification in Theorem 4.10. Note that the finite constant \(C_{\delta}\) in Lemma 4.9 and Theorem 4.10 can be expressed in closed form in the Appendix Lemma D.2, together with the full proofs in the Appendix Section D.
**Lemma 4.9** (Approximated upper bound of interval from one-frame point cloud).: _Given the projection from one-frame 3D point cloud \(V_{1}\in\mathcal{V}_{1}:\mathbb{P}_{1}\times[0,1]^{K}\) and unknown entire point cloud \(V\in\mathcal{V}:\mathbb{P}\times[0,1]^{K}\) with \(\delta\)-convexity, if the one-axis position projection function \(\rho\) is monotonic in \(l_{\infty}\) norm over camera motion space, there exists a finite constant \(C_{\delta}\geq 0\) such that if the interval \(\Delta_{OF}^{r,s}\) is defined as,_
\[\Delta_{OF}^{r,s}=\min_{P_{1}\in\mathbb{P}_{1}}\frac{1}{L_{P_{1}}+C_{\delta}} \min_{P_{1}\in\mathbb{P}_{1}}(\mid\rho(P_{1},\sup\mathbb{U}_{P_{1},r,s})-\rho (P_{1},\inf\mathbb{U}_{P_{1},r,s})\mid_{\infty}-2\delta) \tag{15}\]
_where \(L_{P_{1}}\) is the Lipschitz constant for projection function \(\rho\) given 3D point \(P_{1}\),_
\[L_{P_{1}}=\max\{\max_{\alpha\in\mathcal{S}}|\frac{\partial\rho_{1}(P_{1}, \alpha)}{\partial\alpha}|,\max_{\alpha\in\mathcal{S}}|\frac{\partial\rho_{2}( P_{1},\alpha)}{\partial\alpha}|\} \tag{16}\]
_then for any \(P\in\mathbb{P}\), it holds that \(\Delta_{OF}^{r,s}\leq\Delta^{r,s}=\min_{P\in\mathbb{P}}\{\sup\mathbb{U}_{P,r,s }-\inf\mathbb{U}_{P,r,s}\}\)_
**Theorem 4.10** (Certification with approximated partitions from one-frame point cloud).: _For the projection from one-frame 3D point cloud \(V_{1}\in\mathcal{V}_{1}:\mathbb{P}_{1}\times[0,1]^{K}\) and unknown entire point cloud \(V\in\mathcal{V}:\mathbb{P}\times[0,1]^{K}\) with \(\delta\)-convexity, let \(\phi\) be the one-axis rotation or translation with parameters in \(\mathcal{S}\subseteq\mathcal{Z}\) with the monotonic projection function \(\rho\) in \(l_{\infty}\) norm over \(\mathcal{S}\), we have uniformly partitioned \(\{\alpha_{i}\}_{i=1}^{N}\subseteq\mathcal{S}\) under interval \(\Delta_{\alpha}\) with finite constant \(C_{\delta}\geq 0\) where_
\[\Delta_{\alpha}\leq\min_{r,s}\Delta_{OF}^{r,s}=\min_{r,s}\min_{P_ {1}\in\mathbb{P}_{1}}\frac{1}{L_{P_{1}}+C_{\delta}}\min_{P_{1}\in\mathbb{P}_{ 1}}(\mid\rho(P_{1},\sup\mathbb{U}_{P_{1},r,s})-\rho(P_{1},\inf\mathbb{U}_{P_{1 },r,s})\mid_{\infty}-2\delta)\] \[L_{P_{1}}=\max\{\max_{\alpha\in\mathcal{S}}|\frac{\partial\rho_ {1}(P_{1},\alpha)}{\partial\alpha}|,\max_{\alpha\in\mathcal{S}}|\frac{\partial \rho_{2}(P_{1},\alpha)}{\partial\alpha}|\} \tag{17}\]
_Let \(y_{A},y_{B}\in\mathcal{Y}\), \(\varepsilon\sim\mathcal{N}(0,\sigma^{2}\mathbb{I}_{d})\) and suppose that for any \(i\), the \(\varepsilon\)-smoothed classifier defined by \(q(y\mid x;\varepsilon):=\mathbb{E}(p(y\mid x+\varepsilon))\) has class probabilities that satisfy with top-2 classes \(y_{A},y_{B}\) as \(q(y_{A}\mid\phi(x,\alpha_{i});\varepsilon)\geq p_{A}^{(i)}\geq p_{B}^{(i)} \geq\max_{y\neq y_{A}}q(y\mid\phi(x,\alpha_{i});\varepsilon)\),_
_then it is guaranteed that \(\forall\alpha\in\mathcal{S}\colon y_{A}=\operatorname*{argmax}_{y}q(y\mid\phi( x,\alpha);\varepsilon)\) if_
\[\max_{1\leq i\leq N-1}\sqrt{\frac{1}{2}\sum_{k,r,s}(\phi(x,\alpha_{i})_{krs}- \phi(x,\alpha_{i+1})_{krs})^{2}}<\frac{\sigma}{2}\min_{1\leq i\leq N}\left( \Phi^{-1}\left(p_{A}^{(i)}\right)-\Phi^{-1}\left(p_{B}^{(i)}\right)\right).\]
## 5 Experiments
In this section, we aim to answer two questions. First, can we avoid excessive "camera shaking" in the certification, i.e., can the proposed method avoid sampling in the camera motion space so that far fewer projected frames are required? Second, what is the trade-off of the proposed method in terms of efficiency and effectiveness compared to the resolvable baseline, which serves as an upper bound? We first describe the experimental setup used to answer these questions. More details about the experiments can be found in the Appendix Section E.
| Num. of required projected frames | Resolvable baseline | PwS & PwS-Diffusion | PwS-L & PwS-L-Diffusion | PwS-OF & PwS-OF-Diffusion |
| --- | --- | --- | --- | --- |
| \(T_{z}\), 10mm radius | 10k / 100% | 3.5k / 35% | 5.1k / 51% | 5.9k / 59% |
| \(R_{y}\), 0.25\({}^{\circ}\) radius | 10k / 100% | 3.4k / 34% | 5.0k / 50% | 6.1k / 61% |

Table 2: Comparison of numbers of required projection frames and the percent ratio w.r.t. the resolvable baseline (100%) for the same 10k Monte Carlo sampling.
### Experimental Setup
**Dataset and Smoothed Classifiers.** We adopt the MetaRoom dataset [23] for the experiments, which contains camera poses associated with an entire dense point cloud. The dataset contains 20 different indoor objects for classification, with camera motion perturbations of translation and rotation along the x, y, and z axes (\(T_{z},T_{x},T_{y},R_{z},R_{x},R_{y}\)). The default perception models are based on ResNet-50, and ResNet-101 is used to study a different model complexity. To enhance robustness against pixel-wise perturbations, the base classifiers are fine-tuned with both zero-mean Gaussian data augmentation [5] and a pre-trained diffusion-based denoiser [3], denoted with the suffix _-Diffusion_. The default pixel-wise smoothing variance \(\sigma^{2}\) uses \(\sigma=0.5\), while results for \(\sigma=0.25,0.75\) are also shown in the ablation study. With these pixel-wise smoothed classifiers, the results of Theorems 4.5, 4.7, and 4.10 are denoted as _PwS_, _PwS-L_, and _PwS-OF_, respectively. All the experiments are conducted on an Ubuntu 20.04 server with an NVIDIA A6000 and 512 GB RAM.
**Baseline and Evaluation Metrics.** We adopt the camera motion smoothing (CMS) method [23] as the baseline on the MetaRoom dataset, which is based on the _resolvable_ projection and provides the tightest certification as an upper bound. The base classifiers are kept the same for fair comparisons. Note that the diffusion-based denoiser [3], designed for pixel-wise Gaussian denoising, is not applicable to the CMS baseline, whose input contains no Gaussian noise. We use the _number of required projected frames_ as a hardware-independent metric to evaluate certification efficiency, as image capture is the most resource-intensive part of certifying against camera motion. To assess certification effectiveness, we report _certified accuracy_: the ratio of test images that are both correctly classified and satisfy the guarantee condition in the certification theorems [34; 4; 23], which fairly shows how well the certification goal is achieved under the given radius for different certification methods.
### Efficient Certification Requiring Fewer Projected Frames
We first show that our proposed certification is efficient, requiring far fewer projected frames. From Table 2, it can be seen that for the 10k Monte Carlo samples used to certify the z-axis translation \(T_{z}\) and the y-axis rotation \(R_{y}\), the baseline CMS [23] needs 10k projected frames to construct the camera motion smoothed classifier, while our proposed certification only needs 30%-60% of the projected frames as partitioned images, with pixel-wise smoothing handling the 10k Monte Carlo samples. Besides, since the uniform partitioning can be done offline in a one-time manner, the proposed methods successfully avoid the impractical "camera shaking" dilemma in robustness certification against camera motion.
We can also find that the certified accuracies of _PwS_, _PwS-L_, and _PwS-OF_ are very close in Tables 3 and 4, because the only difference among them is the number of partitioned images they use, as shown in Table 2. This is consistent with the theoretical analysis: our three certification theorems are supposed to yield the same certified accuracy but with different numbers of projected frames as partitions.
### The trade-off of Certified Accuracy and Efficiency
In this section, we present the trade-off between certified accuracy and the number of required projected frames compared to the tightest certification, CMS [23], which serves as an upper bound in the resolvable case [34; 16]. In Table 4, although our proposed methods with data augmentation are looser than CMS, due to the pixel-wise smoothing with far fewer projected frames and better efficiency, the diffusion-based _PwS_ can remarkably boost the certified accuracy to about 80% of the upper bound with only 30% of the image projections in Table 2. Therefore, a significant trade-off can be seen: certification effectiveness is slightly sacrificed, but a huge number of image projection frames is saved during certification.
### Ablation Study
**Influence of the variance of the smoothing distribution.** As shown in Table 5, for all translation and rotation axes, the certified accuracy with \(\sigma=0.5\) is higher than that with \(\sigma=0.75\), showing the accuracy/robustness trade-off [5]. However, the performance with \(\sigma=0.25\) is poor because the pixel-wise smoothing is too weak to cover the projection errors in the certification condition (9).
**Performance under different model complexity.** In Table 6, we compare the certified accuracy of models with different complexity. We find that models with larger complexity are less certifiably robust, with lower certified accuracy under various variances and perturbation radii, which is consistent with previous work [23] and attributed to overfitting.
**Performance with different partitions.** In this ablation, we discuss the influence of the number of partitions, corresponding to the theory, in empirical experiments with different models. As shown in Table 7, as the number of partitions increases, the certification performance tends to converge once the partition interval falls within the upper bound of the motion interval, which is consistent with Lemma 4.4 and Table 2.
## 6 Conclusion and Limitations
In this work, we propose an efficient robustness certification framework against camera motion perturbations through pixel-wise smoothing. We introduce a new partitioning method to fully cover the projection errors under the pixel-wise smoothed classifier. Furthermore, we adopt the Lipschitz property of the projection to approximate the partition intervals and extend our framework to the case where only a one-frame point cloud is required. Extensive experiments show a significant trade-off: using only 30% of the projected frames, our method achieves 80% of the certified accuracy of the upper-bound baseline. Potential limitations of this method include the assumption of Lipschitz continuity, which may not hold in all scenarios, and the reliance on a single-frame point cloud, potentially limiting its applicability to complex scenes with dynamic objects. Regarding negative social impact, improved robustness in visual perception models might lead to increased surveillance capabilities and potential misuse, infringing on individual privacy and raising ethical concerns.
| Certified accuracy (PwS) | \(T_{z}\) 10mm, \(\sigma=0.5\) | \(T_{z}\) 10mm, \(\sigma=0.75\) | \(T_{z}\) 100mm, \(\sigma=0.5\) | \(T_{z}\) 100mm, \(\sigma=0.75\) |
| --- | --- | --- | --- | --- |
| ResNet50 | 0.223 | 0.140 | 0.142 | 0.091 |
| ResNet101 | 0.150 | 0.140 | 0.108 | 0.042 |

Table 6: Certified accuracy of different model complexity
| Number of partitions (radius 0.25\({}^{\circ}\), y-axis rotation \(R_{y}\)) | 1000 | 2000 | 3000 | 4000 | 5000 | 6000 | 7000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet50 | 0.083 | 0.117 | 0.142 | 0.142 | 0.150 | 0.150 | 0.150 |
| ResNet101 | 0.092 | 0.092 | 0.117 | 0.125 | 0.125 | 0.125 | 0.125 |

Table 7: Certified accuracy with different numbers of partitioned images in pixel-wise smoothing
2301.00068 | Inconsistencies in Masked Language Models | Learning to predict masked tokens in a sequence has been shown to be a
helpful pretraining objective for powerful language models such as PaLM2. After
training, such masked language models (MLMs) can provide distributions of
tokens in the masked positions in a sequence. However, this paper shows that
distributions corresponding to different masking patterns can demonstrate
considerable inconsistencies, i.e., they cannot be derived from a coherent
joint distribution when considered together.
This fundamental flaw in MLMs can lead to self-contradictory behaviors during
inference. On various benchmark datasets including MMLU, MLMs can give
different predictions to the same input question. From BERT-base to UL2-20B, we
show that such inconsistencies exist ubiquitously in MLMs of diverse sizes and
configurations. In light of our observations, we further propose an
inference-time strategy for MLMs called Ensemble of Conditionals. It jointly
considers a selected range of inconsistent conditionals directly produced by
the MLM for the final prediction, which often leads to considerable accuracy
improvement. | Tom Young, Yunan Chen, Yang You | 2022-12-30T22:53:25Z | http://arxiv.org/abs/2301.00068v3 | # On the Inconsistencies of Conditionals Learned by Masked Language Models
###### Abstract
Learning to predict masked tokens in a sequence has been shown to be a powerful pretraining objective for large-scale language models. After training, such masked language models can provide distributions of tokens conditioned on bidirectional context.
In this short draft, we show that such bidirectional conditionals often demonstrate considerable inconsistencies, i.e., they cannot be derived from a coherent joint distribution when considered together. We empirically quantify such inconsistencies in the simple scenario of bigrams for two common styles of masked language models: T5-style and BERT-style 1.
Footnote 1: [https://github.com/tomyoung903/MLM_inconsistencies](https://github.com/tomyoung903/MLM_inconsistencies)
Such inconsistencies may represent a theoretical pitfall for the research work on sampling sequences based on the bidirectional conditionals learned by BERT-style MLMs. This phenomenon also means that T5-style MLMs capable of infilling will generate discrepant results depending on how much masking is given, which may represent a particular trust issue.
## 1 Introduction
Pretraining objectives of large language models can be roughly divided into two categories. First, vanilla next token prediction Brown et al. (2020) aims to learn the distribution of the next token in a sequence given the context to the left. Second, the masked language modeling (MLM) objective Devlin et al. (2018); Raffel et al. (2020), which masks out a portion of the tokens in a sequence and asks the model to predict them, aims to learn the distribution of one or more tokens given bidirectional context.
While the major breakthrough, i.e., GPT-3 Brown et al. (2020), was demonstrated using vanilla next token prediction, recent work Tay et al. (2022); Zeng et al. (2022); Bavarian et al. (2022) has hinted that incorporating the masked language modeling objective may be highly beneficial. In addition, Tay et al. (2022) has demonstrated that such bidirectional conditionals provide strong infilling capabilities.
One may notice that, unlike the unidirectional conditional distributions that vanilla next token prediction learns, the bidirectional conditionals that MLMs learn are overly abundant in terms of representing a coherent joint distribution. Therefore, they are not guaranteed to be self-consistent (see Chapter 2).
A very simple example for such inconsistencies is shown in Figure 1. In this example, we obtain the bidirectional conditional distributions that the T5 model learned using two input masked sequences. The two similar sequences are designed with a small difference, in order to examine if the resulting conditionals satisfy a basic law of probabilities (hold consistency). Results clearly show otherwise. We design experiments to quantify such inconsistencies in Chapter 3.
One interesting line of research in the literature has focused on whether and how the bidirectional conditionals that BERT-style MLMs provide can be used to construct the joint probability of a sequence in a principled manner Goyal et al. (2021); Ghazvininejad et al. (2019); Wang et al. (2019), just like vanilla next token prediction models. But the numerous papers on this topic have overlooked the concern of inconsistencies. Yamakoshi et al. (2022) stated that "any deviations (supposedly) tend to be negligible with large datasets". The experiments shown in Chapter 4 demonstrate that this is not the case at all. We thus posit that addressing the consistency issue should be treated as the first step in modeling the joint distribution with BERT-style MLMs.
## 2 Why inconsistencies can occur in MLMs
For a set of conditional distributions to be self-consistent, they need to be able to be derived from a single coherent joint distribution.
One essential reason for the inconsistencies to occur among the conditionals provided by a trained MLM is that the number of conditionals it can calculate far _exceeds_ the number of degrees of freedom of a joint distribution.
Consider a sequence of length \(L\) and with vocabulary \(V\), the joint distribution of the tokens in such a sequence is defined by \(|V|^{L}\) probabilities that sum to 1. Therefore, the number of degrees of freedom (\(D\)) of such a joint distribution is given by:
\[D_{joint}=|V|^{L}-1, \tag{1}\]
Vanilla next token prediction models or MLMs essentially learn conditionals that predict some tokens in the sequence given others. Such conditional probabilities and probabilities from the joint distribution can be linearly derived from each other. Therefore, each free conditional that the language model is capable of specifying provides an additional constraint on the joint distribution. One can easily verify that a vanilla next token prediction based language model provides \(|V|^{L}-1\) free conditionals 2 to just exactly determine the joint distribution. Therefore, a vanilla next token prediction model (no matter how it is trained, or even untrained) would never suffer from self-inconsistencies.
Footnote 2: A single softmax operation over \(V\) essentially gives \(|V|-1\) free conditionals. Here we call conditionals free when they can be assigned any values decided by an underlying neural network.
MLMs, which can provide distributions of masked tokens given bidirectional context, could specify far more free conditionals.
Even for the simplest case, where the MLM predicts the distribution of only 1 (masked) token given \(L-1\) other (unmasked) tokens in the sequence, the total number of free conditionals (\(N\)) is
\[N_{mlm}(1)=L\times(|V|^{L}-|V|^{L-1}), \tag{2}\]
Just \(N_{mlm}(1)\) is already far larger than \(D_{joint}\). We leave the discussions for \(N_{mlm}(k)\) for later work. This fact sets up room for there to be inconsistencies among the conditionals an MLM provides.
We explain our strategies and quantification methods for diagnosing T5-style and BERT-style MLMs in the next 2 sections.
## 3 Diagnosing T5-style MLMs
T5-style MLMs are capable of modeling the distribution of segments of variable length in a given bidirectional context. Here we use the simple bigram scenario to expose the inconsistencies that exist among such distributions. Consider two bigrams \(x_{1}x_{21}\) and \(x_{1}x_{22}\) that share the same token \(x_{1}\) in the first position; the conditional distributions concerning these two bigrams should satisfy
\[\frac{p(x_{21}|x_{1})}{p(x_{22}|x_{1})}=\frac{p(x_{1}x_{21})}{p(x_{1}x_{22})} \tag{3}\]
The left-hand side can be obtained by masking only the second token, leaving \(x_{1}\) in the context, while the right-hand side can be obtained by masking the whole bigram. For the example in Figure 1, "chicken" corresponds to \(x_{1}\), and "salad" and "breast" correspond to \(x_{21}\) and \(x_{22}\).
We automatically build such a dataset of bigram pairs in a given context by running BART Lewis et al. (2019) on a portion of the C4 dataset Raffel et al. (2020) to generate another plausible bigram alternative to an existing one. We then use the two sequences to test T5's inconsistencies regarding Equation 3 3.
Footnote 3: We focus on plausible bigrams in this draft because they are most relevant in practice but Equation 3 should hold for all bigrams in all sentences in all corpora in a self-consistent MLM.
We can use the relative difference (\(d_{r}\)) between the left- and right-hand sides of Equation 3 to quantify the inconsistency.
\[d_{r}=\frac{|\mathtt{lhs}_{(3)}-\mathtt{rhs}_{(3)}|}{\mathtt{lhs}_{(3)}} \tag{4}\]
\(d_{r}\) is expected to be 0 for a self-consistent MLM. Table 1 shows that \(d_{r}\) is typically very large for the T5 family, although scaling up the model has a marked effect on reducing it.
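For illustration, a minimal sketch of how the two sides of Equation 3 and the relative difference \(d_{r}\) of Equation 4 can be measured with a public T5 checkpoint via the span-corruption format; the example sentence, the choice of `t5-small`, and the use of total label log-likelihood (per-token loss times label length) are illustrative assumptions.

```python
import math
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tok = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

@torch.no_grad()
def log_prob(masked_text, target):
    """Total log-probability of the sentinel target given a span-corrupted input
    (per-token cross-entropy loss times the number of label tokens)."""
    enc = tok(masked_text, return_tensors="pt")
    labels = tok(target, return_tensors="pt").input_ids
    loss = model(**enc, labels=labels).loss
    return -loss.item() * labels.shape[1]

ctx = "I made some {} for lunch."                        # hypothetical context sentence

# lhs of Eq. 3: only the second token of the bigram is masked.
lhs = math.exp(log_prob(ctx.format("chicken <extra_id_0>"), "<extra_id_0> salad <extra_id_1>")
               - log_prob(ctx.format("chicken <extra_id_0>"), "<extra_id_0> breast <extra_id_1>"))
# rhs of Eq. 3: the whole bigram is masked.
rhs = math.exp(log_prob(ctx.format("<extra_id_0>"), "<extra_id_0> chicken salad <extra_id_1>")
               - log_prob(ctx.format("<extra_id_0>"), "<extra_id_0> chicken breast <extra_id_1>"))
d_r = abs(lhs - rhs) / lhs                               # relative difference of Eq. 4
```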
Another way to quantify the inconsistency regarding the two bigrams is to count how often a severe case happens where the MLM disagrees with itself on which bigram it prefers, i.e., sometimes \(\mathtt{lhs}_{(3)}>1\) and \(\mathtt{rhs}_{(3)}<1\), or \(\mathtt{lhs}_{(3)}<1\) and \(\mathtt{rhs}_{(3)}>1\). Figure 1 shows such a case, where t5-11b prefers "chicken salad" over "chicken breast" when considering the conditionals in \(\mathtt{rhs}_{(3)}\), yet its preference flips when considering \(\mathtt{lhs}_{(3)}\). Table 1 shows that such disagreement on comparison happens with considerable frequency, but scaling up models helps reduce it.
## 4 Diagnosing BERT-style MLMs
Ever since the success of BERT Devlin et al. (2018), there has been research effort Goyal et al. (2021); Wang et al. (2019); Yamakoshi et al. (2022) on sampling sequences from it by modeling its implicitly specified joint distribution one way or another. For example, Goyal et al. (2021) views it as an energy-based model defined using the bidirectional conditionals of the masked tokens. Such research effort is based on the intuition that bidirectional conditionals could be more robust than auto-regressive (unidirectional) conditionals Goyal (2021).
This line of research operates on the assumption that the overly abundant bidirectional conditionals that BERT-style MLMs provide are self-consistent. Yamakoshi et al. (2022), following Heckerman et al. (2000) and Neville and Jensen (2007), stated that "any deviations (supposedly) tend to be negligible".
We demonstrate in this section that this is not the case at all: considerable inconsistencies exist among the bidirectional conditionals that a trained BERT-style model provides. Figure 2 demonstrates such an example. Again we use bigrams as the simplest example to expose the inconsistencies. Because BERT-style MLMs cannot directly model the distribution of multiple tokens together (a local joint distribution), we consider 4 bigrams this time: \(x_{11}x_{21}\), \(x_{11}x_{22}\), \(x_{12}x_{21}\) and \(x_{12}x_{22}\). \(x_{11}\) and \(x_{12}\) are two possible tokens that the first position can take, and \(x_{21}\) and \(x_{22}\) are two for the second. One can easily verify 4 that the 8 conditional distributions concerning these four bigrams should theoretically satisfy
Footnote 4: Clue: converting each term to local joint distributions.
\[\begin{split}\frac{p(x_{21}|x_{11})}{p(x_{22}|x_{11})}\times\frac {p(x_{11}|x_{22})}{p(x_{12}|x_{22})}=\\ \frac{p(x_{11}|x_{21})}{p(x_{12}|x_{21})}\times\frac{p(x_{21}|x_{ 12})}{p(x_{22}|x_{12})}\end{split} \tag{5}\]
Figure 1: A simple bigram example that exposes the inconsistencies in the T5 model. The conditional probabilities that the model learned (quoted from t5-11b fed with the shown masked sequences) contradict each other greatly. Not only are the ratios unbalanced, the model confuses its own preference of the two bigrams.

One way to test the inconsistencies among the 8 conditionals is to try to solve for one using the other 7 and compare the solved conditional with the original (model-inferred) one. We show the solved conditionals in the example in Figure 2. It clearly demonstrates that the probabilities given by a BERT-style MLM can be seriously inconsistent with each other. We build a testing dataset with 4 such plausible bigrams for each context and quantify the inconsistency using the difference of log probabilities:
\[|\log p_{solved}-\log p_{inferred}| \tag{6}\]
Table 2 shows the results.
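For reference, a sketch of how the eight conditionals in Equation 5 can be read off a BERT-style MLM with Hugging Face Transformers, and how their imbalance (equivalently, the gap measured by Equation 6) can be computed; the sentence template and candidate words are hypothetical, and each candidate is assumed to be a single WordPiece token.

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def cond_prob(sentence_with_mask, candidate):
    """p(candidate at the [MASK] position | the rest of the sentence)."""
    enc = tok(sentence_with_mask, return_tensors="pt")
    pos = (enc.input_ids[0] == tok.mask_token_id).nonzero()[0].item()
    probs = model(**enc).logits[0, pos].softmax(-1)
    return probs[tok.convert_tokens_to_ids(candidate)].item()

tpl = "She ordered a {} {} at the cafe."                 # hypothetical context
x11, x12, x21, x22 = "black", "green", "coffee", "tea"   # hypothetical bigram candidates

# Left- and right-hand sides of Eq. 5 assembled from the 8 inferred conditionals.
lhs = (cond_prob(tpl.format(x11, tok.mask_token), x21) /
       cond_prob(tpl.format(x11, tok.mask_token), x22) *
       cond_prob(tpl.format(tok.mask_token, x22), x11) /
       cond_prob(tpl.format(tok.mask_token, x22), x12))
rhs = (cond_prob(tpl.format(tok.mask_token, x21), x11) /
       cond_prob(tpl.format(tok.mask_token, x21), x12) *
       cond_prob(tpl.format(x12, tok.mask_token), x21) /
       cond_prob(tpl.format(x12, tok.mask_token), x22))
imbalance = abs(math.log(lhs) - math.log(rhs))           # 0 for a self-consistent model
```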
## 5 Summary
This draft demonstrates and naively quantifies the inconsistencies that exist in large MLMs in the simple scenario of bigrams. Such inconsistencies originate from the fact that the number of bidirectional conditionals MLMs can learn far exceeds what is needed to construct the joint distribution. Given the recent evidence that MLM-based pretraining might be a powerful paradigm, we think that resolving its consistency issue could be a necessary step for future work.
## Acknowledgements
We would like to thank Fuzhao Xue for the useful discussions.
|
2309.03343 | Collective dynamics of swarmalators with higher-order interactions | Higher-order interactions shape collective dynamics, but how they affect
transitions between different states in swarmalator systems is yet to be
determined. To that effect, we here study an analytically tractable swarmalator
model that incorporates both pairwise and higher-order interactions, resulting
in four distinct collective states: async, phase wave, mixed, and sync states.
We show that even a minute fraction of higher-order interactions induces abrupt
transitions from the async state to the phase wave and the sync state. We also
show that higher-order interactions facilitate an abrupt transition from the
phase wave to the sync state by bypassing the intermediate mixed state.
Moreover, elevated levels of higher-order interactions can sustain the presence
of phase wave and sync state, even when pairwise interactions lean towards
repulsion. The insights gained from these findings unveil self-organizing
processes that hold the potential to explain sudden transitions between various
collective states in numerous real-world systems. | Md Sayeed Anwar, Gourab Kumar Sar, Matjaz Perc, Dibakar Ghosh | 2023-09-06T19:54:40Z | http://arxiv.org/abs/2309.03343v1 | # Collective dynamics of swarmalators with higher-order interactions
###### Abstract
Higher-order interactions shape collective dynamics, but how they affect transitions between different states in swarmalator systems is yet to be determined. To that effect, we here study an analytically tractable swarmalator model that incorporates both pairwise and higher-order interactions, resulting in four distinct collective states: async, phase wave, mixed, and sync states. We show that even a minute fraction of higher-order interactions induces abrupt transitions from the async state to the phase wave and the sync state. We also show that higher-order interactions facilitate an abrupt transition from the phase wave to the sync state by bypassing the intermediate mixed state. Moreover, elevated levels of higher-order interactions can sustain the presence of phase wave and sync state, even when pairwise interactions lean towards repulsion. The insights gained from these findings unveil self-organizing processes that hold the potential to explain sudden transitions between various collective states in numerous real-world systems.
## I Introduction
The dual interplay between swarming and synchronization is at the heart of the _swarmalation_ phenomenon, which is commonly used to delineate the collective behaviors of entities called _swarmalators_. With ever-growing advancements and discoveries in multi-agent system studies, it has been observed that there are numerous systems where entities aggregate in space and synchronize over time. Such systems, both natural and man-made, are appropriate examples of swarmalator systems that exhibit simultaneous swarming and synchronization effects. Japanese tree frogs [1], magnetic domain walls [2], swarming robots [3], magnetotactic bacteria [4], vinegar eels [5], Quincke rollers [6], and Janus particles [7] are a few examples where swarmalation effects are encountered.
Although the two fields of synchronization [8; 9] and swarming [10; 11; 12] have been extensively explored over the last few decades, studies on their combined effect began not very long ago. In the seminal work of Vicsek et al. [13], particles moved inside a bounded region, and their directions were influenced by the neighboring particles lying inside a unit radius. Despite being a novel way to illustrate the phase transition from a desynchronized to a synchronized state, the Vicsek model places little focus on the spatial positions and structures of the particles. Later, depending on the spatial movement of the particles, synchronization phenomena were studied through the introduction of mobile agents or moving oscillators [14; 15; 16]. Here also, the influence between spatial position and internal dynamics is unidirectional: the position of the particles influences their internal dynamics, but not the other way round. The pivotal works that laid the platform for swarmalator systems were carried out by Tanaka et al. [17] and Isawa et al. [18; 19] while studying the movement and dynamics of chemotactic oscillators. The movements of these oscillators are mediated by the surrounding chemical. In 2017, O'Keeffe et al. [20] proposed a simple mathematical model of swarmalators where they move in the two-dimensional plane with Kuramoto-like oscillator dynamics. Five long-term collective states for position aggregation and phase synchronization, viz., _static sync_, _static phase wave_, _splintered phase wave_, _active phase wave_, and _static async_, were reported in this study. The spatial attraction between two swarmalators was affected by their relative phase, and the spatial distance between them influenced the phase coupling. Adopting this central idea of mutual influence between spatial position and phase, this model has further been studied with different interaction functions [21; 22; 23; 24], coupling schemes [25; 26; 27], external forcing [28], the large-particle limit [29; 30], etc. [31; 32] A plethora of new collective states has been found, which in turn made researchers interested in delving deeper into the study of such systems. However, mathematical analysis, in terms of the solvability of the models or the analytical properties of the emerging states, was lacking in most cases.
To address this problem, O'Keeffe et al. [33; 34] came up with a 1D swarmalator model which looks like a pair of coupled Kuramoto equations,
\[\dot{x}_{i}=v_{i}+\frac{J}{N}\sum\limits_{j=1}^{N}\sin(x_{j}-x_{i})\cos(\theta _{j}-\theta_{i}), \tag{1a}\] \[\dot{\theta}_{i}=\omega_{i}+\frac{K}{N}\sum\limits_{j=1}^{N}\sin(\theta_{j}- \theta_{i})\cos(x_{j}-x_{i}), \tag{1b}\]
where \(x_{i}\) and \(\theta_{i}\) represent the position on a 1D ring and the phase of the \(i^{th}\) swarmalator, respectively, for \(i=1,2,\ldots,N\). \(v_{i}\) and \(\omega_{i}\) are the velocity and internal frequency, and \(J,K\) are the inter-element coupling strengths. Being similar to the Kuramoto model, this model is analytically tractable. Analyses of the emerging states have been possible when the model is studied with nonidentical velocities and frequencies [34], distributed couplings [35], random pinning [36; 37], thermal noise [38], phase lags [39], etc.
All this prior research on swarmalators has predominantly focused on a thorough exploration of their behavior within the framework of pairwise interactions among the constituent entities of the system. More precisely, the spatial and phase dynamics that dictate the interactions among swarmalators have been exclusively governed by the presence of pairwise connections that link them together. Nevertheless, the reliance on a hypothesis rooted solely in pairwise interactions proves inadequate in capturing a wide array of pertinent scenarios [40; 41; 42]. An example includes the group interactions among microorganisms in microbial communities [40]. This limitation becomes especially apparent in cases where the interplay of phase and spatial dynamics among individual swarmalators is not solely influenced by pairwise connections among swarmalators but rather by the simultaneous influence of multiple interconnected swarmalators. This intricate interdependence cannot be adequately decomposed into the framework of pairwise connections and thus necessitates the introduction of higher-order (group) interactions [43; 44; 45; 46; 40].
Recent advances in physics and other communities have drawn specific attention to the significance of interactions among dynamic units that extend beyond the pairwise realm. Notably, three- and four-way interactions have come to the forefront, revealing their pivotal role in shaping collective behaviors [47; 48; 49]. As a result, the field of network science has shifted its focus toward comprehending higher-order structures to more accurately capture the diverse interactions that exist beyond conventional pairwise connections [43; 44; 45; 46]. These intricate interactions are frequently encoded within simplicial complexes [50; 51; 52], delineating various levels of simplex structures within the network. An assemblage of 1-simplices (edges/links), 2-simplices (filled triangles), and so on constitutes the intricate framework of the simplicial complex, reflecting the essence of these higher-order interactions.
In this context, the impact of higher-order interactions on synchronization has been the subject of thorough investigation in recent years [53; 54; 55; 56; 57; 58; 59; 60; 61; 62]. These studies have unveiled that the incorporation of higher-order interactions among dynamic units has the potential to give rise to a plethora of new collective phenomena. However, the influence of higher-order interactions on swarmalators, which are indissolubly linked to the field of synchronization, remains unexplored to date, and it is therefore imperative to investigate this uncharted territory.
Motivated by this, in this paper, we propose a model of swarmalators that encompasses both pairwise and higher-order interactions, notably three-body interactions among the swarmalators. These interactions are intricately woven into a simplicial complex framework at the microscopic level. Our proposed model extends the 1D swarmalator model [Eq. (1)] on a ring to a framework that incorporates higher-order interactions in the phase and space dynamics of the swarmalators and is thus analytically tractable using the generalized Ott-Antonsen (OA) ansatz [63] in the thermodynamic (\(N\to\infty\)) limit. Similar to the pairwise model, the present model also displays a diverse range of dynamics, featuring four distinct collective states: async, phase wave, mixed, and sync states. We aim to understand how higher-order interactions influence the formation and characteristics of these distinct collective states. Due to the introduction of higher-order interactions, several significant phenomena arise that are absent when swarmalator interactions are limited to only pairwise connections. Our observations highlight that the inclusion of higher-order interactions leads to abrupt transitions from the async state to the phase wave state and the sync state, contingent on the specific configurations of coupling strengths. We also observe that stronger higher-order interactions can give rise to the persistence of phase wave and sync states, even in cases where pairwise couplings are negative (i.e., repulsive). Furthermore, our findings also reveal that substantial higher-order couplings can facilitate a direct emergence of the synchronized state from the phase wave state, bypassing the intermediate mixed state.
## II Model
We consider an ensemble of \(N\) swarmalators subjected to two- and three-body interactions, embedded in a simplicial complex at the microscopic level, where the instantaneous position and phase of the \(i^{th}\) swarmalator are represented by \((x_{i},\theta_{i})\in(\mathbb{S}^{1},\mathbb{S}^{1})\). When decoupled, each swarmalator is characterized by a set of natural velocity and frequency \((v_{i},\omega_{i})\), drawn from a specific distribution \(g_{v,\omega}\). For the sake of pedagogy, we here choose the intrinsic frequencies to be drawn from a Lorentzian distribution, \(g_{v,\omega}(x)=\frac{\Delta_{v,\omega}}{\pi(x^{2}+\Delta_{v,\omega}^{2})}\), with zero mean and half-width \(\Delta_{v,\omega}\). The evolution of the swarmalators under the influence of pairwise and triadic interactions is then given by,
\[\dot{x}_{i}=v_{i}+\frac{J_{1}}{N}\sum\limits_{j=1}^{N}\sin(x_{j}-x_{i})\cos( \theta_{j}-\theta_{i})+\frac{J_{2}}{N^{2}}\sum\limits_{j=1}^{N}\sum\limits_{k=1 }^{N}\sin(2x_{j}-x_{k}-x_{i})\cos(2\theta_{j}-\theta_{k}-\theta_{i}), \tag{2a}\] \[\dot{\theta}_{i}=\omega_{i}+\frac{K_{1}}{N}\sum\limits_{j=1}^{N}\sin(\theta_{j}- \theta_{i})\cos(x_{j}-x_{i})+\frac{K_{2}}{N^{2}}\sum\limits_{j=1}^{N}\sum \limits_{k=1}^{N}\sin(2\theta_{j}-\theta_{k}-\theta_{i})\cos(2x_{j}-x_{k}-x_{ i}), \tag{2b}\]
where \((J_{1},K_{1})\) and \((J_{2},K_{2})\) are the pairwise and triadic coupling strengths associated with the spatial and phase interactions, respectively. Note that when \((J_{2},K_{2})=(0,0)\), it coincides with the conventional evolution equation of swarmalators over a ring [34]. Therefore, analogous to the pairwise swarmalators model over a ring, Eq. (2b) incorporates position-dependent synchronization in swarmalators. To achieve synchronization, we employ the well-known Kuramoto sine terms [48; 64] which minimize the pairwise and triadic phase differences between the swarmalators, and the associated distance-dependent cosine terms amplify the coupling intensity among the interacting swarmalators. On the other hand, Eq. (2a) serves as a counterpart of Eq. (2b) and captures phase-dependent swarming behavior. In this case, the sine terms minimize the distances between the swarmalators, leading them to aggregate or swarm together, while the cosine terms, similar to Eq. (2b), bolster the coupling between the swarmalators, but this time based on their phase similarity. One can also interpret both Eqs. (2a) and (2b) as models that represent synchronization on the unit torus with both pairwise and three-body interactions. Hence, the model provides a more general framework for swarmalators by considering beyond pairwise interactions.
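For concreteness, Eqs. (2a)-(2b) can be transcribed literally into a short numerical experiment. The following is a minimal, illustrative sketch (not the authors' code): a direct Euler integration in which the pairwise and triadic sums are evaluated by brute force, which is only practical for small populations; the coupling values, step size, and population size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def derivatives(x, th, v, om, J1, K1, J2, K2):
    """Right-hand side of Eqs. (2a)-(2b) by direct summation.

    Pairwise sums use (i, j) broadcasting and triadic sums use (i, j, k)
    broadcasting, so this literal transcription is O(N^3) per step and
    only suitable for small N."""
    N = x.size
    dx = x[None, :] - x[:, None]             # x_j - x_i
    dth = th[None, :] - th[:, None]          # theta_j - theta_i
    pair_x = np.sum(np.sin(dx) * np.cos(dth), axis=1) / N
    pair_th = np.sum(np.sin(dth) * np.cos(dx), axis=1) / N

    ax = 2 * x[None, :, None] - x[None, None, :] - x[:, None, None]       # 2x_j - x_k - x_i
    ath = 2 * th[None, :, None] - th[None, None, :] - th[:, None, None]   # 2th_j - th_k - th_i
    tri_x = np.sum(np.sin(ax) * np.cos(ath), axis=(1, 2)) / N**2
    tri_th = np.sum(np.sin(ath) * np.cos(ax), axis=(1, 2)) / N**2

    return v + J1 * pair_x + J2 * tri_x, om + K1 * pair_th + K2 * tri_th

# illustrative parameters (the sync regime of Fig. 1(d)); finite-size
# fluctuations are large at this small N
N, dt, steps = 100, 0.1, 1000
J1, K1, J2, K2 = 9.0, 5.0, 6.5, 5.5
v, om = rng.standard_cauchy(N), rng.standard_cauchy(N)   # Lorentzian, half-width 1
x = rng.uniform(0, 2 * np.pi, N)
th = rng.uniform(0, 2 * np.pi, N)

for _ in range(steps):
    xdot, thdot = derivatives(x, th, v, om, J1, K1, J2, K2)
    x, th = (x + dt * xdot) % (2 * np.pi), (th + dt * thdot) % (2 * np.pi)

print("S1+ =", abs(np.mean(np.exp(1j * (x + th)))))
print("S1- =", abs(np.mean(np.exp(1j * (x - th)))))
```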
In order to simplify the model, we convert the trigonometric function to complex exponentials and introduce new variables \(\xi_{i}=x_{i}+\theta_{i}\) and \(\eta_{i}=x_{i}-\theta_{i}\), which eventually provide,
\[\dot{\xi}_{i}=v_{i}+\omega_{i}+\frac{1}{2\mathrm{i}}[H_{1}^{+}e^{-\mathrm{i}\xi_{i}}-(H_{1}^{+})^{*}e^{\mathrm{i}\xi_{i}}]+\frac{1}{2\mathrm{i}}[H_{2}^{-}e^{-\mathrm{i}\eta_{i}}-(H_{2}^{-})^{*}e^{\mathrm{i}\eta_{i}}], \tag{3a}\] \[\dot{\eta}_{i}=v_{i}-\omega_{i}+\frac{1}{2\mathrm{i}}[H_{1}^{-}e^{-\mathrm{i}\xi_{i}}-(H_{1}^{-})^{*}e^{\mathrm{i}\xi_{i}}]+\frac{1}{2\mathrm{i}}[H_{2}^{+}e^{-\mathrm{i}\eta_{i}}-(H_{2}^{+})^{*}e^{\mathrm{i}\eta_{i}}], \tag{3b}\]
where \(\mathrm{i}=\sqrt{-1}\) and
\[H_{1}^{\pm} =J_{1}^{\pm}Z_{1}^{+}+J_{2}^{\pm}Z_{2}^{+}(Z_{1}^{+})^{*}, \tag{4}\] \[H_{2}^{\pm} =J_{1}^{\pm}Z_{1}^{-}+J_{2}^{\pm}Z_{2}^{-}(Z_{1}^{-})^{*},\]
with \(J_{m}^{\pm}=\frac{J_{m}\pm K_{m}}{2}\) (\(m=1,2\)) and
\[Z_{m}^{\pm}=\frac{1}{N}\sum\limits_{j=1}^{N}e^{m\mathrm{i}(x_{j}\pm\theta_{j})}=S_{m}^{\pm}e^{\mathrm{i}\psi_{m}^{\pm}}. \tag{5}\]
Here \(Z_{1}^{\pm}(S_{1}^{\pm})\) indicates the order parameters associated with the conventional swarmalator model that quantify the space-phase order of the system [20; 34]. When the correlation between phase and space is perfect (i.e., \(x_{i}=\pm\theta_{i}+C_{0}\), for some constant \(C_{0}\)), the value of the order parameter \(S_{1}^{\pm}\) is equal to 1, whereas when \(\theta_{i}\) and \(x_{i}\) are uncorrelated, the value of \(S_{1}^{\pm}\) is 0. Therefore the order parameters \(S_{1}^{\pm}\) measure the degree of correlation between the space (\(x_{i}\)) and phase (\(\theta_{i}\)) variables, with \(S_{1}^{\pm}\) ranging from 0 (no correlation) to 1 (perfect correlation). On the other hand, \(Z_{2}^{\pm}(S_{2}^{\pm})\) can be interpreted as new order parameters that arise as a result of higher-order interactions, analogous to the higher-order Kuramoto phase models [48; 65]. In our present study, we focus only on the evolution of the conventional order parameters \(Z_{1}^{\pm}(S_{1}^{\pm})\).
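Numerically, the sums in Eqs. (2a)-(2b) need not be evaluated directly: using product-to-sum identities in the variables \(\xi=x+\theta\) and \(\eta=x-\theta\), every pairwise and triadic sum collapses onto the order parameters \(Z_{1}^{\pm}\) and \(Z_{2}^{\pm}\), so the right-hand side costs only O(N) per time step. The sketch below is an illustrative vectorization of this reduction (the authors report using Julia's Tsit5 solver; a simple Euler step is used here for brevity), with illustrative coupling values.

```python
import numpy as np

def rhs_order_params(x, th, v, om, J1, K1, J2, K2):
    """O(N) right-hand side of Eqs. (2a)-(2b) written through Z_1^± and Z_2^±.

    With xi = x + theta and eta = x - theta, sin(a)cos(b) identities turn the
    pairwise and triadic sums into Im[...] of Z_m^± = <exp(i m (x ± theta))>."""
    xi, eta = x + th, x - th
    Z1p, Z2p = np.mean(np.exp(1j * xi)), np.mean(np.exp(2j * xi))
    Z1m, Z2m = np.mean(np.exp(1j * eta)), np.mean(np.exp(2j * eta))
    gp = Z1p * np.exp(-1j * xi)                    # pairwise, '+' channel
    gm = Z1m * np.exp(-1j * eta)                   # pairwise, '-' channel
    hp = Z2p * np.conj(Z1p) * np.exp(-1j * xi)     # triadic,  '+' channel
    hm = Z2m * np.conj(Z1m) * np.exp(-1j * eta)    # triadic,  '-' channel
    xdot = v + 0.5 * np.imag(J1 * (gp + gm) + J2 * (hp + hm))
    thdot = om + 0.5 * np.imag(K1 * (gp - gm) + K2 * (hp - hm))
    return xdot, thdot

def simulate(N=100_000, T=50.0, dt=0.05, J1=1.0, K1=8.0, J2=5.0, K2=9.0, seed=1):
    """Euler integration from random initial conditions; returns (S_1^+, S_1^-)."""
    rng = np.random.default_rng(seed)
    v, om = rng.standard_cauchy(N), rng.standard_cauchy(N)   # half-widths = 1
    x = rng.uniform(0, 2 * np.pi, N)
    th = rng.uniform(0, 2 * np.pi, N)
    for _ in range(int(T / dt)):
        xdot, thdot = rhs_order_params(x, th, v, om, J1, K1, J2, K2)
        x = (x + dt * xdot) % (2 * np.pi)
        th = (th + dt * thdot) % (2 * np.pi)
    return (abs(np.mean(np.exp(1j * (x + th)))),
            abs(np.mean(np.exp(1j * (x - th)))))

if __name__ == "__main__":
    # K1 above the forward threshold of Fig. 2(a): expect a phase wave, roughly (S, 0)
    print(simulate())
```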
## III Results
Now, the coupling dependency of order parameters \(Z_{1}^{\pm}(S_{1}^{\pm})\) in Eqs. (3) and (5) indicate that depending on the values of coupling strengths, several combinations for order parameters \((S_{1}^{+},S_{1}^{-})\) can be achieved, which eventually leads to the emergence of different collective states. We, therefore, start by integrating the Eqs. (2a) and (2b) for the calculation of the order parameters \(S_{1}^{+},S_{1}^{-}\) with \(N=10^{5}\) swarmalators and the half-widths of Lorentzian distribution \(\Delta_{v}=\Delta_{\omega}=1\). The results show that depending on the values of coupling strengths, four distinct stable states emerge in the system, which can be classified by the dyad \((S_{1}^{+},S_{1}^{-})\) as follows:
1. The first state is referred to as the "Async" state, denoted by \((S_{1}^{+},S_{1}^{-})=(0,0)\) state. In this state, the swarmalators are uniformly distributed in phase and space, as demonstrated in Fig. 1(a). There is no space-phase order among the swarmalators and so \((S_{1}^{+},S_{1}^{-})\approx(0,0)\) [see Fig. 1(e)].
2. "Phase waves" or \((S,0)\) or \((0,S)\) state [Figs. 1(b) and 1(f)]: In this state the swarmalators develop a band or phase wave, where the spatial positions
\(x_{i}\) and phase angles \(\theta_{i}\) are related as \(x_{i}\approx\mp\theta_{i}\), depending on whether it is \((S,0)\) or \((0,S)\) state, respectively. In the \((\xi,\eta)\) coordinate system, the swarmalators are partially locked in either \(\xi_{i}\) or \(\eta_{i}\) and drift in the other variable.
3. The third state is referred to as the "Mixed state", denoted by \((S_{1}^{+},S_{1}^{-})=(S^{\prime},S^{\prime\prime})\), where \(S^{\prime}\neq S^{\prime\prime}\neq 0\) (Fig. 1(g)). In this state, the swarmalators form a band where clusters of correlated swarmalators are observed to move together, as depicted in Fig. 1(c).
4. The fourth state is known as "Sync" state, denoted by \((S,S)\), where \(S\neq 0\) [Fig. 1(h)]. In this state, the swarmalators are partially locked in both \(\xi_{i}\) and \(\eta_{i}\). For most initial conditions, two clusters of locked swarmalators emerge spontaneously, as depicted in Fig. 1(d). However, one can also observe a single cluster of locked swarmalators for some initial conditions.
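One simple way to label the four states above from a measured dyad \((S_{1}^{+},S_{1}^{-})\) is by thresholding; the sketch below is purely illustrative, with the tolerance chosen to absorb finite-\(N\) fluctuations (both order parameters are \(O(1/\sqrt{N})\) rather than exactly zero in the async state).

```python
def classify_state(S1p, S1m, tol=0.05):
    """Heuristic label for the collective state given the dyad (S_1^+, S_1^-)."""
    hi_p, hi_m = S1p > tol, S1m > tol
    if not hi_p and not hi_m:
        return "async (0,0)"
    if hi_p != hi_m:
        return "phase wave (S,0) or (0,S)"
    if abs(S1p - S1m) > tol:
        return "mixed (S',S'')"
    return "sync (S,S)"
```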
Next, in order to analyze all these four states, we employ the Ott-Antonsen (OA) ansatz [63] in the thermodynamic (\(N\to\infty\)) limit and derive the expressions for order parameters in each state. In the \(N\to\infty\) limit, the collective states of the swarmalators can be defined by a continuous function \(\rho(v,\omega,\xi,\eta,t)\) as,
\[\rho\equiv\frac{1}{N}\sum_{j=1}^{N}\delta(v-v_{j})\delta(\omega-\omega_{j}) \delta(\xi-\xi_{j})\delta(\eta-\eta_{j}), \tag{6}\]
where \(\rho(v,\omega,\xi,\eta,t)\) is the probability to have a swarmalator at time \(t\) with intrinsic frequency \(\omega\), intrinsic velocity \(v\), and coordinates \(\eta\) and \(\xi\). Differentiating (6) with respect to \(t\), one can obtain the continuity equation
\[\frac{\partial\rho}{\partial t}+\frac{\partial}{\partial\xi}(\dot{\xi}\rho)+ \frac{\partial}{\partial\eta}(\dot{\eta}\rho)=0. \tag{7}\]
Given that our model is a higher-order Kuramoto model occurring on a torus, we are in pursuit of a "toroidal" OA ansatz [63; 34; 67], which can be described as a multiplication of Poisson kernels,
\[\rho(v,\omega,\xi,\eta,t)=\frac{1}{4\pi^{2}}g_{v}(v)g_{\omega}( \omega)\bigg{[}1+\sum_{p=1}^{\infty}\alpha^{p}e^{ip\xi}+\mathrm{c.c}\bigg{]}\\ \times\bigg{[}1+\sum_{q=1}^{\infty}\beta^{q}e^{iq\eta}+\mathrm{c. c}\bigg{]}, \tag{8}\]
where \(\alpha(v,\omega,t)\) and \(\beta(v,\omega,t)\) are undetermined and need to be solved in a self-consistent manner, and "c.c" refers
Figure 1: **Distinct collective states.** (a)-(d) Scatter plots of all the four states in \((x,\theta)\) plane. (e)-(h) Time evolution of order parameters \(S_{1}^{+}(S_{1}^{-})\), depicted in blue (red). (a) and (e) represent the async state for a choice of coupling strengths \((J_{1},K_{1},J_{2},K_{2})=(7,-1,8,9)\), (b) and (f) correspond to the phase wave state with \((J_{1},K_{1},J_{2},K_{2})=(1,6.5,5,9)\), (c) and (g) are associated with the mixed state for \((J_{1},K_{1},J_{2},K_{2})=(9,1.8,6.5,5.5)\), and the sync state is delineated in (d), (h) for \((J_{1},K_{1},J_{2},K_{2})=(9,5,6.5,5.5)\). All the results are generated by integrating Eqs. (2a) and (2b) using the Julia Tsit5 adaptive differential equation solver [66] with \(N=10^{5}\) swarmalators whose intrinsic velocity and frequency have been drawn from the Lorentzian distribution with zero mean and half-width \(\Delta_{\omega}=\Delta_{v}=1\).
to the complex conjugate of its preceding terms. Now plugging the expression for \(\rho\) [given by Eq. (8)] into the continuity Eq. (7), we obtain that the Fourier modes \(\alpha(v,\omega,t)\) and \(\beta(v,\omega,t)\) are subsequently constrained to adhere to the identical conditions for all harmonics \(p\) and \(q\), which leads to the fulfillment of a coupled complex-valued differential equation as follows,
\[\dot{\alpha}=-\mathrm{i}(v+\omega)\alpha+\frac{1}{2}[(H_{1}^{+})^{ *}-H_{1}^{+}\alpha^{2}]\\ +\frac{\alpha}{2}[(H_{2}^{-})^{*}\beta^{*}-H_{2}^{-}\beta], \tag{9a}\] \[\dot{\beta}=-\mathrm{i}(v-\omega)\beta+\frac{1}{2}[(H_{2}^{+})^{ *}-H_{2}^{+}\beta^{2}]\\ +\frac{\beta}{2}[(H_{1}^{-})^{*}\alpha^{*}-H_{1}^{-}\alpha], \tag{9b}\]
in the submanifold \(\|\alpha\|=1=\|\beta\|\). Subsequently, the order parameters \(Z_{m}^{\pm}\) (\(m=1,2\)) become
\[Z_{m}^{+}=\int_{-\infty}^{\infty}dv\int_{-\infty}^{\infty}d\omega g_{v}(v)g_{ \omega}(\omega)\alpha^{*^{m}}(v,\omega,t), \tag{10a}\] \[Z_{m}^{-}=\int_{-\infty}^{\infty}dv\int_{-\infty}^{\infty}d\omega g_{v}(v)g_{ \omega}(\omega)\beta^{*^{m}}(v,\omega,t). \tag{10b}\]
Equations (9a)-(10b) contain a set of self-consisting equations for the order parameters \(Z_{m}^{\pm}\) in the \(N\to\infty\) limit.
### Stability of the async state
In the async state, the order parameters \(Z_{m}^{\pm}\) are zero. When \(Z_{m}^{\pm}=0\), Eqs. (9a) and (9b) give the solutions \(\alpha_{0}(v,\omega,t)=\exp\left[-\mathrm{i}(v+\omega)t\right]\) and \(\beta_{0}(v,\omega,t)=\exp\left[-\mathrm{i}(v-\omega)t\right]\). Clearly, these solutions are self-consistent, as substituting them into Eqs. (10a) and (10b) one can obtain \(Z_{m}^{\pm}=\exp\left[-m(\Delta_{v}+\Delta_{\omega})t\right]\), which converges to zero in the \(t\to\infty\) limit.
Now, to investigate the stability of the async state, we introduce a small perturbation around the solutions \(\alpha_{0}\) and \(\beta_{0}\) given by
\[\alpha_{1}(v,\omega,t) =\alpha(v,\omega,t)-\alpha_{0}(v,\omega,t), \tag{11}\] \[\beta_{1}(v,\omega,t) =\beta(v,\omega,t)-\beta_{0}(v,\omega,t).\]
This eventually gives the perturbed order parameters as
\[Z_{11}^{+} =\int_{-\infty}^{\infty}dv\int_{-\infty}^{\infty}d\omega g_{v}(v )g_{\omega}(\omega)\alpha_{1}^{*}(v,\omega,t), \tag{12}\] \[Z_{11}^{-} =\int_{-\infty}^{\infty}dv\int_{-\infty}^{\infty}d\omega g_{v}(v )g_{\omega}(\omega)\beta_{1}^{*}(v,\omega,t),\]
where we assume that \(1\gg\|\alpha_{1}(v,\omega,t)\|\), \(\|\beta_{1}(v,\omega,t)\|\) and \(\left\|Z_{11}^{\pm}\right\|\). Substituting the expressions for \(\alpha_{1}\), \(\beta_{1}\) and \(Z_{11}^{\pm}\) into the Eqs. (9a) and (9b), and considering the terms up to first order, we obtain the following set of evolution equations
\[\dot{\alpha}_{1}=-\mathrm{i}(v+\omega)\alpha_{1}+\frac{1}{2}[J_{1 }^{+}(Z_{11}^{+})^{*}-J_{1}^{+}Z_{11}^{+}\alpha_{0}^{2}]\\ +\frac{\alpha_{0}}{2\beta_{0}}[J_{1}^{-}(Z_{11}^{-})^{*}-J_{1}^{ -}Z_{11}^{-}\beta_{0}^{2}], \tag{13a}\] \[\dot{\beta}_{1}=-\mathrm{i}(v-\omega)\beta_{1}+\frac{1}{2}[J_{1 }^{+}(Z_{11}^{-})^{*}-J_{1}^{+}Z_{11}^{-}\beta_{0}^{2}]\\ +\frac{\beta_{0}}{2\alpha_{0}}[J_{1}^{-}(Z_{11}^{+})^{*}-J_{1}^{ -}Z_{11}^{+}\alpha_{0}^{2}]. \tag{13b}\]
One can notice that the terms with \(\alpha_{0}\) and \(\beta_{0}\) as a function of \(v\) and \(\omega\) oscillate rapidly for \(t\gg 1\). These rapidly oscillating terms barely contribute when integrating over \(v\) and \(\omega\), and thus to approximate the integrals, we can neglect these small terms. Now, introducing a new variable \(\tau=v\pm\omega\), \(\alpha\) and \(\beta\) can be expressed as \(\alpha(v,\omega,t)=\alpha(\tau,t)\) and \(\beta(v,\omega,t)=\beta(\tau,t)\), respectively. Then, integrating Eqs. (13a) and (13b) over \(v\) and \(\omega\), we eventually obtain
\[\frac{dZ_{11}^{+}}{dt}=\int_{-\infty}^{\infty}dv\int_{-\infty}^{\infty}d\omega g _{v}(v)g_{\omega}(\omega)\frac{d\alpha_{1}^{*}}{dt}=\int_{-\infty}^{\infty}d \tau\mathcal{G}(\tau)\frac{d\alpha_{1}^{*}(\tau)}{dt}=\frac{J_{1}^{+}-2( \Delta_{v}+\Delta_{\omega})}{2}Z_{11}^{+}, \tag{14a}\] \[\frac{dZ_{11}^{-}}{dt}=\int_{-\infty}^{\infty}dv\int_{-\infty}^{\infty}d\omega g _{v}(v)g_{\omega}(\omega)\frac{d\beta_{1}^{*}}{dt}=\int_{-\infty}^{\infty}d \tau\mathcal{G}(\tau)\frac{d\beta_{1}^{*}(\tau)}{dt}=\frac{J_{1}^{+}-2( \Delta_{v}+\Delta_{\omega})}{2}Z_{11}^{-}, \tag{14b}\]
where
\[\mathcal{G}(\tau)=\int_{-\infty}^{\infty}dv\int_{-\infty}^{\infty}d\omega g_{v} (v)g_{\omega}(\omega)\delta[\tau-(v\pm\omega)]=\frac{\Delta_{v}+\Delta_{ \omega}}{\pi[\tau^{2}+(\Delta_{v}+\Delta_{\omega})^{2}]}. \tag{15}\]
To evaluate the integration, we use the fact that the integrand has a pole at \(\tau=\mathrm{i}(\Delta_{v}+\Delta_{\omega})\) in the upper half plane where \(\alpha_{0}^{*}(\tau)\) and \(\beta_{0}^{*}(\tau)\) are analytic. Now, the async state becomes stable when the perturbed order parameters
die out in time. Hence, from Eqs. (14a) and (14b), one can conclude that the async state becomes unstable when \(J_{1}^{+}-2(\Delta_{v}+\Delta_{\omega})>0\), or in other words the async state sustains its stability for \(J_{1}^{+}-2(\Delta_{v}+\Delta_{\omega})<0\). Therefore, the curve satisfying
\[J_{1}^{+}=2(\Delta_{v}+\Delta_{\omega}), \tag{16}\]
signifies the critical curve above which the async state loses its stability, and the swarmalators form a phase wave state. Eq. (16) reveals that the transition from async state to phase wave state depends solely on the pairwise interactions by means of the pairwise coupling strengths \((J_{1},K_{1})\).
### Analysis of phase wave state
In the phase wave state, swarmalators develop a phase wave or band with \(x_{i}=\mp\theta_{i}\) for \((S,0)\) and \((0,S)\) states, respectively. Therefore, we seek a solution to Eqs. (9a)-(10b) that satisfies \(\dot{\alpha}=0\), \(\dot{\beta}\neq 0\), \(Z_{m}^{+}\neq 0\), and \(Z_{m}^{-}=0\), \((m=1,2)\). Substituting these relations into the Eqs. (9a) and (9b), we have
\[0=-\mathrm{i}(v+\omega)\alpha+\frac{1}{2}[(H_{1}^{+})^{*}-H_{1}^{+}\alpha^{2}], \tag{17a}\] \[\dot{\beta}=-\mathrm{i}(v-\omega)\beta+\frac{\beta}{2\alpha}[(H_{1}^{-})^{*}- H_{1}^{-}\alpha^{2}]. \tag{17b}\]
Notice that \(\alpha(v,\omega,t)\) depends on \(v+\omega\), which is distributed according to a Lorentzian distribution with spread \(\Delta_{v}+\Delta_{\omega}\). This allows us to evaluate the integral for the order parameter \(Z_{m}^{+}\) explicitly as \(Z_{1}^{+}=\alpha^{*}(i\Delta_{v}+i\Delta_{\omega},t)\) using the Cauchy's residue theorem by closing the contour to an infinite-radius semicircle in the upper half-plane. Similarly, we can obtain \(Z_{2}^{+}=\alpha^{*^{2}}(i\Delta_{v}+i\Delta_{\omega},t)=(Z_{1}^{+})^{2}\). Substituting the relation between \(Z_{1}^{+}\) and \(Z_{2}^{+}\) into the Eqs. (17a) and (17b), and assuming \(\psi_{1}^{+}=0\), we obtain the expressions for \(\alpha\) and \(\beta\) as follows,
\[\alpha(v,\omega)=\mathcal{F}\bigg{(}\frac{v+\omega}{S_{1}^{+}(J_{1}^{+}+(S_{1 }^{+})^{2}J_{2}^{+})}\bigg{)}, \tag{18}\]
and
\[\beta(v,\omega,t)=e^{(-rt)}, \tag{19}\]
where we introduce a function \(\mathcal{F}\) as
\[\mathcal{F}(y)=-\mathrm{i}y+\sqrt{1-y^{2}}, \tag{20}\]
and the term \(r\) is given by,
\[r=\mathrm{i}(v-\omega)-\frac{1}{2\alpha}[(H_{1}^{-})^{*}-H_{1}^{-}\alpha^{2} ]=\mathrm{i}\bigg{[}\frac{J_{1}K_{1}}{J_{1}^{+}+(S_{1}^{+})^{2}J_{2}^{+}} \bigg{(}\frac{v}{J_{1}}-\frac{\omega}{K_{1}}\bigg{)}+\frac{(S_{1}^{+})^{2}J_{2 }K_{2}}{J_{1}^{+}+(S_{1}^{+})^{2}J_{2}^{+}}\bigg{(}\frac{v}{J_{2}}-\frac{ \omega}{K_{2}}\bigg{)}\bigg{]}. \tag{21}\]
Equation (19) results in \(Z_{m}^{-}=0\), as anticipated. On the other hand, Eq. (18) implies that
\[S_{1}^{+}=\mathcal{F}^{*}\bigg{(}\frac{\mathrm{i}\Delta_{v}+\mathrm{i}\Delta _{\omega}}{S_{1}^{+}(J_{1}^{+}+(S_{1}^{+})^{2}J_{2}^{+})}\bigg{)}. \tag{22}\]
Solving Eq. (22) for \(S_{1}^{+}\) gives the expression of \(S_{1}^{+}\) as
\[S_{1}^{+}=\sqrt{\frac{J_{2}^{+}-J_{1}^{+}\pm\sqrt{\left(J_{1}^{+}+J_{2}^{+}\right)^{2}-8J_{2}^{+}\tilde{\Delta}}}{2J_{2}^{+}}}, \tag{23}\]
where \(\tilde{\Delta}=\Delta_{v}+\Delta_{\omega}\). The plus and minus signs inside the square root correspond to the stable and unstable solutions if they exist. Equation (23) suggests the presence of a bistable region upon the selection of coupling strengths: a particular combination of coupling values leads to the coexistence of asynchronous and phase wave states depending on the initial conditions chosen. Said differently, with varying coupling strengths, there exist two different transition scenarios: one corresponds to the transition from the async state to the phase wave state, referred to as the forward transition, and the other is the backward transition, which is associated with the transition from the phase wave state to the async state. For the forward transition case, \(S_{1}^{+}\) bifurcates from \(0\) (i.e., async state) at
\[J_{1,f}^{+}=2\tilde{\Delta}=2(\Delta_{v}+\Delta_{\omega}), \tag{24}\]
which is consistent with Eq. (16), where forward transition from async state to phase wave state emerges and only the stable branch of \(S_{1}^{+}\) exists. This once again guarantees that the transition from async state to phase wave state is solely dependent on the pairwise coupling strengths. On the other hand, in the case of backward transition, both the stable and unstable branches coexist for a region of parameter values called the hysteresis region. But as soon as the backward critical coupling condition \(J_{1}^{+}=J_{1,b}^{+}\) is reached, both the stable and unstable branches clash and destroy each other. As a result, the stability of the phase wave is completely shattered, and \(S_{1}^{+}=0\) is the sole viable solution. From Eq. (23), we obtain that both the stable and unstable branches of \(S_{1}^{+}\) exist if the following coupling condition satisfies,
\[J_{1,b}^{+}=2\sqrt{2J_{2}^{+}\tilde{\Delta}}-J_{2}^{+}. \tag{25}\]
Eq. (25) thus corresponds to the critical curve for the
backward transition.
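Equations (23)-(25) are straightforward to evaluate numerically; the sketch below (illustrative parameter values, with \(\tilde{\Delta}=\Delta_{v}+\Delta_{\omega}=2\) as in the simulations) returns the stable and unstable branches of \(S_{1}^{+}\) together with the forward and backward critical couplings.

```python
import numpy as np

def _sqrt_or_nan(a):
    """Real square root where the argument is non-negative, NaN otherwise."""
    return np.where(a >= 0, np.sqrt(np.clip(a, 0, None)), np.nan)

def phase_wave_branches(J1p, J2p, delta=2.0):
    """Stable/unstable branches of S_1^+ from Eq. (23); delta = Delta_v + Delta_omega."""
    disc = _sqrt_or_nan((J1p + J2p) ** 2 - 8 * J2p * delta)
    stable = _sqrt_or_nan((J2p - J1p + disc) / (2 * J2p))
    unstable = _sqrt_or_nan((J2p - J1p - disc) / (2 * J2p))
    return stable, unstable

def critical_couplings(J2p, delta=2.0):
    """Forward (Eq. 24) and backward (Eq. 25) critical values of J_1^+."""
    return 2 * delta, 2 * np.sqrt(2 * J2p * delta) - J2p

if __name__ == "__main__":
    J2p = (5.0 + 9.0) / 2            # J2 = 5, K2 = 9, as in Fig. 2(a)
    Jf, Jb = critical_couplings(J2p)
    print("forward J1+ =", Jf, " backward J1+ =", Jb)
    J1p = np.linspace(Jb, Jf, 5)     # sweep across the bistable window of J1+
    print(phase_wave_branches(J1p, J2p))
```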
To illustrate this, we numerically integrate Eqs. (2a) and (2b) with \(N=10^{5}\) swarmalators for the calculation of \(S_{1}^{\pm}\). In Fig. 2(a) we plot the variation of the order parameter \(S_{1}^{\pm}\) as a function of pairwise coupling strength \(K_{1}\). \(K_{1}\) is first increased adiabatically from \(K_{1}=0\) to an adequately large value and then decreased back with the other couplings fixed at \(J_{1}=1\), \(J_{2}=5\), and \(K_{2}=9\). The results reveal that \(S_{1}^{-}\) remains zero all the time, while an abrupt transition from \(S_{1}^{+}\approx 0\) to \(S_{1}^{+}\approx 0.7\) occurs at \(K_{1}=7\) [obtained from Eq. (24)] as the coupling strength \(K_{1}\) is increased. Another abrupt transition from \(S_{1}^{+}\approx 1\) to \(S_{1}^{+}\approx 0\) occurs at \(K_{1}=6\) [obtained from Eq. (25)] as \(K_{1}\) is decreased starting from the phase wave state. Therefore, the system supports bistable behavior (where both phase wave and async states are stable) for \(K_{1}\in[6,7]\). This is further substantiated by our theoretical predictions for the order parameters (illustrated using continuous and dashed magenta lines for the stable and unstable branches, respectively), which demonstrate a commendable concurrence with the numerical findings (represented by solid circles). To emphasize the bistable nature, in Fig. 3 we demonstrate the async (upper row) and phase wave (lower row) states, respectively, for \(K_{1}=6.5\) while keeping the other couplings fixed at the same values as earlier.
The outcomes given above highlight a significant finding that the introduction of higher-order interactions can cause a sudden transition from the async state to phase wave state and vice-versa, which was not the case with only pairwise interactions among the swarmalators [34]. Thereafter, to better understand the effect of higher-order interactions in promoting bistability behavior, we plot the complete stability profile for the system in Fig. 2(b). It shows that for adequately small values of higher-order coupling (\(J_{2}^{+}<4\)), the transition from async state (\(I\)) to phase wave state (\(III\)) is continuous and takes place through a supercritical pitchfork bifurcation at \(J_{1}^{+}=4\). However, for larger values of higher-order coupling (\(J_{2}^{+}>4\)), the pitchfork bifurcation at \(J_{1}^{+}=4\) becomes subcritical and a saddle-node bifurcation emerges at a lower value of \(J_{1}^{+}\), given by Eq. (25) (depicted in blue). These two bifurcations correspond to the sudden transition observed in Fig. 2(a), and the region bounded by them refers to the region of bistability (\(II\)) between async and phase wave state. Figure 2(b) also reveals an interesting observation that for relatively larger values of higher-order coupling (\(J_{2}^{+}\geq 16\)), the region of bistability stretches into the negative region \(J_{1}^{+}<0\), demonstrating the fact that higher-order interactions can stabilize the phase wave state even when the pairwise interactions are repulsive.
### Analysis of sync state
In the sync state we have \((S_{1}^{+},S_{1}^{-})=(S,S)\), with \(S\neq 0\). Therefore, we seek solutions to the Eqs. (9a)-(10b)
Figure 2: **Abrupt transition from the async state to the phase wave state.** (a) Order parameter \(S_{1}^{\pm}\) as a function of pairwise coupling strength \(K_{1}\) for \(J_{1}=1\) and fixed three-body coupling strengths \(J_{2}=5\), \(K_{2}=9\). Solid and dashed magenta curves represent the stable and unstable solutions given by Eq.(23), respectively. The dashed vertical lines correspond to the critical couplings for forward (in red) and backward (in black) transitions obtained from Eqs. (24) and (25), respectively. Solid circles depict the result obtained from the direct simulation of Eqs. (2a) and (2b) for \(N=10^{5}\) oscillators with half widths of the Lorentzian distribution \(\Delta_{v}=\Delta_{\omega}=1\). The results reveal an abrupt transition from async (\(I\)) state to phase wave (\(III\)) state, which results in a bistable domain (\(II\)) where both the referred states are stable. (b) The comprehensive stability diagram illustrating the states of async (\(I\)), phase wave (\(III\)) and bistability (\(II\)) as a function of pairwise coupling \(J_{1}^{+}=\frac{J_{1}+K_{1}}{2}\) and higher-order coupling \(J_{2}^{+}=\frac{J_{2}+K_{2}}{2}\). Two distinct types of bifurcations, saddle-node and pitchfork, are represented by blue and red curves, respectively. These two curves intersect each other at \((J_{1}^{+},J_{2}^{+})=(4,4)\). When \(J_{2}^{+}<4\), the pitchfork bifurcation is deemed supercritical, whereas when \(J_{2}^{+}>4\), the pitchfork bifurcation becomes subcritical.
such that \(Z_{m}^{\pm}\neq 0\)\((m=1,2)\). We will here analyze the sync state in two cases: one when \(J_{1}=K_{1}\), \(J_{2}=K_{2}\) and the other for an arbitrary combination of pairwise and higher-order couplings, i.e., for a generic case.
#### iii.2.1 Specific case: \(J_{1}=K_{1}\) and \(J_{2}=K_{2}\)
In this case, the coupled complex-valued differential Eqs. (9a) and (9b) become decoupled as follows
\[\dot{\alpha}=-\mathrm{i}(v+\omega)\alpha+\frac{1}{2}[(H_{1}^{+})^{*}-H_{1}^{+} \alpha^{2}], \tag{26a}\] \[\dot{\beta}=-\mathrm{i}(v-\omega)\beta+\frac{1}{2}[(H_{2}^{+})^{*}-H_{2}^{+} \beta^{2}], \tag{26b}\]
and consequently the phases \(\xi\) and \(\eta\) develop independently, analogous to the typical higher-order Kuramoto model [48]. From Eqs. (26a) and (26b) one can observe that the functions \(\alpha\) and \(\beta\) are dependent on \((v+\omega)\) and \((v-\omega)\), respectively, which are distributed in accordance with a Lorentzian distribution characterized by a spread of \((\Delta_{v}+\Delta_{\omega})\). This allows us to evaluate the integrals for the order parameters \(Z_{1}^{\pm}\) explicitly as \(Z_{1}^{+}=\alpha^{*}(i\Delta_{v}+i\Delta_{\omega},t)\) and \(Z_{1}^{-}=\beta^{*}(i\Delta_{v}+i\Delta_{\omega},t)\) using the residue theorem. Similarly the other order parameters \(Z_{m}^{\pm}\) can be obtained explicitly as \(Z_{2}^{+}=\alpha^{*^{2}}(i\Delta_{v}+i\Delta_{\omega},t)=(Z_{1}^{+})^{2}\) and \(Z_{2}^{-}=\beta^{*^{2}}(i\Delta_{v}+i\Delta_{\omega},t)=(Z_{1}^{-})^{2}\). Substituting these into Eqs. (26a) and (26b) gives
\[\dot{Z}_{1}^{\pm}=-(\Delta_{v}+\Delta_{\omega})Z_{1}^{\pm}+\frac{1}{2}\bigg[\bigg\{K_{1}Z_{1}^{\pm}+K_{2}(Z_{1}^{\pm})^{2}(Z_{1}^{\pm})^{*}\bigg\}-\bigg\{K_{1}(Z_{1}^{\pm})^{*}+K_{2}(Z_{1}^{\pm})^{*^{2}}(Z_{1}^{\pm})\bigg\}(Z_{1}^{\pm})^{2}\bigg]. \tag{27}\]
As anticipated, the above equations are similar to those of the typical higher-order Kuramoto model [48] with Lorentzian natural frequency having half width \((\Delta_{v}+\Delta_{\omega})\). Assuming \(\psi_{1}^{\pm}\) to be zero, we can obtain the steady state solution for the order parameters corresponding to the sync state \(S_{1}^{+}=S_{1}^{-}=S\) as
\[S=\sqrt{\frac{K_{2}-K_{1}\pm\sqrt{\left(K_{1}+K_{2}\right)^{2}-8K_{2}\tilde{\Delta}}}{2K_{2}}}, \tag{28}\]
where plus and minus signs correspond to the stable and unstable solutions if they exist. Eq. (28) resembles the solution for phase wave state, given by Eq. (23) when \(J_{1}=K_{1}\) and \(J_{2}=K_{2}\). This suggests that the transition from async \((0,0)\) state to sync \((S,S)\) state can emerge without experiencing an intermediate phase wave \((S,0)\) or \((0,S)\) state and the forward transition occurs at the same critical coupling given by
\[J_{1}=K_{1}=2(\Delta_{v}+\Delta_{\omega}). \tag{29}\]
To illustrate this, in Fig. 4(a) we plot the order parameters \(S_{1}^{\pm}\) as a function of \(K_{1}\) with fixed higher-order coupling strengths \(J_{2}=K_{2}=9\). The solid and dashed magenta curves depict the analytical predictions provided by Eq. (28) for the stable and unstable branches, respectively, which are in good agreement with the outcomes obtained through direct simulation (represented by circles). With increasing \(K_{1}\), we observe a sudden transition from \(S_{1}^{\pm}\approx 0\) (async state) to \(S_{1}^{\pm}\approx 1\) (sync state) at \(K_{1}=4\) [given by Eq. (29)]. Another abrupt transition from \(S_{1}^{\pm}\approx 1\) to \(S_{1}^{\pm}\approx 0\) emerges at \(K_{1}=3\) [obtained from Eq. (25) for \(J_{1}=K_{1}\) and \(J_{2}=K_{2}\)] as \(K_{1}\) is decreased adiabatically starting from the sync state. Thus, within the domain bounded by these two sudden transitions, the system exhibits a bistable nature where both the sync and async states are achievable for any specific value of coupling strength. To highlight the presence of bistability, we showcase the two distinct states for \(K_{1}=3.5\) in Fig. 5: asynchronous state (upper row) and synchronous state (lower row). Besides, in Fig. 4(b) we plot the stability diagram for the system, which shows
Figure 3: **Bistability between async and phase wave state**. Scatter plot for async \((0,0)\) and phase wave \((S,0)\) states at \(K_{1}=6.5\), \(J_{1}=1\), \(J_{2}=5\), and \(K_{2}=9\) [drawn from the region \((II)\) in Fig. 2(a)] is depicted in the upper and lower row, respectively. The order parameter \(S_{1}^{+}(S_{1}^{-})\) is indicated by the black (magenta) circle in the left column, where the values of \(S_{1}^{+}\) and \(S_{1}^{-}\) are represented by the length of the line joining the center with the respective circles. Clearly, in the upper row, both the values of order parameters are almost zero, indicating the async \((0,0)\) state, while in the lower row, the value of \(S_{1}^{+}\) is non-zero and \(S_{1}^{-}\) is zero, characterizing the phase wave \((S,0)\) state. In the right panel, the corresponding scatter plots in the \((x,\theta)\) plane are displayed. Swarmalators are uniformly distributed in both phase and space, characterizing the async state (in the upper row), whereas in the lower row, swarmalators display a correlation between phase and space and thus correspond to the phase wave state.
that beyond \(K_{2}=4\), a bistable dynamics emerges in the system due to the interplay between saddle-node and subcritical pitchfork bifurcations, and the associated region of bistability (\(II\)) is bounded by the curves of these bifurcations. On the other hand, for \(K_{2}<4\), a continuous transition from async state (\(I\)) to sync state (\(III\)) occurs at \(K_{1}=4\) through a supercritical pitchfork bifurcation. The stretch of the bistability region (\(II\)) in the regime \(K_{1}<0\) reveals the fact that for sufficiently larger higher-order coupling strengths, the system can achieve a stable sync state even when the pairwise interactions are repulsive (i.e., \(K_{1}=J_{1}<0\)).
#### iii.2.2 General case: \(J_{1}\neq K_{1}\) and \(J_{2}\neq K_{2}\)
In contrast to the earlier scenario, in this case, the coupled complex-valued differential equations (9a) and (9b) cannot be separated from each other. As a result, it's not possible to directly express the order parameters \(Z_{1}^{\pm}\) in terms of \(\alpha\) and \(\beta\), respectively. Therefore, we look for a solution of Eqs. (9a) and (9b) that satisfy \(\dot{\alpha}\neq 0,\dot{\beta}\neq 0,Z_{m}^{\pm}\neq 0\). Assuming \(\psi_{m}^{\pm}=0\), we find
\[\alpha(v,\omega)=\mathcal{F}\bigg[\frac{A_{1}+A_{2}(S_{1}^{-})^{2}}{S_{1}^{+}\{C((S_{1}^{+})^{2}+(S_{1}^{-})^{2})+2(D+E(S_{1}^{+})^{2}(S_{1}^{-})^{2})\}}\bigg], \tag{30a}\] \[\beta(v,\omega)=\mathcal{F}\bigg[\frac{B_{1}+B_{2}(S_{1}^{+})^{2}}{S_{1}^{-}\{C((S_{1}^{+})^{2}+(S_{1}^{-})^{2})+2(D+E(S_{1}^{+})^{2}(S_{1}^{-})^{2})\}}\bigg], \tag{30b}\]
where
\[\begin{array}{l}A_{1,2}=2(vK_{1,2}+\omega J_{1,2}),\;B_{1,2}=2(vK_{1,2}- \omega J_{1,2}),\\ C=J_{1}K_{2}+J_{2}K_{1},\;D=J_{1}K_{1},\;E=J_{2}K_{2}.\end{array} \tag{31}\]
Now, we solve the integrals for the order parameters given by Eqs. (10a) and (10b) using residue theorem. Further, considering the fact that in the sync state \(S_{1}^{+}=S_{1}^{-}=S\), we obtain
\[S=\mathcal{F}^{*}\bigg[\frac{\Delta_{1}+\Delta_{2}S^{2}}{S\{2CS^{2}+2(D+ES^{4})\}}\bigg], \tag{32}\]
where \(\Delta_{1,2}=2(\Delta_{v}K_{1,2}+\Delta_{\omega}J_{1,2})\). Eq. (32) eventually provides the expression for the order parameter, which can be acquired by solving the following equation for \(S\)
\[Ey^{3}+(C-E)y^{2}+(\Delta_{2}-C+D)y+(\Delta_{1}-D)=0, \tag{33}\]
where \(y=S^{2}\). From Eq. (33), it is clear that \(S\) bifurcates from zero at \(\Delta_{1}=D\), i.e.,
\[2\bigg(\frac{\Delta_{v}}{J_{1}}+\frac{\Delta_{\omega}}{K_{1}}\bigg)=1. \tag{34}\]
Notice that for \(J_{1}=K_{1}\), the condition (34) coincides with the critical coupling condition (29) and thus once
Figure 4: **Abrupt transition from async to sync state.** In (a), the behavior of the order parameter \(S_{1}^{\pm}\) is presented in relation to the pairwise coupling strength \(K_{1}\). With a fixed setting of \(J_{1}=K_{1}\) and constant three-body coupling strengths \(J_{2}=K_{2}=9\), both stable and unstable solutions (obtained from Eq. (28)) are showcased through solid and dashed magenta curves, respectively. The dashed vertical lines correspond to critical coupling values for forward (red) and backward (black) transitions. These values are derived from Eqs. (29) and (25), with \(J_{1}=K_{1},J_{2}=K_{2}\). The outcome obtained from direct simulations of Eqs.(2a) and (2b) is depicted in solid circles for \(N=10^{5}\) swarmalators with Lorentzian distribution half-widths \(\Delta_{\omega}=\Delta_{v}=1\). The findings distinctly reveal an abrupt transition from the async (\(I\)) state to the sync (\(III\)) state, establishing a bistable domain (\(II\)) where both states are stable. Moving to (b), a comprehensive stability diagram is presented, illustrating the states of async (\(I\)), phase wave (\(III\)), and bistability (\(II\)) as functions of pairwise coupling \(K_{1}\) and higher-order coupling \(K_{2}\). The diagram encompasses two distinct types of bifurcations: the blue curve representing a saddle-node bifurcation and the red curve depicting a pitchfork bifurcation. Notably, these curves intersect at \((K_{1},K_{2})=(4,4)\). When \(K_{2}<4\), the pitchfork bifurcation takes a supercritical nature, whereas, for \(K_{2}>4\), the pitchfork bifurcation becomes subcritical.
again guarantees that the sync state can bifurcate directly from the async state without passing through any intermediate state [Fig. 4]. However, in general circumstances (i.e., \(J_{1}\neq K_{1},J_{2}\neq K_{2}\)), the transition from the async state to the sync state is not direct but occurs by passing through intermediate states, namely the phase wave state and the mixed state. To describe this, in Fig. 6 we plot the order parameters \(S_{1}^{\pm}\) for varying \(K_{1}\) while keeping the other couplings fixed at nominal values \(J_{1}=9,J_{2}=6.5,\text{ and }K_{2}=5.5\). The solid magenta and blue lines correspond to the analytical predictions of stable solutions of the phase wave and sync states obtained from Eqs. (23) and (33), respectively, which are in favorable alignment with the results obtained through direct simulation (shown in solid and void markers) for the finite-size system. As observed, with increasing \(K_{1}\), the system passes from the async state (\(I\)) to the sync state (\(V\)) through the intermediate phase wave state (\(III\)) and mixed state (\(IV\)). Here, the evolution of the order parameters \((S_{1}^{+},S_{1}^{-})\) shows that with increasing \(K_{1}\), the system passes from the \((0,0)\) (i.e., async) state to the \((S,0)\) (i.e., phase wave) state through an abrupt transition, resulting in an interval of bistability (\(II\)) where both the async and phase wave states are stable. On the other hand, the system goes through a continuous transition from the \((S,0)\) (i.e., phase wave) state to the \((S,S)\) (i.e., sync) state, resulting in the intermediate mixed state \((S^{\prime},S^{\prime\prime})\) where \(S^{\prime}>S^{\prime\prime}\).
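For the general case, the sync branch follows from the real roots of the cubic Eq. (33); a small numerical sketch is given below (coefficient names follow Eq. (31), parameter values are illustrative, and no stability check is performed, so not every returned root corresponds to a stable state).

```python
import numpy as np

def sync_order_parameter(J1, K1, J2, K2, dv=1.0, dw=1.0):
    """Admissible values of S from the cubic Eq. (33) in y = S^2 (general sync state)."""
    D1, D2 = 2 * (dv * K1 + dw * J1), 2 * (dv * K2 + dw * J2)   # Delta_1, Delta_2
    C, D, E = J1 * K2 + J2 * K1, J1 * K1, J2 * K2               # Eq. (31)
    roots = np.roots([E, C - E, D2 - C + D, D1 - D])            # E y^3 + ... = 0
    y = roots[np.abs(roots.imag) < 1e-9].real                   # keep (numerically) real roots
    return np.sqrt(y[(y > 0) & (y <= 1)])

def bifurcation_quantity(J1, K1, dv=1.0, dw=1.0):
    """Condition (34): the sync branch leaves S = 0 when this quantity equals 1."""
    return 2 * (dv / J1 + dw / K1)

if __name__ == "__main__":
    # couplings of Fig. 6 with K1 chosen deep inside the sync region
    print(sync_order_parameter(J1=9.0, K1=8.0, J2=6.5, K2=5.5))
    print(bifurcation_quantity(J1=9.0, K1=8.0))
```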
### Mixed State
In the mixed state we have \((S_{1}^{+},S_{1}^{-})=(S^{\prime},S^{\prime\prime})\), with \(S^{\prime}\neq S^{\prime\prime}\). This state resides as an intermediary between the phase wave and sync states and bifurcates from the \((S,0)\), or \((0,S)\) state when \(S^{\prime}>S^{\prime\prime}\) or \(S^{\prime\prime}>S^{\prime}\), through a continuous transition to the \((S,S)\) state. In Fig. 6, the shaded black region (\(IV\)) depicts the interval of \(K_{1}\) (keeping the other coupling fixed at a nominal value) for which the system passes through the mixed state. Unfortunately, we are unable to find the analytical boundaries in terms of the coupling strengths for the mixed state, and thus obtain the domain of mixed state through the numerical investigations by evaluating the order parameters \(S_{1}^{\pm}\) with \(S_{1}^{+}>S_{1}^{-}\). The distinctive characteristic of the mixed state lies in the fact that while both \(S^{\prime}\) and \(S^{\prime\prime}\) remain time-independent, the functions \(\alpha\) and \(\beta\) exhibit time dependency [34]. This contrasts with the time-independent equations (18) and (30) associated with the phase wave and sync states, respectively.
### Abrupt transition from phase wave state to sync state
As discussed above, in general, with increasing coupling strengths, a continuous transition from the phase wave state to the sync state emerges in the system through an intermediate mixed state. This smooth transition phenomenon is consistent with previously acquired results for only pairwise interactions between the swarmalators [34]. When the higher-order coupling strengths are relatively small, a comparable transition phenomenon is still evident in our present system with higher-order interactions [Fig. 6]. Conversely, when sufficiently significant higher-order couplings come into play, a critical observation comes to light. In this scenario, the influence of higher-order interactions leads to an abrupt and noteworthy transition from the \((S,0)/(0,S)\) state to the \((S,S)\) state, bypassing the intermediate mixed state. To illustrate this, in Fig. 7 we plot the order parameters \((S_{1}^{+},S_{1}^{-})\) as a function of \(K_{1}\) for adequately large higher-order couplings \(J_{2}=8\) and \(K_{2}=9\), keeping \(J_{1}\) fixed at \(J_{1}=7\). The solid magenta and blue lines represent the analytically predicted stable solutions for the phase and synchronized states, respectively, derived from Eqs. (23) and (33). Impressively, these analytical predictions closely match the outcomes obtained through direct simulations on the finite size system, depicted by the solid
Figure 5: **Bistabilty between async and sync state.** Scatter plot for async \((0,0)\) and sync \((S,S)\) states at \(K_{1}=J_{1}=3.5\), and \(K_{2}=J_{2}=9\) [drawn from the region (\(II\)) in Fig. 4(a)] is depicted in the upper and lower row, respectively. The order parameter \(S_{1}^{+}(S_{1}^{-})\) is indicated by the black (magenta) circle in the left column, where the values of \(S_{1}^{+}\) and \(S_{1}^{-}\) are represented by the length of the line joining the center with the respective circles. Clearly, in the upper row, the values of both order parameters are almost zero, indicating the async \((0,0)\) state, while in the lower row, both the order parameters take non-zero equal value, characterizing the sync \((S,S)\) state. In the right panel, the corresponding scatter plots in the \((x,\theta)\) plane are displayed. Swarmalators are uniformly distributed in both phase and space, characterizing the async state (in the upper row), whereas in the lower row, swarmalators are locked in both phase and space and thus correspond to the sync state.
and void markers. As \(K_{1}\) is being first adiabatically increased to a large value and then decreased, we observe two different abrupt transitions, resulting in two distinct bistable domains. The first one corresponds to a sudden transition from the \((0,0)\) state (\(I\)) to the \((S,0)\) state (\(III\)), inducing a region of bistability (\(II\)), where both the async and phase wave states are stable (shaded grey region). This bistable phenomenon has already been discussed in Sec. III.2. The second one is associated with an abrupt transition from \((S,0)\) state (\(III\)) to \((S,S)\) state (\(V\)), giving rise to a bistable domain (\(IV\)), where both stable phase wave and sync state can emerge (shaded red region). To elucidate this bistability nature, in Fig. 8, we demonstrate the two distinct states: phase wave state (upper row) and sync state (lower row) for fixed \(K_{1}=1.5\), drawn from the region of bistability (\(IV\)).
## IV Discussions
Summing up, here we have introduced an analytically tractable model of swarmalators that incorporates both pairwise and higher-order interactions (specifically three-body interactions), embedded in a simplicial complex at the microscopic level. This proposed model exhibits a high degree of complexity with four distinct collective states, namely async, phase wave, mixed, and sync states. The higher-order interactions introduce supplementary layers of nonlinearity into the behavior of the microscopic system dynamics. As a result, a few pivotal phenomena emerge that remain absent when interactions between the swarmalators are confined to only pairwise connections and lack the influence of higher-order interactions. We observe that the inclusion of higher-order interactions leads to abrupt transitions from the async state to either the phase wave state or the sync state, depending on the specific coupling strength configurations. Our observations also reveal that when the higher-order interactions are sufficiently strong, phase wave and sync state can emerge and persist, even in scenarios where pairwise couplings are repulsive. This implies that despite the potential decay of specific coupling types, the existence of alternative forms of coupling can play a pivotal role in upholding the regimes of bistability between async and phase wave (sync) states. Furthermore, our findings extend to the discovery that substantial higher-order couplings can facilitate a direct emergence of the sync state from the phase wave state without passing through the mixed state. This distinct behavior stands in contrast to situations involving solely pairwise interactions, where the synchronized state bifurcates from the phase wave state through the intermediate mixed state.
Thus, our study makes a substantial contribution to understanding the influence of higher-order interactions on shaping the collective dynamics of swarmalators, although numerous avenues for further exploration remain open. In this context, we specifically focus on higher-order interactions up to the third order, employing an all-to-all coupling arrangement. Consequently, investigating interactions beyond the three-body and exploring localized coupling configurations becomes a particularly
Figure 6: **Transition from async to sync state through intermediate mixed state**. Order parameter \(S_{1}^{+}\) as a function of pairwise coupling strength \(K_{1}\) for \(J_{1}=9\) and fixed three-body coupling strengths \(J_{2}=6.5\), \(K_{2}=5.5\). Solid magenta and blue curves represent the stable solutions for phase wave and sync states, given by Eqs.(23) and (33), respectively. Solid circles depict the result obtained from the direct simulation of Eqs. (2a) and (2b) for \(N=10^{5}\) oscillators with half widths of the Lorentzian distribution \(\Delta_{v}=\Delta_{\omega}=1\). An abrupt transition from the async (\(I\)) to phase wave (\(III\)) state is observed, which creates a domain of bistability (\(II\)), where both the states are stable. On the other hand, from the phase wave state (\(III\)) to sync state (\(V\)), a smooth (continuous) transition occurs, resulting in the shaded region (\(IV\)), which corresponds to (\(S^{\prime},S^{\prime\prime}\)) state, with \(S^{\prime}>S^{\prime\prime}\), and called as mixed state.
intriguing prospect for future research. It is also worth noting that our current investigation is limited to analyzing the effect of higher-order interactions solely within a 1D swarmalator model on a ring. As such, an equally captivating avenue for future exploration lies in examining how higher-order interactions impact the collective behavior within 2D and other higher-dimensional swarmalator systems.
## Acknowledgements
M.S.A. and G.K.S. would like to thank Kevin O'Keeffe for his valuable comments. M.P. is supported by the Slovenian Research Agency (Grant P1-0403).
|
2309.11747 | MarkNerf:Watermarking for Neural Radiance Field | A watermarking algorithm is proposed in this paper to address the copyright
protection issue of implicit 3D models. The algorithm involves embedding
watermarks into the images in the training set through an embedding network,
and subsequently utilizing the NeRF model for 3D modeling. A copyright verifier
is employed to generate a backdoor image by providing a secret perspective as
input to the neural radiation field. Subsequently, a watermark extractor is
devised using the hyperparameterization method of the neural network to extract
the embedded watermark image from that perspective. In a black box scenario, if
there is a suspicion that the 3D model has been used without authorization, the
verifier can extract watermarks from a secret perspective to verify network
copyright. Experimental results demonstrate that the proposed algorithm
effectively safeguards the copyright of 3D models. Furthermore, the extracted
watermarks exhibit favorable visual effects and demonstrate robust resistance
against various types of noise attacks. | Lifeng Chen, Jia Liu, Yan Ke, Wenquan Sun, Weina Dong, Xiaozhong Pan | 2023-09-21T03:00:09Z | http://arxiv.org/abs/2309.11747v1 | # MarkNerf:Watermarking for Neural Radiance Field
###### Abstract
A watermarking algorithm is proposed in this paper to address the copyright protection issue of implicit 3D models. The algorithm involves embedding watermarks into the images in the training set through an embedding network, and subsequently utilizing the NeRF model for 3D modeling. A copyright verifier is employed to generate a backdoor image by providing a secret perspective as input to the neural radiation field. Subsequently, a watermark extractor is devised using the hyperparameterization method of the neural network to extract the embedded watermark image from that perspective. In a black box scenario, if there is a suspicion that the 3D model has been used without authorization, the verifier can extract watermarks from a secret perspective to verify network copyright. Experimental results demonstrate that the proposed algorithm effectively safeguards the copyright of 3D models. Furthermore, the extracted watermarks exhibit favorable visual effects and demonstrate robust resistance against various types of noise attacks.
Neural radiation field,3D watermark,Robustness,Black Box Watermark
## I Introduction
Neural Radiation Field (NeRF) [1] is a deep learning model for modeling three-dimensional implicit spaces [2]. It uses neural networks to learn a continuous function that maps spatial coordinates to density and color, achieving the synthesis of new perspectives. 3D rendering technology based on implicit representation has become one of the most popular fields of computer vision research. It can be predicted that in the near future, people will share 3D content online, just like sharing 2D images and videos, and protecting implicit 3D data will become an important issue in the future.
Before NeRF, 3D data was mostly represented in the form of point clouds, voxels, or triangular meshes. The methods for copyright protection of these forms of 3D data were usually divided into three categories: embedding watermarks directly by translating, rotating, scaling 3D shapes [3-8], modifying 3D model parameters to embed watermarks [9-12], and using deep learning techniques to embed watermarks [13-16]. However, unlike traditional 3D models, NeRF is a function that represents the color and density of each point in a 3D scene. There is no specific visible data such as the 3D model and 3D model parameters, so it is not possible to translate, rotate, or embed watermarks in 3D model parameters. Moreover, the neural radiation field is trained through the MLP network structure, and adjusting the network structure to embed watermarks can affect the effectiveness of 3D rendering.
In 2022, Li et al. [17] established the connection between information hiding and NeRF for the first time and proposed the StegaNeRF scheme for information hiding. This scheme first trains a standard NeRF model and then treats it as a new-perspective generator. While training the message extraction network, the NeRF network is retrained to ensure that messages can be accurately extracted from the 2D images rendered by the new StegaNeRF network.
In this letter, we propose a watermarking scheme for NeRF, which first embeds watermarks into the images in the training set through an embedding network. Next, the NeRF model is used for 3D modeling. The copyright verifier supplies a secret perspective as input to the neural radiation field to render a backdoor image, and then uses an over-parameterized neural network as a watermark extractor to obtain the embedded watermark from that perspective. In the black-box scenario [18-25], once it is suspected that the 3D model has been used without authorization, the verifier can extract the watermark from the secret perspective to verify network copyright. Compared with StegaNeRF, this method does not require secondary training of NeRF, ensuring the quality of the images rendered from the 3D model.
This letter makes the following contributions:
* We use an over-parameterization approach to extract watermarks from images rendered at a specific perspective by training a simple extraction network.
* We exploit the perspective input of the NeRF model and take the secret perspective as the key for watermark extraction. Owing to the continuity of new-perspective synthesis and the resulting huge key space, the security of the watermark information is guaranteed.
* We simulate noise processing of the implicitly represented network by introducing a noise layer on the training images, making the method robust.
## II Proposed Method
The framework proposed in this article is shown in Figure 1, which mainly includes four stages: (1) embedding watermark,
(2) adding noise, (3) training NeRF model, and (4) extracting watermark.
### _Embedded network_
The embedded network, denoted as E, takes the watermark image w of size 3xWxH and the image k from the Lego dataset as inputs. The watermark image w is passed through a Conv layer with an output channel of 32, resulting in the feature map A with the size of 32xWxH. Furthermore, the image k is fed through another Conv layer with an output channel of 32 to obtain the feature map B, also with the size of 32xWxH.
\[\left\{\begin{array}{c}A=Conv_{3->32}(\mathrm{w})\\ B=Conv_{3->32}(\mathrm{k})\\ C=Conv_{32+32->32}(Cat(A,B))\\ D=Conv_{32+32+32->32}(Cat(A,B,C))\\ \mathrm{k^{\prime}}=Conv_{32+32+32+32->3}(Cat(A,B,C,D))\end{array}\right. \tag{1}\]
Within the embedded network, we utilize a convolutional kernel with a size of 3x3, a step size of 1, and padding of 1. The ReLU activation function is employed, and a BN (batch normalization) layer is applied to normalize the data. The specific architecture of the embedded network is illustrated in Figure 3.
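For reference, a minimal PyTorch sketch of the densely connected embedder implied by formula (1) is given below. It is an illustrative reconstruction, not the authors' released code: the text does not state whether BN/ReLU follow the final layer, so the output layer is left here as a plain convolution.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """3x3 conv (stride 1, padding 1) followed by BatchNorm and ReLU, as described in the text."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class EmbeddingNet(nn.Module):
    """Densely connected embedder following formula (1): inputs w (watermark) and k (cover)."""
    def __init__(self):
        super().__init__()
        self.conv_w = conv_block(3, 32)       # A = Conv(w)
        self.conv_k = conv_block(3, 32)       # B = Conv(k)
        self.conv_c = conv_block(64, 32)      # C = Conv(Cat(A, B))
        self.conv_d = conv_block(96, 32)      # D = Conv(Cat(A, B, C))
        self.out = nn.Conv2d(128, 3, kernel_size=3, stride=1, padding=1)   # k'

    def forward(self, w, k):
        A = self.conv_w(w)
        B = self.conv_k(k)
        C = self.conv_c(torch.cat([A, B], dim=1))
        D = self.conv_d(torch.cat([A, B, C], dim=1))
        return self.out(torch.cat([A, B, C, D], dim=1))

if __name__ == "__main__":
    net = EmbeddingNet()
    w = torch.rand(1, 3, 128, 128)   # watermark image
    k = torch.rand(1, 3, 128, 128)   # cover image from the NeRF training set
    print(net(w, k).shape)           # torch.Size([1, 3, 128, 128])
```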
### _Noise layer_
Upon exposure to noise, the neural radiation field may result in image distortion under the secret view [1]. To enhance the robustness of the proposed method, a noise layer is introduced for model processing. This study posits two approaches to incorporate noise into the implicitly represented 3D model: one involves adding noise to the network model parameters themselves, as demonstrated in reference [1], while the other entails introducing noise to the input data to simulate alterations in the network parameters themselves. For the purpose of simulating NeRF's susceptibility to noise, this study adopts
Fig. 1: Model Overview. The embedding network E takes the watermark image w and an image k from the NeRF training dataset (taking the Lego dataset as an example) as inputs and outputs the image k' with embedded watermark information. The images 1 to i from the Lego dataset are then processed by the noise layer and, together with the corresponding perspective information, are fed into the MLP network M to train the neural radiation field N. A parameterized method is then used to train an extraction network D. Finally, the neural radiation field N is given an arbitrary perspective as input to generate a new-perspective image s. If the perspective m is the secret perspective used during training, a clear watermark image w' can be extracted; otherwise the watermark image cannot be extracted.
Fig. 3: Watermark embedding network structure.
Fig. 2: Application Scenario Process.
the latter approach. Specifically, Gaussian noise, salt and pepper noise, speckle noise, and Poisson noise are utilized to emulate various types of distortion. Refer to Figure 4 for a visual representation of the noise images.
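The four distortions can be simulated directly on the training images; a minimal NumPy sketch is given below, with the noise strengths (`sigma`, `amount`) chosen purely for illustration.

```python
import numpy as np

def add_noise(img, kind="gaussian", rng=None, sigma=0.02, amount=0.01):
    """Apply one of the four distortions used to simulate NeRF-side noise.

    `img` is a float array in [0, 1]; sigma/amount are illustrative strengths."""
    rng = rng or np.random.default_rng()
    out = img.copy()
    if kind == "gaussian":
        out = img + rng.normal(0.0, sigma, img.shape)
    elif kind == "salt_pepper":
        mask = rng.random(img.shape)
        out[mask < amount / 2] = 0.0          # pepper
        out[mask > 1 - amount / 2] = 1.0      # salt
    elif kind == "speckle":
        out = img * (1.0 + rng.normal(0.0, sigma, img.shape))
    elif kind == "poisson":
        peak = 255.0
        out = rng.poisson(img * peak) / peak
    return np.clip(out, 0.0, 1.0)
```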
### _Neural radiation field_
The neural radiation field is a method for generating 3D scenes: a neural network model establishes a mapping between pixel space and scene space, which enables the prediction of radiance transport within the scene. In this study, the camera parameters associated with a confidential viewing angle are supplied to the trained NeRF to generate the secret-view image.
1. _NeRF network structure:_ The NeRF model takes as input the 3D coordinate position, denoted as x=(x, y, z), and the viewing direction, represented as d=(\(\theta\), \(\phi\)), of a point in space. The output comprises the color c=(r, g, b) of the corresponding point and the volume density \(\sigma\) at the corresponding voxel position. In the specific implementation, x and d are first positionally encoded, and x is then fed into the MLP network, which outputs \(\sigma\) and a 256-dimensional intermediate feature. The intermediate feature and d are then input into a fully connected layer to predict the color. Finally, 2D images are generated through volume rendering, the process that transforms the 3D representation into 2D images. Volume rendering obtains the final pixel value of a 2D image by sampling points along a ray in the viewing direction and performing a weighted accumulation of the predicted colors c and volume densities \(\sigma\), as expressed in formula (2); a discretized sketch of this quadrature is given after this list. \[C(r)=\int_{t_{n}}^{t_{f}}T(t)\sigma(r(t))c(r(t),d)dt,\quad\text{where}\;T(t)=\exp\bigg(-\int_{t_{n}}^{t}\sigma(r(s))ds\bigg), \tag{2}\] Here, a ray, denoted as \(r(t)\), is defined as \(r(t)=o+td\), where (o) represents the optical center of the camera and (d) represents the directional vector. \(T(t)\) denotes the accumulated transmittance of the ray from the near bound \(t_{n}\) up to \(t\), and the rendering integral runs from \(t_{n}\) to the far bound \(t_{f}\). During training, the rendering of the overall scene is optimized by comparing the rendered pixels with those of the input view using the mean squared error (\(\mathcal{L}_{2}\) loss), and the network is optimized based on this loss function. Additionally, a hierarchical rendering approach is adopted to improve rendering efficiency: a coarse network \(\hat{C}_{c}(r)\) and a fine network \(\hat{C}_{f}(r)\) are optimized simultaneously, so that the weight distribution of the coarse network is utilized to better allocate samples in the fine network. \[L=\sum_{r\in R}\left[\left\|\hat{C}_{c}(r)-C(r)\right\|_{2}^{2}+\left\|\hat{C}_{f}(r)-C(r)\right\|_{2}^{2}\right] \tag{3}\] In formula (3), R represents all the rays in the input view, \(C(r)\) represents the actual pixel value of the input view, and \(\hat{C}(r)\) represents the predicted pixel value.
2. _Camera parameter:_ NeRF takes a finite set of discrete images and the camera parameters corresponding to their viewing angles to generate a continuous static 3D scene. It is capable of generating a new-view image from an infinite number of angles. In this study, leveraging this feature of NeRF, we select the camera parameters (m) for a specific angle as the key. Since there are infinitely many angle options, the key space is infinite, ensuring the algorithm's security. Camera parameters are categorized into internal parameters and external parameters. The external parameters determine the camera's position and orientation; their role is to transform points from the camera coordinate system to the world coordinate system. The internal parameters determine the projection, mapping 3D coordinates in the camera coordinate system to the 2D image plane. In this study, we select camera parameters as the key to obtain the secret-view image, ensuring image uniqueness. Moreover, the camera parameters consist only of two matrices, resulting in small key information.
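The two ingredients described above, turning a camera pose into rays and discretizing the rendering integral of formula (2), are typically implemented as sketched below. This is a generic illustration in the usual NeRF pinhole convention (x right, y down, camera looking along -z); exact conventions vary between datasets and codebases.

```python
import torch

def get_rays(H, W, focal, c2w):
    """Back-project every pixel through the intrinsics and a 4x4 camera-to-world pose.

    Returns per-pixel ray origins and (unnormalized) directions in world coordinates."""
    j, i = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    dirs = torch.stack([(i - 0.5 * W) / focal,        # camera-frame direction
                        -(j - 0.5 * H) / focal,
                        -torch.ones_like(i)], dim=-1)
    rays_d = dirs @ c2w[:3, :3].T                      # rotate into the world frame
    rays_o = c2w[:3, 3].expand(rays_d.shape)           # camera centre for every pixel
    return rays_o, rays_d

def volume_render(sigmas, colors, t_vals):
    """Quadrature of formula (2): C(r) ~ sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i.

    sigmas: (rays, samples), colors: (rays, samples, 3), t_vals: (rays, samples)."""
    deltas = t_vals[..., 1:] - t_vals[..., :-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[..., :1])], dim=-1)
    alpha = 1.0 - torch.exp(-sigmas * deltas)                       # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[..., :-1]                                           # accumulated T_i
    weights = alpha * trans
    return (weights.unsqueeze(-1) * colors).sum(dim=-2)             # (rays, 3)
```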
### _Extraction network_
The extraction network employs an over-parameterized approach to extract watermark information. Similar to the embedded network, the extraction network incorporates the concept of dense connections to enhance the transfer of features.
1. _Overparameterization:_ Neural networks often suffer from overfitting because of their abundance of parameters. This paper turns this drawback into an advantage. Specifically, an extraction network with a large number of parameters is trained on the pairing between the secret view image and the original watermark image. When the secret-view image is provided as input, the watermark image is recovered accurately; conversely, if an image from any other angle is provided, the watermark image cannot be retrieved, thereby ensuring the security of the algorithm in this study.
2. _Extraction network structure:_ The network takes an image s of size 3xWxH and processes it through a convolutional layer with 32 output channels to obtain the feature map E. The feature map F is then obtained through an additional convolution on E. Concatenating E and F and performing another convolution yields the feature map G; finally, E, F, and G are concatenated and convolved to produce the recovered watermark image w'. The basic procedure is given in formula (4).

Fig. 4: Example of noise layer.
\[\left\{\begin{array}{c}E=Conv_{3->32}(s)\\ F=Conv_{32->32}(E)\\ G=Conv_{32+32->32}(Cat(E,F))\\ w^{\prime}=Conv_{32+32+32->D}(Cat(E,F,G))\end{array}\right. \tag{4}\]
The convolutional kernel settings of the extraction network remain consistent with those of the embedded network, and a 3xWxH watermark image is outputted after the final convolutional layer. Once the extraction network is trained, the model parameters are saved. Importantly, the watermark image cannot be extracted from input images generated by other NeRF perspectives. For a comprehensive illustration of the extraction network's architecture, refer to Figure 6.
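A compact PyTorch sketch consistent with formula (4) is shown below; the 3×3 kernels, padding, ReLU activations, and the output channel count are illustrative assumptions rather than values taken from the paper.

```python
import torch
import torch.nn as nn

class WatermarkExtractor(nn.Module):
    """Densely connected extraction network following formula (4)."""

    def __init__(self, in_ch: int = 3, out_ch: int = 3):
        super().__init__()
        self.conv_e = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.conv_f = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.conv_g = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.conv_w = nn.Conv2d(96, out_ch, 3, padding=1)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        e = self.conv_e(s)                             # E = Conv_{3->32}(s)
        f = self.conv_f(e)                             # F = Conv_{32->32}(E)
        g = self.conv_g(torch.cat([e, f], dim=1))      # G = Conv_{32+32->32}(Cat(E, F))
        return self.conv_w(torch.cat([e, f, g], dim=1))  # w' = Conv_{32+32+32->D}(Cat(E, F, G))
```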
### _Loss function_
The loss function primarily comprises two components: the carrier image content loss \(\mathcal{L}e\) and the watermark image content loss \(\mathcal{L}d\). \(\mathcal{L}e\) represents the content loss of the carrier image, which is calculated as the Mean Square Error (MSE) between the carrier image and the image containing the watermark, as specified in formula (5).
\[\mathcal{L}e=\alpha\frac{1}{3\times H\times W}||k-k^{\prime}||_{2}^{2} \tag{5}\]
Here, \(\alpha\) denotes the weight parameter, k represents the carrier image, and k' represents the image containing the watermark. The watermark image content loss, denoted as \(\mathcal{L}d\), is defined by formula (6).
\[\mathcal{L}d=\beta\frac{1}{1\times H\times W}||w-w^{\prime}||_{2}^{2}+\gamma( 1-\mathcal{L}_{ssim})+\mu(1-\mathcal{L}_{mssim}) \tag{6}\]
In the equation, \(\beta\), \(\gamma\), and \(\mu\) represent the weight parameters, w denotes the original watermark image, and w' denotes the extracted watermark image. \(\mathcal{L}_{\mathrm{ssim}}\) refers to the loss evaluated by Structural SIMilarity (SSIM), while \(\mathcal{L}_{\mathrm{mssim}}\) corresponds to the loss evaluated by Multi-Scale Structural SIMilarity (MS-SSIM [26]). The computation formulas for SSIM and MS-SSIM are presented in formulas (7) and (8), respectively.
\[SSIM(x,y)=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{(\mu_{x}^{2}+\mu_{ y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})} \tag{7}\]
\[MS-SSIM(x,y)=\prod\limits_{\mathrm{m}=1}^{M}{(\frac{2\mu_{x}\mu_{y}+C_{1}}{ \mu_{x}^{2}+\mu_{y}^{2}+C_{1}})^{\beta_{m}}(\frac{2\sigma_{xy}+C_{2}}{\sigma_ {x}^{2}+\sigma_{y}^{2}+C_{2}})^{\gamma_{m}}} \tag{8}\]
In the aforementioned equation, \(\mu_{x}\) and \(\sigma_{x}\) denote the mean and variance of image x, while \(\mu_{y}\) and \(\sigma_{y}\) represent the mean and variance of image y, respectively. Additionally, \(\sigma_{xy}\) denotes the covariance of images x and y. The variable M denotes the number of scales, and \(\beta_{m}\) and \(\gamma_{m}\) weight the relative importance of the terms at each scale; in this study, \(\beta_{m}\) and \(\gamma_{m}\) are both set to 1. The constants are defined as \(C_{1}=(k_{1}L)^{2}\) and \(C_{2}=(k_{2}L)^{2}\), with \(k_{1}=0.01\) and \(k_{2}=0.03\), where L refers to the dynamic range of pixel values and is set to 255.
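For concreteness, a simplified sketch of the combined loss in formulas (5)–(8) is given below. It computes SSIM from global image statistics and approximates MS-SSIM by repeatedly average-pooling the images, which is a simplification of the usual windowed formulation; the default weights follow the values \(\alpha=0.3\), \(\beta=0.3\), \(\gamma=0.5\), \(\mu=0.5\) reported in the experimental setup.

```python
import torch
import torch.nn.functional as F

def global_ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """SSIM of formula (7) using global image statistics (no sliding window)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def ms_ssim(x, y, scales=3):
    """Multi-scale variant in the spirit of formula (8), with beta_m = gamma_m = 1.
    x, y are (B, C, H, W) tensors; each scale halves the resolution."""
    vals = []
    for _ in range(scales):
        vals.append(global_ssim(x, y))
        x, y = F.avg_pool2d(x, 2), F.avg_pool2d(y, 2)
    return torch.stack(vals).prod()

def watermark_loss(k, k_emb, w, w_ext, alpha=0.3, beta=0.3, gamma=0.5, mu=0.5):
    """Combined carrier loss (5) and watermark loss (6)."""
    l_e = alpha * F.mse_loss(k, k_emb)                    # carrier image content
    l_d = beta * F.mse_loss(w, w_ext) \
          + gamma * (1 - global_ssim(w, w_ext)) \
          + mu * (1 - ms_ssim(w, w_ext))                  # watermark image content
    return l_e + l_d
```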
## III Experimental results and analysis
### _Experimental setup_
For this experiment, the NeRF model is trained using the NeRF-synthetic dataset, which includes images of Lego, Chair, Ship, and other objects, while the ImageNet dataset is utilized for selecting random images as watermark images for testing purposes. The NeRF-synthetic dataset contains images captured from various angles of each model, accompanied by the respective camera parameters. The ImageNet dataset is a large dataset comprising 14,197,122 images, of which 100 images are chosen as watermark images for this study. The experiment was conducted on a computer running Windows 11 with 64 GB of memory. The embedded network and extraction network were optimized using the Adam optimizer. The input image size was uniformly adjusted to 256\(\times\)256 pixels, with a learning rate of 0.0001 and a total of 50,000 epochs. The parameters of the loss function were set as follows: \(\alpha\) = 0.3, \(\beta\) = 0.3, \(\gamma\) = 0.5, and \(\mu\) = 0.5.
### _Imperceptibility_
Imperceptibility, also referred to as perceptual transparency, implies that there should be no perceivable difference between the embedded watermark image and the original image. In general, Mean Squared Error (MSE) is employed to evaluate the quality of image alterations, Peak Signal-to-Noise Ratio (PSNR) is used to assess the compression quality of the image, and Structural SIMilarity (SSIM) measures the similarity between the embedded watermark image and the original image. Additionally, LPIPS [27] was employed to evaluate the perception loss and image quality during the evaluation process. PSNR is calculated based on MSE. Higher PSNR and SSIM are indicative of better quality, while a lower LPIPS is desirable. The calculation formulas for MSE and PSNR are presented in formulas (9) and (10), respectively.
\[MSE=\frac{1}{W\times H}\sum\limits_{i=1}^{W}{\sum\limits_{j=1}^{H}{(X_{i,j}-Y_{ i,j})}}^{2} \tag{9}\]
\[PSNR=20\times\lg(sc)-10\times\lg(MSE) \tag{10}\]
In the context, X and Y represent two images of size WxH. The variable sc denotes the peak signal value, i.e., the maximum possible pixel value (255 for 8-bit images).
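As a quick reference, formulas (9) and (10) can be computed as in the following sketch, assuming 8-bit images with a peak value of 255.

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, peak: float = 255.0) -> float:
    """PSNR from formulas (9) and (10), assuming 8-bit images (peak = 255)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 20 * np.log10(peak) - 10 * np.log10(mse)
```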
Fig. 5: Watermark Extraction Network Structure.
1. _Image quality:_ The experimental results of this algorithm are presented in the table. The PSNR, SSIM, and LPIPS values reported in the table for both the embedded networks and the extraction networks represent their respective mean experimental results. As evident from the table, the embedded network and extraction network exhibit high PSNR and SSIM values, while demonstrating low LPIPS values. These findings indicate that the algorithm proposed in this paper generates embedded watermark images that possess high imperceptibility, as reflected by the extracted watermark images.
2. _Image effect:_ In this experiment, an image was embedded in each of the Lego, Chair, Ship, Drums, Hotdog, and Materials datasets. Subsequently, these images, along with others, underwent training using the NeRF algorithm. After training, watermark information was extracted from the newly rendered view images. The visual outcomes of the embedded watermark image and the extracted watermark image are depicted in Figure 7 and Figure 8, respectively.
3. _Normalized correlation coefficient:_ To quantitatively assess the similarity between the extracted watermark image and the original watermark image, the normalized correlation coefficient is employed as the evaluation criterion. Its definition is presented in formula (12). \[NC(w,w^{\prime})=\frac{\sum\limits_{x=1}^{M}\sum\limits_{y=1}^{N}w(x,y)w^{\prime}(x,y)}{\sqrt{\sum\limits_{x=1}^{M}\sum\limits_{y=1}^{N}w(x,y)^{2}}\sqrt{\sum\limits_{x=1}^{M}\sum\limits_{y=1}^{N}w^{\prime}(x,y)^{2}}}\] (12) Here, w(x,y) refers to the original watermark image, w'(x,y) represents the extracted watermark image, and M and N denote the image resolution. The normalized correlation coefficient (NC) signifies the similarity between the original image and the extracted watermark image, ranging from 0 to 1; a value closer to 1 indicates better robustness. In this experiment, various noise types were employed to attack the watermark-embedded image, followed by extraction of the watermark image and calculation of the corresponding NC. The normalized correlation coefficient of the extracted watermark image following each attack is depicted in Figure 9. Upon analyzing the findings presented in the figure, it is evident that the proposed watermarking scheme exhibits commendable anti-interference capability.
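A minimal sketch of the NC computation in formula (12) is given below; it assumes the two watermark images are provided as arrays of identical size.

```python
import numpy as np

def normalized_correlation(w: np.ndarray, w_ext: np.ndarray) -> float:
    """Normalized correlation coefficient of formula (12)."""
    num = np.sum(w.astype(np.float64) * w_ext.astype(np.float64))
    den = np.sqrt(np.sum(w.astype(np.float64) ** 2)) * np.sqrt(np.sum(w_ext.astype(np.float64) ** 2))
    return float(num / den) if den > 0 else 0.0
```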
### _The effectiveness of the secret perspective_
To evaluate the efficacy of the hidden perspective, this study investigates the impact of images generated from different viewpoints on the watermark extraction performance. For this experiment, a specific image from the Lego dataset was selected as the host image for watermark embedding, while an elephant image from the image dataset was chosen as the watermark image. Images rendered using the NeRF technique under various perspectives were inputted into the extractor for evaluation. The experimental outcomes are depicted in Figure 10.
The experimental results indicate that as the rotation angle increases, the extracted watermark image progressively becomes more blurred, eventually reaching a point where the watermark image extraction is no longer feasible. To quantitatively analyze the impact of the rotation angle on watermark extraction, the values of PSNR, SSIM, and LPIPS are also provided, representing the comparison between the extracted watermark and the original image. These results are presented in the table. The angle variation specified in the table corresponds to the rotation angle around the central z-axis.
Based on the experimental data, it is observed that smaller rotation angles yield higher PSNR and SSIM values for the extracted watermark, demonstrating superior similarity to the original watermark image. Additionally, the lower LPIPS values further support the notion of the proposed algorithm's efficacy in safeguarding model copyright.
## IV Conclusion
This paper introduces the novel concept of neural implicit representation watermarking, deploying neural network watermarking technology to protect neural networks with implicit data representations. A watermarking algorithm specific to neural radiance fields is devised. Initially, the embedded network is employed to embed watermarks into images within the training dataset. Then, NeRF is utilized to generate the 3D model and render images from novel perspectives, enabling the watermark extraction process. Experimental results demonstrate the algorithm's security, as the key is chosen from an infinite space of viewing angles. The extracted watermark not only exhibits good visual quality but is also robust. In future work, enhancements to the extraction network structure can be pursued to diminish the likelihood of extracting watermark information from adjacent view images, further strengthening the algorithm's security.
|
2309.09108 | Neural Network-based Fault Detection and Identification for Quadrotors
using Dynamic Symmetry | Autonomous robotic systems, such as quadrotors, are susceptible to actuator
faults, and for the safe operation of such systems, timely detection and
isolation of these faults is essential. Neural networks can be used for
verification of actuator performance via online actuator fault detection with
high accuracy. In this paper, we develop a novel model-free fault detection and
isolation (FDI) framework for quadrotor systems using long-short-term memory
(LSTM) neural network architecture. The proposed framework only uses system
output data and the commanded control input and requires no knowledge of the
system model. Utilizing the symmetry in quadrotor dynamics, we train the FDI
for fault in just one of the motors (e.g., motor $\# 2$), and the trained FDI
can predict faults in any of the motors. This reduction in search space enables
us to design an FDI for partial fault as well as complete fault scenarios.
Numerical experiments illustrate that the proposed NN-FDI correctly verifies
the actuator performance and identifies partial as well as complete faults with
over $90\%$ prediction accuracy. We also illustrate that model-free NN-FDI
performs at par with model-based FDI, and is robust to model uncertainties as
well as distribution shifts in input data. | Kunal Garg, Chuchu Fan | 2023-09-16T22:59:09Z | http://arxiv.org/abs/2309.09108v1 | # Neural Network-based Fault Detection and Identification for Quadrotors using Dynamic Symmetry
###### Abstract
Autonomous robotic systems, such as quadrotors, are susceptible to actuator faults, and for the safe operation of such systems, timely detection and isolation of these faults is essential. Neural networks can be used for verification of actuator performance via online actuator fault detection with high accuracy. In this paper, we develop a novel model-free fault detection and isolation (FDI) framework for quadrotor systems using long-short-term memory (LSTM) neural network architecture. The proposed framework only uses system output data and the commanded control input and requires no knowledge of the system model. Utilizing the symmetry in quadrotor dynamics, we train the FDI for fault in just one of the motors (e.g., motor \(\#2\)), and the trained FDI can predict faults in any of the motors. This reduction in search space enables us to design an FDI for partial fault as well as complete fault scenarios. Numerical experiments illustrate that the proposed NN-FDI correctly verifies the actuator performance and identifies partial as well as complete faults with over \(90\%\) prediction accuracy. We also illustrate that model-free NN-FDI performs at par with model-based FDI, and is robust to model uncertainties as well as distribution shifts in input data.
## I Introduction
Safety-critical systems are those where violation of safety constraints could result in loss of lives, significant property damage, or damage to the environment. In real-life applications, many cyber-physical control systems are safety-critical, including autonomous cars, unmanned aerial vehicles (UAVs), and aircraft, where safety pertains to keeping the autonomous agent in a predefined safe set away from obstacles and other agents in its environment. In this context, safe control requires finding a control policy that keeps the system within the safe region at all times. As autonomous systems become more complex (thus increasing the likelihood of faults [1]), it becomes necessary to explicitly consider the possibility of faults in their actuators which can make it difficult (or even impossible in certain cases) to keep the system safe. Many real-world flight incidents have been attributed to actuator failures such as runaway, sticking, and floating [2]. Such failures have been studied in the field of fault-tolerant control (FTC), which has been applied extensively to applications such as aircraft [3, 4, 5, 6], and spacecraft attitude controls [7, 8].
There is a long history of work in control theory dealing with adaptation to system faults and verification of safe controllers. Due to space limits, we only discuss common FTC techniques for actuator faults. Classical methods include robust control [9]; more recent works include robust MPC [10] and quantitative resilience [11]. In this paper, we focus on safe fault-tolerant control, where the salient issue is in ensuring that the system will avoid entering an unsafe set despite actuator faults such as loss of control authority or unknown input disturbances. FTC methods are, in general, classified into two categories: active and passive. Active FTC uses detection techniques and a supervisory system to detect the fault and modify the control structure accordingly. Passive FTC relies on a robust compensator to reduce the effects of faults. For a more in-depth explanation of the passive and active FTC theory, see [2]. Many FTC approaches have been presented in the literature to accommodate actuator faults. Fuzzy logic control [12] and data-driven approaches such as [1] are used for compensating unknown nonlinear dynamics and actuator faults in multi-agent systems. Feedback linearizing control [13, 5], model predictive control [3, 10], sliding mode control [4], and adaptive sliding mode control [6] have been implemented on various nonlinear systems under faults such as a quadrotor subject to one or more motor failures. Adaptive control [7] and robust adaptive control [8] were studied for linear systems under actuator fault, model uncertainty, and external disturbance. Adaptive fuzzy FTC is presented in [14] for actuator faults in Markov jump systems. However, FTC-based approaches can be conservative without accurate identification of the source of the fault. Thus, it is essential to design a highly reliable fault detection and identification (FDI) framework that can predict a fault with high accuracy.
There is a plethora of work on FDI; we refer interested readers to the survey articles [15, 16, 17] that discuss various approaches of FDI used in the literature. In particular, the residual-based method has been used very commonly in prior work, where the expected state or output (under the commanded input and a known system model) and the actual state or output of the system are compared for fault detection. The authors in [18] study FDI for a linear parameter-varying (LPV) system and use an observer-based FDI that uses the residual data. Such _residual_ information requires the knowledge of the system model, and thus, is model-dependent. Another example of a model-based approach is [19] where model-based output-residual are used instead of state-residuals. The work in [20] is capable of handling partial faults but not complete faults in an actuator. Most of the work on adaptation-based FDI uses a linearized model for the system [21, 22].
Neural network (NN)-based verification and system monitoring have been successfully used for FDI. The architecture of the considered NN is very important for such verification problems. Fault detection using system trajectory data can
be interpreted as anomaly detection in time-series data, and hence, long-short-term memory (LSTM)-based NNs become a natural choice for FDI as they can effectively handle time-series data [23, 24, 25]. There is some work on using LSTM-based FDI, e.g., [26, 27], but it is limited to a very narrow class of faults. As noted in [28], prior work on neural network-based model-free FDI relies on reconstruction of the model using artificial neural networks (e.g., [29]), or generating the residual information using Kalman filtering (see e.g., [30]). The method in [28] also estimates a reduced-order model of the system as an intermediate step. The main disadvantage of model-based FDI methods is that their performance can degrade significantly due to model uncertainties or imperfections in the used model for designing the FDI mechanism and the actual system model.
To overcome this limitation, in this paper, we design a _model-free_ FDI mechanism that only uses the output of the system and the commanded input to the system, and does not use the residual information. The paper's contributions are summarized below:
* We present a _truly_ model-free approach, where we do not require to either learn the system model or create a reduced-order representation of the model. Instead, we use the system output and the commanded input as the features of a neural network, which directly predicts whether there is an actuator fault.
* We consider a variety of partial fault scenarios and leverage the symmetry in quadrotor dynamics to reduce the search space for training and design an NN-FDI trained on the failure of just one motor that is capable of predicting fault in any of the quadrotor motors.
* We illustrate through numerical experiments that the model-free FDI mechanism performs at par (and even better in some cases than) the model-based mechanisms.
* We also illustrate the robustness of the proposed method against modeling uncertainties and demonstrate through numerical examples that while the performance of the model-based FDI mechanism drops significantly under model uncertainties, the performance of the designed model-free approach remains the same.
## II Problem formulation
We start by presenting the quadrotor dynamics. The quadrotor dynamics can be written compactly as:
\[\dot{x} =f(x)+g(x)u, \tag{1a}\] \[y =\rho(x), \tag{1b}\]
for state \(x\in\mathcal{X}\), control input \(u\in\mathcal{U}\), and state and control sets \(\mathcal{X}\subset\mathbb{R}^{n}\) and \(\mathcal{U}\subset\mathbb{R}^{m}\), respectively. Here, \(\rho:\mathbb{R}^{12}\rightarrow\mathbb{R}^{6}\) is the output map consisting of the position and the attitude vector of the quadrotor, i.e., \(y=(p_{x},p_{y},p_{z},\phi,\theta,\psi)\). Such an output model is realized using a 6DOF Inertial Measurement Unit (IMU) output. The 6-DOF quadrotor dynamics are given in [31] with \(x\in\mathbb{R}^{12}\) consisting of positions, velocities, angular positions and angular velocities, and \(u\in\mathbb{R}^{4}\) consisting of the thrust at each of four motors :
\[\dot{p}_{x} =\big{(}c(\phi)c(\psi)s(\theta)+s(\phi)s(\psi)\big{)}w\] \[\quad-\big{(}s(\psi)c(\phi)-c(\psi)s(\phi)s(\theta)\big{)}v+uc( \psi)c(\theta) \tag{2a}\] \[\dot{p}_{y} =\big{(}s(\phi)s(\psi)s(\theta)+c(\phi)c(\psi)\big{)}v\] \[\quad-\big{(}c(\psi)s(\phi)-s(\psi)c(\phi)s(\theta)\big{)}w+us( \psi)c(\theta)\] (2b) \[\dot{p}_{z} =w\ c(\psi)c(\phi)-u\ s(\theta)+v\ s(\phi)c(\theta)\] (2c) \[\dot{u} =r\ v-q\ w+g\ s(\theta)\] (2d) \[\dot{v} =p\ w-r\ u-g\ s(\phi)c(\theta)\] (2e) \[\dot{w} =q\ u-p\ v+\frac{U_{1}}{m}-g\ c(\theta)c(\phi)\] (2f) \[\dot{\phi} =r\frac{c(\phi)}{c(\theta)}+q\frac{s(\phi)}{c(\theta)}\] (2g) \[\dot{\theta} =q\ c(\phi)-r\ s(\phi)\] (2h) \[\dot{\psi} =p+r\ c(\phi)t(\theta)+q\ s(\phi)t(\theta)\] (2i) \[\dot{r} =\frac{1}{I_{zz}}\big{(}U_{2}-pq(I_{yy}-I_{xx})\big{)}\] (2j) \[\dot{q} =\frac{1}{I_{yy}}\big{(}U_{3}-pr(I_{xx}-I_{zz})\big{)}\] (2k) \[\dot{p} =\frac{1}{I_{xx}}\Big{(}U_{4}+qr(I_{zz}-I_{yy})\Big{)} \tag{2l}\]
where \(m,I_{xx},I_{yy},I_{zz},k_{r},k_{t}>0\) are system parameters, \(g=9.8\) is the gravitational acceleration, \(c(\cdot),s(\cdot),t(\cdot)\) denote \(\cos(\cdot),\sin(\cdot),\tan(\cdot)\), respectively, \((p_{x},p_{y},p_{z})\) denote the position of the quadrotor, \((\phi,\theta,\psi)\) its Euler angles and \(u=(U_{1},U_{2},U_{3},U_{4})\) the input vector consisting of thrust \(U_{1}\) and moments \(U_{2},U_{3},U_{4}\).
The relation between the vector \(u\) and the individual motor speeds is given as
\[\begin{bmatrix}U_{1}\\ U_{2}\\ U_{3}\\ U_{4}\end{bmatrix}=\begin{bmatrix}C_{T}&C_{T}&C_{T}&C_{T}\\ -dC_{T}\sqrt{2}&-dC_{T}\sqrt{2}&dC_{T}\sqrt{2}&dC_{T}\sqrt{2}\\ -dC_{T}\sqrt{2}&dC_{T}\sqrt{2}&dC_{T}\sqrt{2}&-dC_{T}\sqrt{2}\\ -C_{D}&C_{D}&-C_{D}&C_{D}\end{bmatrix}\begin{bmatrix}\omega_{1}^{2}\\ \omega_{2}^{2}\\ \omega_{3}^{2}\\ \omega_{4}^{2}\end{bmatrix}, \tag{3}\]
where \(\omega_{i}\) is the angular speed of the \(i\)-th motor for \(i\in\{1,2,3,4\}\), \(C_{D}\) is the drag coefficient and \(C_{T}\) is the thrust coefficient. These parameters are given as: \(I_{xx}=I_{yy}=1.395\times 10^{-5}\) kg-m\({}^{2}\), \(I_{zz}=2.173\times 10^{-5}\) kg-m\({}^{2}\), \(m=0.0299\) kg, \(C_{T}=3.1582\times 10^{-10}\) N/rpm\({}^{2}\), \(C_{D}=7.9379\times 10^{-12}\) N/rpm\({}^{2}\) and \(d=0.03973\) m (see [31]).
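As an illustration, the mixing in formula (3) with the parameter values listed above can be written as the following sketch; the function name and the use of NumPy are incidental choices, and the matrix entries follow formula (3) as printed.

```python
import numpy as np

# Crazyflie parameters listed above
C_T = 3.1582e-10   # thrust coefficient [N/rpm^2]
C_D = 7.9379e-12   # drag coefficient  [N/rpm^2]
d = 0.03973        # arm length [m]

def motor_speeds_to_wrench(omega: np.ndarray) -> np.ndarray:
    """Map squared motor speeds (omega, shape (4,)) to (U1, U2, U3, U4) via formula (3)."""
    a = d * C_T * np.sqrt(2)
    M = np.array([
        [ C_T,  C_T,  C_T,  C_T],
        [-a,   -a,    a,    a  ],
        [-a,    a,    a,   -a  ],
        [-C_D,  C_D, -C_D,  C_D],
    ])
    return M @ (omega ** 2)
```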
In this paper, we consider an actuator fault occurring at some unknown time \(t_{f}\geq 0\):
\[u(t,x)=\begin{cases}\pi(t,x)&\text{if}\quad t\leq t_{F};\\ \text{diag}(\Theta)\ \pi(t,x)&\text{if}\quad t>t_{F},\end{cases}, \tag{4}\]
where \(\Theta\in[0,1]^{m}\subset\mathbb{R}^{m}\) is the vector denoting whether an actuator is faulty or not, and \(\text{diag}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m\times m}\) maps a vector in \(\mathbb{R}^{m}\) to a diagonal matrix in \(\mathbb{R}^{m\times m}\). If the \(i\)-th actuator is faulty, then \(\Theta_{i}\in[0,1)\) and the rest of the elements of \(\Theta\) are 1. The problem statement is to design an NN-based FDI \(\Theta_{NN}\) that correctly predicts and identifies which actuator has a fault and what the degree of the fault is.
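A small sketch of the fault model in equation (4) is given below; the helper name and the example fault vector are illustrative only.

```python
import numpy as np

def apply_fault(u_cmd: np.ndarray, theta: np.ndarray, t: float, t_f: float) -> np.ndarray:
    """Actuator fault model of equation (4).

    u_cmd : commanded inputs pi(t, x) for the four motors
    theta : fault vector in [0, 1]^4 (1 = healthy, <1 = partial loss, 0 = total loss)
    t_f   : unknown fault time
    """
    return u_cmd if t <= t_f else np.diag(theta) @ u_cmd

# e.g., a 60% effectiveness loss in motor #2 after t_f:
# u_applied = apply_fault(u_cmd, np.array([1.0, 0.4, 1.0, 1.0]), t, t_f)
```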
## III Neural Fault-detection and Isolation
### _Model-free FDI_
The faults must be detected correctly and promptly for the safe recovery of the system. We use a learning-based approach to design a fault-detection mechanism. Let \(\Theta\in[0,1]^{m}\) denote the fault vector, where \(\Theta_{i}<1\) indicates that the \(i\)-th actuator is faulty, while \(\Theta_{i}=1\) denotes it is not faulty. Let \(\Theta_{NN}:\mathbb{Y}\times\mathbb{U}\rightarrow\mathbb{R}^{m}\) be the _predicted_ fault vector, parameterized as a neural network. Here, \(\mathbb{Y}=\{y(\cdot)\mid y(\cdot)=\rho(x(\cdot))\}\) is a function space consisting of output trajectories of the system, and \(\mathbb{U}=\{u(\cdot)\mid u(\cdot)\in\mathcal{U}\}\) is a function space consisting of input signals. To generate the residual data, knowledge of the system model is essential, which makes the residual-based approach model dependent. This is the biggest limitation of that approach, as modeling errors can lead to severe performance issues in fault detection due to model uncertainties. To overcome this, we propose a model-free NN-based FDI mechanism that only uses \((y,u)\) as the feature data, i.e., it does not require the model-based residual information. For a given time length \(T>0\), at any given time instant \(t\geq T\), the NN function \(\Theta_{NN}\) takes a finite trace of the system output \(y(t-\tau)|_{\tau=0}^{T}\) and the _commanded_ input signal \(u(t-\tau)_{\tau=0}^{T}\) as input, and outputs the vector of predicted faults.
Using the symmetry of the quadrotor (i.e., \(I_{xx}=I_{yy}\)), it is possible to only learn the fault-detector for one of the faulty actuators and detect which motor is faulty using rotational invariance. Let us define color-coding for the four motors in the original configuration (i.e., case 1):
\[\#1\rightarrow\text{Black}\quad\#2\rightarrow\text{Green}\quad\#3 \rightarrow\text{Red}\quad\#4\rightarrow\text{Blue}\]
During the training, without loss of generality, we assume that the green motor is faulty. Now, if instead, another motor is faulty, then a state-transformation map can be defined as
\[\Phi(n)=\begin{bmatrix}R_{\theta}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&R_{\theta}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&R_{\theta}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&R_{\theta}\end{bmatrix},\quad R_{\theta}=\begin{bmatrix}\cos(\theta)&\sin(\theta)&0\\ -\sin(\theta)&\cos(\theta)&0\\ 0&0&1\end{bmatrix},\]

with \(\theta=\frac{\pi}{2},\pi,\frac{3\pi}{2}\) for \(n=3,4,1\), respectively. Thus, for case #4, the black motor acts as motor #2 in the original configuration (see Figure 1). As a result, if the black motor is faulty, and the fault-predictor is trained to detect that motor #2 is faulty (i.e., green motor in the original configuration), then case 4 will give the correct prediction (see Figure 3).
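A minimal sketch of the transformation \(\Phi(n)\) is shown below, assuming the 12-dimensional state is ordered as four 3-vectors matching the block structure above; the decision rule with threshold 0.2 is the one described later in Section IV.

```python
import numpy as np

def rotation_map(n: int) -> np.ndarray:
    """Block-diagonal transformation Phi(n): rotates the 12-dim state so that
    motor #n plays the role of motor #2 used during training (n in {1, 3, 4})."""
    theta = {3: np.pi / 2, 4: np.pi, 1: 3 * np.pi / 2}[n]
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    return np.kron(np.eye(4), R)          # diag(R, R, R, R)

# A fault in, e.g., motor #4 is detected by feeding the rotated trajectory
# rotation_map(4) @ x to the FDI trained only on motor-#2 faults; if
# min_i Theta_NN < 0.2 under rotation n, motor #n is declared faulty.
```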
### _Model-based FDI_
For model-based FDI mechanisms, the residual data is also required as an additional feature to the NN. The error vector \(\tilde{y}\) (commonly known as the residual in the FDI literature) is defined as the stepwise error between the actual output of the system with potentially faulty actuators and the output of the system assuming no faults, i.e., \(\tilde{y}(t)=y(t)-\bar{y}(t)\), where \(y\) is the output of the actual system (1) with faulty input and \(\bar{y}\) is the output of the fault-free reference model \(\dot{\bar{x}}=f(\bar{x})+g(\bar{x})u\), \(\bar{y}(t)=\rho(\bar{x}(t))\), with \(\bar{y}(k\tau)=y(k\tau)\), \(k=0,1,2,\dots\), where \(\tau>0\) is the sampling period for data collection. In most of the prior literature, either \(\tilde{x}\) (in state-based methods) or \(\tilde{y}\) (in output-based methods) is used for designing the FDI. However, that requires the availability of the model for computing the residuals, which makes such approaches very restrictive.
### _Training data_
For training, the trajectory data is collected where actuator \(\#2\) is partially faulty with \(\Theta_{2}\in\{0,0.1,0.2,\cdots,0.9,1\}\). Let \(d=11\) denote the number of fault scenarios for motor \(\#2\). At each time instant \(t\geq T\), it is possible that only a portion of the trajectory is generated under a faulty actuator. That is, the possible input to the system is \(u(t-\tau,T_{f})_{\tau=0}^{T}\coloneqq[u(t-T),u(t-T+1),\cdots,u_{f}(t-T+T_{f}),u_{f}(t-T+T_{f}+1),\cdots,u(t)]\) (with the corresponding system output \(y(t-\tau,T_{f})_{\tau=0}^{T}\) and the residual \(\tilde{y}(t-\tau,T_{f})_{\tau=0}^{T}\)), where \(T_{f}\in[0,T]\) dictates the time instant when the fault occurs. Thus, the NN for fault prediction must be trained on all possible combinations of fault occurrence. Hence, our training data includes \(\bigcup\limits_{T_{f}=0}^{T}\big{(}y(t-\tau,T_{f})_{\tau=0}^{T},\tilde{y}(t-\tau,T_{f})_{\tau=0}^{T},u(t-\tau,T_{f})_{\tau=0}^{T}\big{)}\) (see Figure 2). In every training iteration, we generate \(N_{traj}=N_{1}\times d\times T_{f}\) trajectories, so that we have \(N_{1}>0\) trajectories for each of the \(d\) fault values in motor \(\#2\) with all possible lengths of trajectories under one faulty actuator in \([0,T_{f}]\). In particular, we consider discrete fault values \(\Theta_{2}\in\{0,0.1,0.2,\cdots,0.9\}\) and thus, \(d=11\) with 10 values for faults and 1 for non-faulty cases. The training data is generated by randomly choosing initial conditions \(\{x(0)\}_{1}^{N_{1}}\) and rolling out \(d\) trajectories for each
Fig. 1: The four cases used in fault-prediction. Each next case is generated through a 90 degrees anti-clockwise axes rotation from the previous model (resulting in a 90-degree clockwise configurational rotation).
Fig. 2: General neural-network architecture for failure prediction. The training data includes all possible trajectories with different lengths of faulty input (the violet color represents the portion of the trajectory with faulty input).
of the sampled conditions under \(d\) possible \(\Theta_{2}\) values. This enables the NN to distinguish between various kinds of fault scenarios since it obtains trajectories under all considered fault scenarios with the same initialization.
The loss function for model-free FDI training is defined as
\[\mathcal{L}_{\Theta}^{MF}=\frac{1}{N_{traj}}\sum_{j=1}^{N_{traj}} \Big{[}\|\Theta_{j,NN}\big{(}y_{j}(\cdot),u_{j}(\cdot)\big{)}-\Theta_{j}\|- \epsilon\Big{]}_{+}, \tag{5}\]
while model-based FDI is given as:
\[\mathcal{L}_{\Theta}^{MB}=\frac{1}{N_{traj}}\sum_{j=1}^{N_{traj}} \Big{[}\|\Theta_{j,NN}\big{(}y_{j}(\cdot),u_{j}(\cdot),\tilde{y}_{j}(\cdot) \big{)}-\Theta_{j}\|-\epsilon\Big{]}_{+}, \tag{6}\]
where \(\Theta_{j}\in\{1\}\times[0,1]\times\{1\}\times\{1\}\) is the fault vector used for generating the data for the \(j-\)th trajectory and \(0<\epsilon\ll 1\). In each training epoch, we generate \(N=200\times 11\times 100\) trajectories of length \(T_{f}\) and maintain a buffer of \(1.5\) M trajectories. We train the NN until the loss reduces to \(10^{-3}\). We use a Linear-Quadratic Regulator (LQR) input to generate the training data. In our experiments, we illustrate that the trained NN is highly robust to the kind of input used for trajectory generation and can predict fault with the same accuracy for the trajectories generated by Control Barrier Functions (CBF)-based quadratic programs (QPs) which are commonly used for maintaining safety [32]. During training, we optimize the loss function using stochastic gradient descent, and we train the pre- and post-fault networks separately. The number of trajectories in the buffer is capped at \(N_{buf}\) so that once the maximum number of trajectories are collected, the earlier trajectories are dropped from the buffer. The training is performed either till the number of iterations reaches \(N_{M}>0\), or the loss drops below \(10^{-3}\) after at least \(N_{m}<N_{M}\) training epochs. During each training epoch, we use a batch size of 50000 trajectories and perform 500 iterations of training on all the buffer data. The learning algorithm is summarized in Algorithm 1
```
Data: \(iter_{M},N,N_{bs},N_{buf},N_{m}\)
Result: \(\Theta_{NN}\)
Initialize \(\Theta_{NN}\) as an NN;   /* LSTM-NN */
\(\{y,u,\tilde{y}\}_{buf}=\emptyset\)
while \(iter\leq iter_{M}\) or (loss \(<10^{-3}\) and \(iter\geq N_{m}\)) do
    Sample \(\{x_{0}\}_{1}^{N_{1}}\)
    for \(i\in N_{1}\) do
        Roll out \(d\) trajectories under \(\Theta_{2}\in[0,1]\)
        \(\{y,u,\tilde{y}\}_{buf}=\{y,u,\tilde{y}\}_{buf}\cup\{y,u,\tilde{y}\}_{1}^{N_{1}}\)
    end for
    \(\{z\}=\{y,u,\tilde{y}\}_{buf}\)
    \(\{z\}_{train}=\{z\}[-N_{buf}:]\)
    while \(\{z\}_{train}\neq\emptyset\) do
        \(\{z\}_{train,bs}\) = Sample \(N_{bs}\) trajectories from \(\{z\}_{train}\)
        loss = \(\mathcal{L}_{\Theta}^{MF}\)   /* \(\mathcal{L}_{\Theta}^{MB}\) for model-based */
        train \(\Theta_{NN}\)
        \(\{z\}_{train}=\{z\}_{train}\setminus\{z\}_{train,bs}\)
    end while
end while
```
**Algorithm 1** Learning framework for FDI
## IV Numerical evaluations
The primary objective of our numerical experiments is to evaluate the effectiveness of our method in terms of fault detection. We consider an experimental case study involving the Crazyflie quadrotor with a fault in motor \(\#2\). First, we evaluate the correctness of the fault prediction by the 4 cases explained in Figure 1. A fault is predicted if \(\min\limits_{i}\Theta_{i,NN}(\Phi_{n}(x))<\Theta_{tol}\), where \(\Theta_{tol}=0.2\). In this case, we only consider the case when \(\Theta_{i}=0\). The predicted faulty actuator is given by the \(\arg\min\) of \(\Theta_{NN}(\Phi_{n^{*}}(x))\), where \(n^{*}\) is the rotation index for which \(\Theta_{i,NN}\) is below the tolerance. The experiments are run to check the prediction accuracy of the NN-based FDI mechanism for various lengths of data with failed actuators between 0 and \(T_{f}=100\). We compute the prediction accuracy when there is a fault as well as when there is no fault. We sample 10000 initial conditions randomly from the safe set \(\mathcal{X}_{safe}\) to generate trajectories for test data, where 2000 trajectories are generated for each of the faults and 2000 trajectories are generated without any fault. Each trajectory is generated for 200 epochs with fault occurring at \(t=100\). We feed the moving trajectory data \((x(k-100,k),u(k-100,k),\tilde{x}(k-100,k)\) to the trained NN-based FDI starting from \(k=100\). For a given \(k\in[100,200]\), the portion of trajectory data with faulty actuator is \(k-100\). Figure 3 illustrates that the
Fig. 3: Failure prediction accuracy for faults in different motors: **Top-left**: Failure in motor #1, **Top-right**: Failure in motor #2, **Bottom-right**: Failure in motor #3 and **Bottom-left**: Failure in motor #4. This illustrates that with one trained FDI, it is possible to predict failure in any of the actuators with high prediction accuracy.
prediction accuracy for correct fault prediction for each of the cases is above 80%. This illustrates that it is possible to effectively predict faults in all motors with an FDI trained with faults in just one of the motors.
Next, we evaluate prediction accuracy for different fault values \(\Theta_{2}\in[0,1]\). For this experiment, we compare the performance of two different types of NN architectures for model-free FDIs, namely, a multi-layer perceptron (MLP) with 1 input layer, 4 hidden layers, and 1 output layer and a long-short-term memory (LSTM) where the LSTM layer is followed by 2 linear layers. In each of the NN architectures, we use \(N\times\)128 as the size of the input layer with \(N\) being the size of the features, hidden layer(s) of size 128\(\times\)128 followed by a hidden layer of size 128\(\times\)64 and an output layer of size 64\(\times m\). Note that \(N=(2p+m)\times T_{f}\) for the FDI with all the data, \(p\times T_{f}\) for the FDI with just the residual data, and \(N=(p+m)\times T_{f}\) for the model-free FDI mechanism. Figure 4 plots the prediction accuracy for various values of \(\Theta_{2}\) for the two considered NN architectures. It can also be observed that LSTM-based NN FDI can accurately identify each of the faults while MLP-based FDI has a very low prediction accuracy.
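For illustration, a minimal LSTM-based FDI model in the spirit of the architecture described above might look as follows; feeding the \((y,u)\) window step by step, the 128-dimensional recurrent state, and the sigmoid squashing the output into \([0,1]^{m}\) are assumptions made for this sketch rather than confirmed implementation details.

```python
import torch
import torch.nn as nn

class LSTMFDI(nn.Module):
    """LSTM-based fault detector: takes a window of (y, u) samples and outputs
    a predicted fault vector Theta in [0, 1]^m."""

    def __init__(self, n_features: int, n_motors: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=128, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_motors), nn.Sigmoid(),   # fault values lie in [0, 1]
        )

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, T_f, n_features) with features [y_t, u_t] at each step
        _, (h_n, _) = self.lstm(window)
        return self.head(h_n[-1])
```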
We use an LQR input to generate the training data since solving a CBF-based QP is relatively slow for collecting a sufficient amount of training data. In our experiments, we illustrate that the trained NN is highly robust to the kind of input used for trajectory generation and can predict faults with the same accuracy for trajectories generated by CBF-based QPs. For this experiment, we compare the prediction accuracy of the model-free NN-FDI (\(\Theta_{NN}(y,u)\)) and the model-based FDI (\(\Theta_{NN}(y,u,\tilde{y})\)). Figure 5 shows the prediction accuracy of the model-based FDI. It can be seen that the model-free FDI mechanism performs on par with (and in some cases better than) the model-based FDI mechanism with features \((y,u,\tilde{y})\). Based on this observation, we can infer that a model-free FDI mechanism can be used with very high confidence. In this case, it is crucial to note that the trained fault predictor is highly robust with respect to the input data. In particular, to accelerate the learning process, a very simple LQR controller is used, where the nonlinear system dynamics are linearized about the origin, and a constant LQR gain is used. However, as can be seen from Figure 5, the prediction generalizes to the CBF-based QP controller just as well and has a similarly high prediction accuracy.
Finally, we study the effect of change in model parameters (such as the inertia matrix, etc.) on the prediction accuracy of the FDI mechanisms. For this experiment, we changed the system parameters by more than 40% (see Table I). As can be seen from Figure 6, the prediction accuracy of the model-free FDI mechanism is unaffected by a change in the model parameters, while that of the model-based FDI mechanism drops significantly. Thus, in the scenarios when a correct system model is not known or the system dynamics undergo changes during operation, a model-based FDI mechanism might not remain reliable. Figures 5 and 6 illustrate that the proposed model-free NN-FDI is agnostic to the type of input used for data generation as well as to perturbations in model parameters.
## V Conclusion
In this paper, we propose a learning method for effectively learning a model-free, output-based FDI for the prediction of a variety of partial losses of actuation in quadrotors. The proposed NN-based FDI verifies actuator performance with very high accuracy and correctly predicts a variety of faults. The numerical experiments demonstrate that the applicability of a model-based FDI mechanism is very limited, while that of the proposed model-free mechanism is quite broad and general. Additionally, the robustness to out-of-distribution input data illustrates that the proposed model-free mechanism can be easily trained on simple input data (e.g., LQR input), does not require model information, and generalizes to both out-of-distribution input data and changes in model parameters (i.e., modeling uncertainties).
As part of future work, we will explore methods that can incorporate more general fault models where the faulty actuator can take any arbitrary signal, and more than one actuator can undergo failure simultaneously. We will also explore applications of this framework to resilient control of networked and distributed control systems, which introduce additional notions of system failure, including loss of entire nodes or communication links in addition to input disturbances and loss of control authority.
|
2309.17336 | Robust 3D Object Detection from LiDAR-Radar Point Clouds via Cross-Modal
Feature Augmentation | This paper presents a novel framework for robust 3D object detection from
point clouds via cross-modal hallucination. Our proposed approach is agnostic
to either hallucination direction between LiDAR and 4D radar. We introduce
multiple alignments on both spatial and feature levels to achieve simultaneous
backbone refinement and hallucination generation. Specifically, spatial
alignment is proposed to deal with the geometry discrepancy for better instance
matching between LiDAR and radar. The feature alignment step further bridges
the intrinsic attribute gap between the sensing modalities and stabilizes the
training. The trained object detection models can deal with difficult detection
cases better, even though only single-modal data is used as the input during
the inference stage. Extensive experiments on the View-of-Delft (VoD) dataset
show that our proposed method outperforms the state-of-the-art (SOTA) methods
for both radar and LiDAR object detection while maintaining competitive
efficiency in runtime. Code is available at
https://github.com/DJNing/See_beyond_seeing. | Jianning Deng, Gabriel Chan, Hantao Zhong, Chris Xiaoxuan Lu | 2023-09-29T15:46:59Z | http://arxiv.org/abs/2309.17336v3 | # See Beyond Seeing: Robust 3D Object Detection from Point Clouds via Cross-Modal Hallucination
###### Abstract
This paper presents a novel framework for robust 3D object detection from point clouds via cross-modal hallucination. Our proposed approach is agnostic to either hallucination direction between LiDAR and 4D radar. We introduce multiple alignments on both spatial and feature levels to achieve simultaneous backbone refinement and hallucination generation. Specifically, spatial alignment is proposed to deal with the geometry discrepancy for better instance matching between LiDAR and radar. The feature alignment step further bridges the intrinsic attribute gap between the sensing modalities and stabilizes the training. The trained object detection models can deal with difficult detection cases better, even though only single-modal data is used as the input during the inference stage. Extensive experiments on the View-of-Delft (VoD) dataset show that our proposed method outperforms the state-of-the-art (SOTA) methods for both radar and LiDAR object detection while maintaining competitive efficiency in runtime.
## I Introduction
Robust recognition and localization of objects in 3D space is a fundamental computer vision task and an essential capability for intelligent systems. In the context of autonomous driving, accurate 3D object detection is vital for safe motion planning, especially in a complex urban environment. Due to accurate depth measurement in long-range and robustness to illumination conditions, ranging sensors such as LiDAR and radar have attracted increasing attention recently and in turn, make the point clouds from them one of the most commonly used data representations for 3D object detection.
Despite advancements in LiDAR-based [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and radar-based 3D object detection [11, 12], each exhibits inherent drawbacks. LiDAR excels at producing dense point clouds but lacks per-point semantic information. Even though some LiDAR sensors can output point-level intensity values [13, 14], they are often inconsistent across semantic labels, as LiDAR intensity is determined in a complex way by the distance between the object and the sensor rather than by the object type alone [15]. Conversely, the emerging 4D radars, while prone to sparse data, present valuable semantic information per point, such as the Radar Cross Section (RCS) and Doppler velocity. Unlike LiDAR intensity, RCS is a _distance-independent_ measurement uniquely determined by the object material and reflection angle. The Doppler velocity measures the moving speed of a detected point relative to the ego vehicle. Benefiting from this rich semantic information, fairly reasonable 3D object detection performance can be achieved using 4D radars in complex urban environments [12], even if some objects have fewer than 10 radar points.
Given the complementary nature of these sensors, we explore if one can assimilate the characteristics of the other while enhancing independent detection capacities. Indeed, due to the cost consideration, many low-end vehicles are only equipped with either radars or LiDARs. It is thus valuable to train a _single-modal_ detection model from the _multimodal_ data collected by some pilot vehicles equipped with both sensors, yet dispatch the trained models on low-end vehicles equipped with only one of them. Addressing this necessitates delving into cross-modal learning. Prior arts achieve this via either hallucination for feature augmentation [16, 17, 18, 19, 20] or knowledge distillation for backbone refinement [21, 22, 23, 24]. However, the learning direction in these prior works is _unidirectional_ passing from the strong modality to the weak modality (e.g., RGB camera \(\rightarrow\) depth camera), insufficient to meet our goal.
Unlike prior arts, in this work, we aim to realize a new cross-modal hallucination framework that can work in agnostic learning directions (i.e., LiDAR \(\leftrightarrow\) radar). To bridge the intrinsic sensing gap between two modalities, we introduce a novel selective matching module to enable feature-level alignment in the shared latent space. Our design enhances
Fig. 1: Fig. 0(a) illustrates the proposed method with the 4D radar input as an example. Fig. 0(b) - Fig. 0(e) are the visualization of radar detection in the same scene of different methods. Ground truth boxes are denoted in red, the wrong detections are denoted in yellow and the correct detections are denoted in green. RGB images are **only used for visualization**.
intra-modality features through cross-modal learning and supplements inter-modality features for improved detection robustness (see Fig. 1). Our method outperforms previous state-of-the-art (SOTA) methods in 3D object detection, which we demonstrate on the public View-of-Delft dataset in both LiDAR and radar object detection tasks. Our code will be publicly released upon acceptance.
## II Related work
**LiDAR object detection**. LiDAR 3D object detection is categorized into voxel-based, point-based, and point-voxel approaches. Voxel-based methods transform point cloud data into grid cells, suitable for convolution operations [1, 9, 25, 26]. For example, [1] converts point clouds into 2D pseudo-images for faster inference, while [9] adopts a two-stage process, improving detection at a higher memory cost. Point-based techniques directly process unstructured point cloud data, minimizing data loss [2, 7, 8, 27]. Models in this category, such as [2, 7], utilize methodologies from [28] and [29] to extract features. Recent research introduces point-voxel networks that combine the best of both methods for enhanced accuracy [3, 5, 10, 30]. As highlighted by [7], these networks, like [5], perform well on the KITTI dataset [13], though they require more computation resources.
**Radar object detection**. Early efforts in radar-based object detection focused on 2D object detection [31, 32, 33]. It is only with the recent availability of 4D radar sensors that radar 3D object detection began receiving attention from researchers. [11] is an anchor-based 3D detection framework with a PointNet style backbone. It was only evaluated on short-range radar point clouds within \(\sim\) 10 meters in front of the ego-vehicle, which is far too close for real-world autonomous scenarios. On the other hand, authors of the recently published dataset [12] successfully repurposed PointPillars [1] on radar point clouds. Although the 3D object detection architecture was designed for LiDAR point clouds, it achieved SOTA performance on their radar 3D object detection benchmark [12].
**Cross-modal feature augmentation**. Learning with side information involves using supplementary data during training to enhance single-modality networks [16, 17, 18, 19, 20, 21, 22, 23]. [18] pioneered using cross-modal hallucination in object detection by enhancing RGB image detection using depth image features. [16] integrated thermal data for RGB pedestrian detection. Recent works, like [21], utilize multi-modal features via knowledge distillation to improve 3D detection. However, many of these methods do not maximize cross-modal information use. For instance, [16, 18] require an extra backbone for hallucination instead of optimizing the original. Knowledge distillation techniques [21, 22, 23], while strengthening the backbone, overlook the potential of hallucinated data.
## III Method
The proposed method is depicted in Fig. 2. Our framework includes (i) a point-based backbone for feature extraction, (ii) an instance feature aggregation module for aligning input modalities, (iii) a hallucination branch for cross-modal feature alignment, and (iv) a detection head for bounding box prediction. The selective matching module and shared detection head are training-specific. For clarity, we use 'primary' and 'auxiliary' to denote the two sensor modalities; their roles are interchangeable depending on the hallucination direction. As a high-level summary of our method, a backbone network takes the point set \(\mathbf{P}\) of the primary modality and outputs a subset \(\mathbf{P}_{f}\) of sampled foreground points with extracted features. For the auxiliary-modal data, we use a **hat** notation, such as \(\hat{\mathbf{P}}\). The instance feature aggregation module processes \(\mathbf{P}_{f}\) to obtain 'centered points' \(\mathbf{C}\), adjusting point positions with a spatial offset \(\tilde{o_{i}}\). Features from both modalities are then aligned via a non-linear mapping for hallucination, constrained by instance location, resulting in hallucination features \(\mathbf{H}\). These are then concatenated with \(\mathbf{C}\) and fed into the detection head for final object recognition.
### _Backbone Network_
We construct our backbone network based on the Set-Abstraction (SA) layer proposed in PointNet++ [29] for better efficiency and to avoid information loss [7, 8]. Additionally, the farthest-point sampling (FPS) operation in SA layer is replaced with center-aware sampling proposed in [7] for better performance, which selects top-\(k\) points based on the predicted centeredness. This predicted centeredness is constrained during training as:
\[\mathcal{L}_{\mathit{Ctr}}=-\sum_{k=1}^{N}(Mask_{k}\cdot y_{k}log(\tilde{y}_{ k})+(1-y_{k})log(1-\tilde{y}_{k})) \tag{1}\]
with \(y_{k}\) as the ground truth centeredness and \(\tilde{y}_{k}\) as network estimated value. \(Mask_{k}\) is the corresponding mask value for each point proposed in [8], which assigns higher weights to points closer to the centroid of objects, and no weight at all for background points. The output of the backbone is denoted as \(\mathbf{P}_{f}=\{p_{i}\}\in\mathbb{R}^{N_{f}\times(3+D)}\) with \(N_{f}\) points and \(p_{i}=[t_{i},f_{i}^{D}]\).
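A direct sketch of Eq. (1) is given below, assuming the per-point predicted centeredness, ground-truth centeredness, and centeredness mask are provided as 1-D tensors.

```python
import torch

def centeredness_loss(pred: torch.Tensor, target: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Masked cross-entropy of Eq. (1): the positive term is weighted by the
    centeredness mask (higher weight near object centroids, zero for background)."""
    eps = 1e-7
    pred = pred.clamp(eps, 1 - eps)
    loss = -(mask * target * torch.log(pred) + (1 - target) * torch.log(1 - pred))
    return loss.sum()
```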
### _Instance Feature Aggregation_
Single-stage detectors face challenges in point matching across different sensor modalities due to discrepancies between point clouds from co-located LiDAR and radar, evident in Fig.3a. This difference, arising from fundamental sensor characteristics and unsolvable by calibration, is worsened by sampling randomness, especially in the backbone's foreground sampling (c.f. Fig. 3b). To mitigate this, we use instance feature aggregation for improved alignment before cross-modal matching. This process is illustrated in Fig. 3c. Following [34], we generate 'centered points' for context. Each foreground point, \(p_{i}\in\mathbf{P}_{f}\), gets an offset, \(\tilde{o}_{i}\in\mathbb{R}^{3}\), guiding it towards its object center. We constrain this regression process with a smooth-L1 loss as:
\[\mathcal{L}_{O-Reg}=\frac{1}{\sum_{i}\mathbf{I}(p_{i})}\sum_{i=1}^{m}smooth_ {L1}(o_{i}-\tilde{o}_{i})\cdot\mathbf{I}(p_{i}) \tag{2}\]
here \(o_{i}\) is the true offset, and \(\mathbf{I}(p_{i})\) checks if \(p_{i}\) is within an object. Points after this instance-aggregation step is denoted as \(\mathbf{C}=\{c_{i}\}\in\mathbb{R}^{N_{f}\times(3+D)}\), with each having a new position \(t^{\prime}_{i}=t_{i}+\tilde{o_{i}}\). Since points now cluster near the
instance centroid, the subsequent SA layer's ball query often groups the same points, resulting in almost identical instance features after max pooling. Thus, we can quickly establish instance matches between modalities by matching points in close spatial locations.
### _Cross-Modal Feature Hallucination_
Different sensing principles make LiDAR and radar instance features modality-specific. For instance, radar measures relative Doppler velocity [14, 35, 36], while LiDAR provides detailed object geometry. Given these intrinsic differences, it's impractical to directly match all features across modalities. A solution is to ground cross-modal learning in a shared subspace, focusing on a subset of their features (the magenta block in Fig. 2). This mandates a feature projection module, allowing feature embedding in a shared latent space for both modalities. Specifically, given the instance-aggregated point set \(\mathbf{C}\in\mathbb{R}^{N_{f}\times(3+D)}\) of the primary modality in the domain \(X\) and clustered point set \(\mathbf{\hat{C}}\in\mathbb{R}^{\hat{N}_{f}\times(3+D)}\) of the auxiliary modality in domain \(\hat{X}\), we have two mapping functions acting as the feature projection: \(F_{pri}:X\to H\) and \(F_{aux}:\hat{X}\rightarrow\hat{H}\) to project both modalities to a shared common space. We adopt the MLP block for each of the mapping functions which yields two hallucination feature sets \(\mathbf{H}=\{h_{i}\}\in\mathbb{R}^{N_{f}\times F}\) and \(\mathbf{\hat{H}}=\{h_{i}\}\in\mathbb{R}^{N_{f}\times F}\) respectively. The dimension of the shared common subspace, \(F\), is set empirically.
### _Selective Matching_
Using the projected hallucination features, we must match pairs between modalities for training, as shown by the purple block in Fig.2. As we note that noisy and incorrectly classified points tend to cluster at random spots, our approach involves a cross-modal Nearest Neighbor (1-NN) search within a specific radius to pinpoint the right matches, as shown in Fig. 3(a). Notice that only foreground points successfully shifted towards object centers can reach neighbor points from the other modality, as they are close in space. This proximity not only aids in efficient cross-modal representation learning but also reduces distractions from noisy data points. The cross-modal hallucination loss is as follows:
\[\mathcal{L}_{FM}=\frac{1}{N_{p}}\sum_{i=0}^{N_{f}}\sum_{j=0}^{\hat{N}_{f}}||h_{ i}-\hat{h}_{i}||_{2}\cdot\mathbf{PM}_{ij} \tag{3}\]
where \(\mathbf{PM}\in\mathbb{R}^{N_{f}\times\hat{N}_{f}}\) is a binary matrix, and \(\mathbf{PM}_{ij}=1\) when \(c_{i}\) and \(\hat{c}_{j}\) are a selected matching pair, otherwise \(\mathbf{PM}_{ij}=0\). \(N_{p}\) denotes the total number of matched pairs.
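A compact sketch of the selective matching step together with Eq. (3) is given below; the search radius is an assumed value chosen for illustration, and averaging over the matched pairs implements the \(1/N_{p}\) normalization.

```python
import torch

def feature_matching_loss(c_xyz, c_feat, c_xyz_aux, c_feat_aux, radius: float = 0.5):
    """Selective matching + Eq. (3): for each primary centered point, take its
    nearest auxiliary centered point; pairs farther than `radius` are ignored.

    c_xyz      : (N, 3)  shifted positions of primary centered points
    c_feat     : (N, F)  projected hallucination features h_i
    c_xyz_aux  : (M, 3)  shifted positions of auxiliary centered points
    c_feat_aux : (M, F)  projected auxiliary features h_hat_j
    """
    dists = torch.cdist(c_xyz, c_xyz_aux)          # (N, M) pairwise distances
    nn_dist, nn_idx = dists.min(dim=1)             # cross-modal 1-NN search
    matched = nn_dist < radius                     # selective matching mask (PM)
    if matched.sum() == 0:
        return c_feat.sum() * 0.0                  # no valid pairs in this batch
    diff = c_feat[matched] - c_feat_aux[nn_idx[matched]]
    return diff.norm(dim=1).mean()                 # mean L2 over the N_p matched pairs
```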
Fig. 3: Point-level matches are difficult to obtain without centroid generation in the instance feature aggregation module due to sparsity. The black bounding boxes are the object ground truth labels. (Best viewed in color and zooming in).
Fig. 2: Method Overview. The upper figure illustrates the 2-step training strategy, blocks used in the first step of training are connected with green line and those for the second step are connected with orange line. Note the primary and auxiliary data can be interchangeable among two sensor modalities (radar and LiDAR) depending on the end goal. **Only single modal data (primary modal) will be used during inference** as shown in the lower figure connected with blue line.
### _Detection Head_
Following [7, 8], we encode bounding boxes as multidimensional vectors comprising locations, scale, and orientation. The detection head, shown as the blue block in Fig. 2, has branches for confidence prediction and box refinement. The detection head's loss function is:
\[\mathcal{L}_{Det}=\mathcal{L}_{ref}+\mathcal{L}_{cls} \tag{4}\]
To avoid a trivial solution to the feature matching in Eq. 3, we use an extra detection head specifically for the shared-space features, called the shared detection head (the orange block in Fig. 2). This loss term is denoted as \(\mathcal{L}_{S-Det}\).
### _Training and Inference_
We adopt a two-step training strategy for stability. Unlike other methods [16, 17], our hallucination branch is co-optimized with the backbone in the second phase. In the first step, we train a basic detection network (shown by the green line in Fig.2) with loss:
\[\mathcal{L}_{s1}=\mathcal{L}_{Ctr}+\mathcal{L}_{O-Reg}+\mathcal{L}_{Det} \tag{5}\]
In the second step, parameters from the first phase initialize the backbone and feature aggregation module. While the auxiliary backbone remains static, the primary one undergoes joint refinement with the hallucination module. A new detection head addresses the augmented feature space (represented by the orange line in Fig. 2). The associated loss is:
\[\mathcal{L}_{s2}=\mathcal{L}_{s1}+\lambda_{1}\mathcal{L}_{FM}+\lambda_{2} \mathcal{L}_{S-Det} \tag{6}\]
When it comes to inference, only the primary modality data is applied, as shown by the blue line in Fig. 2.
## IV Evaluation
### _Experimental Setup_
**Dataset**. The View-of-Delft (VoD) dataset [12] offers calibrated, synchronized data from LiDAR, RGB camera, and 4D radar for 3D object detection, comprising 8693 frames captured in Delft's urban environment. Unique to VoD is its inclusion of vulnerable road users like pedestrians and cyclists. Notably, VoD alone provides public access to simultaneous 4D radar recordings and LiDAR data and is thus selected for evaluation. While datasets such as nuScenes [14] have both radar and LiDAR data, their radars only offer 2D spatial measurements, limiting them for 3D object detection.
**Implementation details**. We follow the design of the backbone network in [7], which comprises several _SA layers_ with center-aware sampling to remove noisy points. Next, the _Vote Layer_ predicts an offset for each point to concentrate them on the corresponding object center for spatial alignment in centroid generation. After that, another _SA layer_ is employed to aggregate instance-level features. Once we obtain point clusters for different objects, we project instance-level features to a shared subspace using a 4-layer MLP. These projected features are later concatenated back to the instance-level features for bounding box classification and regression. We use \(\lambda_{1}=\frac{1}{3}\) and \(\lambda_{2}=\frac{2}{3}\) in Eqn. 6 for all our experiments. Only the primary modal data will be used during inference. The best LiDAR and radar detection models are selected based on the best validation result for their respective detection tasks.
### _Overall Results_
Tab. I and Tab. II show that our proposed framework gives rise to the best overall performance for both modalities, achieving \(41.77\) mAP and \(69.62\) mAP for radar and LiDAR
Fig. 4: Illustration and visualization of selective matching during training. In (b), all matched points (cyan) are positioned near the bounding-box center for co-visible objects, which demonstrates the effectiveness of the instance feature aggregation module. Wrongly sampled center points (magenta) are moved to random positions and do not interfere with the training process.
TABLE I: Test results for **radar object detection** on the VoD dataset. Results for PointPillars marked with \({}^{\dagger}\) are reported in [12]. The _'Type'_ column denotes the data representation used in the method: _'V'_ voxel, _'P'_ point, _'PV'_ point-voxel. CenterPoint, PV-RCNN, and PointRCNN are two-stage detectors.

| Method | Type | Car (IoU = 0.5) | Pedestrian (IoU = 0.25) | Cyclist (IoU = 0.25) | mAP |
| --- | --- | --- | --- | --- | --- |
| PointPillars\({}^{\dagger}\) [1] | V | 35.90 | 34.90 | 43.10 | 38.00 |
| SECOND [26] | V | 35.07 | 25.47 | 33.82 | 31.45 |
| CenterPoint [9] | V | 32.32 | 17.37 | 40.25 | 29.98 |
| PV-RCNN [5] | PV | **38.30** | 30.79 | 46.58 | 38.56 |
| PointRCNN [2] | P | 15.99 | 34.01 | 26.50 | 25.50 |
| 3DSSD [8] | P | 23.86 | 9.09 | 32.20 | 21.71 |
| IA-SSD [7] | P | 31.33 | 23.61 | 49.58 | 34.84 |
| **Ours** | P | 32.32 | **42.49** | **50.49** | **41.77** |
object detection, respectively. Fig. 5 illustrates the qualitative results of our method.
**Radar Object Detection**. Despite the challenges of sparse and noisy radar point clouds, our detection method significantly outperforms SOTA techniques. PV-RCNN [5] excels in car detection due to reduced size ambiguity in the voxel representation on sparse radar clouds. Yet, our method excels in detecting smaller objects like cyclists and pedestrians. Specifically, our model scores 42.49 mAP for pedestrians, outpacing the second-best method by roughly 21.7%. This suggests that better representations for small objects with sparse clouds can be gleaned from cross-modal data, especially LiDAR. With our hallucination branch detailed in Sec. III-C, we encourage the radar detection network to emulate LiDAR's instance representation, and our results affirm its efficacy.
**LiDAR Object Detection**. In LiDAR detection, our technique leads in car and cyclist categories, surpassing the runner-up by 2.6% and 2.2% mAP respectively. This boost stems from the Radar Cross Section (RCS) features' semantic cues, distinguishing between metal and human skin [37, 38]. While two-stage detectors, such as [2, 5, 9], often excel in pedestrian detection through second-stage ROI refinement, they come with larger memory and slower speeds.
### _Ablation Study_
To be consistent with prior LiDAR detection work, the more demanding KITTI [13] 3D IoU thresholds of 0.7, 0.5, and 0.5 are used for LiDAR evaluation, and 0.5, 0.25, and 0.25 for radar evaluation, on the VoD [12] _validation_ set.
**Effect of auxiliary-modal attributes**. Through experiments with different auxiliary modal data inputs, we assess their influence on radar object detection using cross-modal supervision. Tab. III shows that radar detection improves when incorporating the LiDAR point cloud's geometric information (\(x\), \(y\), \(z\)). The LiDAR intensity attribute has a slight impact, given radar's stable semantic cues from RCS measurements. The Car category's modest performance gain is due to around 25% of cars lacking radar points, hindering detection. Moreover, cars' larger size versus cyclists and pedestrians complicates bounding box estimation with few points.
**Effect of joint optimization**. We study the effectiveness of our joint optimization strategy.
_(a). Setup_. We use the base network trained without cross-modal supervision and the hallucination branch as the ablation baseline. Later, we remove the hallucination branch of our framework shown in Fig. 2, freeze the backbone weights, and train a new detection head for performance comparison.
_(b). Results_. Looking at the first two rows in Tab. IV, mAP increases in both modalities, even without the hallucination branch. Significant improvements are seen for the Cyclist class on radar and the Pedestrian class on LiDAR. For LiDAR, fewer car matches might cause a minor decline with only **BR** (Backbone Refinement), but the other categories improve. To understand why the refined backbone alone improves performance, we collected backbone output statistics. We can see from Fig. 6 that the percentage of instances containing more than five points improves. We propose that cross-modal representation learning enhances the backbone network's feature extraction. These robust features are partially propagated to close points via max pooling, retaining more significant instance points in the final sampling layer, thus creating a robust instance feature representation for improved detection. Furthermore, the last row for each modality shows that the entire network with the hallucination branch leads to the best performance for all three categories, indicating its effectiveness and that extra cross-modal features can be encoded in our model for better detection inference.
**Effect of two-level alignment**. We next study the LiDAR object detection results to better understand the influence of our alignment strategies introduced in Sec. III-B and Sec. III-C. Tab. V shows that spatial alignment in centroid generation is the key to cross-modal learning. When removed, the discrepancy between modalities results in mismatching pairs.
TABLE II: Test results for **LiDAR object detection** on the VoD dataset. Results for PointPillars marked with \({}^{\dagger}\) are reported in [12]. The _'Type'_ column denotes the data representation used in the method: _'V'_ voxel, _'P'_ point, _'PV'_ point-voxel. CenterPoint, PV-RCNN, and PointRCNN are two-stage detectors.

| Method | Type | Car (IoU = 0.5) | Pedestrian (IoU = 0.25) | Cyclist (IoU = 0.25) | mAP |
| --- | --- | --- | --- | --- | --- |
| PointPillars\({}^{\dagger}\) [1] | V | 75.60 | 55.10 | 55.40 | 62.10 |
| SECOND [26] | V | 77.69 | 59.95 | 65.50 | 67.71 |
| CenterPoint [9] | V | 68.29 | 66.90 | 64.42 | 66.54 |
| PV-RCNN [5] | PV | 75.16 | 65.24 | 66.09 | 68.83 |
| PointRCNN [2] | P | 61.51 | **67.36** | 67.03 | 65.30 |
| 3DSSD [8] | P | 77.34 | 12.64 | 37.68 | 42.55 |
TABLE IV: Evaluation on the validation set. **BR**: network with backbone refinement, which optimizes the backbone in the second training step with cross-modal supervision. **HA**: network with hallucination branch. Radar is evaluated with IoU thresholds of 0.5/0.25/0.25 (Car/Pedestrian/Cyclist) and LiDAR with 0.7/0.5/0.5.

| Modality | BR | HA | Car | Pedestrian | Cyclist | mAP |
| --- | --- | --- | --- | --- | --- | --- |
| radar | | | 31.24 | 32.50 | 60.69 | 41.48 |
| radar | ✓ | | 31.51 | 33.48 | 67.19 | 43.94 |
| radar | ✓ | ✓ | **32.20** | **40.42** | **68.67** | **47.03** |
| LiDAR | | | 57.94 | 29.40 | 68.26 | 51.87 |
| LiDAR | ✓ | | 55.72 | 48.64 | 76.22 | 60.20 |
| LiDAR | ✓ | ✓ | **58.82** | **56.29** | **78.17** | **64.42** |
TABLE III: Radar detection results on the _validation_ set supervised by different LiDAR attributes during training. \(x,y,z\) denote point position, \(I\) intensity.
Misleading supervision signals caused by mismatched pairs degrade the network performance, especially for large objects like cars. When the spatial alignment in centroid generation is added, the model establishes correct cross-modal instance matches and improves mAP, as shown in the third row of Tab. V. We attribute the slight performance drop on cars to fewer matched instances in this category. As expected, feature alignment only functions and contributes positively to the model when the cross-modal points are spatially aligned first. The last row shows that combining the two alignment operations yields the best performance. Similar conclusions are observed for radar detection, and we omit that discussion to avoid repetition.
### _Runtime Efficiency_
We evaluate memory consumption and inference speed against SOTA methods, using LiDAR as the primary modal input. As shown in Tab. VI, our method has the second-lowest memory footprint among all methods and the fastest inference speed among the point-based methods. Taken together with Tab. I and Tab. II, our framework offers the best balance between efficiency and accuracy.
## V Conclusions and Future Work
This work introduced a novel framework to improve the robustness of single-modal 3D object detection via cross-modal supervision. Our method effectively exploits side information from auxiliary-modality data for a better-informed backbone network and a robust hallucination branch. Bespoke spatial and domain alignment strategies are also proposed to address the fundamental discrepancy across modalities and show a significant performance improvement. Experimental results on VoD [12] demonstrate that our method outperforms SOTA methods in object detection for both radar and LiDAR while maintaining a competitive memory footprint and runtime efficiency.
TABLE V: Ablation on the effect of spatial alignment in centroid generation and feature-level alignment for cross-modal hallucination. The baseline (first row) is the basic model trained without any cross-modal information, as in Tab. IV.

| Modality | Spatial | Feature | Car (IoU = 0.7) | Pedestrian (IoU = 0.5) | Cyclist (IoU = 0.5) | mAP |
| --- | --- | --- | --- | --- | --- | --- |
| LiDAR | | | 57.94 | 29.40 | 68.26 | 51.87 |
| LiDAR | | ✓ | 28.88 | 30.22 | 64.33 | 41.15 |
| LiDAR | ✓ | | 57.29 | 48.66 | 77.70 | 61.22 |
| LiDAR | ✓ | ✓ | **58.82** | **56.29** | **78.17** | **64.42** |
TABLE VI: Comparison of memory usage and runtime efficiency with LiDAR input. Memory for each method is measured using the same 16384 points per scan. _'Parallel'_ refers to the maximum batch size on one RTX 3090. Speed is gauged with single-scan input and reported in frames per second. Our method's results are highlighted in **bold**.

| Type | Method | Memory | Parallel | Speed |
| --- | --- | --- | --- | --- |
| Voxel-based | PointPillars [1] | 354 MB | 69 | 123 |
| Voxel-based | SECOND [26] | 710 MB | 34 | 34 |
| Point-Voxel | PV-RCNN [5] | 1223 MB | 17 | 13 |
| Point-based | PointRCNN [2] | 560 MB | 43 | 14 |
| Point-based | 3DSSD [8] | 502 MB | 48 | 20 |
| Point-based | IA-SSD [7] | 120 MB | 202 | 23 |
| Point-based | **Ours** | **133 MB** | **183** | **23** |
Fig. 5: Qualitative result of our method. Bounding boxes for GTs are denoted in red, and the predictions are denoted in green. The left images and the middle figures are the radar detection results. Notice that the RGB images here are **only for visualization purposes but not used in model training/inference**. The right figures visualize the LiDAR point clouds and the prediction results.
Fig. 6: Percentage of instances containing 5-20 points in the backbone output. Left: **radar**; right: **LiDAR**. **BR** denotes backbone refinement. |
2309.07125 | Text-Guided Generation and Editing of Compositional 3D Avatars | Our goal is to create a realistic 3D facial avatar with hair and accessories
using only a text description. While this challenge has attracted significant
recent interest, existing methods either lack realism, produce unrealistic
shapes, or do not support editing, such as modifications to the hairstyle. We
argue that existing methods are limited because they employ a monolithic
modeling approach, using a single representation for the head, face, hair, and
accessories. Our observation is that the hair and face, for example, have very
different structural qualities that benefit from different representations.
Building on this insight, we generate avatars with a compositional model, in
which the head, face, and upper body are represented with traditional 3D
meshes, and the hair, clothing, and accessories with neural radiance fields
(NeRF). The model-based mesh representation provides a strong geometric prior
for the face region, improving realism while enabling editing of the person's
appearance. By using NeRFs to represent the remaining components, our method is
able to model and synthesize parts with complex geometry and appearance, such
as curly hair and fluffy scarves. Our novel system synthesizes these
high-quality compositional avatars from text descriptions. The experimental
results demonstrate that our method, Text-guided generation and Editing of
Compositional Avatars (TECA), produces avatars that are more realistic than
those of recent methods while being editable because of their compositional
nature. For example, our TECA enables the seamless transfer of compositional
features like hairstyles, scarves, and other accessories between avatars. This
capability supports applications such as virtual try-on. | Hao Zhang, Yao Feng, Peter Kulits, Yandong Wen, Justus Thies, Michael J. Black | 2023-09-13T17:59:56Z | http://arxiv.org/abs/2309.07125v1 | # TECA: Text-Guided Generation and Editing of Compositional 3D Avatars
###### Abstract
Our goal is to create a realistic 3D facial avatar with hair and accessories using only a text description. While this challenge has attracted significant recent interest, existing methods either lack realism, produce unrealistic shapes, or do not support editing, such as modifications to the hairstyle. We argue that existing methods are limited because they employ a monolithic modeling approach, using a single representation for the head, face, hair, and accessories. Our observation is that the hair and face, for example, have very different structural qualities that benefit from different representations. Building on this insight, we generate avatars with a compositional model, in which the head, face, and upper body are represented with traditional 3D meshes, and the hair, clothing, and accessories with neural radiance fields (NeRF). The model-based mesh representation provides a strong geometric prior for the face region, improving realism while enabling editing of the person's appearance. By using NeRFs to represent the remaining components, our method is able to model and synthesize parts with complex geometry and appearance, such as curly hair and fluffy scarves. Our novel system synthesizes these high-quality compositional avatars from text descriptions. Specifically, we generate a face image using text, fit a parametric shape model to it, and inpaint texture using diffusion models. Conditioned on the generated face, we sequentially generate style components such as hair or clothing using Score Distillation Sampling (SDS) with guidance from CLIPSeg segmentations. However, this alone is not sufficient to produce avatars with a high degree of realism. Consequently, we introduce a hierarchical approach to refine the non-face regions using a BLIP-based loss combined with SDS. The experimental results demonstrate that our method, Text-guided generation and Editing of Compositional Avatars (TECA), produces avatars that are more realistic than those of recent methods while being editable because of their compositional nature. For example, our TECA enables the seamless transfer of compositional features like hairstyles, scarves, and other accessories between avatars. This capability supports applications such as virtual try-on. The code and generated
avatars will be publicly available for research purposes at yfeng95.github.io/teca.
## 1 Introduction
There are two traditional approaches to creating facial avatars for games and social media. The first method allows users to select attributes such as skin color, hairstyle, and accessories manually through a graphical interface. While one can create nearly photo-realistic avatars with tools such as MetaHuman [14], the manual process is cumbersome and it is difficult to make an avatar resemble a specific individual's appearance. The second method involves estimating facial shape and appearance from an image [10, 15, 17, 36, 59, 71] or a video [19, 22, 69, 70]. These methods, however, do not support editing and customization of the captured avatar. Recently, a third approach has emerged that exploits text descriptions, generative models, and neural radiance fields [9, 35, 42, 48]. These methods promise the easy creation of diverse and realistic avatars but suffer in terms of 3D realism and editability. To address these challenges, some methods [4, 67] incorporate additional priors from existing face or body models [33, 39] for face generation and animation. However, their ability to synthesize complex geometries like those of hair and scarves is constrained by the fixed topology of typical 3D mesh models.
Going beyond prior work, our goal is to make text a viable interface for creating and editing realistic 3D face avatars with accessories and hair. Our approach is guided by two key observations: 1) Different components of the avatar, such as the hair and face, have unique geometric and appearance properties that benefit from distinct representations. 2) Statistical shape models of head and body shapes can provide valuable guidance to generative image models. To exploit the first observation, we adopt a _compositional_ approach to avatar generation, leveraging the strengths of neural and mesh-based 3D content creation methods. Specifically, we model an avatar as a combination of two primary components: the face/body and non-face/body regions. To exploit the second observation, we use the SMPL-X body model [47] to represent the shape of the head and shoulders. By leveraging a model-based representation of face/body shape, we remove the need to model shapes. Instead, we focus such models on creating realistic face texture. Exploiting 3D shapes enables us to generate realistic faces by inpainting textures with existing, pretrained, diffusion models. Moreover, the integration of the shape model enables flexible body shape modifications by manipulating the parametric shape representation, facilitating the transfer of hairstyles and other accessories between avatars with different proportions. For the non-face components like hair, clothing, and accessories, we model their shape and appearance with NeRF [44] since it can represent diverse geometry and reflectance.
Starting with a textual description, our avatar generation process involves multiple steps (see Fig. 2 for an illustration). First, we use a Stable Diffusion model [53] to generate an image of a face conditioned on the text description. Second, we optimize the shape parameters of the SMPL-X body model [47] to obtain a shape representation of the person. Third, inspired by TEXTure [52], we leverage the estimated 3D face shape to rotate the head and use Stable Diffusion to generate missing texture. Fourth, and most important, we employ a sequential, compositional approach to generate additional style components. Conditioned on the face mesh, we learn a NeRF-based component model from the text description. To enable the transfer of non-face components between avatars, we define a canonical space in which we use a template face shape and train the NeRF component on top of it. Using the hybrid volume rendering technique from SCARF [18] and DELTA [19], including shape skinning, we render our hybrid avatar model into the observation image space. We then apply Score Distillation Sampling (SDS) [48] to optimize the NeRF model. The SDS loss provides gradients for updating NeRF to guide 2D renderings to match the text input. While NeRF is a highly flexible method, we argue that it is not needed to represent the face/body. Instead, we narrow its focus to modeling specific components of the avatar that are not well represented by parametric face/body models, such as hair or accessories. To that end, we use segmentation to steer the generation. For example, in the case of hair, we compute a hair segmentation mask and use it to focus NeRF on representing the hair region. The segmentation mask is obtained by running CLIPSeg [41] on the current rendering and is updated iteratively throughout the generation process. To enhance learning efficiency, we adopt the Latent-NeRF [42] approach and train the NeRF model in latent space. Finally, we refine the non-face regions using a combination of a BLIP-based loss [31] and SDS in image space. This improves the visual quality of the non-face components.
Figure 1 shows several avatars generated by our method. It shows the underlying body mesh, the generated texture from different views, the generation of NeRF-based hair, hats, and clothing, as well as the transfer of components from one avatar to another. Our approach for avatar generation surpasses existing methods in terms of realism, as demonstrated by extensive qualitative analysis. TECA's compositional framework has two key advantages. First, it uses the "right" models for the task: meshes for the face and body and NeRF for hair and clothing. This disentangling of the face and non-face parts results in avatars of higher realism than the prior art. Second, the compositional nature supports editing of the individual components and enables the seamless transfer of features such as hairstyle or clothing between avatars. These advancements open new possibilities for diverse applications, including virtual try-on.
## 2 Related work
3D Avatar Creation From X. Creating realistic avatars is a long-standing challenge, with many solutions for building digital avatars from scans, videos, and images. Sophisticated capture systems are used to acquire high-quality 3D scans, which are turned into personalized, photorealistic avatars [1, 2, 6, 23, 38, 57]. However, these methods are not scalable due to the high cost of building such systems and the sophisticated pipelines required for avatar creation. Consequently, there is great interest in creating avatars from easily captured images [15, 16, 17, 59, 71] and monocular videos [8, 19, 20, 22, 69]. Such methods estimate 3D faces with the assistance of parametric models of the head [5, 21, 34] or full body [34, 47]. Recent methods also learn _generative_ 3D head models using only images [3, 10], allowing the creation of novel avatars from random noise inputs. Additionally, there exist methods [18, 19, 32, 51, 54] that adopt a compositional approach to avatar modeling, enabling manipulation and control. Unlike these approaches, TECA requires only natural language descriptions to control the shape, texture, and accessories of a virtual avatar, making the creation and editing of realistic personal avatars accessible to a broad audience.
Text-Guided 3D General Object Generation. Following the success of recent 2D text-to-image models [46, 50, 53], the generation of 3D content from text is gaining attention [27, 35, 42, 48]. Since paired text and 3D training data is scarce, recent text-to-3D methods generate 3D content by leveraging large, pretrained 2D text-to-image models. These approaches learn a 3D representation using losses on the projected image in multiple 2D views, where pretrained models serve as frozen critics. As an example, Contrastive Language-Image Pretraining (CLIP) [49] is used to generate 3D objects in the form of an occupancy network [55], a mesh [13, 43, 45], and a NeRF [27, 61]. Similarly, DreamFusion [48] and subsequent work [12, 26, 35, 42, 58, 64] adopt a text-to-image diffusion model as a guide to optimize 3D object generation, significantly improving visual quality. Despite the rapid progress in generating general objects, these methods suffer from visual artifacts when generating avatars (such as creating a person with multiple faces).
Text-Guided 3D Avatar Generation. Large, pretrained 2D text-to-image models are also used for avatar creation. AvatarCLIP [25] uses the CLIP model to generate coarse body shapes, which are parameterized by the Skinned Multi-Person Linear (SMPL) model [40]. DreamAvatar [9] generates a 3D human avatar from a given text prompt and SMPL body shape, where detailed shape and texture are learned under the guidance of a text-conditioned diffusion model. These methods focus on full-body 3D avatar generation, resulting in low-resolution faces with limited expression. T2P [68] uses CLIP supervision to optimize discrete attributes in a video game character creation engine. As their method is confined to what can be represented by the game engine, their avatars are limited by the expressiveness of the artist-designed hairstyles and facial features. ClipFace [4] enables text-guided editing of a textured 3D morphable face model [34], including expression and texture. Describe3D [66] synthesizes a 3D face mesh with texture from a text description. Neither approach explicitly models hair, and Describe3D resorts to manual post-processing to add hair. To address this problem and boost the visual quality, DreamFace [67] employs a dataset of textures and artist-designed hairstyles, utilizing a CLIP model for component selection and a diffusion model for texture generation. While DreamFace achieves realistic head avatars, it is limited to selecting pre-existing hairstyles from a manually curated gallery of artist-designed assets. Such an approach is expensive and does not scale. Rodin [63] is a generative 3D face model trained from 100K synthetic 3D avatars. They exploit CLIP's text and image embedding to enable text-guided generation and editing. However, their results inherit the limited realism of the synthetic training data, and the approach does not disentangle the hair from the rest of the face. Thus, it is difficult to change hairstyles without unwanted changes in facial appearance. In contrast, our method generates realistic facial avatars using diffusion models without relying on artist-designed hairstyles. Moreover, its compositional nature guarantees that edits to the hairstyle do not impact the face region.
## 3 Method
TECA generates 3D facial avatars with realistic hair and style components from text descriptions. The overview of the pipeline is illustrated in Fig. 2. Given a text description of a person's physical appearance, we first generate a corresponding image using Stable Diffusion [53]. We extract the 3D geometry by fitting a SMPL-X model to this image. To generate the face texture, we follow the iterative inpainting approach of TEXTure [52], which generates images from different viewpoints and projects them onto the surface. The texture is generated using diffusion [53], taking the text, already-generated texture, and shape into account. Conditioned on the generated face, we generate other components such as hairstyles or clothing using NeRF, which we optimize using SDS constrained with a semantic mask generated with CLIPSeg. The final refinement is done using a combination of SDS and BLIP-based losses.
### Preliminaries
Parametric Body Model. To model a realistic human avatar including the face and shoulders, we use the SMPL-X model [47]. SMPL-X is a parametric mesh model with identity \(\mathbf{\beta}\in\mathbb{R}^{|\mathbf{\beta}|}\), pose \(\mathbf{\theta}\in\mathbb{R}^{3n_{k}+3}\), and expression \(\mathbf{\psi}\in\mathbb{R}^{|\mathbf{\psi}|}\) parameters that control the body and facial shape of the avatar. SMPL-X provides a consistent, predefined topology with a set of vertices \(\mathbf{V}\in\mathbb{R}^{n_{v}\times 3}\), which are computed by:
\[\begin{split}\mathbf{V}&=F_{\text{smplx}}(\mathbf{\beta},\mathbf{ \theta},\mathbf{\psi})\\ &=F_{\text{bs}}(T_{P}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi}),J(\mathbf{ \beta}),\mathbf{\theta};\mathbf{W})\end{split} \tag{1}\]
where \(F_{\text{bs}}\) is a linear blend skinning function and \(\mathbf{W}\in\mathbb{R}^{n_{k}\times n_{v}}\) are the blend weights. \(T_{P}\) represents the template mesh in a neutral, canonical pose:
\[T_{P}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi})=\mathbf{T}+B(\mathbf{\beta},\mathbf{\theta},\mathbf{ \psi}), \tag{2}\]
where \(B(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi}):\mathbb{R}^{|\mathbf{\beta}|}\times\mathbb{R} ^{3n_{k}+3}\times\mathbb{R}^{|\mathbf{\psi}|}\rightarrow\mathbb{R}^{n_{v}\times 3}\) gives deformations of the template based on the shape, pose, and expression parameters. \(J(\mathbf{\beta}):\mathbb{R}^{|\mathbf{\beta}|}\rightarrow\mathbb{R}^{n_{k}\times 3}\) is a joint regressor that takes in the identity shape parameters and produces the positions of the body joints. SMPL-X builds a connection between the mean body shape \(\mathbf{T}\) and a specific body shape \(\mathbf{V}\) with identity, pose, and expression information. This can be formulated as a vertex-wise mapping from \(\mathbf{T}\) to \(\mathbf{V}\). Specifically, given \(\mathbf{t}_{i}\) and \(\mathbf{v}_{i}\) (the \(i\)-th row from \(\mathbf{T}\) and \(\mathbf{V}\), respectively), the mapping is:
\[\begin{bmatrix}\mathbf{v}_{i}^{T}\\ 1\end{bmatrix}=M_{i}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi})\begin{bmatrix}\mathbf{t}_{i} ^{T}\\ 1\end{bmatrix}. \tag{3}\]
Here, the function \(M_{i}(\cdot)\) produces a \(4\times 4\) matrix, \(M_{i}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi})=\)
\[\begin{split}\left(\sum_{k=1}^{n_{k}}w_{k,i}G_{k}(\mathbf{\theta},J( \mathbf{\beta}))\right)\begin{bmatrix}\mathbf{E}&B_{i}(\mathbf{\beta},\mathbf{\theta},\mathbf{ \psi})^{T}\\ \mathbf{0}&1\end{bmatrix},\end{split} \tag{4}\]
where \(w_{k,i}\) is an entry from the blend skinning weights \(\mathbf{W}\) and \(G_{k}(\mathbf{\theta},J(\mathbf{\beta}))\in\mathbb{R}^{4\times 4}\) computes the world transformation for the \(k\)-th body joint. \(\mathbf{E}\in\mathbb{R}^{3\times 3}\) is the identity matrix, and \(B_{i}\) computes the \(i\)-th row of the blend shapes.
NeRF Representation. NeRF [44] encodes a 3D object as a continuous volumetric radiance field of color \(\mathbf{c}\in\mathbb{R}^{|\mathbf{c}|}\) and density \(\sigma\in\mathbb{R}\). A NeRF is represented by a neural network \((\sigma,\mathbf{c})=F_{\text{nerf}}(\mathbf{x},\mathbf{p};\mathbf{\Phi})\), where \(\mathbf{x}\in\mathbb{R}^{3}\) is the location, \(\mathbf{p}\in\mathbb{R}^{2}\) is the viewing direction, and \(\mathbf{\Phi}\) are the learnable parameters of \(F_{\text{nerf}}\). Given a camera position, we estimate a 2D image from the NeRF with volume rendering. We denote a per-pixel ray \(R(\ell)=\mathbf{o}+\ell\mathbf{d}\) by the origin \(\mathbf{o}\in\mathbb{R}^{3}\), direction \(\mathbf{d}\in\mathbb{R}^{3}\), and \(\ell\in[\ell_{n},\ell_{f}]\). To discretize the rendering, we evenly split the rendering range into \(n_{\ell}\) bins and randomly sample an \(\ell_{i}\) for every bin with stratified sampling. The volume rendering formulation for each pixel is:
\[\begin{split} C(R(\ell))=\sum_{i=1}^{n_{\ell}}\alpha_{i}\mathbf{c}_{ i},\\ \text{with}&\alpha_{i}=\exp\left(-\sum_{j=1}^{i-1} \sigma_{j}\Delta\ell_{j}\right)\left(1-\exp\left(-\sigma_{i}\Delta\ell_{i} \right)\right).\end{split} \tag{5}\]
Here, \(\Delta\ell_{i}=\ell_{i+1}-\ell_{i}\) is the distance between adjacent samples.
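As a reading aid, here is a minimal PyTorch sketch of the discretized volume rendering in Eq. 5; the tensor shapes and the uniform sample spacing are illustrative assumptions.

```python
import torch

def volume_render(sigmas, colors, deltas):
    """Composite one ray according to Eq. 5.

    sigmas: (n_l,) densities at the sampled points
    colors: (n_l, C) colors (or latent features) at the sampled points
    deltas: (n_l,) distances between adjacent samples
    """
    accum = torch.cumsum(sigmas * deltas, dim=0)
    # Transmittance exp(-sum_{j<i} sigma_j * delta_j); the first sample sees full transmittance.
    transmittance = torch.exp(-torch.cat([accum.new_zeros(1), accum[:-1]]))
    alpha = transmittance * (1.0 - torch.exp(-sigmas * deltas))   # per-sample weights alpha_i
    return (alpha.unsqueeze(-1) * colors).sum(dim=0)              # composited pixel value

pixel = volume_render(torch.rand(96), torch.rand(96, 4), torch.full((96,), 1.0 / 96))
```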
Score Distillation Sampling. DreamFusion [48] proposes Score Distillation Sampling (SDS) to guide 3D content generation using pre-trained 2D text-to-image diffusion models. Following [48], we denote the learned denoising function in the diffusion model as \(\epsilon(\mathbf{Q}_{t},\mathbf{y},t)\). Here, \(\mathbf{Q}_{t}\) is the noisy image at timestep \(t\). SDS adopts the denoising function as a critic to update the 2D rendering \(\mathbf{Q}\) of the generated 3D object across different viewpoints. The gradient is computed as:
\[\nabla_{\mathbf{Q}}L_{\text{SDS}}(\mathbf{Q})=\mathbb{E}_{t,\mathbf{\epsilon}}\left[\mathbf{u}_{t}\cdot\left(\epsilon\left(\mathbf{Q}_{t},\mathbf{y},t\right)-\mathbf{\epsilon}\right)\right], \tag{6}\]
where \(\mathbf{u}_{t}\) is a weight at timestep \(t\)[24].
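A sketch of how this gradient could be computed per iteration is given below; the `denoiser` callable and the noise schedule are stand-ins for the pretrained diffusion U-Net and its alphas, not the API of any specific library.

```python
import torch

def sds_grad(rendering, text_emb, denoiser, alphas_cumprod, t=None):
    """Score Distillation Sampling gradient w.r.t. the rendering Q (Eq. 6), a sketch."""
    if t is None:
        t = torch.randint(20, 980, (1,)).item()          # random diffusion timestep
    noise = torch.randn_like(rendering)
    a_t = alphas_cumprod[t]
    noisy = a_t.sqrt() * rendering + (1 - a_t).sqrt() * noise   # forward-diffused rendering Q_t
    eps_pred = denoiser(noisy, text_emb, t)              # predicted noise, conditioned on text y
    w_t = 1.0 - a_t                                      # one common choice for the weight u_t
    return w_t * (eps_pred - noise)

# The gradient is pushed back through the renderer, e.g. rendering.backward(gradient=grad_q).
dummy_denoiser = lambda q, y, t: torch.randn_like(q)
grad_q = sds_grad(torch.rand(4, 64, 64), None, dummy_denoiser,
                  torch.linspace(0.999, 0.01, 1000))
```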
### 3D Face Generation
To generate a 3D facial shape from a text description, we use a pre-trained Stable Diffusion model [53] to synthesize
Figure 2: Overview of TECA. TECA follows a sequential pipeline to generate realistic avatars. First, the text input is passed to Stable Diffusion to generate a single face image, which serves as a reference to obtain the geometry by SMPL-X fitting. We then adopt a texture painting approach inspired by [52], where the mesh is iteratively painted with a texture corresponding to the text using Stable Diffusion. Subsequently, style components such as the hair are modeled using NeRF in a latent space, with optimization guided by an SDS loss (\(L_{\text{SDS}}\)) and a mask loss (\(L_{\text{mask}}\)) with CLIPSeg segmentation, and finally refined in pixel space using \(L_{\text{SDS}}\), \(L_{\text{mask}}\), and an additional BLIP-based loss (\(L_{\text{sim}}\)). This hybrid modeling approach results in high-quality and realistic avatars.
a 2D face image that semantically matches the given text. The descriptor keywords might include overweight, slim, muscular, or old. Given the generated face image, we use an off-the-shelf landmark detector [7] to obtain a set of facial landmarks \(\{\mathbf{e}_{i}|\mathbf{e}_{i}\in\mathbb{R}^{3}\}_{i=1}^{n_{e}}\), where \(n_{e}\) is the number of landmarks. We optimize the SMPL-X parameters using:
\[(\mathbf{\beta}^{*},\mathbf{\theta}^{*},\mathbf{\psi}^{*})=\operatorname*{argmin}_{\mathbf{ \beta},\mathbf{\theta},\mathbf{\psi}}\sum_{i=1}^{n_{e}}\left\|M_{\kappa(i)}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi})\mathbf{t}_{\kappa(i)}-\mathbf{e}_{i}\right\|_{1} \tag{7}\]
where \(\kappa(i)\) denotes the index of a vertex of the SMPL-X model that corresponds to the \(i\)-th landmark. Note that optimizing facial shape in this way results in full-body shape parameters, but the pose parameters are not well-constrained. Then, the final avatar shape is given by \(\mathbf{V}^{*}=F_{\text{smplx}}(\mathbf{\beta}^{*},\mathbf{\theta}^{c},\mathbf{0})\), where \(\mathbf{\theta}^{c}\) represents the body pose parameters corresponding to an "A-pose".
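A minimal sketch of this landmark-based fitting (Eq. 7) is shown below. The `smplx_forward` callable stands in for the SMPL-X vertex function \(F_{\text{smplx}}\), and the parameter sizes, iteration count, and regularization weight follow the implementation details only loosely; they are assumptions for illustration.

```python
import torch

def fit_landmarks(landmarks, smplx_forward, lmk_vertex_ids, n_iters=300, lr=0.01):
    """Optimize SMPL-X parameters so that model landmarks match detected landmarks."""
    beta = torch.zeros(300, requires_grad=True)    # identity shape
    theta = torch.zeros(165, requires_grad=True)   # pose (3 n_k + 3); size illustrative
    psi = torch.zeros(100, requires_grad=True)     # expression
    optimizer = torch.optim.Adam([beta, theta, psi], lr=lr)
    for _ in range(n_iters):
        optimizer.zero_grad()
        vertices = smplx_forward(beta, theta, psi)           # (n_v, 3) posed vertices
        model_lmks = vertices[lmk_vertex_ids]                # vertices kappa(i) for each landmark
        loss = (model_lmks - landmarks).abs().sum(dim=-1).mean()     # L1 landmark term of Eq. 7
        loss = loss + 5e-5 * (beta.pow(2).sum() + psi.pow(2).sum())  # shape/expression reg.
        loss.backward()
        optimizer.step()
    return beta.detach(), theta.detach(), psi.detach()

# Toy stand-in so the sketch runs end to end; real code would call an SMPL-X layer here.
toy_smplx = lambda b, t, p: torch.zeros(1000, 3) + b[:3]
betas, thetas, psis = fit_landmarks(torch.rand(68, 3), toy_smplx, torch.arange(68))
```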
To generate a complete texture of the reconstructed face, we follow the iterative inpainting procedure of TEXTure [52]. In each step, we generate an image \(\mathbf{I}_{i}\) in the viewpoint of \(\mathbf{p}_{i}\) using Stable Diffusion [53] and project \(\mathbf{I}_{i}\) back to the mesh surface according to the geometry \(\mathbf{V}^{*}\) and \(\mathbf{p}_{i}\). The iterative inpainting can be denoted \(\{\mathbf{A}_{i}\}_{i=0}^{n_{p}}\), where \(\mathbf{A}_{0}\), \(\{\mathbf{A}_{i}\}_{i=1}^{n_{p}-1}\), and \(\mathbf{A}_{n_{p}}\) are the initial, intermediate, and final texture UV maps, respectively. Denoting the differentiable mesh renderer as \(R_{m}(\mathbf{A},\mathbf{p},\mathbf{V}^{*})\), the painting process can be summarized as:
\[\mathbf{A}_{i}=\operatorname*{argmin}_{\mathbf{A}}\left\|R_{m}(\mathbf{A},\mathbf{p}_{i},\mathbf{ V}^{*})-\mathbf{I}_{i}\right\|_{1}. \tag{8}\]
To reduce cross-view conflicts in texture, we follow [52] to take both the mesh geometry \(\mathbf{V}^{*}\) and previous texture \(\mathbf{A}_{i-1}\) information into account when generating the image \(\mathbf{I}_{i}\). This is achieved by iteratively applying depth-aware and inpainting diffusion models in the denoising process. We also make the assumption that the face of the individual is approximately bilaterally symmetric and add an additional symmetry regularization term \(L_{\text{sym}}\). This term enforces similarity between the frontal face image and its horizontally flipped counterpart. Further information on this regularization and our texture generation process can be found in the Sup. Mat.
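The per-view optimization of Eq. 8 could be organized roughly as in the sketch below. The `generate_view_image` and `render_mesh` callables are hypothetical stand-ins for the depth-aware/inpainting Stable Diffusion step and the differentiable mesh renderer \(R_{m}\); they are not functions from any particular library.

```python
import torch

def paint_texture(uv_map, views, generate_view_image, render_mesh, n_steps=100, lr=0.01):
    """Iteratively paint the UV texture map, one viewpoint at a time (Eq. 8), a sketch."""
    uv_map = uv_map.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([uv_map], lr=lr)
    for view in views:                                       # e.g., the ten camera poses
        target = generate_view_image(view, uv_map.detach())  # diffusion output for this view
        for _ in range(n_steps):
            optimizer.zero_grad()
            rendering = render_mesh(uv_map, view)            # differentiable R_m(A, p_i, V*)
            loss = (rendering - target).abs().mean()         # L1 projection loss
            loss.backward()
            optimizer.step()
    return uv_map.detach()

# Toy stand-ins so the sketch runs; real code would call Stable Diffusion and a rasterizer.
dummy_generate = lambda view, uv: torch.rand(3, 64, 64)
dummy_render = lambda uv, view: uv.mean() + torch.zeros(3, 64, 64)
painted = paint_texture(torch.rand(3, 256, 256), range(3), dummy_generate, dummy_render)
```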
### Hair, Clothing, and Accessory Generation
Canonicalization. Building upon the generated face, represented as a textured mesh, we learn a separate NeRF model for attributes like hair, clothing, or accessories. The NeRF model is built in a canonical space, which is constructed around the SMPL-X template mesh \(\mathbf{T}\), enabling the animation and transfer of the non-face parts. For a body mesh \(\mathbf{V}\) and its corresponding parameters \(\mathbf{\beta}\), \(\mathbf{\theta}\), and \(\mathbf{\psi}\), we follow previous work [9, 11, 18] to map the points from observation space to canonical space (\(\mathbf{x}\rightarrow\mathbf{x}^{c}\)):
\[\mathbf{x}^{c}=\sum_{\mathbf{v}_{i}\in\mathcal{N}(\mathbf{x})}\frac{\omega_{i}(\mathbf{x},\bm {v}_{i})}{\omega(\mathbf{x},\mathbf{v}_{i})}M_{i}(\mathbf{0},\mathbf{\theta}^{c},\mathbf{0})\left( M_{i}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi})\right)^{-1}\mathbf{x}, \tag{9}\]
where \(\mathcal{N}(\mathbf{x})\) is the set of nearest neighbors of \(\mathbf{x}\) in \(\mathbf{V}\). The weights are computed as:
\[\begin{split}\omega_{i}(\mathbf{x},\mathbf{v}_{i})&=\exp \left(-\frac{\left\|\mathbf{x}-\mathbf{v}_{i}\right\|_{2}\left\|\mathbf{w}_{\xi(\mathbf{x})}- \mathbf{w}_{i}\right\|_{2}}{2\tau^{2}}\right),\text{ and}\\ \omega(\mathbf{x},\mathbf{v}_{i})&=\sum_{\mathbf{v}_{i}\in \mathcal{N}(\mathbf{x})}\omega_{i}(\mathbf{x}),\end{split} \tag{10}\]
where \(\xi(\mathbf{x})\) is the index of the vertex in \(\mathbf{V}\) that is closest to \(\mathbf{x}\). \(\mathbf{w}_{i}\) is the \(i\)-th column of \(\mathbf{W}\), and \(\tau\) is 0.2.
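The observation-to-canonical mapping of Eqs. 9-10 amounts to a weighted blend of per-vertex transformations over the nearest SMPL-X vertices. A sketch is given below; the transforms are passed in as precomputed \(4\times 4\) matrices, and the identity matrices in the usage line are placeholders.

```python
import torch

def to_canonical(x, verts, vert_tfms, canon_tfms, skin_weights, k=6, tau=0.2):
    """Map points x from observation space to canonical space (Eqs. 9-10), a sketch.

    x: (P, 3) query points;  verts: (n_v, 3) posed SMPL-X vertices V
    vert_tfms:  (n_v, 4, 4) per-vertex transforms M_i(beta, theta, psi)
    canon_tfms: (n_v, 4, 4) per-vertex transforms M_i(0, theta_c, 0)
    skin_weights: (n_v, n_k) blend-skinning weight rows w_i
    """
    dist = torch.cdist(x, verts)
    nn_dist, nn_idx = dist.topk(k, dim=1, largest=False)     # k nearest vertices N(x)
    closest = nn_idx[:, 0]                                   # xi(x): closest vertex per point
    w_dist = (skin_weights[closest].unsqueeze(1) - skin_weights[nn_idx]).norm(dim=-1)
    omega = torch.exp(-(nn_dist * w_dist) / (2 * tau ** 2))  # Eq. 10 weights
    omega = omega / omega.sum(dim=1, keepdim=True)           # normalize over the neighborhood
    T = canon_tfms[nn_idx] @ torch.linalg.inv(vert_tfms[nn_idx])   # per-neighbor transform
    x_h = torch.cat([x, torch.ones_like(x[:, :1])], dim=1)   # homogeneous coordinates
    x_nb = (T @ x_h[:, None, :, None]).squeeze(-1)[..., :3]  # transform x with each neighbor
    return (omega.unsqueeze(-1) * x_nb).sum(dim=1)           # weighted average (Eq. 9)

x_c = to_canonical(torch.rand(10, 3), torch.rand(500, 3),
                   torch.eye(4).expand(500, 4, 4), torch.eye(4).expand(500, 4, 4),
                   torch.rand(500, 55))
```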
Mesh-Integrated Volumetric Rendering. Conditioned on the textured face mesh, we learn NeRF models with mesh-integrated volume rendering following [18]. Specifically, when a ray \(R(\ell)\) is emitted from the camera center \(\mathbf{o}\) and intersects the mesh surface, we set \(\ell_{f}\) such that \(R(\ell_{f})\) represents the first intersection point. The texture color at \(R(\ell_{f})\), denoted by \(\mathbf{c}^{*}\), is then used in the volume rendering by extending Eqn. 5:
\[\begin{split} C(R(\ell))=\left(1-\sum_{i=1}^{n_{\ell}-1}\alpha_{i }\right)\mathbf{c}^{*}+\sum_{i=1}^{n_{\ell}-1}\alpha_{i}\mathbf{c}_{i},\\ \text{with}\ \ \alpha_{i}=\exp\left(-\sum_{j=1}^{i-1}\sigma_{j} \Delta\ell_{j}\right)\left(1-\exp(-\sigma_{i}\Delta\ell_{i})\right).\end{split} \tag{11}\]
For a ray that does not intersect the mesh surface, the computation of aggregated color follows Eqn. 5. Unlike in a traditional NeRF representation, we model NeRF in a latent space to accelerate the learning process. Specifically, \(\mathbf{c}\) and \(\mathbf{c}^{*}\) represent 4-dimensional latent features. While the features \(\mathbf{c}_{i}\) are optimized, the latent feature on the mesh surface \(\mathbf{c}^{*}\) is obtained by running the stable diffusion encoder [53] on the RGB rendering of the textured mesh. After volumetric integration, the resulting feature image is decoded with the Stable Diffusion model.
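Compared with the plain compositing of Eq. 5, Eq. 11 assigns whatever transmittance is left after the NeRF samples to the mesh surface color \(\mathbf{c}^{*}\). A minimal sketch for a single mesh-intersecting ray, with illustrative shapes, might look as follows.

```python
import torch

def mesh_integrated_render(sigmas, colors, deltas, surface_color):
    """Volume rendering for a ray that hits the mesh (Eq. 11), a sketch.

    sigmas, colors, deltas: the n_l - 1 NeRF samples in front of the surface intersection
    surface_color: (C,) texture color or latent feature c* at the intersection point
    """
    accum = torch.cumsum(sigmas * deltas, dim=0)
    transmittance = torch.exp(-torch.cat([accum.new_zeros(1), accum[:-1]]))
    alpha = transmittance * (1.0 - torch.exp(-sigmas * deltas))
    nerf_part = (alpha.unsqueeze(-1) * colors).sum(dim=0)
    return (1.0 - alpha.sum()) * surface_color + nerf_part   # leftover weight goes to the mesh

pixel = mesh_integrated_render(torch.rand(95), torch.rand(95, 4),
                               torch.full((95,), 0.01), torch.rand(4))
```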
To train a NeRF model for a specific style component, we rely on CLIPSeg [41] for spatial guidance. Taking hair as an example, we use CLIPSeg with a keyword of "hair" to segment the hair region in the image generated by rendering the hybrid mesh-NeRF model. This hair segmentation mask \(\mathbf{\Omega}\) indicates the hair and non-hair regions. We use \(\mathbf{\Omega}\) to guide the NeRF model to focus on the representation of objects within the masked region while discouraging it from learning geometry outside the region. This is useful to prevent the NeRF from modeling the face, when it should only represent hair. We use the following mask-based loss:
\[L_{\text{mask}}=\left\|\mathbf{\Omega}-\hat{\mathbf{\Omega}}\right\|_{1}, \tag{12}\]
where \(\hat{\mathbf{\Omega}}\) is the rendered NeRF mask, obtained by sampling rays for all pixels of the entire image. The computation of a mask value at pixel location \(R(\ell)\) is given by:
\[\Omega_{i}\left(R(\ell),n_{\ell}-1\right)=\sum_{i=1}^{n_{\ell}-1}\alpha_{i}. \tag{13}\]
To prevent floating "radiance clouds," we incorporate the sparsity loss from [42] into the training of the NeRF:
\[L_{\text{sparse}}=\sum_{i}F_{\text{BE}}\left(\Omega_{i}\left(R(\ell),n_{\ell} \right)\right), \tag{14}\]
where \(F_{\text{BE}}(a)=-a\ln a-(1-a)\ln(1-a)\) is a binary entropy function. The total training loss function for the latent NeRF models is:
\[L_{\text{NeRF}}=L_{\text{SDS}}+\lambda_{\text{mask}}L_{\text{mask}}+\lambda_{\text{sparse}}L_{\text{sparse}}. \tag{15}\]
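Assembled, the mask and sparsity terms of Eqs. 12-15 could be computed as in this sketch; using the mean rather than a sum for the L1 mask term is an assumption about the normalization.

```python
import torch

def nerf_losses(sds_loss, rendered_mask, clipseg_mask, lambda_mask=0.1, lambda_sparse=5e-4):
    """Latent-NeRF training objective (Eqs. 12-15), a minimal sketch.

    rendered_mask: per-pixel accumulated NeRF opacity (Eq. 13), values in [0, 1]
    clipseg_mask:  CLIPSeg segmentation of the target component (e.g., "hair")
    """
    l_mask = (clipseg_mask - rendered_mask).abs().mean()                 # Eq. 12
    p = rendered_mask.clamp(1e-5, 1 - 1e-5)
    l_sparse = (-p * p.log() - (1 - p) * (1 - p).log()).mean()           # binary entropy, Eq. 14
    return sds_loss + lambda_mask * l_mask + lambda_sparse * l_sparse    # Eq. 15

total = nerf_losses(torch.tensor(0.3), torch.rand(64, 64),
                    (torch.rand(64, 64) > 0.5).float())
```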
Style Component Refinement in RGB Space. While latent NeRF models can be used to learn reasonable geometry and texture, we have observed that adding further refinement in RGB space using a combination of the SDS loss and a loss based on BLIP [31] improves local detail. Our BLIP loss, \(L_{\text{sim}}\), measures the similarity of high-level visual and text features. Maximizing their similarity encourages the NeRF model to capture additional details, including structure, texture, and semantic content from the text description, leading to visually appealing results; see Fig. 7.
To perform the refinement, we append an additional linear layer to the NeRF model that converts the 4D latent feature into a 3D color representation [60]. The initial weights of this layer are computed using pairs of RGB images and their corresponding latent codes over a collection of natural images. Let \(\mathbf{z}_{\text{img}}\) and \(\mathbf{z}_{\text{text}}\) be the embeddings of the rendered image and text prompt, then the similarity loss is:
\[L_{\text{sim}}=-\frac{\mathbf{z}_{\text{img}}^{T}\mathbf{z}_{\text{text}}}{\|\mathbf{z}_{ \text{img}}\|\cdot\|\mathbf{z}_{\text{text}}\|}, \tag{16}\]
and the learning objective in the refinement stage is:
\[L_{\text{refine}}=L_{\text{NeRF}}+\lambda_{\text{sim}}L_{\text{sim}}. \tag{17}\]
More implementation details are included in Sup. Mat.
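For completeness, the refinement objective of Eqs. 16-17 reduces to a negative cosine similarity between BLIP image and text embeddings added to the NeRF loss; the sketch below treats the embeddings as given tensors rather than calling BLIP itself.

```python
import torch
import torch.nn.functional as F

def blip_similarity_loss(image_emb, text_emb):
    """Negative cosine similarity between image and text embeddings (Eq. 16)."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    return -(image_emb * text_emb).sum(dim=-1).mean()

def refine_loss(nerf_loss, image_emb, text_emb, lambda_sim=1.0):
    """Refinement objective in RGB space (Eq. 17)."""
    return nerf_loss + lambda_sim * blip_similarity_loss(image_emb, text_emb)

loss = refine_loss(torch.tensor(0.5), torch.randn(1, 256), torch.randn(1, 256))
```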
## 4 Experiments
We evaluate TECA through 1) comparisons with state-of-the-art (SOTA) methods for text-guided generation, 2) an online perceptual study, 3) quantitative evaluation, 4) the application of try-on and animation of generated avatars, and 5) ablation studies exploring our design choices. To evaluate TECA's ability to generate diverse compositional avatars, we need text prompts that are also compositional. These text prompts may include facial attributes (_e.g._ overweight, slim, muscular), hairstyles (_e.g._ bun, afro, braid), clothing (_e.g._ jacket, wool scarf, hat), and more. The candidate attribute and style words are taken from a dataset of faces [37], a dataset of hairstyles [65], and other online sources. In total, these attributes produce up to 3,300 text-prompt combinations, ensuring rich diversity.
Figure 3: Qualitative comparison with SOTA methods. Text prompts: (a) “An overweight man with a crown-braid hairstyle,” (b) “A man in a purple hoodie,” (c) “An old bald African woman wearing a wool scarf,” (d) “A young woman in a military hat.”
### Comparisons with SOTA Methods
We compare TECA with four SOTA methods. Two are solely based on NeRF representations (SJC [62] and Latent-NeRF [42]) and two are based on mesh painting techniques (Latent-Paint [42] and TEXTure [52]). Figure 3 shows generated examples obtained from four diverse text prompts describing various personal characteristics, hairstyles, and clothing. Notably, all methods successfully generate avatars with recognizable features that semantically align with the text, such as gender, color, and clothing items. However, SJC and Latent-NeRF produce visually distorted and incomplete avatars, primarily due to flawed geometry and low-resolution textures. While Latent-Paint incorporates a mesh as a shape prior, leading to reasonable proportions, the textures still suffer from blurriness and a lack of cross-view consistency. TEXTure demonstrates good texture quality but is limited by the mesh topology; it cannot represent non-body components like the crown-braid hairstyle and hoodie. In contrast, TECA generates more realistic and natural avatars with strong cross-view consistency. Our text-generated avatars exhibit detailed appearances, including diverse hairstyles and accessories (see Sup. Mat.).
### Perceptual Study
We conducted an online perceptual study to rate avatars synthesized by TECA and the baseline methods, SJC, Latent-NeRF, LatentPaint, and TEXTure. Participants on Amazon Mechanical Turk were presented with videos of the generated avatars and asked to rate the visual realism and consistency with the text prompt using a seven-point Likert scale. We showed each participant the same thirty prompts but shuffled which method's avatar the participants saw for each prompt. Each participant was shown an equal number of avatars synthesized by each method. A total of 150 responses were collected, out of which 52 (35%) participants passed all of the catch trials and were included in the study. We applied the non-parametric Friedman test and then performed the Nemenyi post-hoc test to identify pairwise differences. The results are shown in Fig. 4 and only our method receives, on average, positive ratings to both questions. See Sup. Mat. for more details.
### Quantitative Evaluation
We also quantitatively compare our method with other state-of-the-art methods. Specifically, we assess the semantic matching between text and avatar using the CLIP [49] score and evaluate the generated avatar quality through the Fréchet Inception Distance (FID) [56]. To compute the CLIP score, we convert the videos used in Section 4.2 into images and employ CLIP [49] to compute the cosine distance between these images and their respective text prompts. The results are shown in Table 1. Our method achieves the highest semantic consistency, indicating its ability to accurately represent text descriptions. For FID, we generated 200 avatars using 40 different text prompts, with 5 random seeds for each prompt. Each avatar was rendered from 50 different views, resulting in a total of 10,000 images for evaluation. The ground truth distribution was based on the first 10,000 images from the Flickr-Faces-HQ Dataset (FFHQ) [28]. The FID scores are shown in Table 1. TECA has the lowest FID score, indicating its superior image quality relative to the other methods.
### Try-On and Animation

Hairstyles and accessories generated with TECA can be transferred to a new avatar. The non-face components adjust in size to adapt to a new head shape. This makes our generated hairstyles and accessories highly versatile. For instance, users can input their own avatars and transfer a learned hairstyle onto them.
Leveraging the SMPL-X model, we also gain the flexibility to animate the avatar across various poses and expressions. As detailed in Sec. 3.3, the NeRF component has the capacity to synchronize its movement with the dynamics of the SMPL-X body. As displayed in Fig. 6, our generated avatar can be animated with varying head poses and expressions. Notice the transition from a neutral to an open-mouth expression (middle image), where the interior mouth region is absent. This limitation could be addressed by inpainting the inside mouth region using diffusion models. Further instances showcasing this, including face editing results, are available in the Sup. Mat.
### Ablation Experiments
Non-Face Refinement. We investigate the effect of refining non-face details using a combination of SDS and BLIP losses. Figure 7 illustrates the difference between refined and non-refined style components. The results demonstrate that the refinement produces more detail, noticeably enhancing the overall visual quality.
CLIPSeg Segmentation. The segmentation loss prevents NeRF from trying to represent the entire avatar and focuses it on representing a specific part. Without the loss, the results are significantly worse; see Fig. 8.
## 5 Discussion and Limitations
Segmentation with CLIPSeg. To guide the learning process, we use CLIPSeg to obtain masks for the region of interest, such as hair or clothing. This encourages the NeRF to focus on learning specific components rather than on the entire avatar. Our method's effectiveness is contingent on the segmentation quality. If CLIPSeg encounters challenges, flawed NeRF representations may result, such as floating points in the face region, as shown in Fig. 9.
Performance of Diffusion Models. Our results are constrained by the capabilities and biases of pretrained diffusion models because they provide the semantic information for avatar generation.
Dynamics. TECA's ability to animate avatars via the SMPL-X parametric model highlights its potential, yet addressing complex dynamics in elements like hair and clothing calls for further exploration.
Relighting. Our model does not support relighting in new environments, as the learned RGB color for the face texture and the NeRF-based hair or accessories is baked with lighting. Further work is needed to disentangle albedo and lighting attributes to enable relighting.
## 6 Conclusion
We presented TECA, an innovative method for generating realistic 3D facial avatars with hair and accessories from text descriptions. By adopting a compositional model and using distinct representations for different components, we addressed the limitations of existing methods in terms of realism, shape fidelity, and capabilities for editing. Our experimental results demonstrate the superior performance of TECA compared to the state of the art, delivering highly detailed and editable avatars. Further, we demonstrated the transfer of hairstyles and accessories between avatars.
Figure 8: Ablation of CLIPSeg. Without the segmentation information, NeRF learns the entire avatar instead of the components.
Figure 6: TECA Animation. The generated avatar is animated to different poses and expressions.
Figure 7: Comparison of results between unrefined (top) and refined (bottom) style components. The refinement improves the details of hair, hat, scarf, and clothing. The refined components are indicated by red dotted line boxes.
Figure 9: Failure cases showing the impact of poor segmentation.
Disclosure. This work was partially supported by the Max Planck ETH Center for Learning Systems. MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a consultant for Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.
## 7 Appendix
### Implementation Details
For the SMPL-X model [47], we use \(|\mathbf{\beta}|=300\) and \(|\mathbf{\psi}|=100\), and we use 68 facial landmarks to fit the SMPL-X shape to the reference images (\(n_{e}=68\)). In the SMPL-X optimization process, we incorporate shape and expression regularization [47] with a weight of \(5e-5\). For texture generation on the mesh, we use \(n_{p}=10\) viewing directions. The mesh texture is optimized in an iterative manner from various view angles, following the sequence illustrated in Fig. 10. The symmetry loss \(L_{sym}\) is applied during this process, leveraging our assumption that the face is approximately symmetric. Specifically, for front and back views (No. 1 and No. 10), we apply an \(L_{2}\) loss between the rendered image \(R_{m}(\mathbf{A},\mathbf{p}_{i},\mathbf{V}^{*})\), which renders the geometry \(\mathbf{V}^{*}\) in the view direction \(\mathbf{p}_{i}\) based on the UV map \(\mathbf{A}\), and the horizontally flipped version of the Stable Diffusion [53] generated image \(\mathbf{I}^{\prime}_{i}\). For right views (Nos. 3, 5, 7, and 9), we implement the \(L_{2}\) loss by comparing the rendered images and the corresponding left views (Nos. 2, 4, 6, and 8). During NeRF training, we sample 96 points along each ray (\(n_{\ell}=96\)). During the refinement stage, we increase the number of sampled points \(n_{\ell}\) to 128. In the optimization, we employ a latent code with dimensions of \(64\times 64\times 4\). During refinement, we render the image at a resolution of \(480\times 480\times 3\). The network used in the NeRF training process comprises three fully connected layers, and the adapter from the latent space to the pixel space is a one-layer MLP following [60]. Other hyperparameters include \(\tau=0.1\), \(|\mathcal{N}(\mathbf{x})|=6\), \(\ell_{n}=-1\), and \(\ell_{f}=1\).
For the Stable Diffusion model [53] and its depth-aware and inpainting variants, we use the implementations available on HuggingFace (Footnotes 3-5). For the BLIP model [31], we use the released model (Footnote 6) from the LAVIS project [30].
Footnote 3: stabilityai/stable-diffusion-2
Footnote 4: stabilityai/stable-diffusion-2-depth
Footnote 5: stabilityai/stable-diffusion-2-inpainting
Footnote 6: BLIP/models/model_base.pth
We employ the Adam optimizer [29] with a learning rate of \(0.01\) for texture optimization, while for other optimizations, we use a learning rate of \(0.001\). For loss weights, we fix \(\lambda_{\text{sym}}=0.5\), \(\lambda_{\text{mask}}=0.1\), \(\lambda_{\text{sparse}}=0.0005\), and \(\lambda_{\text{sim}}=1\). The average run time for avatar generation is currently around three hours on an A100 GPU.
### Perceptual Study Details
In our survey, we used a random selection process to create a set of thirty unique prompt combinations, each consisting of five elements. These elements were: 1) gender (male or female); 2) a color (Footnote 7); 3) a hairstyle (Footnote 8) or hat (Footnote 9), weighted equally; 4) another color; and 5) a type of upper-body clothing (Footnote 10). These combinations were used to construct a prompt in the form of "A [1] with a(n) [2][3] wearing a(n) [4][5]".
Footnote 7: hairCLIP/main/README.md
Footnote 8: hairCLIP/main/mapper/hairstyle_list.txt
Footnote 9: yesl.com/types-of-hats
Footnote 10: yesl.com/vocabulary-clothing-clothes-accessories
Figure 10: Left: The ten virtual camera views used for the texture optimization process. Right: The rendered images (1st column) and the corresponding UV maps (2nd column).

To mitigate potential interaction effects resulting from participants' unfamiliarity with the styles presented, we included an image from the internet that represented the hairstyle or type of hat described in the prompt; the image was displayed next to the avatar video. Participants were then asked to rate their agreement with two statements: 1) "The avatar in the video is visually realistic" and 2) "The appearance of the avatar in the video matches the text description below it." To determine whether a participant successfully passed the catch trials, we examined their ratings for both questions. Participants were considered to have passed if they rated both questions with greater-than-neutral agreement or greater-than-neutral disagreement on all five constant manually curated high-quality samples and catastrophic generation failures, respectively. A total of 150 responses were collected, out of which 52 (35%) participants passed all of the catch trials and were included in the study. The response distributions failed the Shapiro-Wilk normality test, so we applied the non-parametric Friedman test, which indicated that the method used to generate the avatar had a statistically significant effect on the outcomes of both study questions. Subsequently, we performed the Nemenyi post-hoc test to identify pairwise differences. Using a significance level (\(\alpha\)) of \(0.05\), the perceived _realism_ of TECA was determined to be significantly different than that of all baselines other than TEXTure, and the _text consistency_ was determined to be significantly different than that of all baselines. These findings confirm our initial expectations regarding the strengths of each method and support the value of our proposed combination of mesh-based and NeRF-based representations.
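The statistical analysis above follows a standard pipeline (normality check, Friedman test, Nemenyi post-hoc test). The snippet below is an illustrative sketch only: the ratings array is synthetic, and the post-hoc step assumes the third-party scikit-posthocs package rather than any code used for the study.

```python
import numpy as np
from scipy.stats import shapiro, friedmanchisquare
import scikit_posthocs as sp  # assumed third-party package for the Nemenyi test

# hypothetical ratings: rows = participants who passed the catch trials,
# columns = avatar-generation methods (e.g. TECA and the baselines)
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(52, 5)).astype(float)

# 1) Shapiro-Wilk normality check for each method's rating distribution
for j in range(ratings.shape[1]):
    w, p = shapiro(ratings[:, j])
    print(f"method {j}: Shapiro-Wilk p = {p:.3f}")

# 2) non-parametric Friedman test across methods (repeated measures per participant)
stat, p = friedmanchisquare(*[ratings[:, j] for j in range(ratings.shape[1])])
print(f"Friedman: statistic = {stat:.2f}, p = {p:.3g}")

# 3) Nemenyi post-hoc test for pairwise differences between methods
print(sp.posthoc_nemenyi_friedman(ratings))
```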
### More Qualitative Results
More generation results.Avatars with distinct hairstyles, clothing, and accessories are shown in Fig. 11. In addition, Fig. 12 shows instances with a variety of other types of accessories, such as earrings, necklaces, and glasses. To illustrate the diversity of the generated avatars, Fig. 13 shows avatars generated using the same prompt but different seeds, producing diverse faces and clothing that consistently align with the text input. Fig. 14 presents avatars that are generated using more detailed descriptions, highlighting the alignment between the avatars and the input texts.
More applications.TECA allows for the transfer of hairstyles and accessories, as demonstrated by additional examples in Fig. 15. Moreover, the generated avatars can be animated with different expressions. As our method automatically estimates the SMPL-X shape parameters of the subject, we can change the expression parameters of the SMPL-X model to animate the face. Since the inner mouth region is missing from the face texture, we apply an inpainting Stable Diffusion model to fill in the missing area. The results are shown in Fig. 16. We can further edit the avatar using text input. For example, Fig. 17 shows the results of altering the color of the lips and eyes.
Figure 11: Additional examples for generated avatars by our method.
Figure 12: Additional examples of accessories such as earrings, necklaces, and glasses.
Figure 14: Additional examples of generated avatars with more detailed text descriptions such as colors and detailed shapes.
Figure 13: Additional examples of the diversity of avatars generated with the same text prompt: “a woman with long hair wearing a shirt.”
Figure 15: Additional examples for hair and accessory transfer.
Figure 16: Additional example avatars generated with different expressions. The inner-mouth area is represented with an inpainted texture. The expressions from top to bottom are: smiling, angry, disgusted, and screaming.
Figure 17: Additional examples of editing the generated avatar’s texture. The first line changes the lip color with the “red lip” prompt and the second line alters the eye color with prompts “green eyes” and “brown eyes”. |
2309.11974 | Real quadratic singular moduli and $p$-adic families of modular forms | The classical theory of elliptic curves with complex multiplication is a
fundamental tool for studying the arithmetic of abelian extensions of imaginary
quadratic fields. While no direct analogue is available for real quadratic
fields, a (conjectural) theory of "real multiplication" was recently proposed
by Darmon and Vonk, relying on $p$-adic methods, and in particular on the new
notion of rigid meromorphic cocycles. A rigid meromorphic cocycle is a class in
the first cohomology of the group $\mathrm{SL}_2(\mathbb{Z}[1/p])$ acting on
the non-zero rigid meromorphic functions on the Drinfeld $p$-adic upper half
plane by M\"obius transformation. The values of rigid meromorphic cocycles at
real quadratic points can be thought of as analogues of singular moduli for
real quadratic fields. In this survey article, we will discuss aspects of the
theory of complex multiplication and compare them with conjectural analogues
for real quadratic fields, with an emphasis on the role played by families of
modular forms in both settings. | Paulina Fust, Judith Ludwig, Alice Pozzi, Mafalda Santos, Hanneke Wiersema | 2023-09-21T11:17:08Z | http://arxiv.org/abs/2309.11974v1 | # Real quadratic singular moduli and \(p\)-adic families of modular forms
###### Abstract
The classical theory of elliptic curves with complex multiplication is a fundamental tool for studying the arithmetic of abelian extensions of imaginary quadratic fields. While no direct analogue is available for real quadratic fields, a (conjectural) theory of "real multiplication" was recently proposed by Darmon and Vonk, relying on \(p\)-adic methods, and in particular on the new notion of rigid meromorphic cocycles. A rigid meromorphic cocycle is a class in the first cohomology of the group \(\mathrm{SL}_{2}(\mathbb{Z}[1/p])\) acting on the non-zero rigid meromorphic functions on the Drinfeld \(p\)-adic upper half plane by Mobius transformation. The values of rigid meromorphic cocycles at real quadratic points can be thought of as analogues of singular moduli for real quadratic fields.
In this survey article, we will discuss aspects of the theory of complex multiplication and compare them with conjectural analogues for real quadratic fields, with an emphasis on the role played by families of modular forms in both settings.
## 1 Introduction
The goal of this article is to present recent developments in the theory of _singular moduli for real quadratic fields_ introduced in [4] in a parallel perspective to the classical theory of complex multiplication. The theory of elliptic curves with complex multiplication has yielded some spectacular arithmetic applications. Kronecker tackled the problem of constructing abelian extensions of imaginary quadratic fields, known as the _Kronecker Jugendraum_, via singular moduli, i.e., values of modular functions at imaginary quadratic points of the complex upper half plane \(\mathcal{H}_{\infty}\). While class field theory provides a description of the Galois groups classifying abelian extensions of arbitrary number fields, finding an explicit recipe for their generators is more elusive beyond the CM setting.
Another problem for which in current approaches CM theory is an essential tool is the Birch and Swinnerton-Dyer Conjecture. Given an elliptic curve \(E\) over a number field \(K\), the Birch and Swinnerton-Dyer Conjecture (BSD) predicts an equality between the rank of its group of \(K\)-rational points and the order of vanishing of its \(L\)-function \(L(E/K,s)\) at \(s=1\). The only known approach to systematically producing global points is via the theory of Heegner points. The works of Gross-Zagier [10] and Kolyvagin [14] settle many instances of BSD in rank 1 by exploiting properties of this supply of global points.
The construction of singular moduli and Heegner points share some common features. They both arise from an appropriate evaluation process at CM points on the complex upper half plane. The corresponding values are defined over abelian extensions of imaginary
quadratic fields \(K\). We will refer to this type of construction as _Heegner constructions_. It is natural to ask if a similar approach can be implemented to construct collections of objects lying in abelian extensions of other number fields.
Let \(K\) be a real quadratic field. From a naive perspective, the idea of evaluating modular functions at real quadratic points (or RM, by analogy with CM points) of \(\mathcal{H}_{\infty}\) already fails because RM points lie on the real line. This suggests that an appropriate analogue of CM theory should be sought by replacing the role played by the archimedean place with a finite place that does not split in \(K\). This is the point of view proposed by Darmon and Vonk in their theory of singular moduli for real quadratic fields.
For a finite prime \(p\), the Drinfeld \(p\)-adic upper half plane is the rigid analytic space \(\mathcal{H}_{p}\) whose \(\mathbb{C}_{p}\)-valued points are \(\mathbb{P}^{1}(\mathbb{C}_{p})\setminus\mathbb{P}^{1}(\mathbb{Q}_{p})\). It contains RM points over real quadratic fields \(K\) in which \(p\) is ramified or inert. In analogy with CM theory, it would be natural to attempt to define singular moduli as values of suitable modular functions at RM points in \(\mathcal{H}_{p}\), for some action of a modular group \(\Gamma\subset\operatorname{GL}_{2}(\mathbb{Q}_{p}).\) Typically, one would expect this group to arise as the elements of norm one in a \(\mathbb{Z}[1/p]\)-order of a quaternion algebra \(B/\mathbb{Q}\) which splits at \(p\). On the one hand, if the quaternion algebra is definite, the quotient \(\Gamma\backslash\mathcal{H}_{p}\) admits the structure of a rigid analytic space. However, in that setting, \(B\) does not contain any real quadratic subalgebra. Therefore one would not expect such quotients to appear in the framework of RM theory. On the other hand, if \(B\) is indefinite, it contains real quadratic subalgebras but the action of \(\Gamma\) on \(\mathcal{H}_{p}\) is not discrete, so that there are no suitable modular functions. Darmon and Vonk fix the indefinite quaternion algebra \(B=\operatorname{M}_{2}(\mathbb{Q}_{p})\), so that \(\Gamma=\operatorname{SL}_{2}(\mathbb{Z}[1/p])\) is the Ihara group, and they define rigid meromorphic cocycles as classes in \(H^{1}(\Gamma,\mathcal{M}^{\times})\), where \(\mathcal{M}\) is the ring of rigid meromorphic functions on \(\mathcal{H}_{p}\). They show that these classes (or more precisely, the quasi-parabolic classes, see SS3.3) can be meaningfully evaluated at RM points. They then conjecture that their RM values lie in composita of abelian extensions of real quadratic fields, making them suitable candidates for an analogue of singular moduli in the real quadratic setting. The theory of real quadratic singular moduli fits into a program launched by Darmon in [1] attempting to define conjectural analogues of Heegner constructions for real quadratic fields via \(p\)-adic methods. Most notably, the construction of a putative analogue of Heegner points, known as Stark-Heegner points, has since seen many generalisations. While the theoretical evidence on the rationality of these points is scarce, the computational results are extensive.
In the last 50 years, new aspects of the theory of complex multiplication have emerged, in which automorphic methods are used substantially. An important theme is the study of modular generating series constructed out of CM cycles on Shimura curves, pioneered by the work of Gross and Zagier on the factorisation of differences of singular moduli [10]. The crucial idea of relating generating series to derivatives of real analytic families of Eisenstein series culminated in the proof of many cases of the Birch and Swinnerton-Dyer Conjecture for a modular elliptic curve in analytic rank 1 in [10] (see Thm. 4.2.1). Although the algebraicity properties of the analogues of the Heegner constructions in the real quadratic setting are conjectural, one can package RM cycles on \(\mathcal{H}_{p}\) into modular generating series. It is natural to speculate that these generating series would be related to derivatives of \(p\)-adic families of modular forms. This perspective has been explored by Darmon, Pozzi and Vonk in [11] and [11].
In the archimedean setting, only real analytic Eisenstein series admit variations in families. By contrast, the work of Hida, and its generalisation via the theory of eigenvarieties intro
duced by Coleman-Mazur, show that \(p\)-adic families of automorphic forms abound. A useful feature of the \(p\)-adic theory is that cusp forms vary in \(p\)-adic families. In addition, their corresponding Galois representations can be interpolated. This aspect has been leveraged into breakthroughs in Iwasawa theory and the theory of \(p\)-adic \(L\)-functions, for example the proof of the Iwasawa Main Conjecture over totally real fields by Mazur-Wiles, or the proof of the Mazur-Tate-Teitelbaum Conjecture by Greenberg-Stevens.
In the context of modular generating series for RM values, the additional flexibility of the \(p\)-adic setting has been used by Darmon-Pozzi-Vonk to prove some instances of the expected algebraicity results for the analogues of elliptic units for real quadratic fields (see Theorem 5.3.1). Their approach is somewhat indirect. On the one hand, derivatives of certain \(p\)-adic families of modular forms are used to produce modular generating series for RM values. On the other hand, the derivatives of cuspidal families are related to global cohomology classes informing the arithmetic of abelian extensions of real quadratic fields. This strategy towards the algebraicity of Heegner constructions in the RM setting suggests that the richer theory of \(p\)-adic automorphic forms can sometimes make up for the lack of an _a priori_ global description of the corresponding RM objects.
**Overview of the paper.** This article is structured as follows. Section 2 contains a review of some aspects of the classical theory of complex multiplication, with an emphasis on singular moduli, elliptic units and Heegner points. Section 3 focuses on the construction of their conjectural analogues for real quadratic fields. For the reader's convenience, we include some background on the Drinfeld \(p\)-adic upper half plane as a rigid space and introduce the theory of rigid meromorphic cocycles. Sections 4 and 5 are devoted to the interplay between derivatives of families of modular forms and singular moduli, in the archimedean and non-archimedean setting respectively. In particular, Section 4 treats Gross and Zagier's work on the factorisation of singular moduli and the BSD Conjecture in analytic rank 1 [10] and [11], while Section 5 focuses on the articles [12] and [13]. Sections 2 and 3 are meant to be mostly self-contained, while Sections 4 and 5 aim to be an overview of various arithmetic applications to Heegner constructions in a unified framework.
**Acknowledgements.** We would like to thank the organisers of the WIN-Europe 4 conference for inviting us to contribute to these proceedings. We would like to thank Henri Darmon and Jan Vonk for enlightening explanations of their work. We would also like to thank Sandra Rozensztajn, Ana Caraiani and Catherine Hsu for helpful conversations during the preparation of this manuscript. We would like to thank Oguz Gezmis and the anonymous referees for many helpful comments and suggestions.
P.F. was funded by the DFG Graduiertenkolleg 2553. J.L. acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through TRR 326 _Geometry and Arithmetic of Uniformized Structures_, project number 444845124. A.P. was funded by the Leverhulme Trust (via the Leverhulme Prize of Ana Caraiani). M.S. was funded by the Engineering and Physical Sciences Research Council [EP/L015234/1], the EPSRC Centre for Doctoral Training in Geometry and Number Theory (The London School of Geometry and Number Theory), Imperial College London, and by the Max-Planck-Institut fur Mathematik. H.W. acknowledges support from the Herchel Smith Postdoctoral Fellowship Fund, and the Engineering and Physical Sciences Research Council (EPSRC) grant EP/W001683/1.
## 2 Heegner Constructions for imaginary quadratic fields
In this section, we will give a brief overview of results in the classical theory of elliptic curves with complex multiplication, and its applications to the construction of arithmetic objects over abelian extensions of imaginary quadratic fields. At the heart of these results lies the fact that modular curves, quotients of the complex upper half plane by congruence subgroups, admit a moduli interpretation which endows them with a rational structure. The moduli interpretation of CM points can be used to determine their images under field automorphisms, which allows one to deduce that they define global points on modular curves.
We will begin by recalling some facts about the moduli interpretation of CM points on modular curves and the Main Theorem of Complex Multiplication. We will then discuss in more detail the Heegner constructions of singular moduli, elliptic units and Heegner points.
### Modular curves and CM points
#### 2.1.1 Elliptic curves with complex multiplication
Let \(E\) be a complex elliptic curve, which we view as a Riemann surface. It admits a uniformisation \(E(\mathbb{C})\simeq\mathbb{C}/\Lambda\), where \(\Lambda\subset\mathbb{C}\) is its period lattice, well-defined up to homothety; by rescaling, the latter can be chosen of the form \(\Lambda=\tau\mathbb{Z}+\mathbb{Z}\), for \(\tau\in\mathbb{C}\) with positive imaginary part. Endomorphisms of \(E\) are given by endomorphisms of its universal covering \(\mathbb{C}\) that preserve the lattice \(\Lambda\). Denote by \(\mathcal{R}_{E}\) the set of endomorphisms of \(E\). For a generic \(\tau\) in the upper half plane, all endomorphisms of \(E\) are given by multiplication by an integer. Otherwise, there exists \(\alpha\in\mathbb{C}\setminus\mathbb{Z}\), and integers \(a,b,c,d\) satisfying
\[c\tau+d=\alpha,\qquad a\tau+b=\alpha\tau.\]
This forces \(K=\mathbb{Q}(\tau)\) to be an imaginary quadratic field and \(\alpha\) to be an algebraic integer in \(K\). As a result \(\mathcal{R}_{E}\) is isomorphic to an order \(\mathcal{O}\) in \(K\), for which the lattice \(\Lambda\) is a proper fractional ideal. In this case we say that \(E\) has complex multiplication by \(\mathcal{O}\).
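As a standard illustration (the specific example is added here for concreteness): take \(\tau=i\), so that \(\Lambda=i\mathbb{Z}+\mathbb{Z}\). Multiplication by \(\alpha=i\) preserves \(\Lambda\), and the relations above are solved by
\[c\tau+d=i\quad(c=1,\ d=0),\qquad a\tau+b=i\cdot i=-1\quad(a=0,\ b=-1),\]
so that \(\mathcal{R}_{E}\simeq\mathbb{Z}[i]\), the maximal order of \(\mathbb{Q}(i)\); the elliptic curve \(\mathbb{C}/\mathbb{Z}[i]\) has complex multiplication by \(\mathbb{Z}[i]\).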
#### 2.1.2 The Main Theorem of Complex Multiplication
Let \(\mathcal{H}_{\infty}\) be the Poincare complex upper half plane, on which the group \(\operatorname{GL}_{2}^{+}(\mathbb{R})\) of invertible real matrices with positive determinant acts by Mobius transformation.
For a positive integer \(N\), consider the order
\[\operatorname{M}_{0}(N)=\left\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{M}_{2}(\mathbb{Z}):N\mid c\right\}\]
in the ring of \(2\times 2\)-matrices with integer coefficients, and let \(\Gamma_{0}(N)\) be the group of matrices of determinant \(1\) in \(M_{0}(N)\). The assignment \(\tau\mapsto(\tau\mathbb{Z}+\mathbb{Z},\tau\mathbb{Z}+1/N\mathbb{Z})\) identifies the quotient \(\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}\) with the moduli space of pairs of lattices \((\Lambda,\Lambda^{\prime})\) with
\[\Lambda\subset\Lambda^{\prime}\text{ such that }\Lambda^{\prime}/\Lambda \simeq\mathbb{Z}/N\mathbb{Z},\]
defined up to homothety.
The moduli problem parametrising isomorphism classes of pairs of elliptic curves \((E,E^{\prime})\) with a cyclic degree \(N\)-isogeny \(E\to E^{\prime}\) admits a coarse moduli space \(Y_{0}(N)\) over \(\mathbb{Q}\) which we refer to as the (open) modular curve of the level \(\Gamma_{0}(N)\). We let \(X_{0}(N)\) be its compactification, obtained by adding cusps.
There is a bijection
\[\iota_{N}\colon\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}\to Y_{0}(N)( \mathbb{C}),\qquad\tau\mapsto(\mathbb{C}/(\tau\mathbb{Z}+\mathbb{Z}),\ \mathbb{C}/(\tau\mathbb{Z}+1/N\mathbb{Z}))\]
which provides a uniformisation of \(Y_{0}(N)(\mathbb{C})\) by the complex upper half plane.
We say that \(\tau\in\mathcal{H}_{\infty}\) is imaginary quadratic of discriminant \(D\) if \([\tau,1]\) is a solution to a quadratic equation of the form \(AX^{2}+BXY+CY^{2}=0\) for some coprime integers \(A\), \(B\), \(C\) and \(D=B^{2}-4AC<0\). For \(\tau\in\mathcal{H}_{\infty}\) and \(N\in\mathbb{Z}_{\geq 1}\) let
\[\mathcal{O}_{\tau}^{(N)}:=\left\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{M}_{0}(N):a\tau+b=c\tau^{2}+d\tau\right\}.\]
The image of the inclusion \(\mathcal{O}_{\tau}^{(N)}\hookrightarrow\mathbb{C}\) given by \(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\mapsto c\tau+d\) is either isomorphic to \(\mathbb{Z}\) or to an order in the imaginary quadratic field \(K=\mathbb{Q}(\tau)\). In the latter case, we say that \(\tau\) is a CM point on \(Y_{0}(N)\) and denote by \([\tau]\) its \(\Gamma_{0}(N)\)-orbit, when the level \(\Gamma_{0}(N)\) is understood. For an order \(\mathcal{O}\) in an imaginary quadratic field \(K\) of discriminant prime to \(N\), we denote by \(\mathcal{H}_{\infty}^{\mathcal{O}}\) the set of points \(\tau\in\mathcal{H}_{\infty}\) with
\[\mathcal{O}_{\tau}^{(N)}=\mathcal{O}_{\tau}^{(1)}\cap\mathcal{O}_{N\tau}^{(1) }\simeq\mathcal{O}.\]
Such points are in bijection with the pairs of elliptic curves \((\mathbb{C}/\Lambda,\mathbb{C}/\Lambda^{\prime})\) whose corresponding lattices \((\Lambda,\Lambda^{\prime})\) are contained in \(K\) (up to homothety) and for which the set
\[\{\alpha\in K:\alpha\Lambda\subset\Lambda,\ \alpha\Lambda^{\prime}\subset \Lambda^{\prime}\} \tag{1}\]
is equal to \(\mathcal{O}\). The set \(\mathcal{H}_{\infty}^{\mathcal{O}}\) is preserved by the action of \(\Gamma_{0}(N)\) and the quotient \(\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}^{\mathcal{O}}\) is finite. We let \(\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}^{\mathcal{\mathrm{CM}}}\) be the union of \(\Gamma_{0}(N)\)-orbits of points with complex multiplication by \(\mathcal{O}\) for some imaginary quadratic order \(\mathcal{O}\).
Denote by \(\mathrm{Cl}(\mathcal{O})\) the class group of \(\mathcal{O}\). Given a proper fractional ideal \(\mathfrak{a}\) in \(\mathcal{O}\) and a pair of lattices \((\Lambda,\Lambda^{\prime})\), condition (1) is stable under multiplication by \(\mathfrak{a}\). As a result
\[\mathfrak{a}*(\mathbb{C}/\Lambda,\mathbb{C}/\Lambda^{\prime}):=(\mathbb{C}/( \mathfrak{a}*\Lambda),\mathbb{C}/(\mathfrak{a}*\Lambda^{\prime}))\]
defines an action of the class group \(\mathrm{Cl}(\mathcal{O})\) on \(\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}^{\mathcal{O}}\). For \([\tau]\in\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}^{\mathcal{O}}\), we write \(\mathfrak{a}*\tau\) for \(\mathfrak{a}*(\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z}),\mathbb{C}/(1/N\mathbb{Z }+\tau\mathbb{Z}))\) by abuse of notation.
The Main Theorem of Complex Multiplication ensures that CM points on modular curves are defined over suitable ring class fields, and describes the Galois action on CM points. More precisely, let \(\mathcal{O}\) be an order in an imaginary quadratic field \(K\). The Artin reciprocity map yields an isomorphism
\[\mathrm{rec}\colon\mathrm{Cl}(\mathcal{O})\to\mathrm{Gal}(H/K)\]
where \(H\) is the ring class field of \(\mathcal{O}\). The Main Theorem of Complex Multiplication can be stated as follows.
**Theorem 2.1.1**.: _Let \(\mathcal{O}\) be an order of discriminant \(D\) in an imaginary quadratic field \(K\), and let \(H\) be its corresponding ring class field. Let \(N\) be a positive integer coprime to \(D\). For every point \([\tau]\in\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}^{\mathcal{O}}\), we have:_
1. \(\iota_{N}([\tau])\in Y_{0}(N)(H)\)_,_
2. _(Shimura reciprocity law): every_ \(\mathfrak{a}\in\mathrm{Cl}(\mathcal{O})\) _satisfies_ \(\iota_{N}(\mathfrak{a}*[\tau])=\mathrm{rec}(\mathfrak{a})^{-1}\iota_{N}([\tau])\)_._
For a reference see [10, Cor. 11.37]. In SS2.2 we will explain how Theorem 2.1.1 can be exploited towards the construction of global units and global points on elliptic curves over ring class fields of imaginary quadratic fields.
### 2.2 Heegner Constructions
Theorem 2.1.1 endows the modular curve \(Y_{0}(N)\) with a systematic supply of global points. Given a group scheme \(X/\mathbb{Q}\) together with a \(\mathbb{Q}\)-rational morphism \(Y_{0}(N)\to X\), one obtains a collection of global points on \(X\), satisfying similar properties to those of CM points on modular curves. We refer to constructions of this flavour as _Heegner constructions_.
In the following examples, Heegner constructions are obtained from an a priori holomorphic map
\[\Phi:\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}\to X(\mathbb{C})\]
of Riemann surfaces that is shown to arise from a \(\mathbb{Q}\)-rational morphism of curves (an important example being the modular parametrisation of an elliptic curve). As a consequence of Theorem 2.1.1, the following properties are satisfied:
1. If \([\tau]\in\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}^{\mathcal{O}}\) then \(\Phi([\tau])\in X(H)\);
2. The action of the class group \(\mathrm{Cl}(\mathcal{O})\) is compatible with the Galois action on \(X(H)\) under a Shimura reciprocity law.
Below we will give three examples of Heegner constructions. The following table summarises how these examples fit into this perspective.
| | \(X\) | \(\Phi\) |
| --- | --- | --- |
| Singular Moduli | affine line \(\mathbb{A}^{1}/\mathbb{Q}\) | \(j\)-invariant |
| Elliptic Units | multiplicative group \(\mathbb{G}_{m}/\mathbb{Q}\) | modular unit |
| Heegner Points | elliptic curve \(E/\mathbb{Q}\) | modular parametrisation |

The setting of singular moduli is obtained by specialising Theorem 2.1.1 to the case of level \(\Gamma_{0}(1)\). Elliptic units are constructed from the evaluation of a modular unit at CM points. Roughly speaking, Heegner points are obtained by projecting CM cycles on the Jacobian of the modular curve onto its \(f\)-isotypic component for a newform \(f\) of weight \(2\).
_Remark 1_.: The constructions presented in SS3 will mimic the ones above by replacing \(\mathcal{H}_{\infty}\) with the Drinfeld \(p\)-adic upper half plane \(\mathcal{H}_{p}\) and imaginary quadratic with real quadratic points. The Heegner constructions in the RM setting are obtained via techniques of rigid analytic geometry and are expected to satisfy similar properties to those of classical Heegner constructions. However, there are significant differences. The analogues of the map \(\Phi\) described above will only be defined at real multiplication points. More crucially, the RM constructions are purely local, and no analogues of the modular curve are available in that setting.
We will now proceed to describe the inputs of these constructions in some detail.
#### 2.2.1 Singular moduli
Given a lattice \(\Lambda\) in \(\mathbb{C}\), the theory of elliptic curves over the complex numbers yields an identification between the complex torus \(\mathbb{C}/\Lambda\) and the points of an elliptic curve with equation
\[Y^{2}=4X^{3}-g_{2}(\Lambda)X-g_{3}(\Lambda)\]
where the constants are defined as
\[g_{2}(\Lambda)=60\sum_{\omega\in\Lambda-[0]}\frac{1}{\omega^{4}},\quad g_{3}( \Lambda)=140\sum_{\omega\in\Lambda-[0]}\frac{1}{\omega^{6}}.\]
For \(i=2,3\), and \(\tau\) in the upper half plane, the assignment \(g_{i}(\tau):=g_{i}(\mathbb{Z}+\tau\mathbb{Z})\) defines a modular form of weight \(2i\) and level \(\operatorname{SL}_{2}(\mathbb{Z})\). The complex functions
\[\Delta:=g_{2}^{3}-27g_{3}^{2},\qquad j:=1728\,\frac{g_{2}^{3}}{\Delta}\]
are the _modular discriminant_ and the _modular \(j\)-function_ respectively. The former is a nowhere vanishing cusp form of weight \(12\). The latter is a holomorphic \(\operatorname{SL}_{2}(\mathbb{Z})\)-invariant function on \(\mathcal{H}_{\infty}\). The map
\[j\colon\operatorname{SL}_{2}(\mathbb{Z})\backslash\mathcal{H}_{\infty} \to\mathbb{A}^{1}(\mathbb{C}) \tag{2}\]
extends to an isomorphism of Riemann surfaces between the compactifications of \(\operatorname{SL}_{2}(\mathbb{Z})\backslash\mathcal{H}_{\infty}\) and \(\mathbb{A}^{1}(\mathbb{C})\), respectively. The crucial property of the \(j\)-invariant is that it characterises elliptic curves up to isomorphism over an algebraically closed field, and captures the minimal field of definition of an elliptic curve. As a result, the coarse moduli space for elliptic curves \(Y_{0}(1)\) can be identified with the affine line via (2). For \(N\geq 1\), the image of the morphism
\[\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}\to\mathbb{A}^{2}(\mathbb{C})\,, \qquad\tau\mapsto(j(\tau),j(N\tau))\]
is cut out by a single equation given by \(\varphi_{N}\in\mathbb{Q}[X,Y]\), referred to as the \(N\)-th modular polynomial ([13, Section 7.5]). The corresponding plane curve is birational to the modular curve \(X_{0}(N)\).
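The \(j\)-function can be evaluated numerically from the \(q\)-expansions of the normalised Eisenstein series \(E_{4}=1+240\sum_{n\geq 1}\sigma_{3}(n)q^{n}\) and \(E_{6}=1-504\sum_{n\geq 1}\sigma_{5}(n)q^{n}\) (rescalings of \(g_{2}\) and \(g_{3}\)), via \(j=1728\,E_{4}^{3}/(E_{4}^{3}-E_{6}^{2})\). The following short Python sketch is an illustration added here, not part of the original survey:

```python
import cmath

def eisenstein(tau, const, power, terms=60):
    """Truncated q-expansion 1 + const * sum_{n>=1} sigma_power(n) * q^n,
    with q = exp(2*pi*i*tau); (240, 3) gives E_4 and (-504, 5) gives E_6."""
    q = cmath.exp(2j * cmath.pi * tau)
    total = 1.0
    for n in range(1, terms):
        sigma = sum(d ** power for d in range(1, n + 1) if n % d == 0)
        total += const * sigma * q ** n
    return total

def j_invariant(tau):
    e4 = eisenstein(tau, 240, 3)
    e6 = eisenstein(tau, -504, 5)
    # Delta is proportional to E_4^3 - E_6^2, so j = 1728 * E_4^3 / (E_4^3 - E_6^2)
    return 1728 * e4 ** 3 / (e4 ** 3 - e6 ** 2)

print(j_invariant(1j))                        # approximately 1728
print(j_invariant((1 + cmath.sqrt(-3)) / 2))  # approximately 0
```

The two printed values recover the classical evaluations \(j(i)=1728\) and \(j\big(\tfrac{1+\sqrt{-3}}{2}\big)=0\).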
**Definition 1**.: _Singular moduli_ are values of the \(j\)-function at points in \(\operatorname{SL}_{2}(\mathbb{Z})\backslash\mathcal{H}_{\infty}^{\mathrm{CM}}\).
For \(N=1\), Theorem 2.1.1 directly translates into properties of singular moduli. The general level case easily follows from the level \(1\) setting, together with properties of the modular polynomials. An important application is the following corollary.
**Corollary 1**.: _Let \(D<0\) be a fundamental discriminant, let \(\tau\in\mathcal{H}_{\infty}\) be a point of discriminant \(D\) and denote \(K=\mathbb{Q}(\tau)\). Then the Hilbert class field of \(K\) is \(K(j(\tau))\)._
This result (and its analogues for non-fundamental discriminants) can be interpreted as providing explicit generators for certain abelian extensions of imaginary quadratic fields. More generally, Hilbert's Twelfth Problem consists in finding explicit formulae for generators of all abelian extensions of number fields. The motivating example is the Kronecker-Weber Theorem. It states that all abelian extensions of \(\mathbb{Q}\) can be obtained by adjoining roots of unity, which can be viewed as values of the function \(z\mapsto e^{2\pi iz}\) at rational arguments. However, for an imaginary quadratic field \(K\), singular moduli do not suffice to generate the maximal abelian extension, as one would also need to consider values of coordinates of torsion points of elliptic curves with CM by orders in \(K\) (see [14, Chp. II, Cor. 5.7]).
#### 2.2.2 Elliptic units
Singular moduli give generators of certain abelian extensions of imaginary quadratic fields. For arithmetic applications, it can be useful to construct generators with controlled integrality properties, by replacing the \(j\)-function with suitable units on modular curves of higher level.
A modular unit \(u\) is a nowhere vanishing function on the open modular curve \(Y_{0}(N)\) extending meromorphically to the cusps. Examples of modular units can be constructed as products of pullbacks of the modular discriminant via projection maps from \(Y_{0}(N)\) to the modular curve of level 1. For a fixed level \(N\), let \(\Sigma_{N}\) be the set of divisors of \(N\) and consider a degree zero formal linear combination \(\alpha=\sum_{d\in\Sigma_{N}}m_{d}[d]\). Then the holomorphic function
\[u_{\alpha}(z):=\prod_{d}\Delta(dz)^{m_{d}}\]
is \(\Gamma_{0}(N)\)-invariant because \(\alpha\) has degree 0 and nowhere vanishing because \(\Delta\) is. The modular unit \(u_{\alpha}\) can be viewed as a morphism from the open modular curve of level \(\Gamma_{0}(N)\) to the multiplicative group.
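For instance (a standard example spelled out for concreteness): taking \(N=p\) prime and \(\alpha=[1]-[p]\) yields the modular unit
\[u_{\alpha}(z)=\frac{\Delta(z)}{\Delta(pz)},\]
which is nowhere vanishing on \(\mathcal{H}_{\infty}\) and \(\Gamma_{0}(p)\)-invariant, since \(\alpha\) has degree zero.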
**Definition 2**.: Given a divisor \(\alpha\) as above, _elliptic units_ are values of \(u_{\alpha}\) at points in \(\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}^{\mathrm{CM}}\).
Theorem 2.1.1 implies that elliptic units are defined over ring class fields of imaginary quadratic fields. However, a careful study of the integrality of modular units (cf. [10, Chp. 1, Lemma 1.1]) yields the following result.
**Theorem 2.2.1**.: _Let \(\mathcal{O}\) be an order in an imaginary quadratic field \(K\), of discriminant prime to \(N\). Let \(u_{\alpha}\) be a modular unit of level \(\Gamma_{0}(N)\) and let \(\tau\in\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}^{\mathcal{O}}\). Then_
\[u_{\alpha}(\tau)\in\mathcal{O}_{H}[1/N]^{\times},\]
_where \(\mathcal{O}_{H}\) is the ring of integers of the ring class field \(H\)._
_Remark 2_.: In fact, the values \(u_{\alpha}(\tau)\) for \(\tau\in\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}^{\mathcal{O}}\) can be used to produce genuine units in the ring class field of \(\mathcal{O}\). More precisely, one can show that for every \(\sigma\in\mathrm{Gal}(H/K)\), we have
\[(\sigma-1)u_{\alpha}(\tau)\in\mathcal{O}_{H}^{\times}.\]
#### 2.2.3 Heegner points
The construction of Heegner points on an elliptic curve \(E/\mathbb{Q}\) relies crucially on the _modularity_ of the elliptic curve \(E\). We briefly recall this notion.
Let \(f=\sum_{n\geq 1}a_{n}q^{n}\) be a newform of weight 2 and level \(\Gamma_{0}(N)\) for some \(N\geq 1\); suppose that its Fourier coefficients are integers. The Eichler-Shimura construction attaches to \(f\) an elliptic curve, as we now recall. Let \(\mathbb{T}_{2}(\Gamma_{0}(N))\) be the Hecke algebra acting on cusp forms of weight 2 and level \(\Gamma_{0}(N)\) over \(\mathbb{Z}\). Let \(I_{f}\) be the ideal given by the kernel of the ring morphism \(\mathbb{T}_{2}(\Gamma_{0}(N))\to\mathbb{Z}\) sending the Hecke operator \(T_{n}\) to \(a_{n}\). Let \(J_{0}(N)\) be the Jacobian of the modular curve \(X_{0}(N)\) of level \(\Gamma_{0}(N)\). It is an abelian variety of dimension equal to the genus of \(X_{0}(N)\), endowed with an action of \(\mathbb{T}_{2}(\Gamma_{0}(N))\). The quotient
\[E_{f}:=J_{0}(N)/I_{f}J_{0}(N)\]
is the elliptic curve attached to \(f\). The composition of the morphism \(X_{0}(N)\to J_{0}(N)\) given by \(P\mapsto(P)-(\infty)\) with the projection \(J_{0}(N)\to E_{f}\) yields a surjective \(\mathbb{Q}\)-rational morphism
\[\pi_{E_{f}}\colon X_{0}(N)\to E_{f}.\]
**Definition 3**.: Given an elliptic curve \(E\), we say that \(E\) admits a modular parametrisation if there exists a surjective \(\mathbb{Q}\)-rational morphism \(\pi_{E}\colon X_{0}(N)\to E\). When this is the case, we say that \(E\) is _modular_.
The work of Wiles [10], Taylor-Wiles [11] on Fermat's Last Theorem and its generalisations by Breuil, Conrad, Diamond, and Taylor [1] culminated in a full proof of the following theorem, formerly known as the Shimura-Taniyama Conjecture.
**Theorem 2.2.2**.: _Let \(E/\mathbb{Q}\) be an elliptic curve. Then \(E\) is modular._
_Remark 3_.: For an alternative formulation of modularity via \(L\)-functions see [1].
**Definition 4**.: Let \(E\) be a modular elliptic curve, and let \(\pi_{E}\colon X_{0}(N)\to E\) be its modular parametrisation. _Heegner points_ on \(E\) are images of points in \(\Gamma_{0}(N)\backslash\mathcal{H}_{\infty}^{\mathrm{CM}}\) under \(\pi_{E}\).
Theorem 2.1.1 implies that Heegner points are defined over ring class fields of imaginary quadratic fields. Their importance in relation to the Birch and Swinnerton-Dyer conjecture will be discussed in SS4.2.
## 3 Heegner constructions for real quadratic fields
Given the importance of singular moduli and more general Heegner constructions for algebraic number theory, it is desirable to have an analogue for real quadratic fields. While no direct analogue is available for real quadratic fields, a (conjectural) theory of "real multiplication" relying on \(p\)-adic methods was recently proposed by Darmon and Vonk in [13]. In this section we will present the theory of _rigid meromorphic cocycles_, which is the main new tool in this approach. A rigid meromorphic cocycle is a class in the first cohomology of the Ihara group \(\mathrm{SL}_{2}(\mathbb{Z}[1/p])\) acting on the non-zero rigid meromorphic functions on the Drinfeld \(p\)-adic upper half plane \(\mathcal{H}_{p}\). These rigid meromorphic cocycles can be evaluated at real quadratic points of \(\mathcal{H}_{p}\). Conjecturally their values display striking similarities with the classical singular moduli. In a similar vein, the theory of analytic theta cocycles, obtained by considering cocycles for the Ihara group with values in analytic functions on \(\mathcal{H}_{p}\) defined up to scalars, can be used to reinterpret constructions of Darmon-Dasgupta [1] and Darmon [1] of conjectural analogues of elliptic units and Heegner points for real quadratic fields.
Before introducing rigid meromorphic cocycles, we will review the relevant background on the Drinfeld \(p\)-adic upper half plane.
### 3.1 Drinfeld \(p\)-adic upper half plane
In search for an analogue of singular moduli for real quadratic fields, the first obstacle one runs into is the absence of real quadratic points on the complex upper half plane. However, there is a \(p\)-adic analogue of the complex upper half plane containing many real quadratic points,
the _Drinfeld upper half plane_, which we briefly describe. The Drinfeld upper half plane \(\mathcal{H}_{p}\) is a rigid analytic space1, whose \(\mathbb{C}_{p}\)-points are given by
Footnote 1: For an introduction to rigid analytic geometry see e.g. [11, 12].
\[\mathcal{H}_{p}=\mathbb{P}^{1}(\mathbb{C}_{p})\setminus\mathbb{P}^{1}(\mathbb{ Q}_{p}).\]
In the following we abuse notation and write \(\mathcal{H}_{p}\) for the rigid space and as well as its \(\mathbb{C}_{p}\)-points. The space \(\mathcal{H}_{p}\) has an admissible covering by an increasing sequence of open affinoid subdomains \(\mathcal{H}_{p}^{\leq n}\) which are constructed by removing balls of decreasing radius around all \(\mathbb{Q}_{p}\)-rational points. More explicitly, for each \(z\in\mathbb{P}^{1}(\mathbb{C}_{p})\) we fix homogeneous coordinates \(z=[z_{0}:z_{1}]\) such that \(\max\{|z_{0}|,|z_{1}|\}=1\). Then
\[\mathcal{H}_{p}^{\leq n}=\{[z_{0}:z_{1}]\in\mathbb{P}^{1}(\mathbb{C}_{p})|\, \operatorname{ord}_{p}(x_{0}z_{1}-x_{1}z_{0})\leq n\,\,\forall[x_{0}:x_{1}] \in\mathbb{P}^{1}(\mathbb{Q}_{p})\}.\]
For example, when \(n=0\), the open affinoid \(\mathcal{H}_{p}^{\leq 0}\) consists of the points \([z_{0}:z_{1}]\in\mathbb{P}^{1}(\mathbb{C}_{p})\) such that \(\operatorname{ord}_{p}(x_{0}z_{1}-x_{1}z_{0})\leq 0\) for any \([x_{0}:x_{1}]\in\mathbb{P}^{1}(\mathbb{Q}_{p})\). Hence \(\mathcal{H}_{p}^{\leq 0}\) is obtained from \(\mathbb{P}^{1}(\mathbb{C}_{p})\) by removing balls of radius \(1\) around the \(\mathbb{Q}_{p}\)-rational points. The latter are precisely the points that are mapped to the \(\mathbb{F}_{p}\)-rational points under the projection map \(\pi:\mathbb{P}^{1}(\mathbb{C}_{p})\to\mathbb{P}^{1}(\overline{\mathbb{F}}_{p})\), so that
\[\mathcal{H}_{p}^{\leq 0}=\mathbb{P}^{1}(\mathbb{C}_{p})\setminus\pi^{-1}( \mathbb{P}^{1}(\mathbb{F}_{p})).\]
For convenience let us include the following _ad hoc_ description of the rigid analytic functions on \(\mathcal{H}_{p}\), which the reader who is unfamiliar with rigid analytic geometry can take as a definition.
**Definition 5**.:
* A _rigid analytic function_ on \(\mathcal{H}_{p}^{\leq n}\) is a limit with respect to uniform convergence of rational functions on \(\mathbb{P}^{1}(\mathbb{C}_{p})\) with poles in \(\mathbb{P}^{1}(\mathbb{C}_{p})\setminus\mathcal{H}_{p}^{\leq n}\).
* A _rigid analytic function_ on \(\mathcal{H}_{p}\) is a function \(f:\mathcal{H}_{p}\to\mathbb{C}_{p}\), such that the restriction of \(f\) to \(\mathcal{H}_{p}^{\leq n}\) is a rigid analytic function for every \(n\geq 0\).
* A _rigid meromorphic function_ on \(\mathcal{H}_{p}\) is a ratio \(f=\frac{g}{h}\) of rigid analytic functions \(g\) and \(h\) with \(h\neq 0\).
We denote by \(\mathcal{A}\) the set of rigid analytic functions on \(\mathcal{H}_{p}\) and by \(\mathcal{M}\) the set of rigid meromorphic functions.
The key structural properties of the rigid space \(\mathcal{H}_{p}\) are that it is a smooth \(1\)-dimensional Stein space and that its algebra of rigid analytic functions \(\mathcal{A}\) is a reflexive Frechet space.
The group \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\) acts on \(\mathcal{H}_{p}\) by Mobius transformations, i.e., for \(\gamma=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\operatorname{GL}_{2}(\mathbb{Q}_{p})\) and \(z\in\mathcal{H}_{p}\), the action of \(\gamma\) on \(z\) is given by
\[\gamma\cdot z=\frac{az+b}{cz+d}.\]
As the center of \(\operatorname{GL}_{2}(\mathbb{Q}_{p})\) acts trivially, this also defines an action of \(\operatorname{PGL}_{2}(\mathbb{Q}_{p})\).
#### 3.1.1 Bruhat-Tits tree for \(\mathrm{PGL}_{2}(\mathbb{Q}_{p})\)
A useful tool for studying the Drinfeld upper half plane is the Bruhat-Tits tree \(\mathcal{T}\) attached to the group \(\mathrm{PGL}_{2}(\mathbb{Q}_{p})\). We recall its construction.
We call two \(\mathbb{Z}_{p}\)-lattices \(\Lambda\) and \(\Lambda^{\prime}\) in \(\mathbb{Q}_{p}^{2}\) equivalent if they are homothetic. This defines an equivalence relation on the set of all \(\mathbb{Z}_{p}\)-lattices in \(\mathbb{Q}_{p}^{2}\). The set of vertices of the tree \(\mathcal{T}\) is defined to be the set of equivalence classes of lattices:
\[V(\mathcal{T}):=\{\Lambda\subseteq\mathbb{Q}_{p}^{2}\ \ \mathbb{Z}_{p}\text{-lattice}\}/ \sim.\]
By definition two vertices \(v,v^{\prime}\in V(\mathcal{T})\) are connected by an edge in the graph \(\mathcal{T}\) if there are representatives \(v=[\Lambda]\) and \(v^{\prime}=[\Lambda^{\prime}]\), such that \(p\Lambda\subsetneq\Lambda^{\prime}\subsetneq\Lambda\). One can show that the graph defined by this is a homogeneous tree of degree \(p+1\). For example, Figure 1 shows a part of the Bruhat-Tits tree for \(p=2\).
The group \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\) acts transitively on the set of all \(\mathbb{Z}_{p}\)-lattices in \(\mathbb{Q}_{p}^{2}\), preserving the equivalence classes. This induces a transitive action of \(\mathrm{GL}_{2}(\mathbb{Q}_{p})\) and of \(\mathrm{PGL}_{2}(\mathbb{Q}_{p})\) on \(\mathcal{T}\).
_Example 1_.: The lattice \(\mathbb{Z}_{p}^{2}\) gives rise to a vertex called the _standard vertex_\(v_{0}=[\mathbb{Z}_{p}^{2}]\). The adjacent vertices of \(v_{0}\) are given by
\[\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\begin{pmatrix}1&0\\ 0&p\end{pmatrix}v_{0}\text{ and }\begin{pmatrix}1&0\\ \lambda&1\end{pmatrix}\begin{pmatrix}1&0\\ 0&p\end{pmatrix}v_{0}\text{ for }\lambda\in\mathbb{F}_{p}.\]
Moreover, if we consider the action of \(\mathrm{SL}_{2}(\mathbb{Z}[1/p])\) on \(\mathcal{T}\), the stabiliser of the standard vertex is the group \(\mathrm{SL}_{2}(\mathbb{Z})\) and the stabiliser of the edge \(e_{0}\) from \(v_{0}\) to \(\begin{pmatrix}1&0\\ 0&p\end{pmatrix}v_{0}\) is the congruence subgroup \(\Gamma_{0}(p)\).
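The \((p+1)\)-regularity can be made explicit by listing the matrices in Example 1. The following Python snippet is purely illustrative (the tuple-of-tuples encoding of \(2\times 2\) matrices is ad hoc); it prints representatives \(\gamma\left(\begin{smallmatrix}1&0\\ 0&p\end{smallmatrix}\right)\) carrying the standard vertex to each of its neighbours:

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as tuples of tuples."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

p = 3
diag = ((1, 0), (0, p))      # diag(1, p)
w = ((0, -1), (1, 0))
reps = [matmul(w, diag)] + [matmul(((1, 0), (lam, 1)), diag) for lam in range(p)]

print(f"{len(reps)} neighbours of v_0 (the tree is {p + 1}-regular)")
for m in reps:
    print(m)
```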
There is the so-called reduction map
\[\mathrm{red}:\mathcal{H}_{p}\to|\mathcal{T}|\]
from the \(p\)-adic upper half plane to the geometric realisation of \(\mathcal{T}\). We refer to [10] for details regarding its construction. Via this map, the tree \(\mathcal{T}\) serves as a nice combinatorial tool to study the Drinfeld upper half plane. Let us indicate a few features of the reduction map. It is equivariant for the action of \(\mathrm{PGL}_{2}(\mathbb{Q}_{p})\). Moreover we can recover the affinoids \(\mathcal{H}_{p}^{\leq n}\) in the admissible covering described above as inverse images
\[\mathcal{H}_{p}^{\leq n}=\mathrm{red}^{-1}(|\mathcal{T}^{\leq n}|),\]
where \(\mathcal{T}^{\leq n}\) is the subtree of \(\mathcal{T}\) consisting of all vertices and edges which have distance \(\leq n\) to the standard vertex \(v_{0}=[\mathbb{Z}_{p}^{2}]\).
Figure 1: The Bruhat–Tits tree of \(\mathrm{PGL}_{2}(\mathbb{Q}_{2})\)
Finally, the reduction map allows us to view \(\mathcal{H}_{p}\) as a tubular neighbourhood of the Bruhat-Tits tree. For example, the inverse image of the edge \(e_{0}\) is the annulus
\[\operatorname{red}^{-1}(e_{0})=\{z\in\mathbb{C}_{p}|1<|z|<p\}.\]
#### 3.1.2 RM points on the Drinfeld \(p\)-adic upper half plane
We introduce some notation for real quadratic points on \(\mathcal{H}_{p}\) which will later be used to define the evaluation of rigid meromorphic cocycles.
Let \(\tau\in\mathbb{C}_{p}\). We say that \(\tau\) is a real quadratic point of discriminant \(D\) if \([\tau:1]\) satisfies the homogeneous quadratic equation \(AX^{2}+BXY+CY^{2}=0\) with \(A,B,C\in\mathbb{Z}\) coprime integers satisfying \(D=B^{2}-4AC>0\), with \(D\) not a square.
Then \(K=\mathbb{Q}(\tau)\) is a real quadratic extension of \(\mathbb{Q}\) and the point \([\tau:1]\in\mathbb{P}^{1}(\mathbb{C}_{p})\) lies in \(\mathcal{H}_{p}\) if and only if the prime \(p\) does not split in \(K\). We refer to points arising in this way as _real multiplication_ points or RM points of \(\mathcal{H}_{p}\) and denote the set of these points as \(\mathcal{H}_{p}^{\text{RM}}\). For \(\tau\in\mathcal{H}_{p}^{\text{RM}}\), define
\[\mathcal{O}_{\tau}:=\left\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{M}_{2}(\mathbb{Z}[1/p]):a\tau+b=c\tau^{2}+d \tau\right\}.\]
Then \(\mathcal{O}_{\tau}\) is isomorphic to a \(\mathbb{Z}[1/p]\) order in the real quadratic field \(K=\mathbb{Q}(\tau)\) via the inclusion
\[\mathcal{O}_{\tau}\hookrightarrow K,\quad\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\mapsto c\tau+d.\]
Let \(\Gamma:=\operatorname{SL}_{2}(\mathbb{Z}[1/p])\) denote the Ihara group. Let \(\mathcal{O}_{\tau,1}^{\times}\) denote the elements in \(\mathcal{O}_{\tau}^{\times}\) of reduced norm one. The Dirichlet \(S\)-unit Theorem implies that the stabiliser
\[\operatorname{Stab}_{\Gamma}(\tau)\cong\mathcal{O}_{\tau,1}^{\times}\cong\{ \pm 1\}\times\langle\gamma_{\tau}\rangle\]
is up to torsion a free group of rank \(1\). It is generated by a fundamental unit \(\gamma_{\tau}\) of norm \(1\), called the _automorph_ of \(\tau\). Moreover the points of \(\mathcal{H}_{p}^{\text{RM}}\) are precisely those for which the stabiliser in \(\Gamma\) has rank \(1\). This feature is crucial to the evaluation of the rigid meromorphic cocycles defined in the subsequent sections. We note that the action of \(\Gamma\) on \(\mathcal{H}_{p}^{\text{RM}}\) need not preserve the discriminant, but it does preserve its prime-to-\(p\) part.
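To make this concrete, here is a standard example (added for illustration): let \(\tau=\frac{1+\sqrt{5}}{2}\), a root of \(x^{2}-x-1\), so that \((A,B,C)=(1,-1,-1)\) and \(D=5\). For any prime \(p\) inert or ramified in \(\mathbb{Q}(\sqrt{5})\), for instance \(p=2\), the point \(\tau\) lies in \(\mathcal{H}_{p}^{\text{RM}}\). The matrix
\[\gamma_{\tau}=\begin{pmatrix}2&1\\ 1&1\end{pmatrix}\in\Gamma\]
fixes \(\tau\), since \(\gamma_{\tau}\tau=\frac{2\tau+1}{\tau+1}=\tau\) by the relation \(\tau^{2}=\tau+1\), and under the embedding \(\mathcal{O}_{\tau}\hookrightarrow K\) it corresponds to the norm-one unit \(c\tau+d=\tau+1=\frac{3+\sqrt{5}}{2}=\varepsilon^{2}\), where \(\varepsilon=\frac{1+\sqrt{5}}{2}\) is the fundamental unit of \(\mathbb{Q}(\sqrt{5})\).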
### 3.2 Group actions, curves and \(p\)-adic theta functions
We review the theory of \(p\)-adic theta functions arising in the context of \(p\)-adic uniformisation of Shimura curves. While the construction of rigid cocycles does not fit in this setting, the cocycles arising in SS3.3 are obtained via similar methods, after an appropriate degree shift in cohomology.
Let \(\Gamma_{0}\subset\operatorname{SL}_{2}(\mathbb{Q}_{p})\) be any subgroup. We say that \(\Gamma_{0}\) acts discretely on \(\mathcal{H}_{p}\) if the orbit \(\Gamma_{0}\tau\) of each element \(\tau\in\mathcal{H}_{p}\) intersects each of the affinoids \(\mathcal{H}_{p}^{\leq n}\) only in finitely many points.
Note that the groups \(\operatorname{SL}_{2}(\mathbb{Z})\), \(\operatorname{SL}_{2}(\mathbb{Z}[1/p])\) and \(\operatorname{SL}_{2}(\mathbb{Z}_{p})\) all act non-discretely on \(\mathcal{H}_{p}\). To see this recall that \(\operatorname{SL}_{2}(\mathbb{Z})\) is contained in the stabiliser of the standard vertex \(v_{0}\) of the Bruhat-Tits
tree and that any element \(\tau\in\mathcal{H}_{p}^{\leq 0}\) is mapped to \(v_{0}\) via the reduction map. So the whole orbit \(\operatorname{SL}_{2}(\mathbb{Z})\tau\) is contained in \(\mathcal{H}_{p}^{\leq 0}=\operatorname{red}^{-1}(\mathcal{T}^{\leq 0})\) and the set \(\operatorname{SL}_{2}(\mathbb{Z})\tau\cap\mathcal{H}_{p}^{\leq 0}\) contains infinitely many elements.
The upper half plane \(\mathcal{H}_{p}\) is of major significance for arithmetic geometry as it can be used to construct families of algebraic curves (so-called Mumford curves) which uniformise certain Shimura curves \(p\)-adically. These curves arise as quotients of \(\mathcal{H}_{p}\) by discrete group actions. We refer to [10, SS2.4] and references therein for details. Here we only mention the following class of examples.
_Example 2_.: Let \(B\) be the definite quaternion algebra \(\mathbb{Q}[i,j]\subset\mathbb{H}\) inside the Hamilton quaternions and let \(R=\mathbb{Z}[i,j,\frac{i+j+k+1}{2}]\) be a maximal order. Choose an odd prime \(p\) at which \(B\) splits and consider the group \(\Gamma_{B}=(R[1/p])_{1}^{\times}\) of units \(\gamma\in R[1/p]^{\times}\) of reduced norm \(1\). After choosing an isomorphism \((B\otimes_{\mathbb{Q}}\mathbb{Q}_{p})^{\times}\cong\operatorname{GL}_{2}( \mathbb{Q}_{p})\) the group \(\Gamma_{B}\) embeds into \(\operatorname{SL}_{2}(\mathbb{Q}_{p})\) and acts discretely on \(\mathcal{H}_{p}\). Let \(\Gamma^{\prime}\subset\Gamma_{B}\) be a subgroup of finite index which has no fixed points on \(\mathcal{H}_{p}\). Then the quotient space \(X_{\Gamma^{\prime}}=\Gamma^{\prime}\backslash\mathcal{H}_{p}\) is a rigid analytic space which admits a model as a Shimura curve.
Note that a definite quaternion algebra does not admit an embedding of a real quadratic field. Essentially due to this fact the quotients \(X_{\Gamma^{\prime}}\) are not so relevant for a theory of real multiplication. As we will see below, a better setup for these purposes is to consider the indefinite quaternion algebra \(B=\operatorname{M}_{2}(\mathbb{Q})\) and the (non-discrete) actions of groups like \(\operatorname{SL}_{2}(\mathbb{Z})\) and \(\operatorname{SL}_{2}(\mathbb{Z}[1/p])\) on \(\mathcal{H}_{p}\).
Nevertheless we want to briefly discuss one aspect from the theory of Mumford curves, the construction of \(p\)-adic theta functions. These are important tools in the study of the Jacobian of the compactification of a Mumford curve. Key ideas of their construction can be modified to construct interesting functions also in situations where a non-discrete group action is involved. Assume that \(\Gamma_{0}\subset\operatorname{SL}_{2}(\mathbb{Q}_{p})\) is a subgroup which acts discretely and without fixed points on \(\mathcal{H}_{p}\). The \(p\)-adic theta functions for \(\Gamma_{0}\) are meromorphic functions on \(\mathcal{H}_{p}\), which are invariant under \(\Gamma_{0}\) up to multiplicative scalars and which can be constructed as follows.
For an element \(w\in\mathbb{P}^{1}(\mathbb{C}_{p})\), define the rational functions \(t_{w}\) on \(\mathbb{P}^{1}(\mathbb{C}_{p})\) by
\[t_{w}(z)=\begin{cases}z-w,&\text{if }\ |w|\leq 1,\\ z/w-1,&\text{if }\ |w|>1,\\ 1,&\text{if }w=\infty.\end{cases}\]
Note that for two distinct elements \(w^{+},w^{-}\) in \(\mathcal{H}_{p}\) and \(\gamma\in\Gamma_{0}\) the quotient function
\[\frac{t_{w^{+}}(\gamma z)}{t_{w^{-}}(\gamma z)}\]
has a simple zero at \(\gamma^{-1}w^{+}\) and a simple pole at \(\gamma^{-1}w^{-}\), as does the function \(\frac{t_{\gamma^{-1}w^{+}}(z)}{t_{\gamma^{-1}w^{-}}(z)}\). Therefore they are equal up to a constant.
Now consider the degree zero divisor \(\Delta=w^{+}-w^{-}\). Then the product
\[f_{\Delta}(z)=\prod_{\gamma\in\Gamma_{0}}\frac{t_{\gamma w^{+}}(z)}{t_{\gamma w ^{-}}(z)}\]
converges to a rigid meromorphic function on \(\mathcal{H}_{p}\). (For more details, see Section 2.2 in [4].) By the observation above, the function \(f_{\Delta}\) is \(\Gamma_{0}\)-invariant up to multiplication by a constant in \(\mathbb{C}_{p}\). Hence, it defines an element of \((\mathcal{M}^{\times}/\mathbb{C}_{p}^{\times})^{\Gamma_{0}}=H^{0}(\Gamma_{0}, \mathcal{M}^{\times}/\mathbb{C}_{p}^{\times})\). The function \(f_{\Delta}\) is called the \(p\)_-adic theta function_ associated to the degree zero divisor \(\Delta\).
### 3.3 Rigid meromorphic cocycles
One way of looking at the \(j\)-function is as an \(\operatorname{SL}_{2}(\mathbb{Z})\)-invariant function on the complex upper half plane \(\mathcal{H}_{\infty}\), i.e., an element of \(H^{0}(\operatorname{SL}_{2}(\mathbb{Z}),\operatorname{Hol}(\mathcal{H}_{ \infty}))\), where \(\operatorname{Hol}(\mathcal{H}_{\infty})\) denotes the holomorphic functions on \(\mathcal{H}_{\infty}\). Now consider \(\Gamma:=\operatorname{SL}_{2}(\mathbb{Z}[1/p])\) acting on \(\mathcal{H}_{p}\). In search for direct analogues of \(j\) in the \(p\)-adic setting, one might first naturally want to consider the groups \(H^{0}(\Gamma,\mathcal{A})\) or \(H^{0}(\Gamma,\mathcal{M})\). However it follows essentially from the Weierstrass preparation theorem that any \(\Gamma\)-invariant rigid analytic function must be constant (cf. [4, Lemma 1.9]), hence \(H^{0}(\Gamma,\mathcal{A})\cong\mathbb{C}_{p}\cong H^{0}(\Gamma,\mathcal{M})\) and so these groups do not contain interesting functions. Considering the corresponding cohomology groups in degree one on the other hand turns out to be a successful strategy, even more so if one furthermore switches to the multiplicative theory. This leads to the following definition:
**Definition 6**.:
* A _rigid meromorphic_ (resp. _analytic_) _cocycle_ is an element in \(H^{1}(\Gamma,\mathcal{M}^{\times})\) (resp. in \(H^{1}(\Gamma,\mathcal{A}^{\times})\)).
* A _rigid meromorphic_ (resp. _analytic_) _theta cocycle_ is an element in \(H^{1}(\Gamma,\mathcal{M}^{\times}/\mathbb{C}_{p}^{\times})\) (resp. in \(H^{1}(\Gamma,\mathcal{A}^{\times}/\mathbb{C}_{p}^{\times})\)).
Let \(\Gamma_{\infty}\subset\Gamma\) be the subgroup of upper triangular matrices. A rigid meromorphic cocycle is called _parabolic_ if its restriction to \(\Gamma_{\infty}\) is trivial. It is called _quasi-parabolic_ if its restriction to \(\Gamma_{\infty}\) lies in the image of \(H^{1}(\Gamma_{\infty},\mathbb{C}_{p}^{\times})\). The groups of such cohomology classes are denoted by \(H^{1}_{\operatorname{par}}(\Gamma,\mathcal{M}^{\times})\) and \(H^{1}_{f}(\Gamma,\mathcal{M}^{\times})\), respectively. One can show that any cohomology class in \(H^{1}_{f}(\Gamma,\mathcal{M}^{\times})\) has a unique representative, whose values on \(\Gamma_{\infty}\) consist of constant functions.
_Remark 4_.: One reason for considering the multiplicative group \(\mathcal{M}^{\times}\) as coefficients rather than the additive group \(\mathcal{M}\) is the connection to the theory of modular symbols and so-called rigid meromorphic period functions. More precisely, the subgroup \(H^{1}_{\operatorname{par}}(\Gamma,\mathcal{M}^{\times})\) can be identified with the group of \(\Gamma\)-invariant modular symbols \(\mathcal{MS}^{\Gamma}(\mathcal{M}^{\times})\) ([4, Section 1]). This connection to modular symbols is useful for establishing structural results. It also allows one to produce many interesting elements in \(H^{1}(\Gamma,\mathcal{M}^{\times})\). Contrary to this, the group \(H^{1}(\Gamma,\mathcal{M})\) contains no _parabolic_ elements ([4, Remark 1.8]).
In practice it is easier to write down examples of theta cocycles (we will give a few of these below). Note that there is a natural map \(H^{1}(\Gamma,\mathcal{M}^{\times})\to H^{1}(\Gamma,\mathcal{M}^{\times}/ \mathbb{C}_{p}^{\times})\) and one would like to lift theta cocycles to rigid meromorphic cocycles. Unfortunately this is in general not possible as there is an obstruction to lifting. What is possible though is to lift the restriction of a theta cocycle to the group \(\operatorname{SL}_{2}(\mathbb{Z})\) to an element in \(H^{1}(\operatorname{SL}_{2}(\mathbb{Z}),\mathcal{M}^{\times})\), as \(H^{2}(\operatorname{SL}_{2}(\mathbb{Z}),\mathbb{C}_{p}^{\times})=0\). In some situations this is good enough.
#### 3.3.1 Examples
Let us discuss some examples.
_Example 3_ (The trivial cocycle).: Fix a base point \(\xi\in\mathbb{P}^{1}(\mathbb{Q}_{p})\). For any \(\gamma\in\Gamma\), define the rational function
\[J_{\mathrm{triv}}(\gamma)(z):=\frac{z-\gamma\xi}{z-\xi}.\]
Up to a scalar this function is uniquely determined as the rational function with divisor \((\gamma\xi)-(\xi)\). Then for two elements \(\gamma_{1},\gamma_{2}\in\Gamma\), the function \(J_{\mathrm{triv}}(\gamma_{1}\gamma_{2})\) is the function with divisor \((\gamma_{1}\gamma_{2}\xi)-(\xi)=(\gamma_{1}\gamma_{2}\xi)-(\gamma_{1}\xi)+(\gamma_{1}\xi)-(\xi)\). Hence up to a scalar, \(J_{\mathrm{triv}}(\gamma_{1}\gamma_{2})\) is the same as \(J_{\mathrm{triv}}(\gamma_{1})\cdot\gamma_{1}J_{\mathrm{triv}}(\gamma_{2})\). So we obtain a rigid analytic theta cocycle \(J_{\mathrm{triv}}\in H^{1}(\Gamma,\mathcal{A}^{\times}/\mathbb{C}_{p}^{\times})\).
_Example 4_.: (Theta cocycles attached to RM points). This example is inspired by a similar construction in the theory of rational modular cocycles (see [4, Section 1.4]). As we have observed above, \(\Gamma\) does not act discretely on \(\mathcal{H}_{p}\). However, it does act discretely on the product \(\mathcal{H}_{p}\times\mathcal{H}_{\infty}\) of the Drinfeld upper half plane and the complex upper half plane \(\mathcal{H}_{\infty}\). We can use this to construct examples of theta cocycles as follows.
For a tuple \((r,s)\in\mathbb{P}^{1}(\mathbb{Q})^{2}\) and \(w\) real quadratic, let \(w^{\prime}\) be its algebraic conjugate. The geodesic from \(r\) to \(s\) in \(\mathcal{H}_{\infty}\) cuts \(\overline{\mathcal{H}}_{\infty}=\mathcal{H}_{\infty}\cup\mathbb{R}\cup\{\infty\}\) into two connected components, say "on the right hand side" of the geodesic is the \(-\)-component \(\mathcal{H}_{\infty}^{-}\) and on the left hand side is the \(+\)-component \(\mathcal{H}_{\infty}^{+}\) (see Figure 2). This allows us to define the following symbol
\[(r,s)(w,w^{\prime}):=\begin{cases}0,\,\text{if $w,w^{\prime}$ are in the same connected component}\\ 1,\,\text{if $w\in\mathcal{H}_{\infty}^{+},\ w^{\prime}\in\mathcal{H}_{\infty}^{-}$} \\ -1,\,\text{if $w\in\mathcal{H}_{\infty}^{-},\ w^{\prime}\in\mathcal{H}_{\infty}^{+}$.} \end{cases} \tag{3}\]
We refer to it as the signed intersection number of the geodesics from \(r\) to \(s\) and from \(w\) to \(w^{\prime}\). For example the intersection number of the geodesics in Figure 2 is \(-1\).
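For instance, for \((r,s)=(0,\infty)\) the corresponding geodesic is the imaginary axis. Taking \(w=1+\sqrt{2}\) and \(w^{\prime}=1-\sqrt{2}\), the two conjugates lie on opposite sides of it, so \((0,\infty)(w,w^{\prime})=\pm 1\), with the sign fixed by the orientation convention of Figure 2; for \(w=3+\sqrt{2}\) and \(w^{\prime}=3-\sqrt{2}\) both conjugates lie in the same component and the symbol vanishes.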
For an RM point \(\tau\in\mathcal{H}_{p}^{RM}\), define the function
\[J_{\tau}\colon\Gamma \to\mathcal{M}^{\times}\] \[\gamma \mapsto\prod_{w\in\Gamma\tau}t_{w}(z)^{(\gamma\infty,\infty)(w,w ^{\prime})},\]
where \(t_{w}(z)\) is as in Section 3.2.
Figure 2: Intersection of geodesics
As it turns out, the set of elements \(w\in\Gamma\tau\) such that \((\gamma\infty,\infty)(w,w^{\prime})\neq 0\) is a discrete subset of \(\mathcal{H}_{p}\). One can show that the product converges, that indeed \(J_{\tau}(\gamma)\) defines a non-zero rigid meromorphic function and that furthermore \(J_{\tau}\) defines a class in \(H^{1}(\Gamma,\mathcal{M}^{\times}/\mathbb{C}_{p}^{\times})\).
_Example 5_ (The winding cocycle).: Let us also mention here the so-called _winding cocycle_ from [14, Section 2.3], which is an example of a rigid analytic theta cocycle. To define it choose a base point \(\xi=(\xi_{p},\xi_{\infty})\in\mathcal{H}_{p}\times\mathcal{H}_{\infty}\), such that \(\xi_{p}\in\mathcal{H}_{p}^{\leq 0}\) and \(\xi_{\infty}\) does not lie in any geodesic in the \(\Gamma\)-orbit of \([0,\infty]\). Denote by \(\Sigma\) the \(\Gamma\)-orbit of the pair \((0,\infty)\) in \(\mathbb{P}^{1}(\mathbb{Q})^{2}\) and define for every \(\gamma\in\Gamma\) and \(z\in\mathcal{H}_{p}\)
\[J_{(0,\infty)}^{\xi}(\gamma)(z):=\prod_{(r,s)\in\Sigma}\left(\frac{\xi_{p}-r}{\xi_{p}-s}\cdot\frac{z-s}{z-r}\right)^{(r,s)(\xi_{\infty},\gamma\xi_{\infty})},\]
using the usual conventions when some of the points are \(\infty\) and where \((r,s)(\xi_{\infty},\gamma\xi_{\infty})\) denotes the signed intersection number as in (3). The function \(J_{(0,\infty)}^{\xi}(\gamma)\) converges to a rigid analytic function on \(\mathcal{H}_{p}\) and \(\gamma\mapsto J_{(0,\infty)}^{\xi}(\gamma)\) satisfies the cocycle condition up to scalars in \(\mathbb{C}_{p}^{\times}\) (cf. [14, Proposition 2.5]). The corresponding class in \(H^{1}(\Gamma,\mathcal{A}^{\times}/\mathbb{C}_{p}^{\times})\) is independent of the choice of base point [14, Prop. 2.6]; it is called the _winding cocycle_ and denoted by \(J_{(0,\infty)}\).
#### 3.3.2 Hecke module structure of rigid meromorphic cocycles
On the space \(H^{1}(\Gamma,\mathcal{M}^{\times})\) of rigid meromorphic cocycles and on the space \(H^{1}(\Gamma,\mathcal{M}^{\times}/\mathbb{C}_{p}^{\times})\) of rigid meromorphic theta cocycles one can define Hecke operators \(T_{n}\) for any \(n\geq 1\). This is done by decomposing a relevant union of double cosets into cosets and following a recipe of Shimura. We refer to [14, SS2.4] for details.
**Definition 7**.: Given a quasi-parabolic rigid meromorphic cocycle \(J\), its _rigid meromorphic period function_ is the value at \(S:=\left(\begin{smallmatrix}0&-1\\ 1&0\end{smallmatrix}\right)\in\Gamma\) of the quasi-parabolic representative of \(J\).
Let us define the abstract Hecke algebra as \(\mathbb{T}:=\mathbb{Z}[T_{n},\ n\geq 1]\). One can then prove the following structural result.
**Theorem 3.3.1** ([14, Theorem 1 and Remark 2]).: _The group \(H^{1}(\Gamma,\mathcal{M}^{\times})\) is of infinite rank. Any finite rank \(\mathbb{T}\)-stable submodule of \(H^{1}_{f}(\Gamma,\mathcal{M}^{\times})\) is contained in \(H^{1}_{f}(\Gamma,\mathcal{A}^{\times})\)._
Analytic theta cocycles behave differently. As Hecke modules, they are closely related to modular forms. In [14], Darmon and Vonk prove a classification result using ideas of Stevens and Schneider-Teitelbaum. We briefly review the key points (see also [14, Section 3.1]).
Consider the annulus \(U:=\{z\in\mathbb{C}_{p}|\ 1<|z|<p\}\). Its stabiliser in \(\Gamma\) is \(\Gamma_{0}(p)\). Denote \(\mathrm{dlog}f:=\frac{f^{\prime}(z)}{f(z)}dz\). The group homomorphism
\[\mathcal{A}^{\times}\to\mathbb{C}_{p},\qquad f\mapsto\mathrm{res}_{U}( \mathrm{dlog}f),\]
where \(\mathrm{res}_{U}\) denotes the \(p\)-adic annular residue along \(U\), is \(\Gamma_{0}(p)\)-equivariant and trivial on constants. It can be shown that it takes values in \(\mathbb{Z}\). This induces a map on cohomology
\[\delta_{U}:=\mathrm{res}_{U}\,\mathrm{dlog}\colon H^{1}(\Gamma,\mathcal{A}^{ \times}/\mathbb{C}_{p}^{\times})\to H^{1}(\Gamma_{0}(p),\mathbb{Z}).\]
**Theorem 3.3.2**.: _[_4_, Section 3]__. The map_
\[\delta_{U}\otimes 1:H^{1}(\Gamma,\mathcal{A}^{\times}/\mathbb{C}_{p}^{\times}) \otimes\mathbb{Q}\to H^{1}(\Gamma_{0}(p),\mathbb{Z})\otimes\mathbb{Q}\]
_is surjective and has a \(1\)-dimensional kernel, generated by the analytic theta cocycle \(J_{\mathrm{triv}}\)._
_The map \(\delta_{U}\) admits a Hecke-equivariant section_
\[\mathrm{ST}^{\times}:H^{1}(\Gamma_{0}(p),\mathbb{Z})\to H^{1}(\Gamma, \mathcal{A}^{\times}/\mathbb{C}_{p}^{\times}).\]
The section \(\mathrm{ST}^{\times}\) above is called the multiplicative _Schneider-Teitelbaum lift_, and one can use it to classify the analytic theta cocycles as follows.
**Theorem 3.3.3**.: _[_4_, Section 3]__. The space \(H^{1}(\Gamma,\mathcal{A}^{\times}/\mathbb{C}_{p}^{\times})\otimes\mathbb{Q}\) is of dimension \(2g+2\), where \(g\) is the genus of the modular curve \(X_{0}(p)\). The Hecke action factors through the faithful Hecke algebra of modular forms \(M_{2}(\Gamma_{0}(p))\)._
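For instance, for \(p\in\{2,3,5,7,13\}\) the modular curve \(X_{0}(p)\) has genus \(0\), so the space \(H^{1}(\Gamma,\mathcal{A}^{\times}/\mathbb{C}_{p}^{\times})\otimes\mathbb{Q}\) is \(2\)-dimensional; by Theorem 3.3.2 it is spanned by \(J_{\mathrm{triv}}\) together with the Schneider-Teitelbaum lift of a generator of \(H^{1}(\Gamma_{0}(p),\mathbb{Z})\otimes\mathbb{Q}\), for instance the Dedekind-Rademacher cocycle \(J_{\mathrm{DR}}\) introduced in SS3.4.3 below. For \(p=11\), the genus of \(X_{0}(11)\) is \(1\) and the dimension is \(4\).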
### Heegner constructions
The crucial motivation for the study of rigid meromorphic cocycles and analytic theta cocycles is the construction of conjectural analogues of Heegner objects for real quadratic fields. In analogy with the Heegner constructions, they can be viewed as images of maps
\[\Phi\colon\Gamma\backslash\mathcal{H}_{p}^{\mathrm{RM}}\to X(\mathbb{C}_{p})\]
for the group schemes \(X/\mathbb{Q}\) appearing in SS2.2.
The corresponding images are expected to be global. Note, however, that in the RM setting these maps are defined exclusively at RM points of \(\mathcal{H}_{p}\), via the evaluation of rigid meromorphic or theta cocycles, which we now describe.
#### 3.4.1 Evaluation at RM points
We now define the evaluation of quasi-parabolic rigid meromorphic cocycles at RM points. Recall that for \(\tau\in\mathcal{H}_{p}^{\mathrm{RM}}\), \(\gamma_{\tau}\in\Gamma\) denotes the automorph of \(\tau\) as defined in 3.1.2.
**Definition 8**.: Let \([J]\in H_{f}^{1}(\Gamma,\mathcal{M}^{\times})\) be a quasi-parabolic rigid meromorphic cocycle. Consider a representative \(J\in[J]\) given by a map \(J\colon\Gamma\to\mathcal{M}^{\times}\) such that \(J(\Gamma_{\infty})\subset\mathbb{C}_{p}^{\times}\) and let \(\tau\in\mathcal{H}_{p}^{\mathrm{RM}}\). Define the _evaluation of \(J\) at \(\tau\)_ as
\[J[\tau]:=J(\gamma_{\tau})(\tau)\in\mathbb{C}_{p}\cup\{\infty\}.\]
_Remark 5_.: One can check that this is constant on \(\Gamma\) orbits.
_Remark 6_.: Defining the evaluation of a theta cocycle at an RM point is also possible but requires some extra work involving the restriction to (a conjugate of) \(\mathrm{SL}_{2}(\mathbb{Z})\). We refer to Section 2.3 of [4] for the definition. When the theta cocycle lifts to a rigid meromorphic cocycle, the values essentially agree. When the cocycle does not lift, the values are expected to be transcendental in general, but are still of arithmetic interest as they are related to Gross-Stark units (see Conj. 2) and Stark-Heegner points (see Conj. 3) below.
#### 3.4.2 Singular moduli for real quadratic fields
Let \(\tau\in\mathcal{H}_{p}^{\mathrm{RM}}\) and write \(H_{\tau}\) for the narrow ring class field corresponding to \(\mathcal{O}_{\tau}\). For a quasi-parabolic rigid meromorphic cocycle \(J\) we let \(j\) be its associated period function (see Def. 7). Then we define a field
\[H_{J}=H_{j}:=\mathrm{Compositum}_{j(\tau)=\infty}(H_{\tau}),\]
which is a finite extension of \(\mathbb{Q}\).
**Definition 9**.: Let \(\tau\in\Gamma\backslash\mathcal{H}_{p}^{\mathrm{RM}}\). For a quasi-parabolic rigid meromorphic cocycle \(J\), we say that \(J[\tau]\) is a _real quadratic singular modulus_.
The following conjecture, due to Darmon and Vonk, justifies the analogy with the values of the \(j\)-function at CM points.
**Conjecture 1** ([4]).: _If \(J\in H^{1}_{f}(\Gamma,\mathcal{M}^{\times})\), then \(J[\tau]\) is algebraic and lies in the compositum of \(H_{J}\) and \(H_{\tau}\)._
Note that the analogy with classical singular moduli is imperfect, because the choice of cocycle \(J\) impacts the field of definition of its value. A perhaps more appropriate analogy is that these values should be thought of as multiplicative analogues of differences of singular moduli, lying in composita of the corresponding ring class fields.
#### 3.4.3 Elliptic units for real quadratic fields
Consider the unique (normalised) Eisenstein eigenform of weight \(2\) and level \(\Gamma_{0}(p)\) with Fourier expansion
\[E_{2}^{(p)}=\frac{p-1}{24}+\sum_{n\geq 1}\left(\sum_{d\mid n,\,p\nmid d}d\right)q^{n}.\]
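Note that this expansion has \(a_{1}=1\), \(a_{\ell}=1+\ell\) for every prime \(\ell\neq p\), and \(a_{p}=1\); the coefficient at \(p\) reflects the removal of the Euler factor at \(p\), as expected for a form of level \(\Gamma_{0}(p)\).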
For a choice of base point \(z_{0}\in\mathcal{H}_{\infty}\), the function computing path integrals of the differential attached to the Eisenstein series
\[\gamma\mapsto\int_{z_{0}}^{\gamma z_{0}}\frac{1}{2\pi i}E_{2}^{(p)}(z)dz\]
for \(\gamma\in\Gamma_{0}(p)\) takes integer values and transforms as a (non-parabolic) cocycle \(\varphi_{\mathrm{DR}}\in H^{1}(\Gamma_{0}(p),\mathbb{Z})\). Its values can be described explicitly in terms of Dedekind sums (see [4, SS2.5]).
The Schneider-Teitelbaum lift
\[J_{\mathrm{DR}}:=\mathrm{ST}^{\times}(\varphi_{DR})\]
of \(\varphi_{DR}\) to \(H^{1}(\Gamma,\mathcal{A}^{\times}/\mathbb{C}_{p}^{\times})\) is called the _Dedekind-Rademacher theta cocycle_.
**Definition 10**.: Let \(\tau\in\Gamma\backslash\mathcal{H}_{p}^{\mathrm{RM}}\) be a point of discriminant \(D\) prime to \(p\). The value \(J_{\mathrm{DR}}[\tau]\) is called the _real quadratic elliptic unit_ attached to \(\tau\).
Its values at RM points in \(\mathcal{H}_{p}\) should be thought of as analogues of the elliptic units defined in SS2, as the following conjecture suggests.
**Conjecture 2**.: _[_10_, Conj. 2.14]_ _Let \(\tau\in\mathcal{H}_{p}\) be a real quadratic point of discriminant \(D\) prime to \(p\) and let \(H_{\tau}\) be the narrow ring class field of the order defined by \(\tau\). Then the real quadratic elliptic unit attached to \(\tau\) satisfies_
\[J_{\mathrm{DR}}[\tau]\in\mathcal{O}_{H_{\tau}}[1/p]^{\times}.\]
Note that unlike in the CM setting, these values are expected to be \(p\)-units rather than genuine units in the Hilbert class field. The analogy with the construction of elliptic units is perhaps not obvious on the surface. However, the Eisenstein series \(E_{2}^{(p)}\) whose periods appear in the definition above can be viewed as the logarithmic derivative of \(\Delta(pz)/\Delta(z)\), a typical example of the modular unit featuring in SS2.2.2.
#### 3.4.4 Stark-Heegner points
Let \(f\) be a newform of weight \(2\) and level \(\Gamma_{0}(p)\), with Fourier coefficients valued in \(\mathbb{Z}\). Consider the differential form \(\omega_{f}=2\pi if(z)dz\), and let
\[\omega_{f}^{+}=\omega_{f}+\overline{\omega}_{f},\text{ and }\omega_{f}^{-}= \omega_{f}-\overline{\omega}_{f}.\]
be its real and imaginary parts, respectively. Given \(z_{0}\in\mathcal{H}_{\infty}\), the maps \(\tilde{\varphi}_{f}^{\pm}\colon\Gamma_{0}(p)\to\mathbb{R}\) defined by
\[\gamma\mapsto\int_{z_{0}}^{\gamma z_{0}}\omega_{f}^{\pm},\]
can be rescaled by suitable periods \(\Omega_{f}^{\pm}\) in such a way that the pair of cocycles \(\varphi_{f}^{\pm}:=1/\Omega_{f}^{\pm}\tilde{\varphi}_{f}^{\pm}\) take values in \(\mathbb{Z}\). The Schneider-Teitelbaum lifts of the cocycles defined above
\[J_{f}^{\pm}:=\mathrm{ST}^{\times}(\varphi_{f}^{\pm})\]
are called the _elliptic theta cocycles_ attached to \(f\).
By the Eichler-Shimura construction reviewed in SS2.2.3, a newform \(f\in S_{2}(\Gamma_{0}(p))\) as above corresponds to an elliptic curve \(E:=E_{f}\) with multiplicative reduction at \(p\). For such an elliptic curve, Tate showed that its \(\mathbb{C}_{p}\)-valued points can be uniformised as
\[\Psi_{\mathrm{Tate}}\colon\mathbb{C}_{p}^{\times}/q_{E}^{\mathbb{Z}}\xrightarrow {\sim}E(\mathbb{C}_{p})\]
for some parameter \(q_{E}\in\mathbb{C}_{p}^{\times}\) satisfying \(|q_{E}|<1\).
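Concretely, the Tate parameter can be read off from the \(j\)-invariant: one has \(j(E)=q_{E}^{-1}+744+196884\,q_{E}+\cdots\), the familiar \(q\)-expansion of \(j\), and the condition \(|q_{E}|<1\) reflects the fact that \(|j(E)|_{p}>1\) for an elliptic curve with multiplicative reduction at \(p\).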
**Definition 11**.: Let \(\tau\in\Gamma\backslash\mathcal{H}_{p}^{\mathrm{RM}}\) be a point of discriminant \(D\) prime to \(p\). A _Stark-Heegner point_ for \(E\) attached to \(\tau\) is
\[P_{\tau}^{\pm}:=\Psi_{\mathrm{Tate}}(J_{f}^{\pm}[\tau])\in E(\mathbb{C}_{p}).\]
The following conjecture, originally formulated by Darmon in [10], motivates the conjectural analogy between Stark-Heegner points and the classical Heegner points in CM theory.
**Conjecture 3** ([10],[11],[11]).: _Let \(\tau\in\Gamma\backslash\mathcal{H}_{p}^{\mathrm{RM}}\) be a point of discriminant \(D\) prime to \(p\). Denote \(H_{\tau}\) the narrow ring class field of the order \(\mathcal{O}_{\tau}\) attached to \(\tau\). Then_
\[P_{\tau}^{\pm}\in E(H_{\tau}).\]
In addition, the point \(P_{\tau}^{\pm}\) is expected to lie in the \((\pm 1)\)-eigenspace of \(E(H_{\tau})\) for the action of complex conjugation on the narrow ring class field \(H_{\tau}\).
_Remark 7_.: The above conjecture (and suitable generalisations) has been verified extensively; see for example [10]. The theoretical evidence is much more fragmentary, and no general approach is known at this stage. However, an important case was established in the article [1] for Stark-Heegner points defined over genus fields of real quadratic fields, by relating them to Heegner points in this special setting.
## 4 Families of real analytic Eisenstein series and CM theory
In this section, we introduce the second main theme of this article, which is how modular generating series constructed from Heegner objects appear in relation to derivatives of families of modular forms. We will start by reviewing some archimedean examples; we will then move to their \(p\)-adic counterparts in SS5.
In the archimedean setting, derivatives of real analytic families of Eisenstein series fit prominently in the Kudla program. Roughly speaking, this vast program seeks to show that certain formal \(q\)-expansions encoding special cycles on orthogonal Shimura varieties are modular. The most basic instances of these statements involve CM cycles on modular curves. The works [11], [12] of Gross and Zagier were indispensable to the emergence of this program. In SS4.1, we will give an account of their work on the differences of singular moduli, which will be close in spirit to the \(p\)-adic analogues discussed in SS5. We will then give a brief overview of the work of Gross and Zagier on the derivative of the \(L\)-function attached to an elliptic curve [11]. Finally in SS4.3 we explain how the two articles can be interpreted under a unified perspective, which will be translated in the \(p\)-adic setting in SS5.
### Gross-Zagier's factorisation formula
We review the work of Gross and Zagier on the factorisation of differences of singular moduli, the CM values of the \(j\)-function introduced in SS2.2.1. Their landmark work unveiled new phenomena in the theory of complex multiplication, initiating the study of the intersection theory of CM cycles on modular curves via automorphic methods.
Let us introduce the main result. Let \(D_{1}\) and \(D_{2}\) be two negative relatively prime fundamental discriminants. Let \(D=D_{1}D_{2}\) and let \(w_{i}\) be the number of roots of unity in the quadratic orders of discriminant \(D_{i}\). For \(\tau\in\mathcal{H}_{\infty}\) denote by \([\tau]\) the equivalence class of \(\tau\) under the action of \(\mathrm{SL}_{2}(\mathbb{Z})\). The main theorem of complex multiplication implies that the product
\[J(D_{1},D_{2})=\prod_{\begin{subarray}{c}[\tau_{1}],\;\mathrm{disc}\,\tau_{1 }=D_{1}\\ [\tau_{2}],\;\mathrm{disc}\,\tau_{2}=D_{2}\end{subarray}}(j(\tau_{1})-j(\tau_ {2}))^{\frac{4}{w_{1}w_{2}}}\]
(or more precisely, its square for \(D_{i}\geq-4\)) is an integer. Gross and Zagier are motivated by the deceptively simple question of determining its prime factorisation.
For primes \(\ell\) with \(\left(\frac{D_{1}D_{2}}{\ell}\right)\neq-1\) define
\[\epsilon(\ell)=\begin{cases}\left(\frac{D_{1}}{\ell}\right)&\text{if }(\ell,D_{1})=1,\\ \left(\frac{D_{2}}{\ell}\right)&\text{if }(\ell,D_{2})=1.\end{cases}\]
Now if \(n=\prod_{i}\ell_{i}^{a_{i}}\) is such that \(\left(\frac{D_{1}D_{2}}{\ell_{i}}\right)\neq-1\) for all \(i\), then we define \(\epsilon(n)=\prod_{i}\epsilon(\ell_{i})^{a_{i}}\). Gross and Zagier prove the following theorem.
**Theorem 4.1.1**.: _[_12, Thm 1.3_]___
\[J(D_{1},D_{2})^{2}=\pm\prod_{\begin{subarray}{c}x,n,n^{\prime}\in\mathbb{Z}\\ n,n^{\prime}>0,\\ x^{2}+4nn^{\prime}=D\end{subarray}}n^{\epsilon(n^{\prime})}. \tag{4}\]
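For instance, take \(D_{1}=-3\) and \(D_{2}=-7\), so that \(D=21\) and both class numbers equal \(1\). Using the classical values \(j\big(\tfrac{1+\sqrt{-3}}{2}\big)=0\) and \(j\big(\tfrac{1+\sqrt{-7}}{2}\big)=-3375=-15^{3}\), together with \(w_{1}=6\) and \(w_{2}=2\), one finds \[J(-3,-7)^{2}=\big(3375^{4/12}\big)^{2}=15^{2}=225=3^{2}\cdot 5^{2}.\] On the other hand, the solutions of \(x^{2}+4nn^{\prime}=21\) with \(n,n^{\prime}>0\) are \((x,n,n^{\prime})=(\pm 1,1,5),\,(\pm 1,5,1),\,(\pm 3,1,3),\,(\pm 3,3,1)\); since \(\epsilon(1)=1\) and the terms with \(n=1\) contribute trivially, the right hand side of (4) equals \(5^{2}\cdot 3^{2}=225\), as predicted. Note that only primes dividing an integer of the form \((21-x^{2})/4\) can occur, illustrating the phenomenon that \(J(D_{1},D_{2})\) is divisible only by small primes.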
In _loc. cit_, the authors provide two alternative proofs of this result. The first one is essentially algebraic and relies on the moduli interpretation of CM points on the modular curve. The second is purely analytic and makes use of automorphic techniques to obtain a formula for \(J(D_{1},D_{2})^{2}\). While one can easily deduce the algebraicity of \(J(D_{1},D_{2})\) from the theory of complex multiplication, no use of this fact is made in the analytic proof, where instead it can be seen as a byproduct of the proof itself. In view of translating the Gross-Zagier approach to the RM setting, the analytic strategy is of course more relevant, as a moduli interpretation of RM points of the Drinfeld \(p\)-adic upper half plane is lacking.
We will briefly outline some ideas behind the algebraic proof before focusing on the analytic one. The general idea is rather natural: if a prime \(p\) divides \(J(D_{1},D_{2})^{2}\), there is a pair of elliptic curves with CM by two orders \(\mathcal{O}_{K_{1}}\) and \(\mathcal{O}_{K_{2}}\) with isomorphic reductions over \(\overline{\mathbb{F}}_{p}\). This forces the endomorphism ring of the reduction to be a maximal order in the quaternion algebra ramified at \(p\) and \(\infty\).
The existence of embeddings of the orders \(\mathcal{O}_{K_{i}}\) into an order in a quaternion algebra of discriminant \(p\) is rather restrictive on the prime. Working out explicit conditions, one determines the primes contributing to the factorisation of \(J(D_{1},D_{2})^{2}\). A delicate calculation using Deuring's theory of endomorphisms of elliptic curves in finite characteristic is necessary to determine the precise exponent in the factorisation formula.
The first step of the analytic proof consists in producing a reformulation of (4) more suitable to analytic manipulation. Let \(F=\mathbb{Q}(\sqrt{D})\) and consider the diagram of quadratic extensions of \(\mathbb{Q}\) contained in the biquadratic field \(\mathbb{Q}(\sqrt{D_{1}},\sqrt{D_{2}})\), where \(\mathbb{Q}(\sqrt{D_{1}},\sqrt{D_{2}})\) is an unramified extension of the real quadratic field \(F\). By Artin reciprocity, the genus character cutting out this extension can be viewed as a quadratic character of the narrow class group \(\mathrm{Cl}^{+}(\mathcal{O}_{F})\) of \(F\), which we denote by \(\chi\). An elementary calculation shows that the explicit factorisation formula for \(J(D_{1},D_{2})\) is equivalent to
\[-\log\lvert J(D_{1},D_{2})\rvert^{2}=\sum_{\nu\in\mathfrak{d}_{+}^{-1},\ \mathrm{Tr}\,\nu=1}\ \sum_{\mathfrak{n}\mid(\nu\mathfrak{d})}\chi(\mathfrak{n})\log\mathrm{Nm}(\mathfrak{n}), \tag{5}\]
where \(\mathfrak{d}_{+}^{-1}\) denotes the totally positive elements in the inverse different \(\mathfrak{d}^{-1}\) of \(\mathcal{O}_{F}\). The right hand side of (5) evokes formulae appearing in the work of Siegel on special values of Dedekind \(\zeta\)-functions for totally real fields [10, SS2]. For \(\nu\in F\), denote by \((\nu_{1},\nu_{2})\) its two real embeddings. For \(\mathbf{z}=(z_{1},z_{2})\in\mathcal{H}_{\infty}^{2}\) and \((\nu,\mu)\) in \(F^{2}\), we let
\[\operatorname{Nm}(\mu\mathbf{z}+\nu):=(\mu_{1}z_{1}+\nu_{1})\cdot(\mu_{2}z_{2} +\nu_{2}).\]
The formulae appearing in _loc. cit_. arise as restrictions of Hilbert Eisenstein series of weight \((k,k)\), where \(k\geq 2\) is an even integer, and level \(\operatorname{SL}_{2}(\mathcal{O}_{F})\), which are given by
\[E_{F,k}(\mathbf{z})=\sum_{\mathfrak{a}\in\operatorname{Cl}(\mathcal{O}_{F})}\operatorname{Nm}(\mathfrak{a})^{k}\sum_{(m,n)\in(\mathfrak{a}^{2})/\mathcal{O}_{F}^{\times}}\operatorname{Nm}(m\mathbf{z}+n)^{-k},\qquad\text{for $\mathbf{z}\in\mathcal{H}_{\infty}^{2}$}, \tag{6}\]
along the diagonal embedding \(\Delta\colon\mathcal{H}_{\infty}\to\mathcal{H}_{\infty}^{2}\). The diagonal restriction inherits modular properties from those of \(E_{F,k}\). It is an elliptic modular form of weight \(2k\) and level \(1\). Siegel's formulae are obtained by expressing the resulting form, whose constant term in the Fourier expansion encodes the Dedekind special value, in terms of a basis of the space of elliptic modular forms for small values of \(k\).
The strategy of Gross and Zagier consists of mimicking Siegel's approach in the degenerate case in which \(k=1\). The series in (6) would not converge for \(k=1\). The problem can be obviated by considering the family of real analytic Hilbert Eisenstein series of parallel weight one
\[\mathcal{E}_{F,1,s}(\mathbf{z})=\sum_{\mathfrak{a}\in\operatorname{Cl}(\mathcal{O}_{F})}\chi(\mathfrak{a})\operatorname{Nm}(\mathfrak{a})^{1+2s}\sum_{(m,n)\in(\mathfrak{a}^{2})/\mathcal{O}_{F}^{\times}}\operatorname{Nm}(m\mathbf{z}+n)^{-1}|\operatorname{Nm}(m\mathbf{z}+n)|^{-s}\mathbf{y}^{s} \tag{7}\]
for a complex variable \(s\) with \(\operatorname{Re}(s)>0\). For fixed values \(\mathbf{z}\in\mathcal{H}_{\infty}^{2}\), this defines a holomorphic function of \(s\), and it can be extended meromorphically to all of \(\mathbb{C}\). The strategy for constructing holomorphic weight \(1\) Eisenstein series, also known as _Hecke's trick_, consists in taking the limit of \(\mathcal{E}_{F,1,s}\) as \(s\) tends to \(0\). In this setting, Hecke's trick fails at producing an interesting weight one Hilbert Eisenstein series: an unexpected cancellation occurs when computing the corresponding Fourier expansion. Gross-Zagier are drawn to consider instead the derivative \(\mathcal{E}_{F,1,s}^{\prime}\) at \(s=0\). This is a real analytic weight \(1\) Hilbert modular form; it can be restricted along the diagonal embedding \(\Delta\) to obtain a (non-holomorphic) elliptic modular form of weight \(2\), as in [10]. From this real analytic form \(\Delta^{*}\mathcal{E}_{F,1}^{\prime}\) one can extract a holomorphic one by applying a projector onto the holomorphic subspace as in [11]. The resulting modular form, denoted by \((\Delta^{*}\mathcal{E}_{F,1}^{\prime})_{\operatorname{hol}}\), is the object of the following theorem, summarising the calculations in [10, SS7].
**Theorem 4.1.2**.: _The first Fourier coefficient \(a_{1}\) of the holomorphic modular form \((\Delta^{*}\mathcal{E}_{F,1}^{\prime})_{\operatorname{hol}}\) of weight 2 and level \(\operatorname{SL}_{2}(\mathbb{Z})\) satisfies the equation_
\[a_{1}\cdot\lambda=\log|J(D_{1},D_{2})|^{2}+\sum_{\nu\in\mathfrak{d}_{+}^{-1}, \operatorname{Tr}\nu=1}\sum_{n\mid(\nu\mathfrak{d})}\chi(n)\log\operatorname{ Nm}(n),\]
_for some \(\lambda\in\mathbb{C}^{\times}\)._
Now as \((\Delta^{*}\mathcal{E}_{F,1}^{\prime})_{\operatorname{hol}}\) is of weight \(2\) and level \(\operatorname{SL}_{2}(\mathbb{Z})\), one abstractly knows that it is the zero function, hence \(a_{1}=0\) and the equality (5) follows.
### Gross-Zagier's work on BSD in analytic rank 1
Beyond the factorisation of differences of singular moduli, the circle of ideas of [10] yielded some striking developments towards the Birch and Swinnerton-Dyer Conjecture.
Let \(E/\mathbb{Q}\) be an elliptic curve. The \(L\)-function \(L(E,s)\) is defined as a convergent infinite product for \(\operatorname{Re}(s)>3/2\). If \(E\) is modular, that is \(L(f,s)=L(E,s)\) for a suitable modular form \(f\), the \(L\)-function \(L(E,s)\) admits analytic continuation to all of \(\mathbb{C}\) and functional equation with centre \(s=1\). The Birch and Swinnerton-Dyer Conjecture states that the algebraic rank, that is the rank of the group \(E(\mathbb{Q})\), is equal to the analytic rank, the order of vanishing of \(L(E,s)\) at \(s=1\).
The main result in [10] is the following.
**Theorem 4.2.1**.: _[_10_, Thm. 7.3]_ _Suppose that \(L(E,1)=0\). There exists a point \(P\in E(\mathbb{Q})\) such that_
\[L^{\prime}(E,1)\doteq\Omega_{E}\langle P,P\rangle\]
_where \(\Omega_{E}\) is the real period of a regular differential on \(E\), the pairing \(\langle\cdot,\cdot\rangle\) denotes the Neron-Tate height pairing on \(E(\mathbb{Q})\otimes\mathbb{R}\) and the equality denoted by \(\doteq\) is defined up to scalars in \(\mathbb{Q}^{\times}\)._
In particular, if the analytic rank is one, the rank of the Mordell-Weil group \(E(\mathbb{Q})\) is positive. This result was later complemented by Kolyvagin's work bounding the algebraic rank in analytic rank 0 and 1. The point \(P\) arises as a Heegner construction (see below). Combining the results of Gross-Zagier and Kolyvagin, one can conclude that this Heegner construction produces non-torsion points in \(E(\mathbb{Q})\) precisely when the algebraic (or analytic) rank is 1.
Let us discuss some ingredients of the proof. Let \(N\) be the conductor of \(E\). The assumption that \(E\) is modular implies that up to isogeny \(E\) is a direct factor of the Jacobian \(J_{0}(N)\) of the modular curve \(X_{0}(N)\). Let \(D<0\) be a fundamental discriminant coprime to \(N\), and let \(K=\mathbb{Q}(\sqrt{D})\). Let \(\tau\in\mathcal{H}_{\infty}\) be an element with discriminant \(D\) defining a CM point of the modular curve of level \(\Gamma_{0}(N)\). Then the divisor \(c_{\tau}:=(\tau)-(\infty)\) defines an \(H\)-rational point of \(J_{0}(N)\), where \(H\) is the Hilbert class field of \(K\). By Shimura reciprocity, the divisor
\[c_{D}=\sum_{[\tau]:\ \operatorname{disc}(\tau)=D}c_{\tau},\]
where the sum runs over a set of representatives of the \(\Gamma_{0}(N)\)-orbits of points of discriminant \(D\) in \(\mathcal{H}_{\infty}\), is \(K\)-rational. The point \(P\) appearing in the statement of Theorem 4.2.1 is constructed as a suitable trace of the divisor \(c_{D}\).
The vector space \(J_{0}(N)(H)\otimes\mathbb{R}\) is endowed with a canonical height pairing, which can be described as a sum of local heights and is equivariant for the Hecke action.
The formal \(q\)-expansion
\[G_{D}(q)=\sum_{n\geq 1}\langle c_{D},T_{n}c_{D}\rangle q^{n} \tag{8}\]
is the \(q\)-expansion of a modular form of weight 2 and level \(\Gamma_{0}(N)\). This formal series \(G_{D}(q)\) is a prototypical example of a _modular generating series_. It packages pairings of Hecke translates of Heegner objects into a formal \(q\)-expansion. Its modularity follows from the fact that the Hecke action on the Jacobian of the modular curve factors through the Hecke algebra of modular forms of weight 2 and level \(\Gamma_{0}(N)\).
The proof of Theorem 4.2.1 hinges on providing an alternative construction of the \(f\)-isotypic component of the series \(G_{D}\) which can be related to \(L\)-values. For this, the key input is the
Rankin-Selberg method. It allows one to reinterpret the derivative of \(L(f/K,s)\) at \(s=1\) as the \(f\)-isotypic component of the holomorphic projection of the product of
* a weight \(1\) theta series \(\theta_{1}\) attached to \(K\) and
* the derivative at \(s=0\) of a family of real analytic Eisenstein series \(\mathcal{E}_{1,s}\) of constant weight \(1\), parametrised by a complex variable \(s\).
The comparison between the modular generating series \(G_{D}\) and the holomorphic projection of \(\theta_{1}\cdot\mathcal{E}^{\prime}\) involves matching local contributions for both. In particular, it requires computing local height pairings for the divisor \(c_{D}\) at non-archimedean places via intersection theory, and at archimedean places via Green functions.
### Biquadratic extensions and modular generating series
While the arithmetic applications of [10] and [10] are on the surface fairly different, the strategies of proof fit into a general framework. The diagram of quadratic extensions in SS4.1 can be viewed as a special case of a diagram of quadratic field extensions (or, more generally, of etale algebras of degree \(2\)) of the form \(\mathbb{Q}\subset K_{1},\,F,\,K_{2}\subset L\),
where \(F\) is a real quadratic field (or a split quadratic algebra over \(\mathbb{Q}\), that is \(F\simeq\mathbb{Q}\times\mathbb{Q}\)) and \(K_{i}=\mathbb{Q}(\sqrt{D_{i}})\) are imaginary, so that \(L\) is forced to be a CM extension of \(F\).
For the imaginary quadratic fields \(K_{1}\) and \(K_{2}\), one can produce Heegner divisors \(c_{D_{i}}\) attached to points of discriminant \(D_{i}\) in the Jacobian of modular curves of suitable level. As in (8), the global height pairings on the Hecke translates of \(c_{D_{i}}\) can be parlayed into a modular generating series \(G_{D_{1},D_{2}}(q)=\sum_{n\geq 1}\langle c_{D_{1}},T_{n}c_{D_{2}}\rangle q^{n}\). The setting of Gross-Zagier's work on BSD in analytic rank \(1\) corresponds to the degenerate case in which the imaginary quadratic fields are equal, \(F\) is the split quadratic algebra over \(\mathbb{Q}\), and \(L=K_{1}\times K_{1}\).
Obtaining an explicit characterisation of the Fourier coefficients of \(G_{D_{1},D_{2}}\) is an essential step towards the desired arithmetic applications:
* In [10], the quantity \(J(D_{1},D_{2})\) can be read off the archimedean contribution of the height pairing \(\langle c_{D_{1}},c_{D_{2}}\rangle\) for the modular curve of level \(1\);
* In [10], for a modular elliptic curve \(E\) corresponding to a newform \(f\), the \(f\)-isotypic component of \(G_{D_{1},D_{2}}\) encodes the height of the Heegner point appearing in the statement of 4.2.1.
The crucial idea in both [10] and [10] is to express the modular generating series \(G_{D_{1},D_{2}}\) in an alternative form by exploiting the derivative of a suitable family \(\mathcal{G}_{s}\) of real analytic Hilbert modular forms of constant parallel weight \(1\) for the quadratic \(\mathbb{Q}\)-algebra \(F\), parametrised by the variable \(s\in\mathbb{C}\). In [10], the role of \(\mathcal{G}_{s}\) is played by the Eisenstein series \(\mathcal{E}_{F,1,s}\).
In the degenerate setting of [10], the role of \(\mathcal{G}_{s}\) is played by the pair of elliptic modular forms \((\mathcal{E}_{1,s},\theta_{1})\) viewed as a pair of functions on two copies of \(\mathcal{H}_{\infty}\) invariant under the action of a congruence subgroup of \(\operatorname{GL}_{2}(\mathbb{Q}\times\mathbb{Q})\).
The modular generating series \(G_{D_{1},D_{2}}\) is then shown to be equal to the formal \(q\)-expansion of \((\Delta^{*}\mathcal{G}^{\prime})_{\operatorname{hol}}\), the elliptic (holomorphic) modular form obtained by:
* computing the _derivative_\(\mathcal{G}^{\prime}\) of \(\mathcal{G}_{s}\) at \(s=0\). This yields a real analytic Hilbert modular form of parallel weight \(1\);
* pulling back \(\mathcal{G}^{\prime}\) under the diagonal embedding \(\Delta\colon\mathcal{H}_{\infty}\to\mathcal{H}_{\infty}^{2}\). The resulting _diagonal restriction_\(\Delta^{*}\mathcal{G}^{\prime}\) is a real analytic elliptic weight \(2\) modular form;
* applying a _holomorphic projection_ to \(\Delta^{*}\mathcal{G}^{\prime}\).
The resulting holomorphic projection \((\Delta^{*}\mathcal{G}^{\prime})_{\operatorname{hol}}\) is finally shown to equal the modular generating series \(G_{D_{1},D_{2}}\).
In SS5 the structure of these proofs will be mimicked in the \(p\)-adic setting, by replacing CM cycles with their real quadratic counterparts, and real analytic families with \(p\)-adic families of modular forms.
_Remark 8_.: In [10], the modular generating series \(G_{D_{1},D_{2}}\) does not appear explicitly. Because the calculation is carried out for the modular curve of level \(1\), which has genus \(0\), such a generating series is identically zero. The calculation in the analytic proof of _loc. cit_. amounts to determining the degree \(1\) coefficient of the modular generating series as a sum of local contributions, thereby leading to the desired factorisation formula.
_Remark 9_.: Note that in [10], the theta series \(\theta_{1}\) appearing as the second component of the family \(\mathcal{G}_{s}\) should be thought of as constant in the variable \(s\). This reflects the fact that in the archimedean setting, only real analytic Eisenstein series admit variations in families. This imposes some restrictions on the Heegner cycles for which the Gross-Zagier strategy can be carried out. In general, one would expect the relevant Hilbert family \(\mathcal{G}_{s}\) to deform a parallel weight \(1\) Hilbert theta series attached to the CM extension \(L/F\). However, these theta series are usually cuspidal, and as such they do not admit archimedean deformations. By contrast, in the \(p\)-adic setting, cusp forms can often be \(p\)-adically deformed. This will allow additional flexibility in the settings arising in SS5.
## 5 \(p\)-adic families of modular forms and RM theory
As we have seen in the last section, real analytic families of Eisenstein series play a vital role in the works of Gross and Zagier on CM theory. In the \(p\)-adic setting there is a direct analogue, the \(p\)-adic family of Eisenstein series, and below we will review how it is used in RM theory. Before we go into details, we provide a bit of context for the much richer theory of \(p\)-adic families of modular forms.
### A brief overview of \(p\)-adic families of modular forms
The theory of \(p\)-adic families of modular forms (or more generally of automorphic forms) has its origins in the 70s when Serre observed that the Hecke eigenvalues of Eisenstein series can
be \(p\)-adically interpolated and hence that Eisenstein series can be viewed as a \(p\)-adic family parametrised by the weight. This simple yet striking observation combined with a general interest in establishing congruences of modular forms motivated the search for a theory of \(p\)-adic variations of modular forms.
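The mechanism behind this observation is elementary: for an odd prime \(p\) and \(p\nmid d\), Euler's theorem gives \(d^{k-1}\equiv d^{k^{\prime}-1}\pmod{p^{m+1}}\) whenever \(k\equiv k^{\prime}\pmod{(p-1)p^{m}}\). Hence the Hecke eigenvalues \(1+\ell^{k-1}\) (for primes \(\ell\neq p\)) and, more generally, the Fourier coefficients \(\sum_{d\mid n,\,p\nmid d}d^{k-1}\) of the \(p\)-stabilised Eisenstein series of weight \(k\) vary \(p\)-adically continuously in \(k\), while the constant terms, essentially the values \(\zeta(1-k)\) with the Euler factor at \(p\) removed, interpolate into the Kubota-Leopoldt \(p\)-adic zeta function.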
A \(p\)-adic modular form (of level \(\operatorname{SL}_{2}(\mathbb{Z})\)) as defined by Serre in [14] is a power series \(f=\sum_{n\geq 0}a_{n}q^{n}\in\mathbb{Q}_{p}[[q]]\) such that there is a sequence of classical modular forms \(f_{i}=\sum_{n\geq 0}a_{i,n}q^{n}\) that converges to \(f\) (uniformly in the coefficients). There is an alternative geometric definition due to Katz [13], which involves the (rigid analytic versions of) modular curves and generalises the classical definition of modular forms as sections of certain line bundles.
In a family of \(p\)-adic modular forms the weight is one of the crucial \(p\)-adically varying parameters. In fact this parameter can be viewed to vary in the so-called _weight space_, which is a rigid analytic space \(\mathcal{W}\) whose \(\mathbb{C}_{p}\)-points are given by
\[\mathcal{W}(\mathbb{C}_{p})=\operatorname{Hom}_{\operatorname{cts}}(\mathbb{ Z}_{p}^{\times},\mathbb{C}_{p}^{\times})\,,\]
the group of continuous \(\mathbb{C}_{p}^{\times}\)-valued characters of \(\mathbb{Z}_{p}^{\times}\), and which identifies with a finite union of open unit discs. The weight \(k\in\mathbb{Z}\) of a classical modular form is viewed as a \(\mathbb{Q}_{p}\)-point of \(\mathcal{W}\) by identifying it with the character \(z\mapsto z^{k-1}\); a general \(p\)-adic modular form then comes with a weight \(\kappa\in\mathcal{W}(\mathbb{C}_{p})\).
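Concretely, for an odd prime \(p\) one has \(\mathbb{Z}_{p}^{\times}\simeq\mu_{p-1}\times(1+p\mathbb{Z}_{p})\), so a character \(\kappa\in\mathcal{W}(\mathbb{C}_{p})\) is determined by its restriction to \(\mu_{p-1}\) (one of \(p-1\) tame characters) together with the single value \(u:=\kappa(1+p)\), which by continuity can be any element of \(\mathbb{C}_{p}^{\times}\) with \(|u-1|<1\). The coordinate \(u-1\) thus identifies each of the \(p-1\) components of \(\mathcal{W}\) with an open unit disc, and the classical weight \(k\) sits at the point with \(u=(1+p)^{k-1}\).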
For building a good geometric theory of families, the space of \(p\)-adic modular forms turns out to be too big, a problem that both Hida and Coleman managed to overcome using the Hecke operator \(U_{p}\), which acts on the space of \(p\)-adic modular forms. In a nutshell, Hida studies the _ordinary subspace_, i.e., the subspace of the space of \(p\)-adic modular forms on which \(U_{p}\) acts invertibly. Coleman, on the other hand, works with certain subspaces of the space of \(p\)-adic forms, the so-called _overconvergent_ \(p\)-adic modular forms. Overconvergence was already studied by Katz and is an extra condition which decreases the size of these spaces (although the result is still an infinite-dimensional space). On these spaces of overconvergent forms the operator \(U_{p}\) acts as a compact operator and the complement of the kernel of \(U_{p}\) is well behaved. Coleman managed to use this fact to build \(p\)-adic families of modular forms [15].
From a geometric point of view the theory for \(p\)-adic families of modular forms then culminates in the construction of the so-called eigencurve by Coleman and Mazur ([15]). The eigencurve is a rigid analytic curve \(\mathcal{C}\) which lives over \(\mathcal{W}\) and whose points correspond to overconvergent \(p\)-adic modular Hecke eigenforms that are not in the kernel of the \(U_{p}\)-operator. The _classical points_, i.e., the points corresponding to classical modular forms, form a Zariski-dense subset of \(\mathcal{C}\). The global geometry of the Coleman-Mazur eigencurve remains a fascinating and challenging subject in current research (see [16] for very recent progress on some of the original questions of Coleman and Mazur).
Let us highlight one important feature of the theory of eigencurves, namely the connection to the theory of Galois representations. To a classical modular eigenform \(f\) one can associate a \(2\)-dimensional Galois representation \(\rho_{f}\), which is characterised by identifying the Hecke eigenvalues with traces of Frobenius elements ([13], [14, Chapter 9]). As it turns out, one can interpolate these Galois representations (or rather, their traces and determinants) on the eigencurve to a family of pseudorepresentations. Moreover the infinitesimal neighbourhood of a classical point on the eigencurve has an interpretation in terms of a (suitably refined) Galois deformation space. This allows the usage of tools like Galois cohomology in the study of the local geometry of the eigencurve.
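Concretely, for a classical eigenform \(f\) of weight \(k\), level \(N\) and nebentypus \(\varepsilon\), the representation \(\rho_{f}\) is unramified at every prime \(\ell\nmid Np\) and is characterised by \(\operatorname{tr}\rho_{f}(\mathrm{Frob}_{\ell})=a_{\ell}(f)\) and \(\det\rho_{f}(\mathrm{Frob}_{\ell})=\varepsilon(\ell)\ell^{k-1}\) for all such \(\ell\).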
Since the works of Serre, Katz, Coleman and Mazur, \(p\)-adic forms have been introduced for other reductive groups \(G\) over a number field \(F\) by different techniques. In particular the definition of overconvergent modular forms has been generalised to the setting of more general Shimura varieties. Furthermore we have constructions of so-called eigenvarieties generalising the Coleman-Mazur eigencurve. Let us emphasise (as this is relevant below) that we have a good theory of \(p\)-adic families of Hilbert modular forms at hand.
### A modular generating series for RM values
While a priori singular moduli for real quadratic fields do not enjoy the same algebraicity properties as their imaginary quadratic counterparts, they can be packaged into modular generating series, which one can hope to relate to derivatives of modular forms in the \(p\)-adic setting. In [4], Darmon, Pozzi and Vonk imitate the strategy of [11] outlined in SS4.1 by replacing the real analytic family of Hilbert Eisenstein series appearing in (7) with a \(p\)-adic family parametrised by the weight, and relate it to a modular generating series for RM values of theta cocycles. We will describe their result and compare it with the formalism presented in SS4.3.
Let \(F\) be a real quadratic field of discriminant \(D\), let \(\chi\) be an odd character of \(\operatorname{Cl}^{+}(\mathcal{O}_{F})\), and let \(p\) be a prime unramified in \(F\). Given an odd integer \(k\in\mathbb{Z}_{\geq 1}\), consider the parallel weight \(k\) Hilbert Eisenstein series in the variable \(\mathbf{z}=(z_{1},z_{2})\) in \(\mathcal{H}_{\infty}^{2}\) with Fourier expansion
\[E_{F,\chi,k}^{(p)}(\mathbf{z})=L^{(p)}(F,\chi,1-k)+4\sum_{\nu\in\mathfrak{d}^{-1}_{+}}\sum_{\mathfrak{n}\mid(\nu\mathfrak{d}),\ p\nmid\operatorname{Nm}(\mathfrak{n})}\chi(\mathfrak{n})\operatorname{Nm}(\mathfrak{n})^{k-1}e^{2\pi i\operatorname{Tr}(\nu\cdot\mathbf{z})} \tag{9}\]
where \(\operatorname{Tr}(\nu\cdot\mathbf{z}):=\nu_{1}z_{1}+\nu_{2}z_{2}\) for the real embeddings \(\nu_{1},\nu_{2}\) of \(\nu\in F\) and
\[L^{(p)}(F,\chi,s)=\sum_{\mathfrak{n}\subseteq\mathcal{O}_{F},\ p\nmid\operatorname{Nm}(\mathfrak{n})}\chi(\mathfrak{n})\operatorname{Nm}(\mathfrak{n})^{-s}.\]
These modular forms are obtained by \(p\)-stabilising, at the primes of \(F\) above \(p\), Hilbert modular forms of level \(\operatorname{SL}_{2}(\mathcal{O}_{F})\) defined similarly to (6). The \(p\)-stabilisation procedure allows their Fourier coefficients to interpolate into continuous functions of the variable \(k\in\mathcal{W}\), at the cost of introducing \(p\) into the level. This can be verified directly for the coefficients attached to \(\nu\in\mathfrak{d}^{-1}_{+}\), and can be deduced for the constant term following Serre's approach to the analyticity of the Kubota-Leopoldt \(p\)-adic \(\zeta\)-function. We denote the family by \(\mathcal{E}_{F,\chi,k}\), where the weight \(k\) is thought of as a variable \(k\in\mathcal{W}\), as explained in SS5.1.
Darmon, Pozzi and Vonk translate the steps outlined in SS4.3 to the current setting. In analogy to Gross-Zagier, who consider a family of constant weight \(1\), it is natural to analyse the specialisation of the family \(\mathcal{E}_{F,\chi,k}\) at \(k=1\), in view of applying a derivative. While the Fourier expansion of (9) may not vanish at \(k=1\), there is no harm in first considering the pullback of the family \(\mathcal{E}_{F,\chi,k}\) along the diagonal embedding \(\Delta\colon\mathcal{H}_{\infty}\hookrightarrow\mathcal{H}_{\infty}^{2}\). For every classical weight \(k\), the diagonal restriction of the weight \(k\)-specialisation \(E_{F,\chi,k}^{(p)}\) gives a classical elliptic modular form of weight \(2k\), and its Fourier coefficients interpolate \(p\)-adically as functions of \(k\in\mathcal{W}\). This yields a \(p\)-adic family of elliptic modular forms \(\Delta^{*}\mathcal{E}_{F,\chi,k}\). Its behaviour at \(k=1\) depends on the splitting of \(p\) in \(F\) (see [4, Thm. A]).
* When \(p\) splits in \(F\), the specialisation of \(\Delta^{*}\mathcal{E}_{F,\chi,k}\) at \(k=1\) need not vanish. The coefficients of its Fourier expansion can be expressed in terms of intersection pairings between the geodesics attached to real quadratic points of discriminant \(D\) and the path from \(0\) to \(\infty\) on the modular curve of level \(\Gamma_{0}(p)\).
* When \(p\) is inert in \(F\), the specialisation at \(k=1\) vanishes identically.
The split setting is rather classical in flavour. In the inert setting, the vanishing happens for similar reasons as in [10], as the weight \(1\) specialisation of \(\Delta^{*}\mathcal{E}_{F,\chi,k}\) can itself be viewed as the \(p\)-stabilisation of the diagonal restriction of an Eisenstein series of weight \(1\) and trivial level, which must be identically \(0\). In the latter setting, it is tempting to consider the derivative \(\Delta^{*}\mathcal{E}^{\prime}_{F,\chi,k}\) of the above family at \(k=1\). Exploiting the vanishing of \(\Delta^{*}\mathcal{E}_{F,\chi,k}\) at \(k=1\), one can show that this derivative is a \(p\)-adic (and, in fact, even overconvergent) modular form of weight \(2\) and trivial tame level [12, Thm. 2.1]. Following the Gross-Zagier strategy, one wishes to tweak the resulting \(p\)-adic modular form and produce a classical one. This can be achieved by means of Hida's ordinary projector, constructed by taking limits of suitable iterates of the \(U_{p}\) operator. We denote the resulting modular form by \((\Delta^{*}\mathcal{E}^{\prime}_{F,\chi,k})_{\mathrm{ord}}\). It turns out to be a modular generating series for the RM values of the winding theta cocycle \(J_{(0,\infty)}\) at real quadratic points of discriminant \(D\).
**Theorem 5.2.1**.: _[_12_, Thm. B]_ _Suppose that \(p\) is inert in \(F\). The ordinary projection of the diagonal restriction of \(\mathcal{E}^{\prime}_{F,\chi,k}\) is a classical modular form of weight \(2\) and level \(\Gamma_{0}(p)\) with \(q\)-expansion_
\[(\Delta^{*}\mathcal{E}^{\prime}_{F,\chi,k})_{\mathrm{ord}}=L^{\prime}_{p}(F,\chi,0)+\sum_{n\geq 1}q^{n}\sum_{[\tau],\ \mathrm{disc}\,\tau=D}\chi(\mathfrak{c}_{\tau})\log_{p}\left(\mathrm{Nm}_{\mathbb{Q}_{p}}^{\mathbb{Q}_{p^{2}}}\big((T_{n}J_{(0,\infty)})[\tau]\big)\right)\]
_where the sum runs over \(\mathrm{SL}_{2}(\mathbb{Z})\)-classes of points of discriminant \(D\), \(\mathfrak{c}_{\tau}\) denotes the fractional ideal \(\mathbb{Z}+\tau\mathbb{Z}\) if \(\tau-\tau^{\prime}>0\) and \(\sqrt{D}(\mathbb{Z}+\tau\mathbb{Z})\) otherwise, and \(\mathbb{Q}_{p^{2}}\) is the unique unramified quadratic extension of \(\mathbb{Q}_{p}\)._
Note that the points \(\tau\) in the above sum belong to \(\mathcal{H}_{p}\) precisely because \(p\) is inert in \(F\). Unlike in Thm. 4.1.2, the constructed modular form has no a priori reason to vanish since it has level \(\Gamma_{0}(p)\).
_Remark 10_.: The modular form in Theorem 5.2.1 is in general non-zero. Its constant coefficient is the leading term of a \(p\)-adic \(L\)-function, which can be expressed as the \(p\)-adic logarithm of certain global units; this will be the subject of Theorem 5.3.1.
_Remark 11_.: The values \(J_{(0,\infty)}[\tau]\) of the winding theta cocycle at RM points \(\tau\) in \(\mathcal{H}_{p}\) should be thought of as multiplicative analogues of the differences of singular moduli \(j(\tau_{1})-j(\tau_{2})\) appearing in Gross-Zagier [10], where the pair \((0,\infty)\) corresponds to the split quadratic form \(Q_{(0,\infty)}(X,Y)=XY\). However, this setting is degenerate, because the solutions to \(Q_{(0,\infty)}\) lie at the boundary of the \(p\)-adic upper half plane. In particular, the corresponding theta cocycle is not expected to have algebraic values at real quadratic points in general. In the general framework described in SS4.3, this setting should correspond to the diagram of biquadratic \(\mathbb{Q}\)-algebras of the form
where the leftmost and rightmost algebras in the middle row correspond to quadratic forms of discriminant \(D\) and \(Q_{(0,\infty)}\), respectively. The analogy with the settings of Gross-Zagier [10] and [10] is imperfect: in their work the archimedean place does not split in the quadratic fields \(K_{1}\) and \(K_{2}\). By contrast, in the setting of [10] the prime \(p\) splits in the algebra \(\mathbb{Q}\times\mathbb{Q}\). This is related to the fact that the Eisenstein series appearing in [10] is attached to an arbitrary totally odd character of the class group \(\operatorname{Cl}^{+}(\mathcal{O}_{F})\), while in [10] only genus characters are considered.
### The values of the Dedekind-Rademacher theta cocycle
In the archimedean setting, we discussed how modular generating series for Heegner cycles on the Jacobian of modular curves can be leveraged into arithmetic applications for the Heegner constructions discussed in SS2. Particularly, the main result in [10] towards the Birch and Swinnerton-Dyer Conjecture is obtained by projecting a suitable modular generating series into the \(f\)-isotypic component for a newform \(f\) corresponding to an elliptic curve via the Eichler-Shimura construction.
Similarly, we will now discuss an arithmetic application of the ideas appearing in SS5.2 to the algebraicity of the RM values of the Dedekind-Rademacher theta cocycle \(J_{\operatorname{DR}}\) defined in SS3.4. The values of the Dedekind-Rademacher cocycle at real multiplication points should be thought of as real quadratic analogues of the elliptic units, in light of Conjecture 2. The cocycle \(J_{\operatorname{DR}}\) is the Eisenstein class in the module of analytic theta cocycles \(H^{1}(\Gamma,\mathcal{A}^{\times}/\mathbb{C}_{p}^{\times})\), as its definition involves the periods of the weight \(2\) Eisenstein series of level \(\Gamma_{0}(p)\). Accordingly, the main result of [10] is obtained by projecting a suitable modular generating series similar to the one appearing in Theorem 5.2.1 onto its Eisenstein subspace.
However, a significant new element occurs in this setting that does not have an archimedean counterpart. The modular generating series in question is obtained by considering the derivative of a \(p\)-adic family of _cusp forms_ deforming over an appropriate weight space for Hilbert modular forms for a real quadratic field \(F\). The coefficients of its derivative are richer than those of Eisenstein series and encode information about the arithmetic of the Hilbert class field of \(F\).
The main result [10] can be formulated as follows.
**Theorem 5.3.1**.: _Let \(D>0\) be a fundamental discriminant, and let \(p\nmid D\) be a prime number. Let \(\tau\in\mathcal{H}_{p}\) be a point of discriminant \(D\). Let \(F=\mathbb{Q}(\tau)\) and let \(H\) be the narrow Hilbert class field of \(F\). The values of the Dedekind-Rademacher cocycle at \(\tau\) satisfy_
\[J_{\operatorname{DR}}[\tau]=u_{\tau}^{12}\text{ modulo }(p^{\mathbb{Z}} \times\text{ roots of unity in }\mathbb{Q}_{p^{2}}^{\times})\]
_for an element \(u_{\tau}\in\mathcal{O}_{H}[1/p]^{\times}\otimes\mathbb{Q}\)._
Prior to this work, the equality of local norms
\[\operatorname{Nm}_{\mathbb{Q}_{p}}^{\mathbb{Q}_{p^{2}}}(J_{\operatorname{DR}}[\tau])\doteq\operatorname{Nm}_{\mathbb{Q}_{p}}^{\mathbb{Q}_{p^{2}}}(u_{\tau}^{12}) \tag{10}\]
up to \(p\)-powers and roots of unity in \(\mathbb{Q}_{p^{2}}\) was already known, as both sides of (10) had been related to the derivative of a \(p\)-adic \(L\)-function (for a definition, see [11, SS4]). More precisely, Thm. 4.1 in _loc. cit._ relates the derivative of this \(p\)-adic \(L\)-function to the norm of the RM values of the Dedekind-Rademacher cocycle. On the other hand, the \(p\)-unit \(u_{\tau}\) is an example of the Gross-Stark units, appearing in the conjectures of Stark and Gross about derivatives of Artin \(L\)-functions and their \(p\)-adic analogues. In particular, the formula relating the derivative of the above \(p\)-adic \(L\)-function to the norm of the \(p\)-unit \(u_{\tau}\) was proved by Darmon, Dasgupta and Pollack in their work on the Gross-Stark Conjecture.
The main contribution of the refinement of (10) in Theorem 5.3.1 is proving the _algebraicity_ of the RM values of the Dedekind-Rademacher cocycle itself, for which the equality up to local norms would not suffice. This establishes cases of Conjecture 2, providing theoretical evidence in favour of pursuing a theory of real multiplication via \(p\)-adic methods.
_Remark 12_.: The unit \(u_{\tau}\) in the statement of Thm. 5.3.1 can be characterised more precisely. It is trivial if \(F\) has a unit of negative norm, or equivalently, if the narrow Hilbert class field \(H\) is totally real. Otherwise, it is a non-trivial element in the subspace of \(\mathcal{O}_{H}[1/p]^{\times}\otimes\mathbb{Q}\) on which the complex conjugation in \(H\) acts as \(-1\). The recent breakthroughs of Dasgupta and Kakde on the Brumer-Stark Conjecture imply that this unit in fact lies in \(\mathcal{O}_{H}[1/p]^{\times}\otimes\mathbb{Z}[1/2]\).
The crucial intuition behind the strategy in [10] is that removing the norm in (10) could be achieved by removing the norms in the modular generating series in Theorem 5.2.1 obtained from the derivative of the \(p\)-adic family of Eisenstein series. This approach requires finding a suitable family of \(p\)-adic modular forms and exploiting its derivatives to produce the desired modular generating series.
Fortunately, the setting allows a lot of flexibility due to a lucky coincidence. The weight \(1\) Eisenstein series \(E_{F,\chi,1}^{(p)}\) considered in (9) happens to be cuspidal when viewed as an overconvergent \(p\)-adic Hilbert modular form. In fact, it is an example of a critical \(p\)-stabilisation of a classical Eisenstein series (considered, for example by Bellaiche and Chenevier for elliptic modular forms [1]). As such, it defines a point on the eigenvariety parametrising \(p\)-adic families of Hilbert cusp forms for \(F\). Unlike Eisenstein series, which only vary in parallel weight, Hilbert families of cusp forms range over a larger weight space parametrising pairs of weights \((k_{1},k_{2})\).
Cuspidal variations of Hilbert modular forms are, in general, entirely inexplicit. However, their attached Galois representations provide a useful tool to tackle a local description. In particular, they can be exploited to entirely determine the Fourier expansion of the derivative of a family \(\mathcal{F}_{k,2-k}\) specialising to \(E_{F,\chi,1}^{(p)}\) at \(k=1\), in the _antiparallel_ direction of weights \((k_{1},k_{2})\) satisfying \(k_{1}+k_{2}=2\). The diagonal restriction of the derivative of \(\mathcal{F}_{k,2-k}\) with respect to \(k\), denoted by \(\Delta^{\star}\mathcal{F}^{\prime}\), is again an overconvergent \(p\)-adic modular form. Its ordinary projection \((\Delta^{\star}\mathcal{F}^{\prime})_{\operatorname{ord}}\) (up to an explicit correction term obtained from the diagonal restriction of explicit Eisenstein series) yields the following theorem, which is the key stepping stone towards Thm. 5.3.1.
**Theorem 5.3.2**.: _[_46_, Thm. C]_ _There is a classical modular form of weight 2 and level \(\Gamma_{0}(p)\) with \(q\)-expansion_
\[\log_{p}(u_{\tau})+\sum_{n\geq 1}\log_{p}\big((T_{n}J_{(0,\infty)})[\tau]\big)q^{n},\]
_for a unit \(u_{\tau}\in\mathcal{O}_{H}[1/p]^{\times}\otimes\mathbb{Q}\)._
From this result, the relation between the unit \(u_{\tau}\) and the RM values of the Dedekind-Rademacher cocycle can easily be deduced by decomposing the winding cocycle \(J_{(0,\infty)}\) with respect to an eigenbasis of the space of analytic theta cocycles, and in particular determining its projection onto the Eisenstein eigenspace spanned by \(J_{\mathrm{DR}}\).
_Remark 13_.: The relevance of cuspidal deformations of the Eisenstein series \(E_{F,\chi,1}^{(p)}\) towards Thm. 5.3.1 should not be surprising. A cuspidal variation of this Eisenstein series in the parallel weight direction played a crucial role in the proof of the Gross-Stark Conjecture in [46] implying the equality (10). This fits into a general theme relating congruences between cusp forms and Eisenstein series to \(L\)-functions, initiated by Ribet in [13] and often referred to as _Ribet's method_. Theorem 5.3.1 is a special case of a far more general result of Dasgupta and Kakde [11], providing explicit \(p\)-adic formulae for units in abelian extensions of totally real fields. Their result can be seen as the culmination of a program of Dasgupta and collaborators settling Hilbert's Twelfth problem for totally real fields via \(p\)-adic methods. Their approach makes use of generalisations of Ribet's method. However, unlike in [46], the strategy is applied to variations of Hilbert modular forms over finite group rings, rather than over a \(p\)-adic weight space.
|
2306.17558 | Towards the extraction of robust sign embeddings for low resource sign
language recognition | Isolated Sign Language Recognition (SLR) has mostly been applied on datasets
containing signs executed slowly and clearly by a limited group of signers. In
real-world scenarios, however, we are met with challenging visual conditions,
coarticulated signing, small datasets, and the need for signer independent
models. To tackle this difficult problem, we require a robust feature extractor
to process the sign language videos. One could expect human pose estimators to
be ideal candidates. However, due to a domain mismatch with their training sets
and challenging poses in sign language, they lack robustness on sign language
data and image-based models often still outperform keypoint-based models.
Furthermore, whereas the common practice of transfer learning with image-based
models yields even higher accuracy, keypoint-based models are typically trained
from scratch on every SLR dataset. These factors limit their usefulness for
SLR. From the existing literature, it is also not clear which, if any, pose
estimator performs best for SLR. We compare the three most popular pose
estimators for SLR: OpenPose, MMPose and MediaPipe. We show that through
keypoint normalization, missing keypoint imputation, and learning a pose
embedding, we can obtain significantly better results and enable transfer
learning. We show that keypoint-based embeddings contain cross-lingual
features: they can transfer between sign languages and achieve competitive
performance even when fine-tuning only the classifier layer of an SLR model on
a target sign language. We furthermore achieve better performance using
fine-tuned transferred embeddings than models trained only on the target sign
language. The embeddings can also be learned in a multilingual fashion. The
application of these embeddings could prove particularly useful for low
resource sign languages in the future. | Mathieu De Coster, Ellen Rushe, Ruth Holmes, Anthony Ventresque, Joni Dambre | 2023-06-30T11:21:40Z | http://arxiv.org/abs/2306.17558v2 | # Towards the extraction of robust sign embeddings for low resource sign language recognition
###### Abstract
Isolated Sign Language Recognition (SLR) has mostly been applied on datasets containing signs executed slowly and clearly by a limited group of signers. In real-world scenarios, however, we are met with challenging visual conditions, coarticulated signing, small datasets, and the need for signer independent models. To tackle this difficult problem, we require a robust feature extractor to process the sign language videos. One could expect human pose estimators to be ideal candidates. However, due to a domain mismatch with their training sets and challenging poses in sign language, they lack robustness on sign language data and image-based models often still outperform keypoint-based models. Furthermore, whereas the common practice of transfer learning with image-based models yields even higher accuracy, keypoint-based models are typically trained from scratch on every SLR dataset. These factors limit their usefulness for SLR. From the existing literature, it is also not clear which, if any, pose estimator performs best for SLR. We compare the three most popular pose estimators for SLR: OpenPose, MMPose and MediaPipe. We show that through keypoint normalization, missing keypoint imputation, and learning a pose embedding, we can obtain significantly better results and enable transfer learning. We show that keypoint-based embeddings contain cross-lingual features: they can transfer between sign languages and achieve competitive performance even when fine-tuning only the classifier layer of an SLR model on a target sign language. We furthermore achieve better performance using fine-tuned transferred embeddings than models trained only on the target sign language. The embeddings can also be learned in a multilingual fashion. The application of these embeddings could prove particularly useful for low resource sign languages in the future.
## 1 Introduction
Sign language recognition (SLR) has various applications, including automatic sign language corpus annotation [25], sign language information retrieval [14] and Sign Language Translation (SLT) [6]. Within these applications, the task of the SLR model is to extract syntactic and semantic information from video data that can be used in the downstream task. In order to generalize to as
many people and surroundings as possible, an SLR model should be invariant to the camera's position and intrinsics, to environments and lighting, and to the specific characteristics of the person that is signing. These characteristics not only include age, gender, and ethnicity, but also clothing, accessories, individual body morphology, signing speed, and any other features that do not contribute to the linguistic content. Sign language datasets are typically limited in size, which means that SLR models trained on these datasets are prone to exacerbating bias and not having robust feature extractors.
In recent years, several open source human pose estimation tools have been released. Among the most popular are OpenPose [7], MMPose [8], and BlazePose [3] (used in MediaPipe [15]). These tools extract landmarks, also known as keypoints, from image data: these are estimates of the 2D or 3D positions of joints in the human body and other important landmarks of the face and head. These models tend to be trained on a large variety of images of human poses, so they are supposedly robust against the factors that may be confusing for SLR models [26]. Recent work has shown that models trained with pose estimator features are more signer-independent sign language recognizers than image-based models, especially in low resource scenarios [17]. This is important as in in-the-wild applications, individuals will differ from those in the training set.
In the current scientific literature, keypoint-based SLR appears to be limited by three factors. First, any errors made by the pose estimator will be carried over to any downstream SLR model. Second, pose estimators are often trained on general data (i.e., not on sign language data). This can result in sub-optimal performance when applied to sign language data that often involves fast and precise hand movements. For instance, MediaPipe Holistic may fail to predict hand keypoints in certain cases of inter-manual or manual-facial interaction [26], and OpenPose may produce noisy keypoints [10, 28]. A third factor is the way keypoints are used in SLR models. The predicted keypoint coordinates are often used as raw inputs to sequential models such as LSTMs or Transformers [10, 28]. However, the exact positions of keypoints are not as informative to SLR models as the _relations_ between keypoints.
These limitations can be tackled with a more tailored approach to pre-processing of raw keypoint predictions. For the first two, we propose a novel post-processing method that normalizes the predicted keypoints and imputes any missing values. For the final limitation we introduce a "pose embedding" (SignPose2Vec), a non-linear transformation that can be transferred and fine-tuned to specific datasets. SignPose2Vec captures morphological properties of signs and is generic and language agnostic. By being based on keypoints, it is furthermore not tied to specific datasets and can be used as the starting block of any keypoint-based SLR or SLT model.
The contributions of this paper are as follows. First, we compare three pose estimation tools for SLR, a comparison distinctly lacking in the SLR literature. Second, we show that transfer learning is possible with keypoint embeddings, even without fine-tuning. Transfer learning in SLR was previously only attempted with image-based models. Third, we present a method to train sign representations from multilingual data. Fourth, we show why performing isolated SLR on real-world data is more challenging than isolated SLR on individual productions of signs and how it can nonetheless be tackled.
## 2 Related work
### Human pose estimation for SLR
Human pose estimation is the task of predicting the positions of specific points in the human body. These _keypoints_ are often related to joints (e.g., the shoulders, elbows and wrists), but also include other pose landmarks such as the positions of the eyes and ears. A multitude of human pose estimation models exist; here we give an overview of toolkits that are commonly used in SLR research.
We compare two approaches towards pose estimation: top-down and bottom-up. A top-down pose estimator first detects the individual(s) in an image, and then detects and predicts keypoints for the constituent body parts. A bottom-up pose estimator detects and predicts keypoints for body parts and assigns them to individuals afterwards.
In 2017, the first open source real time full body multi person human pose estimation model OpenPose was released [7]. It uses a bottom-up approach, detecting body parts and aligning them to individuals using Part Affinity Fields (PAFs). First, body part heatmaps are predicted
along with the PAFs that indicate the direction from one body part to another. A greedy algorithm is used to match body parts to the individuals in the image. Finally, the keypoints are derived from the body part heatmaps.
MediaPipe Holistic is a top-down pose estimator that combines several neural networks into a unified pipeline for body, face, and hand pose recognition. BlazePose [3] is used to detect and track the pose throughout video data. The pose keypoints are used to crop hand and face images that are passed on to specialized pipelines. The hand model first detects the presence of the palm to refine the region of interest. Then, coordinate regression is performed to obtain the hand keypoint coordinates. Finally, all landmarks are merged into a single result.
MMPose is a software library that includes several computer vision algorithms related to pose estimation [8]. It is often used as a top-down pose estimator: Faster R-CNN [31] detects and crops the individual in the video, and HRNet [38] predicts keypoints. HRNet uses heatmap regression to predict keypoint coordinates and confidence values for the full body.
Given a single (i.e., monocular) input image, OpenPose and HRNet predict 2D keypoint coordinates. Even though it is an ill-posed problem, techniques exist to "lift" these 2D coordinates into 3D [23]. Alternative approaches directly predict depth from monocular data, as is the case in BlazePose which regresses keypoints from images [3] (as part of the MediaPipe software package).
OpenPose, MediaPipe, and MMPose have all been applied in domains such as action recognition [35, 45], gesture recognition [30, 44], and SLR [20, 9, 22, 10, 18, 19, 21, 11, 26]. To the best of our knowledge, there exists no comparative analysis of the accuracy or run-time performance of these three toolkits on sign language data.
### Keypoint-based SLR
SLR is the process of extracting syntactic and/or semantic information from video data containing signing for use in downstream tasks. _Isolated_ SLR is one way to perform this extraction. It is a video classification task: given a video clip of a single sign, the goal is to predict which sign is performed. Individual signs are distinct meaningful elements in sign language. They map to concepts such as nouns and verbs, and in some cases to clauses [32]. Isolated SLR models can be used for the automatic annotation of sign language corpora [9, 27, 25]. These models can also be used in a transfer learning set-up, for example as feature extractors for SLT models [1, 34].
SLR has evolved similarly to other subfields of computer vision, in that early works used handcrafted features [46], but in the past decade the field has moved towards automatic feature extraction with deep learning [29]. Even more recently, SLR research has moved towards using only monocular RGB data, instead of RGB-D data extracted with sensors such as the Microsoft Kinect. The results of an SLR challenge held at CVPR 2021 show that RGB models can achieve accuracy on par with RGB-D models [36].
These RGB videos are typically processed using Convolutional Neural Networks (CNNs). A first approach factorizes the spatial and temporal processing. De Coster et al. use a ResNet to extract a feature vector for every frame in the video clip, and then process the sequence of feature vectors with an LSTM or transformer [10]. Li et al. use VGG16 as a feature extractor, and a GRU to process the sequences [22]. In contrast, 3D CNNs allow for spatio-temporal processing. I3D is a popular 3D CNN architecture for SLR [1, 33, 41]. Mino et al. have shown that both kinds of models (factorized and spatio-temporal) reach similar levels of accuracy on the same data [24]. Yet, the optimal model is of course dataset specific, especially in low resource settings. Note that factorized models allow processing variable length sequences without resorting to average- or max-pooling: this is especially useful because of the large differences in sample durations in real-world sign language data.
Due to the low resource nature of many SLR datasets, human pose estimators (as pre-trained models) are a popular alternative for CNNs (trained on sign language data). Li et al. use graph convolutional networks for SLR on the WLASL dataset [22]. The state of the art pose-based SLR on the AUTSL dataset [37] is reached by Vazquez et al., who use "Multi-Scale Graph Convolutional Networks" [43], and Jiang et al., who propose a similar "Sign Language Graph Convolutional Network" [19]. Ko et al. use OpenPose keypoints in combination with a GRU network and SVM classifier on the KETI dataset [20]. Moryossef et al. compare OpenPose and MediaPipe Holistic, and conclude that they perform similarly on the AUTSL dataset, but that both have failure cases that negatively impact the performance of their downstream SLR
models [26]. De Coster et al. extract motion features from OpenPose keypoints to augment visual representations, but do not evaluate the performance of these features on their own [11]. Konstantinidis et al. use OpenPose keypoints as inputs to an SLR model and compare the performance to image-based models on the LSA64 dataset [21]. De Coster et al. also perform such a comparison on the Corpus VGT [40], and, like Konstantinidis et al., find that their image-based models outperform their pose based models [10]. This is somewhat unexpected, especially with smaller datasets, as extracting pose keypoints should alleviate some of the difficulty with learning signs from RGB data and remove much of the background noise associated with signing in-the-wild. Our paper shows why this is the case and how to make keypoint-based SLR models more powerful than image-based SLR models.
## 3 Keypoint-based SLR
### Dataset
We use the Flemish sign language corpus (Corpus VGT) [40] for our experiments. This corpus is used for linguistics research and contains gloss (a transcription of a sign which can be used as a label in SLR) level annotations as well as, to a lesser extent, translations into written Dutch. We select all annotations for which the corresponding gloss has at least 20 occurrences, and only keep glosses belonging to the established lexicon1. We obtain 26,639 examples belonging to 348 classes. We then perform a stratified (on label) and grouped (on signer identity) split. Because the vocabulary distributions vary between individual signers, we now have several glosses that do not occur in our training, validation, _and_ test set. We remove those glosses and are left with 24,968 examples belonging to 292 classes. The resulting dataset has a long-tailed class distribution, that is shown in Figure 1, similar to the true sign distribution in the corpus [5]. The head of the distribution consists of pointing signs, more specifically WG-1 ("I/me"), WG-2 ("you") and WG-3 ("he/she/they/it"). The dataset details are shown in Table 1.
Footnote 1: Signs in the established lexicon have a known form, in contrast to productive signs which are created “on-the-fly”.
Despite every video corresponding to a single sign, the signs in this dataset are still influenced by coarticulation, which represents changes in the articulation of the sign due to the presence of other signs around it (e.g., the transitions between signs). This is an important difference with datasets such as AUTSL where signs are also _performed_ in isolation. Coarticulation makes the isolated SLR problem more challenging. Compared to working with datasets such as AUTSL or WLASL, the challenge is further compounded by the class imbalance. Moreover, in our dataset, there are different camera angles, whereas in many other SLR datasets, the camera is positioned in front of the signer.
We process every video with three keypoint extractors. For every frame in each video, we extract keypoints with OpenPose, MMPose, and MediaPipe Holistic. We keep only the upper body and hand keypoints. In the case of MediaPipe Holistic, we also remove the face mesh, which is high-dimensional and contains a large number of redundancies.
\begin{table}
\begin{tabular}{l l l} \hline Subset & Number of examples & Number of signers \\ \hline Train & 19267 & 88 \\ Validation & 2702 & 12 \\ Test & 2998 & 11 \\ \hline \end{tabular}
\end{table}
Table 1: Subset statistics for the VGT dataset.
Figure 1: The class distribution of the entire VGT dataset shows a large class imbalance.
In future work, keypoint selection from the face mesh could allow us to model facial expressions and mouthings, which are disregarded in our current pipeline. We obtain 54 2D keypoints for OpenPose, 53 2D keypoints for MMPose, and 67 3D keypoints for MediaPipe.
### Post-processing keypoints
We post-process the keypoint data before using them as inputs to the SLR model. For MediaPipe, this post-processing is applied to the 3D keypoints. When we choose to use 2D keypoints only, we drop the depth _after_ post-processing. Our post-processing pipeline has two stages. We first perform missing keypoint imputation (only for MediaPipe and OpenPose) and then normalize the keypoints.
**Imputation.** The imputation consists of linear interpolation, extrapolation, and zero-based imputation. For MediaPipe, we do this separately for the right hand, the left hand, and the pose keypoints (for each of these subsets, MediaPipe will either return estimations for the entire subset, or return none at all, i.e., these are the smallest subsets of keypoints that MediaPipe provides). For OpenPose, we perform imputation on a per-keypoint level, because individual keypoints can be missing.
Interpolation is applied to impute frames with missing data that are positioned between two other frames with predicted keypoints. Every missing frame is imputed by linearly interpolating between the nearest non-missing previous and subsequent frames.
Extrapolation is applied for missing frames that occur in the beginning or at the end of a sequence. When no previous or subsequent frame exists, interpolation cannot be performed and we simply select the first and last non-missing frame respectively and copy it over the missing frames. There can also be samples where no predictions were available for a subset of the keypoints for the entire sequence. In these cases, we impute with zeros.
Figure 2 shows an example of a case where linear interpolation can impute missing keypoints. Despite minor visual differences, the pose estimator has failed to predict keypoints in the middle frame. We correct this using the above approach.
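To make the imputation scheme concrete, below is a minimal NumPy sketch of how one subset of keypoints (e.g., one hand) can be imputed. It assumes the subset is stored as a (frames, keypoints, dims) array in which frames without a prediction are filled with NaN; the function name and this storage convention are illustrative rather than taken verbatim from our code.

```
import numpy as np

def impute_keypoints(kps):
    """kps: (num_frames, num_keypoints, num_dims); missing frames are NaN."""
    out = kps.astype(float).copy()
    missing = np.isnan(out).any(axis=(1, 2))        # frames without predictions
    valid = np.flatnonzero(~missing)
    if valid.size == 0:                             # nothing predicted at all:
        return np.zeros_like(out)                   # fall back to zero imputation
    for t in np.flatnonzero(missing):
        prev = valid[valid < t]
        nxt = valid[valid > t]
        if prev.size and nxt.size:                  # linear interpolation
            t0, t1 = prev[-1], nxt[0]
            w = (t - t0) / (t1 - t0)
            out[t] = (1 - w) * out[t0] + w * out[t1]
        else:                                       # extrapolation: copy nearest frame
            out[t] = out[prev[-1]] if prev.size else out[nxt[0]]
    return out
```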
**Normalization.** For normalization, we account for differences in translation and scale of the keypoints. We first translate the pose to be centered on the chest, and then re-scale all keypoints by dividing them by the Euclidean distance between the shoulders. We do the same for each hand, centering on the wrist and scaling by the Euclidean distance between the wrist and the knuckle of the middle finger. This normalization step has a significant impact on the performance of keypoint-based models [4].
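A corresponding sketch of the body normalization step is shown below; the landmark indices are placeholders, and the chest centre is approximated here as the midpoint between the shoulders, which is one possible reading of the procedure described above. The hand normalization is analogous, using the wrist as the origin and the wrist-to-middle-finger-knuckle distance as the scale.

```
import numpy as np

LEFT_SHOULDER, RIGHT_SHOULDER = 11, 12   # illustrative landmark indices

def normalize_body(frame_kps):
    """frame_kps: (num_keypoints, num_dims) for a single frame."""
    chest = (frame_kps[LEFT_SHOULDER] + frame_kps[RIGHT_SHOULDER]) / 2
    shoulder_dist = np.linalg.norm(frame_kps[LEFT_SHOULDER] - frame_kps[RIGHT_SHOULDER])
    return (frame_kps - chest) / max(shoulder_dist, 1e-6)   # translate, then re-scale
```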
**Limitations.** The selected pose estimation pipeline only allows imputing keypoints that were not predicted. It is not possible to correct errors in predicted keypoints. This is left for future research. To implement keypoint correction, confidence values (which are provided by OpenPose and MMPose but not by MediaPipe) are of great importance.
Furthermore, our imputation algorithm is limited. Extrapolation naively copies frames. Linear interpolation fails to capture changes in acceleration and may impute intermediate hand shapes and body poses incorrectly for longer missing sub-sequences. Interpolation is performed in the keypoint space (Cartesian coordinates). Linearly interpolating between these coordinates may result in physically impossible poses. Context-aware imputation should be investigated in future work.
Our normalization approach does not account for the rotation of the pose. Despite MediaPipe providing us with 3D keypoint data, giving us the ability to fully standardize the poses by rotating every pose to face the camera, we find that the depth predictions are not sufficiently robust to do this in a reliable way.
Figure 2: For cases such as these (the right hand is missing in the second frame but present in the first and third), linear interpolation can impute missing keypoints.
### Model architecture
Our models follow a three stage architecture that is illustrated in Figure 3. They consist of a frame embedder, a sequence embedder, and a softmax classifier.
The frame embedder varies based on the model being utilized. For keypoint-based models, we employ a pose estimator followed by post-processing and finally by a dense pose embedding (see below). In the case of image-based models (to which we compare our keypoint-based models), we utilize ResNet [16] pre-trained on ImageNet [12]. Both the ResNet and pose embedding are applied to each individual frame in the sequence, resulting in a new sequence that is equally long as the original input.
The sequence embedder is a self-attention [42] network, which outperforms recurrent neural networks for isolated SLR [10]. This self-attention network accommodates sequences of variable length by zero padding shorter sequences in a batch and masking the padded frames in the attention computations. We prepend to every sequence a learned vector, similar to the [CLS] token in BERT [13]. The corresponding transformer output vector serves as the input to the classifier.
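A simplified PyTorch-style sketch of this sequence embedder and classifier is given below: a learned [CLS] vector is prepended, padded frames are masked in the attention computations, and the transformer output at the [CLS] position feeds the classifier. The layer and head counts follow the values reported in Section 3.4, but the class itself is an illustration, not our exact implementation.

```
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, d_model=128, num_classes=292, num_layers=4, num_heads=8):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))     # learned [CLS] vector
        layer = nn.TransformerEncoderLayer(d_model, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, frames, pad_mask):
        # frames: (batch, time, d_model) frame embeddings, zero-padded
        # pad_mask: (batch, time) boolean, True where a frame is padding
        cls = self.cls.expand(frames.size(0), -1, -1)
        x = torch.cat([cls, frames], dim=1)
        cls_pad = torch.zeros(frames.size(0), 1, dtype=torch.bool, device=frames.device)
        mask = torch.cat([cls_pad, pad_mask], dim=1)
        out = self.encoder(x, src_key_padding_mask=mask)
        return self.classifier(out[:, 0])                        # output at the [CLS] position
```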
Unlike previous research using keypoint data as inputs for SLR models [10, 26, 21, 20], we propose learning an embedding (SignPose2Vec) on top of the keypoints within the architecture described above. The purpose is to learn the non-linear relations between keypoints. We will show that this embedding can be transferred between languages and tasks, which is useful for pre-training purposes or for downstream translation models. The pose embedding is implemented as a dense network of four blocks. The first three blocks consist of a linear layer, normalized using layer normalization, followed by a ReLU activation, and finally dropout (\(p=0.125\)). The final block has a hyperbolic tangent activation function instead of the ReLU and dropout. We apply L1 regularization to the input layer of the pose embedding (\(\lambda=0.002\)) to focus on select keypoints. We finally linearly transform the keypoints into the same dimension as the embedding, and add the resulting vector element-wise to the embedding as a residual connection.
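The sketch below illustrates this pose embedding. The 128-dimensional output, the dropout rate, and the tanh output block follow the description above; the hidden width is an assumption, and the L1 penalty on the input layer is added to the training loss rather than being part of the module itself.

```
import torch
import torch.nn as nn

class SignPose2Vec(nn.Module):
    def __init__(self, num_inputs, d_embed=128, d_hidden=256, p_drop=0.125):
        super().__init__()
        def block(d_in, d_out):
            return nn.Sequential(nn.Linear(d_in, d_out), nn.LayerNorm(d_out),
                                 nn.ReLU(), nn.Dropout(p_drop))
        self.blocks = nn.Sequential(
            block(num_inputs, d_hidden),
            block(d_hidden, d_hidden),
            block(d_hidden, d_hidden),
            nn.Sequential(nn.Linear(d_hidden, d_embed), nn.Tanh()),  # final block
        )
        self.residual = nn.Linear(num_inputs, d_embed)  # keypoints -> embedding dim

    def forward(self, keypoints):
        # keypoints: (batch, num_inputs) flattened, post-processed coordinates;
        # an L1 penalty on self.blocks[0][0].weight is added to the loss at training time
        return self.blocks(keypoints) + self.residual(keypoints)
```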
### Experiments
Monolingual SLR modelsWe use a Video Transformer Network (VTN)--not to be confused with Video Vision Transformers [2]--as a baseline classifier. We use the architecture and hyperparameters proposed by De Coster et al. [10]. As feature extractor we use ResNet-34 pre-trained on ImageNet [12], which produces a 512 dimensional frame embedding. The sequence of frame embeddings is processed using self-attention: in particular 4 transformer layers with 8 self-attention heads. We classify the output of the transformer (i.e., the [CLS] token embedding) using a softmax classifier.
The Pose Transformer Network (PTN) is not directly adapted from De Coster et al. [10]. Instead, we prepend a pose embedding network to the self-attention network. This novel network is applied to every element of the sequence, producing a 128-dimensional embedding per frame. Then, this embedding is processed similarly to the way it is in the VTN. We compare MediaPipe Holistic (with 3D and 2D keypoints), OpenPose, and MMPose by using their post-processed predicted keypoints as input to the PTN.
**Variable length sequences.** For the VTN, we are limited to fixed length sequences due to the memory impact of processing many frames in parallel with the ResNet. The PTN has a smaller memory footprint, allowing us to consider variable length sequences. We zero-pad sequences that are shorter than the longest sequence in a batch and use a padding mask to avoid that the model attends to padding.
**Transfer learning.** We investigate the generalization ability of SignPose2Vec by first training a network on the NGT corpus, and then transferring the learned weights
Figure 3: Generic three-stage model architecture for isolated SLR.
of the SLR model (except the classifier weights) to the VGT model. We also do this for the image-based VTN. This dataset is constructed in a similar manner to the VGT dataset (Section 3.1); it contains 68,854 samples for 458 classes in Dutch sign language (NGT). We only investigate transfer learning with the existing architecture and do not optimize the architecture for transfer learning: the performance reported here is a lower bound.
We perform three experiments. First, we freeze the entire network except the classification layer, which we train on VGT. Second, we do the same, but after the model has converged on VGT, we fine-tune the sequence embedding and classifier until early stopping. In a third experiment, we fine-tune the entire model after convergence. This illustrates the potential gains in accuracy with transfer learning.
**Multilingual SLR models.** We consider training the model on multiple languages at the same time to learn more generic sign representations. We train the model on the NGT dataset and the VGT dataset jointly. Our approach is based on a paper on multilingual speech recognition by Toshniwal et al. [39]. We compare two approaches to multilingual SLR in our first experiments. In _joint multilingual SLR_, we merge the vocabularies of NGT and VGT, and train a single classifier on the union of the vocabularies. This does not add additional hyperparameters, but signs that are shared or similar between languages will be mapped to different labels and this can introduce confusion. In _conditioned multilingual SLR_ we pass the language ID to an embedding; the language embedding is an additional input to the classifier layer. Conditioning on the language identity should reduce confusion between signs from different languages that are similar. For the conditioned model, we use a 4-dimensional language ID embedding.
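As an illustration of the conditioned variant, the classifier head below receives the 4-dimensional language embedding next to the sequence embedding; the merged vocabulary size (458 NGT + 292 VGT glosses) and the class name are illustrative assumptions.

```
import torch
import torch.nn as nn

class ConditionedClassifier(nn.Module):
    def __init__(self, d_model=128, num_classes=750, num_languages=2, d_lang=4):
        super().__init__()
        self.lang_embedding = nn.Embedding(num_languages, d_lang)
        self.classifier = nn.Linear(d_model + d_lang, num_classes)

    def forward(self, cls_embedding, language_id):
        # cls_embedding: (batch, d_model) output of the sequence embedder
        # language_id: (batch,) integer sign language ID, e.g. 0 = NGT, 1 = VGT
        lang = self.lang_embedding(language_id)
        return self.classifier(torch.cat([cls_embedding, lang], dim=-1))
```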
## 4 Analysis
### Comparison of pose estimators
We compare MediaPipe Holistic, OpenPose, and MMPose on runtime performance and differences in outputs.
We measure the runtime performance of MediaPipe, OpenPose and MMPose as follows: we select a subset of 100 clips from our dataset. We then run all three pose estimators on these clips and measure the execution time. We divide the execution time by the number of frames in the video. We process a single video at a time to simulate runtime performance at inference time. These experiments were performed on an Intel Xeon Silver 4216 CPU (2.10GHz) and an NVIDIA GeForce RTX 3090. In these experiments, MediaPipe runs on the CPU, whereas OpenPose and MMPose leverage the GPU. The results are shown in Figure 4. MediaPipe Holistic processes 4.8 frames per second (FPS), OpenPose 1.1 FPS and MMPose 1.3 FPS.
OpenPose and MMPose output confidence values per keypoint. MediaPipe Holistic does not. Instead, if a hand could not be detected or the pose estimation has failed, no values are returned for the entire hand. The same is true for the body pose. In the Corpus VGT (see Section 3.1), we find that the body keypoints are missing in 0.005% of the frames, the left hand keypoints in 11% of the frames and the right hand keypoints in 8% of the frames.
We observe in Figure 5 that the hand keypoints predicted by OpenPose have lower confidence values than the body keypoints. This aligns with previous research that stated that OpenPose body keypoints are more robust than hand keypoints [11]. For the hands and the body, we observe peaks at 0 confidence. These are keypoints that are mapped to the value \((0,0,0)\) as a fallback; they are essentially "missing values".
Figure 6 shows that MMPose is more confident in its
Figure 4: Runtime performance for the three pose estimators.
predictions than OpenPose and there are no peaks at zero confidence. Because in some cases there will be keypoints that MMPose should not be able to predict (e.g., when a hand is out of the frame), this suggests that MMPose may be more confident in its mistakes than OpenPose. For the purpose of SLR, where precise keypoint predictions are important, we argue that having missing values that can be imputed or corrected, is more useful than having the pose estimator hallucinate keypoint coordinates.
The three pose estimators were trained on different data and have different architectures. Hence, their predictions are different as well. Figure 7 is illustrative of the difference between keypoints predicted by OpenPose, MediaPipe Holistic, and MMPose. OpenPose fails to predict any keypoints for the left arm and hand. The hand keypoints in particular show noise and variable predictions. Especially the right hand keypoints, which are the distinguishing keypoints for this sign, are accurately predicted only by MediaPipe. MediaPipe Holistic has a dedicated hand pose estimation model, whereas OpenPose and MMPose use full-body models. As a result, the hand keypoints are typically better in MediaPipe predictions (see the figure).
These differences in keypoint predictions also appear in the downstream performance. From Table 2, it is clear that MediaPipe outperforms OpenPose and MMPose. Interestingly, adding depth predictions from MediaPipe results in worse performance. Visual analysis informs us that these are not robust and they may simply introduce noise instead of additional information. Considering that the test accuracies between MediaPipe with and without depth are more similar, it may be that the depth predictions are worse for certain individuals in the validation set.
### Variable length sequences
We find that variable length sequences give significantly better performance than fixed length sequences of window size 16 (the median number of frames in a sign) with stride 2. The PTN trained on fixed length sequences achieves 40.56% accuracy on the validation set, whereas the PTN trained on variable length sequences achieves
\begin{table}
\begin{tabular}{l l l l l} \hline Model & Pose estimator & Train & Validation & Test \\ \hline VTN & N/A & 52.42\% & 39.93\% & 38.93\% \\ PTN & MediaPipe (2D) & 76.61\% & **48.15\%** & 45.70\% \\ PTN & MediaPipe (3D) & 80.77\% & 45.93\% & 45.26\% \\ PTN & OpenPose & 77.90\% & 39.27\% & 35.86\% \\ PTN & MMPose & 62.72\% & 36.49\% & 36.00\% \\ \hline \end{tabular}
\end{table}
Table 2: Accuracy values for the models. The low training accuracy is due to heavy regularization to avoid overfitting.
Figure 5: Distribution of confidence values returned by OpenPose.
Figure 6: Distribution of confidence values returned by MMPose.
Figure 7: Keypoints predicted by OpenPose, MediaPipe, and HRNet. For OpenPose, the left elbow, wrist, and hand keypoints are missing.
48.15%. This is beneficial because it means that for inference, we can input entire sequences instead of using sliding windows.
### Pose post-processing
We analyze the impact of our pose post-processing steps by performing an ablation study, removing imputation (for MediaPipe and OpenPose) and normalization from the pipeline. The results are shown in Table 3. Normalization is clearly a crucial step in the pipeline. Missing data imputation is important for MediaPipe, but less so for OpenPose. Whereas in OpenPose, individual keypoints can be missing, in MediaPipe it is always an entire hand or more. Therefore, the impact of missing data is larger for MediaPipe.
Interpolating shorter sequences works better than interpolating longer sequences. The longer the sequence, the higher the probability that the interpolation is incorrect. Figure 8 shows an example: an intermediate hand pose was missed due to fast movements. At the same time, when we only need to interpolate a short sequence, this missing data may not have as large of an impact. This is reflected in the small difference in score when using imputation and when not using imputation.
### Transfer learning with pose embeddings
Table 4 shows the results of our transfer learning experiments. The frame and sequence embeddings can be transferred from NGT to VGT. Even when not fine-tuning the embeddings, we achieve competitive (but slightly lower) accuracy values. This means that transfer learning to languages without labeled data or with very limited amounts of labeled data is possible. When fine-tuning the sequence embedding alone, and the sequence embedding and frame embedding together, we obtain better results. The best results are obtained when fine-tuning the entire network after first fine-tuning the classifier layer.
We obtain a larger increase in accuracy when fine-tuning the VTN (compared to the PTN). This is to be expected, because the ResNet has more trainable parameters and thus can benefit significantly more from pre-training than our pose embedding. Note that the VTN also needs to adapt to the different visual characteristics of the VGT dataset, unlike the PTN. Nevertheless, this score (46.78%) is still below that of a PTN trained only on VGT data (48.15%). For low resource sign languages, transfer learning of keypoint-based models specifically will prove the most useful.
### Multilingual SLR models
The results of our multilingual learning are shown in Table 5. The joint multilingual model performs worse than
\begin{table}
\begin{tabular}{l l l l l l} \hline Model & FT FE & FT SE & NGT & VGT & \(\Delta\) \\ \hline _PTN_ & _N/A_ & _N/A_ & _N/A_ & _48.15\%_ & _N/A_ \\ PTN & ✗ & ✗ & 44.19\% & 46.19\% & -1.96\% \\ PTN & ✗ & ✓ & 44.19\% & 47.93\% & -0.22\% \\ PTN & ✓ & ✓ & 44.19\% & **49.30\%** & +1.15\% \\ \hline _VTN_ & _N/A_ & _N/A_ & _N/A_ & _39.93\%_ & _N/A_ \\ VTN & ✗ & ✗ & 38.81\% & 39.05\% & -0.88\% \\ VTN & ✗ & ✓ & 38.81\% & 39.64\% & -0.29\% \\ VTN & ✓ & ✓ & 38.81\% & 46.78\% & **+6.85\%** \\ \hline \end{tabular}
\end{table}
Table 4: Transfer learning validation accuracy. \(\Delta\) is the difference in accuracy between the model trained with transfer learning (NGT to VGT) and the model trained only on VGT data. FT FE: Fine-tune frame embedding. FT SE: Fine-tune sequence embedding. Baselines are italicized.
\begin{table}
\begin{tabular}{l l l l} \hline Pose estimator & Norm. & Imputation & Accuracy \\ \hline MediaPipe (2D) & ✓ & ✓ & 48.15\% \\ MediaPipe (2D) & ✓ & ✗ & 47.08\% \\ MediaPipe (2D) & ✗ & ✗ & 40.01\% \\ OpenPose & ✓ & ✓ & 39.23\% \\ OpenPose & ✓ & ✗ & 39.27\% \\ OpenPose & ✗ & ✗ & 35.53\% \\ MMPose & ✓ & N/A & 36.49\% \\ MMPose & ✗ & N/A & 21.47\% \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study results (validation accuracy).
\begin{table}
\begin{tabular}{l c c} \hline Method & NGT accuracy & VGT accuracy \\ \hline Joint & 42.61\% & 38.27\% \\ Conditioned & 44.29\% & 49.52\% \\ \hline \end{tabular}
\end{table}
Table 5: Multilingual SLR accuracy.
the monolingual models (-1.58% for NGT and -9.88% for VGT). However, the conditioned multilingual model outperforms the monolingual models (+0.19% for NGT and +1.37% for VGT). The difference reduces when fine-tuning the monolingual model (0.22%). Likely, there are similar signs in the NGT and VGT dataset which introduce confusions in the joint model. These confusions can be eliminated by conditioning on the language embedding.
The multilingual models do not significantly outperform the monolingual models with transfer learning, but training them is easier as it does not require a three step approach (pre-train, transfer, fine-tune) and as only a single model needs to be trained instead of one for every considered combination of languages.
## 5 Conclusion
This paper tackles low resource spontaneous sign language recognition (SLR). SLR models can be used for various tasks, such as automatic sign language corpus annotation, sign language information retrieval, and sign language translation. This paper investigates the extraction of robust sign embeddings, a crucial step for these tasks.
In previous research, image-based models have typically outperformed keypoint-based models for SLR. However, image-based models are prone to exacerbating bias, especially in low resource scenarios. We leverage human pose estimation tools as feature extractors, because these pose estimators have been trained on much larger and more varied datasets than what is available for SLR, and they should be less biased as a result. Comparing the three most popular pose estimators OpenPose, MMPose, and MediaPipe, we find that MediaPipe is the fastest and most robust. Nevertheless, it still makes errors in its predictions (approximately 10% of the frames have missing hand keypoints). We perform post-processing on the keypoints in the form of normalization and missing value imputation and show the importance of these steps. By applying these steps, we outperform image-based models that were previously more powerful than keypoint-based models for SLR.
This paper furthermore introduces a pose embedding that enables transfer learning for keypoint models across datasets and even sign languages. Transferring the pose embedding to another language can even be done without fine-tuning with minimal loss of accuracy. When fine-tuning _is_ applied, we observe an increase in accuracy for the downstream language.
Training, transferring and fine-tuning SLR models for different languages can be cumbersome. We propose to learn language-conditioned multilingual SLR models. These models do not significantly outperform the monolingual models when the latter are fine-tuned, but they only require a single training loop and result in a single model: this saves on computational cost during training and reduces model storage requirements.
Future research still needs to tackle the large class imbalance in SLR datasets and handle larger vocabularies. Even though we are able to drastically improve performance using our pipeline, more research is still required into pose estimation specifically for sign language data and how to optimally use the resulting keypoint data.
## Acknowledgements
The images of signers in Figures 2, 3, and 7 are reproduced from the Corpus VGT project [40] (CC BY-NC-SA).
Mathieu De Coster's research is funded by the Research Foundation Flanders (FWO Vlaanderen): file number 77410.
Figure 8: Example of a failure case for linear interpolation. The transparent hand is the ground truth, and the opaque hand is interpolated.
This work has been conducted within the SignON project. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101017255.
|
2309.13805 | Identifying Vulnerabilities in Smart Contracts using Interval Analysis | This paper serves as a progress report on our research, specifically focusing
on utilizing interval analysis, an existing static analysis method, for
detecting vulnerabilities in smart contracts. We present a selection of
motivating examples featuring vulnerable smart contracts and share the results
from our experiments conducted with various existing detection tools. Our
findings reveal that these tools were unable to detect the vulnerabilities in
our examples. To enhance detection capabilities, we implement interval analysis
on top of Slither [3], an existing detection tool, and demonstrate its
effectiveness in identifying certain vulnerabilities that other tools fail to
detect. | Ştefan-Claudiu Susan, Andrei Arusoaie | 2023-09-25T01:17:56Z | http://arxiv.org/abs/2309.13805v1 | # Identifying Vulnerabilities in Smart Contracts using Interval Analysis
###### Abstract
This paper serves as a progress report on our research, specifically focusing on utilizing interval analysis, an existing static analysis method, for detecting vulnerabilities in smart contracts. We present a selection of motivating examples featuring vulnerable smart contracts and share the results from our experiments conducted with various existing detection tools. Our findings reveal that these tools were unable to detect the vulnerabilities in our examples. To enhance detection capabilities, we implement interval analysis on top of Slither [4], an existing detection tool, and demonstrate its effectiveness in identifying certain vulnerabilities that other tools fail to detect.
## 1 Introduction
The term "smart contract" was originally used to describe automated legal contracts, whose content cannot be negotiated or changed. Nowadays, the term is most commonly known as programs that are executed by special nodes in a decentralised network or a blockchain. Indeed, the blockchain technology captures the initial meaning of the term: contracts are encoded as an immutable piece of code, and the terms of the contract are predetermined and automatically enforced by the contract itself.
This immutability property also implies more effort on the contract developers side: they have to be very careful about what gets deployed because that code is (1) public and anyone can see it and (2) it cannot be changed/updated as an ordinary program. Ethereum1 is a very popular blockchain platform which has been affected by the most significant attacks based on vulnerabilities in the deployed code. For instance, "The DAO attack" 2 was based on the fact that a smart contract could be interrupted in the middle of its execution and then called again. This is known as the _reentrancy_ vulnerability. An attacker noticed that in a withdrawal function of the smart contract, the transfer of digital assets was performed before updating the balance (i.e., decrementing the balance with the withdrawn amount) of a contract party. The attacker first deposited cryptocurrency into the smart contract. Then, by creating a scenario where the withdrawal function called itself just before updating its own balance, the attacker managed to drain the funds of the smart contract as long as its balance exceeded the amount withdrawn.
Footnote 1: Vitalik Buterin, _A Next-Generation Smart Contract and Decentralized Application Platform_, 2014, URL: [https://ethereum.org/en/whitepaper/](https://ethereum.org/en/whitepaper/)
Footnote 2: David Siegel, 2016: Understanding The DAO Attack. URL:[https://www.coindesk.com/learn/2016/06/25/understanding-the-dao-attack/](https://www.coindesk.com/learn/2016/06/25/understanding-the-dao-attack/)
Such mistakes are unfortunate and researchers and practitioners started to propose methods and tools for detecting them. For example, Slither [4] is an easy to use static analysis tool for smart contracts written in Solidity 3; Mythril4 is a security analysis tool for EVM bytecode based on symbolic execution;
Solhint5 is a linter for Solidity code. The list of tools is long and it was explored in various papers (e.g., [6, 8, 7, 1, 5]). These tools are indeed very useful, but at the same time they are not perfect and they can fail to detect problematic situations in smart contracts code.
Footnote 5: Solhint official website: [https://protofire.github.io/solhint/](https://protofire.github.io/solhint/)
In this paper we provide several examples of smart contracts which contain vulnerabilities, and we find that state-of-the-art vulnerability detection tools are not as precise as we would expect: they fail to detect the vulnerabilities in our examples. We attempt to enhance one of them (Slither) with an existing static analysis method called interval analysis. This method allows us to better approximate the interval of possible values for each program variable. Based on the experiments that we performed, interval analysis proves to be very useful in detecting problematic situations in smart contracts. For example, integer division in Solidity ignores the remainder. In a situation where an amount of cryptocurrency must be divided and transferred to a number of recipients, a division where the remainder is ignored could lead to funds that remain locked in the smart contract. Another example is related to uninitialised variables: such variables are initialised with default values and it may be the case that the default value is not suitable for the purpose of that variable. By keeping track of all the possible values of each program variable, interval analysis allows us to signal such situations in smart contracts.
**Summary of contributions.**
1. We provide several examples of vulnerable smart contracts, in which the vulnerabilities prove to be challenging to detect using state-of-the-art detection tools.
2. We implement an existing analysis technique called interval analysis on top of Slither.
3. We evaluate our implementation.
**Paper organisation.** In Section 2 we present several examples of smart contract vulnerabilities. In Section 3 we show how state-of-the-art tools behave on these examples. The interval analysis technique is presented in Section 4 together with our implementation. We conclude in Section 5.
## 2 Vulnerabilities in Smart Contracts
This section contains several examples of smart contract vulnerabilities written in Solidity. These were selected from a larger taxonomy [7]. The whole classification includes 55 vulnerabilities split among 10 categories. Both the literature and existing community taxonomies were taken into account when selecting these defects. We selected these vulnerabilities because state-of-the-art tools are not able to detect most of them, yet they should be detectable using interval analysis.
### Tautologies or Contradictions in assert or require Statements
The Solidity statements assert and require are typically used to validate boolean conditions. According to the Solidity documentation6, assert is meant for checking internal errors, while require should be used to test conditions that cannot be determined until runtime. Both statements throw exceptions and revert the corresponding transactions. In their intended use, the conditions in assert should never be false, as they signal contract-level errors, while the conditions in require can be false, as they signal input errors. No matter what level of error a statement specifies, it is an issue if the condition that it contains is a tautology or a contradiction: a tautology makes the statement useless, while a contradiction makes the transaction impossible to complete, as illustrated by the following code:
```
1: function notGonnaExecute(uint parameter) external pure returns (uint)
2: {
3:     require(parameter < 0); // uint cannot be < 0
4:     return parameter;
5: }

1: function uselessAssertUint(uint parameter) external pure returns (uint)
2: {
3:     require(parameter >= 0); // uint is always >= 0
4:     return parameter;
5: }
```
### Division by Zero
This is a classic arithmetic issue that is common among most programming languages. The Solidity compiler does not allow direct division by zero. However, the compiler cannot detect situations when the denominator could evaluate to zero. The following code snippet contains an example. The length of the recipients array is not checked before computing the amount that should be sent to each recipient (line 4):
```
1: function split(address[] calldata recipients) external payable
2: {
3:     require(msg.value > 0, "Please provide currency to be split among recipients");
4:     uint amount = msg.value / recipients.length; // problem here if length is 0
5:     for (uint index = 0; index < recipients.length; index++)
6:     {
7:         (bool success, ) = payable(recipients[index]).call{value: amount}("");
8:         require(success, "Could not send ether to recipient");
9:     }
10: }
```
### Integer Division Remainder
This is another arithmetic issue that is common among many programming languages. Solidity performs integer division, which means that the result of the division operation is truncated. Ignoring the remainder of the division can therefore lead to logic errors. The snippet below contains an example: if the provided amount does not divide exactly by the number of recipients, then the leftover cryptocurrency could remain locked in the contract.
```
1: function split(address[] calldata recipients) external payable
2: {
3:     require(recipients.length > 0, "Empty recipients list");
4:     uint amountPerRecipient = msg.value / recipients.length; // remainder???
5:     require(amountPerRecipient > 0, "Amount must be positive");
6:     for (uint index = 0; index < recipients.length; index++)
7:     {
8:         payable(recipients[index]).transfer(amountPerRecipient);
9:     }
10: }
```
### Uninitialised Variable
Uninitialized variables could lead to logical errors or exceptions. If a variable is not initialised, there is a great chance that the default value assigned to the variable (according to its type) is not suitable for
the purpose of that variable. The following code contains an access modifier which relies on the owner state variable. The variable is private, and thus, it cannot be accessed or assigned outside the contract. Also, there is no explicit initialisation of owner within a constructor. This makes the variable stuck to the default value, and thus, all the functions marked with the onlyOwner modifier cannot be executed.
```
1: address private owner;
2:
3: modifier onlyOwner() {
4:     require(msg.sender == owner, "Only the owner of the contract has access");
5:     _;
6: }
```
### User Input Validation
Parameter validation or "sanitisation" is a process that must be implemented at the beginning of every method. This ensures that the method will always execute as expected. End users should not be trusted to always provide valid parameters. If validation is missing and the end user is unaware, or worse, malicious, it could cause critical errors that produce unexpected results or halt contract execution all together. The following example contains a getter method for an internal array. The user can provide an index that is not validated, thus having the possibility of going out of bounds.
```
1: uint256[] private _array = [10, 20, 30, 40, 50];
2:
3: function getArrayElement(uint256 index) external view returns (uint256)
4: {
5:     return _array[index];
6: }
```
### Unmatched Type
In Solidity, enums are stored as unsigned integers. Thus, they can be compared and assigned with variables of type uint. Situations like these can become tricky since the value domain of an enum is likely to be much smaller than the value domain of unsigned integers. If a value greater than the range of the enum is assigned to an enum variable, then the transaction will be reverted. While it is true that reverting the transaction is considered safe, such situations signal faulty logic in the contract code and should preferably be avoided.
```
 1: contract UnmatchedType {
 2:     enum Options { Candidate1, Candidate2, Candidate3 }
 3:     mapping(address => Options) private _votes;
 4:     mapping(Options => uint) private _votesCount;
 5:     function vote(uint option) external {
 6:         _votes[msg.sender] = Options(option);
 7:         _votesCount[Options(option)]++;
 8:     }
 9:     function getStatisticsForOption(uint option) external view returns (uint) {
10:         return _votesCount[Options(option)];
11:     }
12: }
```
## 3 Detecting Vulnerabilities Using Dedicated Analysis Instruments
This section briefly presents the results of some experiments that we performed. Basically, we used a few tools for analysing smart contracts in order to check how they behave on our examples presented in
Section 2. The tools that we selected are presented below. It is worth noting that we picked tools which implement different techniques (e.g., static analysis, symbolic execution, linter). For a tool to be eligible for our study, it has to be open source, active and compatible with the latest version of Solidity.
Slither is a static analysis tool written in Python. It provides vulnerability detection and code optimization advice. It features many detectors that target different issues. Its analysis runtime is very low compared to the other tools. It analyses Solidity code by transforming it into an intermediary representation called SlithIR. Being an open source project, it allows anyone to contribute and improve it, and it is the foundation for our implementation (discussed later in Section 4). Slither was able to detect uninitialized variables as well as trivial tautologies and contradictions in our examples.
Solhint is a linter for Solidity code. An open source project, it is able to detect possible vulnerabilities, optimization opportunities and abidance to style conventions. The tool also features a customisable set of detection rules that can be employed, along with predefined configurations. Users can define their own configurations and decide which issues they want to target. Unfortunately, Solhint was not able to detect any of the issues in our examples.
Remix7 also features a static analysis plugin. We were unable to find any information about the analysis process performed by this tool. Moreover, it was unable to detect any problems in our examples.
Footnote 7: Remix Docs: [https://remix-ide.readthedocs.io/en/latest/](https://remix-ide.readthedocs.io/en/latest/)
Mythril is a tool that leverages symbolic execution to simulate multiple runs of a contract's methods. It has a fairly long runtime compared to the others. We even encountered executions that took more than a few hours. Mythril was unable to detect any of the issues presented above.
In Table 1, we present a summary of the results that we obtained. The results indicate that nearly all tools fail to detect the vulnerabilities in our examples. This does not mean that these tools are not useful or very bad at signaling issues in smart contract code. The way we interpret these results is that these tools need to be enhanced with more powerful techniques that could increase their detection capabilities.
## 4 Interval Analysis for Vulnerability Detection
### Interval Analysis
Interval Analysis [9] is a static analysis technique that approximates the interval of values of every variable in a program at a certain instruction. The technique is not limited to predicting the values interval of a variable; it can also be used to predict certain properties that can be derived from the value of the variable. For instance, instead of working with integer intervals, an analysis can target the parity of variables and work only with 2-valued intervals (even, odd).
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**Examples** & Slither & Solhint & Remix & Mythril \\ \hline \hline Tautologies/Contradictions & ✓ & ✗ & ✗ & ✗ \\ Division by zero & ✗ & ✗ & ✗ & ✗ \\ Integer division & ✗ & ✗ & ✗ & ✗ \\ Uninitialised variable & ✓ & ✗ & ✗ & ✗ \\ User input validation & ✗ & ✗ & ✗ & ✗ \\ Unmatched type & ✗ & ✗ & ✗ & ✗ \\ \hline \end{tabular}
\end{table}
Table 1: A summary of the evaluation of the tools when executed on our examples (Section 2).
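Before walking through an example, the following Python sketch (illustrative only, not the code of any of the tools above) shows the kind of abstract domain interval analysis manipulates: every variable is approximated by a lower and an upper bound, intervals are joined at control-flow merge points, and abstract division can flag the division-by-zero pattern of Section 2.2 whenever the divisor's interval contains zero.

```
from dataclasses import dataclass

UINT256_MAX = 2**256 - 1

@dataclass(frozen=True)
class Interval:
    lo: int
    hi: int

    def join(self, other):
        # least upper bound, used where two control-flow branches merge
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def div(self, other):
        # the divisor's interval containing 0 is exactly the pattern of Section 2.2
        if other.lo <= 0 <= other.hi:
            print("warning: divisor may be zero")
        lo = self.lo // other.hi if other.hi > 0 else 0
        hi = self.hi // max(other.lo, 1)
        return Interval(lo, hi)

# recipients.length is only known to be a non-negative integer, so
# msg.value / recipients.length triggers the warning:
length = Interval(0, UINT256_MAX)
msg_value = Interval(1, UINT256_MAX)      # after `require(msg.value > 0)`
amount = msg_value.div(length)
```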
We present interval analysis via the Unmatched Type example from Section 2.6. Moreover, we show how interval analysis can help us detect a problem in this example, more precisely in the vote function:
```
5: function vote(uint option) external {
6:     _votes[msg.sender] = Options(option);    // Statement 1
7:     _votesCount[Options(option)]++;          // Statement 2
8: }
```
The function registers the vote of a user and increases the total vote count for their option. The problem is at line 6 (Statement 1): the input is of type uint and it could easily be outside the values range [0, 1, 2] of the enum.
Interval analysis provides an approximation of the values interval for every program variable at each program location. In Table 2 we show how these intervals are computed for our example. Each line of the table presents the intervals for the program variables (displayed in columns) _before_ the execution of each statement in the first column. For example, before the execution of Statement 1, we do not have any information about the option variable, so its range of values will correspond to the values domain for uint. For _votes[msg.sender], the value interval changes before Statement 2 in case of normal execution (otherwise, the transaction is reverted) to [0, 2], that is, the only possible range for Options. Interval analysis performs this calculation using the Worklist Algorithm, an algorithm which traverses the program control flow graph and updates the intervals for these variables until a fixpoint is reached. This algorithm is shown in Section 4.2.
Recall that the problem we are trying to detect using interval analysis is a mismatch of domains between the variable assigned and the variable whose value is assigned. Since interval analysis computes the interval for option and _votes[msg.sender], a close inspection of the difference between the intervals is sufficient to reveal the problem. A require statement that checks upfront the values for the option parameter would solve the problem. Also, our detection technique would not signal an issue.
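In other words, once these intervals are available, a detector for this issue only needs to compare the interval computed for the assigned value with the declared range of the enum; a toy version of that check (names illustrative) could look as follows.

```
def check_enum_assignment(value_interval, enum_size):
    # warn when the assigned value may fall outside the enum's declared range
    lo, hi = value_interval
    if lo < 0 or hi > enum_size - 1:
        print("warning: value may be outside the enum range [0, %d]" % (enum_size - 1))

# the vote() example: option is any uint, Options has 3 members -> warning
check_enum_assignment((0, 2**256 - 1), 3)
```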
### An Implementation of Interval Analysis on Top of Slither
We built our implementation using Python modules provided by Slither. These are the same modules that are used internally by Slither for its own detectors. During execution, Slither fills some of its internal data structures with useful information, such as the contract CFG (control flow graph), an intermediary SSA (static single assignment) representation of the code, and information about each variable (e.g., type, scope and name). We use the information in these data structures to implement interval analysis.
The Worklist Algorithm shown in Figure 1 works by processing every edge in the contract CFG. These edges are added into a list (the "worklist"). It is an iterative algorithm that processes existing elements until the list is empty. When new information is added to the current state, new edges are also added to the worklist. The algorithm stops when no more new information can be discovered.
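A schematic, library-agnostic Python version of this iteration is given below; in our implementation the entry node and its initial state (the extreme labels and value), the lattice order, and the flow function are exactly the parameters discussed next, and the CFG edges come from Slither.

```
def worklist_fixpoint(edges, entry, entry_state, transfer, join, leq):
    # edges: (src, dst) pairs of the CFG; transfer(node, state): flow function;
    # join: least upper bound; leq: lattice order; {} plays the role of "no info yet"
    nodes = {n for e in edges for n in e}
    states = {n: {} for n in nodes}
    states[entry] = entry_state
    pending = list(edges)                          # the worklist of CFG edges
    while pending:
        src, dst = pending.pop()
        out = transfer(src, states[src])           # abstract effect of executing src
        if not leq(out, states[dst]):              # new information discovered at dst?
            states[dst] = join(states[dst], out)
            pending.extend(e for e in edges if e[0] == dst)   # re-process outgoing edges
    return states
```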
We implemented a modular Worklist algorithm. Essential information such as extreme labels8, order
\begin{table}
\begin{tabular}{|l|c|c|} \hline _Statements_ & option & _votes[msg.sender] \\ \hline \hline
1 & [0, max] & [0, max] \\
2 & [0, max] & [0, 2] \\ End & [0, max] & [0, 2] \\ \hline \end{tabular}
\end{table}
Table 2: Interval analysis for the vote function.
function9 and flow function10 are all provided as parameters to the Worklist algorithm. This allows us to perform multiple types of analysis using the same base implementation. Our implementation leverages the CFG provided by Slither to split the code of a function into multiple parts. Each node is then split even further into SlithIR SSA [3] lines that are analyzed individually. Along with basic types such as uint and bool, our implementation is able to model complex types such as arrays, mappings and structs.
Footnote 9: A function that receives two elements of the same domain and determines the greater one.
We defined our own data type to encapsulate information about a variable, such as its type, scope, name and, most importantly, its value interval. The program state is represented as a dictionary having variable names as keys and objects of our own defined type as values. Complex types are defined as recursive dictionaries; for example, the interval of a struct is modeled as a dictionary containing intervals for each of its fields, or even other dictionaries if the structs are nested.
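A minimal sketch of what such a data type and program state might look like is given below; the class and field names are hypothetical and do not reproduce the exact classes of our implementation.

```
from dataclasses import dataclass, field

@dataclass
class VarInfo:
    # Hypothetical container for the per-variable information described above
    solidity_type: str                            # e.g. "uint256", "bool", "struct Foo"
    scope: str                                    # "state", "parameter" or "local"
    name: str
    interval: tuple                               # (lower, upper) bound of the possible values
    fields: dict = field(default_factory=dict)    # nested VarInfo objects for structs

# The program state maps variable names to their current information
program_state = {
    "option": VarInfo("uint256", "parameter", "option", (0, 2**256 - 1)),
    "_votes": VarInfo("mapping(address => Options)", "state", "_votes", (0, 2)),
}
```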
Current status. Our implementation is now able to successfully analyze programs containing assignments and arithmetic expressions for both elementary and complex types. It takes into consideration state variables, function parameters, and local variables. We are now capable of detecting issues such as:
* Arithmetic issues including Division By Zero and Integer Division Remainder;
* Issues related to variables initialization;
* Issues related to parameter validation.
## 5 Conclusions
In this paper we identified some vulnerabilities that are not handled by state of the art tools for smart contract analysis. These vulnerabilities vary in their severity, but no matter the impact, defects and potential errors should be identified as soon as possible. We attempt to improve Slither with a more powerful technique called interval analysis. We explain why this technique is a good fit for detecting these issues and how it could detect them. We built a custom interval analysis on top of Slither, leveraging
Figure 1: The Worklist Algorithm.
the information that Slither already provides about a contract, its attributes, methods, method parameters, program flow and many more. Currently, our implementation detects vulnerabilities that other tools miss.
### Future Work
We currently handle only a subset of expressions in the Solidity programming language, covering expressions over integers, booleans, arrays, structures, and mappings. However, more elaborate work needs to be done to tackle addresses and operations over addresses, more complex loops or conditional statements, etc. Right now, our code can be executed on every smart contract written in Solidity, but it will perform interval analysis only for the subset that we cover.
Interprocedural analysis would significantly improve the precision of our analysis. An example of a vulnerability that we are not yet able to detect is _Short Address_. This could be detected by monitoring the length attribute of the payload. Another example is _Tautologies and Contradictions in Assert or Require Statements_: it could be detected by approximating the result of the boolean expression and checking if the interval contains only one value: _true_ for tautologies and _false_ for contradictions.
Being able to handle more complex conditional statements and loops would also be of great help in obtaining a more accurate monitoring of the program state by interpreting the semantics of boolean expressions. Once we identify multiple possible states based on conditional branches, we can leverage unifying techniques such as _Trace Partitioning_. Additionally, monitoring implicit state variables that are contract-level or function-level, like balance or msg.sender, would be beneficial in identifying balance-related issues and user interaction problems.
|
2310.20374 | Ncorpi$\mathcal{O}$N : A $\mathcal{O}(N)$ software for N-body
integration in collisional and fragmenting systems | Ncorpi$\mathcal{O}$N is a $N$-body software developed for the time-efficient
integration of collisional and fragmenting systems of planetesimals or moonlets
orbiting a central mass. It features a fragmentation model, based on crater
scaling and ejecta models, able to realistically simulate a violent impact. The
user of Ncorpi$\mathcal{O}$N can choose between four different built-in modules
to compute self-gravity and detect collisions. One of these makes use of a
mesh-based algorithm to treat mutual interactions in $\mathcal{O}(N)$ time.
Another module, much more efficient than the standard Barnes-Hut tree code, is
a $\mathcal{O}(N)$ tree-based algorithm called FalcON. It relies on fast
multipole expansion for gravity computation and we adapted it to collision
detection as well. Computation time is reduced by building the tree structure
using a three-dimensional Hilbert curve. For the same precision in mutual
gravity computation, Ncorpi$\mathcal{O}$N is found to be up to 25 times faster
than the famous software REBOUND. Ncorpi$\mathcal{O}$N is written entirely in
the C language and only needs a C compiler to run. A python add-on, that
requires only basic python libraries, produces animations of the simulations
from the output files. The name Ncorpi$\mathcal{O}$N, reminding of a scorpion,
comes from the French $N$-corps, meaning $N$-body, and from the mathematical
notation $\mathcal{O}(N)$, due to the running time of the software being almost
linear in the total number $N$ of moonlets. Ncorpi$\mathcal{O}$N is designed
for the study of accreting or fragmenting disks of planetesimal or moonlets. It
detects collisions and computes mutual gravity faster than REBOUND, and unlike
other $N$-body integrators, it can resolve a collision by fragmentation. The
fast multipole expansions are implemented up to order six to allow for a high
precision in mutual gravity computation. | Jérémy Couturier, Alice C. Quillen, Miki Nakajima | 2023-10-31T11:35:12Z | http://arxiv.org/abs/2310.20374v3 | Ncorpi\(\mathcal{O}\)N : A \(\mathcal{O}(N)\) software for N-body integration in collisional and fragmenting systems.
###### Abstract
Ncorpi\(\mathcal{O}\)N is a N-body software developed for the time-efficient integration of collisional and fragmenting systems of planetesimals or moonlets orbiting a central mass. It features a fragmentation model, based on crater scaling and ejecta models, able to realistically simulate a violent impact.
The user of Ncorpi\(\mathcal{O}\)N can choose between four different built-in modules to compute self-gravity and detect collisions. One of these makes use of a mesh-based algorithm to treat mutual interactions in \(\mathcal{O}(N)\) time. Another module, much more efficient than the standard Barnes-Hut tree code, is a \(\mathcal{O}(N)\) tree-based algorithm called FalcON. It relies on fast multipole expansion for gravity computation and we adapted it to collision detection as well. Computation time is reduced by building the tree structure using a three-dimensional Hilbert curve. For the same precision in mutual gravity computation, Ncorpi\(\mathcal{O}\)N is found to be up to 25 times faster than the famous software REBOUND.
Ncorpi\(\mathcal{O}\)N is written entirely in the C language and only needs a C compiler to run. A python add-on, that requires only basic python libraries, produces animations of the simulations from the output files. The name Ncorpi\(\mathcal{O}\)N, reminding of a scorpion, comes from the French _N-corps_, meaning N-body, and from the mathematical notation \(\mathcal{O}(N)\), due to the running time of the software being almost linear in the total number \(N\) of moonlets. Ncorpi\(\mathcal{O}\)N is designed for the study of accreting or fragmenting disks of planetesimal or moonlets. It detects collisions and computes mutual gravity faster than REBOUND, and unlike other N-body integrators, it can resolve a collision by fragmentation. The fast multipole expansions are implemented up to order six to allow for a high precision in mutual gravity computation.
keywords: N-body, Fast Multipole Method, Mesh, Fragmentation, Collision, Ncorpi\(\mathcal{O}\)N, FalcON +
Footnote †: journal: Journal of Computational Physics
## 1 Introduction
Ncorpi\(\mathcal{O}\)N is an open-source \(N\)-body software specialized in disk simulation, published under the GNU General Public License. It has its own website available here1 and the source code is publicly distributed on github2. By construction, Ncorpi\(\mathcal{O}\)N is able to integrate a system where one body (the central body) dominates in mass all the others (the orbiting bodies). The user also has the possibility of perturbing the system with a distant star, for example a star around which the central body may be orbiting if it is a planet, or a binary star if the central body is a star. For the rest of this work, we refer to the central body of the simulation as the Earth and to the distant star as the Sun.
Footnote 1: [https://ncorpion.com](https://ncorpion.com)
The development of the software began in parallel of our work on the formation of the Moon, and as such, we hereafter refer to the orbiting bodies as _moonlets_. Ncorpi\(\mathcal{O}\)N is particularly adapted to the simulation of systems where the mean free path is short, typically less than the semi-major axis, but also of systems where self-gravity plays an important role. The Moon is thought to have formed from a disk generated by a
giant impact, and previous works on the formation of the Moon decide, upon collision, whether the moonlets should bounce back or merge depending on the impact parameters (_e.g._ Ida et al. [1], Salmon and Canup [2]), but never consider the fact that a violent collision may lead to their fragmentation. In order to address this issue, Ncorpi\(\mathcal{ON}\) features a built-in fragmentation model that is based on numerous studies of impact and crater scaling (Holsapple and Housen [3], Stewart and Leinhardt [4], Housen and Holsapple [5], Leinhardt and Stewart [6], Suetsugu et al. [7]) to properly model a violent collision. Our study of the formation of the Moon makes extensive use of Ncorpi\(\mathcal{ON}\) and will be published after the present work.
Ncorpi\(\mathcal{ON}\) comes with four different built-in modules of mutual interactions management, one of which uses the efficient fast multipole method-based falcON algorithm3(Dehnen [8], Dehnen [9]). Each of the four modules is able both to detect collisions and to compute self gravity. Overall, Ncorpi\(\mathcal{ON}\) was developed with time-efficiency in mind, and its running time is almost linear in the total number \(N\) of moonlets, which allows for more realistic disks to be simulated. Low-performance CPUs can be used to run Ncorpi\(\mathcal{ON}\).
Footnote 3: An algorithm faster than the standard Barnes-Hut tree that considers cell\(-\)cell instead of cell\(-\)body interactions.
In Sect. 2, we present the structure of the code of Ncorpi\(\mathcal{ON}\). In Sect. 3, the challenging task of time-efficiently considering moonlet\(-\)moonlet interactions is carried out and the four built-in modules of mutual interactions management are presented. In Sect. 4, we go over the speed performances of Ncorpi\(\mathcal{ON}\)'s four built-in modules of mutual interactions management. Finally, Sect. 5 deals with the resolution of collisions, where we present, among other things, the fragmentation model of Ncorpi\(\mathcal{ON}\). For the convenience of the reader, we gather in Table A.2 of Appendix A the notations used throughout the article. Section 3 only concerns mutual interactions between the moonlets. Other aspects of orbital dynamics that are not moonlet\(-\)moonlet interactions, such as interactions with the equatorial bulge, are deferred to Appendix B in order to prevent the paper from being too long.
Hereafter, \(\mathcal{O}\) denotes the center of mass of the Earth, and in a general fashion, the mass of the Earth and of the Sun are denoted by \(M_{\oplus}\) and \(M_{\odot}\), respectively. Let \(N\) be the total number of moonlets orbiting the Earth and for \(1\leq j\leq N\), \(m_{j}\) is the mass of the \(j^{\text{th}}\) moonlet. The geocentric inertial reference frame is \((\mathcal{O},\mathbf{i},\mathbf{j},\mathbf{k})\), while the reference frame attached to the rotation of the Earth is \((\mathcal{O},\mathbf{I},\mathbf{J},\mathbf{K})\), with \(\mathbf{k}=\mathbf{K}\). The transformation from one to another is done through application of the rotation matrix \(\mathbf{\Omega}\), which is the sidereal rotation of the Earth. All vectors and tensors of this work are bolded, whereas their norms, as well as scalar quantities in general, are unbolded.
## 2 Structure of Ncorpi\(\mathcal{ON}\) and how to actually run a simulation
The website of Ncorpi\(\mathcal{ON}\)1 features a full documentation4 as well as a section where the structure of the code is discussed in detail5. As such, it can be considered an integral part of this work and we will refrain here from giving too many details. Instead, we stay succinct and the interested reader is invited to visit Ncorpi\(\mathcal{ON}\)'s website.
Footnote 4: [https://ncorpion.com/#setup](https://ncorpion.com/#setup)
Footnote 5: [https://ncorpion.com/#structure](https://ncorpion.com/#structure)
Moonlets are stored in an array of structures that holds their cartesian coordinates. The moonlet structure is defined such that its size is 64 bytes, so it fits exactly on a single cache line, which is best for cache friendliness. When arrays of dynamical size are needed, Ncorpi\(\mathcal{ON}\) makes use of a hand-made unrolled linked list, that we call _chain_. Unrolled linked lists are linked lists6 where more than one value is stored per node. Storing many values per node reduces the need for pointer dereferences and increases the locality of the storage, making unrolled linked lists significantly faster than regular linked lists.
Footnote 6: A linear data structure where each node holds a value and a pointer towards the next value.
When the mesh \(\mathcal{O}(N)\) algorithm is used to detect collisions and compute mutual gravity, chains are used to store the ids (in the moonlet array) of the moonlets in the different cells of the hash table. When either falcON \(\mathcal{O}(N)\) fast multipole method or the standard \(\mathcal{O}(N\ln N)\) tree code is used to detect collisions and compute mutual gravity, then chains are used to store the moonlets' ids in each cell of the octree.
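Ncorpi\(\mathcal{O}\)N implements its chains in C; the following Python sketch only illustrates the unrolled-linked-list idea (several ids stored per node), with an arbitrary node capacity chosen for the example.

```
class ChainNode:
    # One node of an unrolled linked list, storing several moonlet ids
    CAPACITY = 8                                   # illustrative capacity only

    def __init__(self):
        self.ids = [0] * ChainNode.CAPACITY
        self.count = 0
        self.next = None

class Chain:
    # Unrolled linked list: ids are appended to the head node until it is full
    def __init__(self):
        self.head = ChainNode()

    def add(self, moonlet_id):
        if self.head.count == ChainNode.CAPACITY:  # head full: prepend a fresh node
            node = ChainNode()
            node.next = self.head
            self.head = node
        self.head.ids[self.head.count] = moonlet_id
        self.head.count += 1

    def __iter__(self):
        node = self.head
        while node is not None:
            yield from node.ids[:node.count]
            node = node.next
```

Storing eight ids per node here is arbitrary; the point is simply that iterating over a chain touches far fewer pointers than a one-value-per-node list.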
The different structures used to build and manipulate the octrees are explained on the website. After the tree is built with the general construction based on pointers, it is translated into a flattree where the cells are stored in a regular array. This procedure allows a significant amount of CPU time to be saved (Sect. 3.4.6).
Among all existing \(N\)-body softwares, the one closest to Ncorpi\(\mathcal{ON}\) is REBOUND, although REBOUND does not implement falcON algorithm for mutual gravity computation and does not handle fragmentations. REBOUND is however more multi-purpose than Ncorpi\(\mathcal{ON}\). GyrfalcON on NEMO is also similar to Ncorpi\(\mathcal{ON}\) since it uses falcON algorithm for mutual gravity computation, but it is galaxy oriented and does not handle collisions.
The installation of Ncorpi\(\mathcal{ON}\) from the github repository is straightforward7. The initial conditions of the simulation, the different physical quantities, and the choice of which module is to be used for mutual interactions are decided by the user in the parameter file of Ncorpi\(\mathcal{ON}\). Then, the simulation is run and an animation created from the command line. The complete documentation is provided both on the website and in the github repository.
Footnote 7: At least on Linux and MacOS systems, we did not try to use Ncorpi\(\mathcal{ON}\) on Windows.
## 3 Mutual interactions between the moonlets
We consider in this section mutual interactions between the moonlets. The general aspects of orbital dynamics, those not related to moonlet\(-\)moonlet mutual interactions, are treated in Appendix B. As long as mutual interactions between the moonlets are disregarded, the simulation runs effectively in \(\mathcal{O}(N)\) time. However, the moonlets can interact through collisions and mutual gravity, and managing these interactions in a naive way results in a very slow integrator. Hereafter, a mutual interaction denotes either a collision or a mutual gravitational interaction. We review in this section the four modules implemented in Ncorpi\(\mathcal{ON}\) that can deal with mutual interactions between the moonlets, namely
* \(\mathcal{O}(N^{2})\) brute-force method.
* \(\mathcal{O}(N\ln N)\) standard tree code.
* \(\mathcal{O}(N)\) falcON fast multipole method.
* \(\mathcal{O}(N)\) mesh-grid algorithm.
Each of the four modules is able to treat both the detection of collision and the computation of self gravity (Ncorpi\(\mathcal{ON}\) adapts Dehnen's falcON algorithm so it can also detect collisions).
### Detecting a collision between a pair of moonlets
Before delving into the presentation of the four mutual interaction modules, we describe how Ncorpi\(\mathcal{ON}\) decides if two given moonlets will collide in the upcoming timestep. Note that apart from the brute-force method, these modules rarely treat mutual interactions in a pair-wise way.
Given two moonlets with positions \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) and masses \(m_{1}\) and \(m_{2}\), all four modules rely on the following procedure to determine if the moonlets will be colliding during the upcoming timestep.
Let \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\) be the velocities of the moonlets and \(R_{1}\) and \(R_{2}\) their radii. Let us denote \(\Delta\mathbf{v}=\mathbf{v}_{1}-\mathbf{v}_{2}\) and \(\Delta\mathbf{r}=\mathbf{r}_{1}-\mathbf{r}_{2}\). Approximating the trajectories by straight lines, we decide according to the following procedure if the moonlets will collide during the upcoming timestep. We first compute the discriminant
\[\Delta=\left(\Delta\mathbf{r}\cdot\Delta\mathbf{v}\right)^{2}+\Delta v^{2}\left[ \left(R_{1}+R_{2}\right)^{2}-\Delta r^{2}\right]. \tag{1}\]
Then, the time \(\Delta t\) until the collision is given by
\[\Delta t=-\frac{\Delta\mathbf{r}\cdot\Delta\mathbf{v}+\sqrt{\Delta}}{\Delta v^{2}}. \tag{2}\]
A collision will occur between the moonlets in the upcoming timestep if, and only if, \(\Delta t\in\mathbb{R}\) and \(0\leq\Delta t\leq dt\), where \(dt\) is the size of the timestep. If that is the case, the collision is resolved using results from Sect. 5.
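Ncorpi\(\mathcal{O}\)N is written in C, but the test of Eqs. (1) and (2) is straightforward to transcribe; the Python sketch below is purely illustrative.

```
import math

def collision_time(r1, v1, R1, r2, v2, R2, dt):
    # Time until collision of two moonlets within the next timestep, following
    # Eqs. (1)-(2), or None if they do not collide. Positions r and velocities v
    # are 3-tuples, R are radii; trajectories are approximated by straight lines.
    dr = [a - b for a, b in zip(r1, r2)]
    dv = [a - b for a, b in zip(v1, v2)]
    dr_dv = sum(a * b for a, b in zip(dr, dv))
    dv2 = sum(a * a for a in dv)
    dr2 = sum(a * a for a in dr)
    disc = dr_dv ** 2 + dv2 * ((R1 + R2) ** 2 - dr2)       # Eq. (1)
    if disc < 0.0 or dv2 == 0.0:
        return None                                          # no real solution
    t = -(dr_dv + math.sqrt(disc)) / dv2                     # Eq. (2)
    return t if 0.0 <= t <= dt else None
```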
### Brute-force \(\mathcal{O}(N^{2})\) algorithm
The most straightforward way of treating mutual interactions is through a brute-force algorithm where all \(N(N-1)/2\) pairs of moonlets are considered. At each timestep, the mutual gravity between all pairs is computed, and the algorithm decides if a collision will occur between the two considered moonlets in the upcoming timestep. However, this greedy procedure yields a \(\mathcal{O}(N^{2})\) time complexity, limiting the total number of moonlets to a few thousands at best on a single-core implementation (_e.g._\(1000\leq N\leq 2700\) in Salmon and Canup [2]).
### The mesh \(\mathcal{O}(N)\) algorithm
Khuller and Matias [10] described in 1995 a \(\mathcal{O}(N)\) algorithm based on a mesh grid to find the closest pair in a set of points in the plane. Their algorithm is not completely straightforward to implement and only allows for the closest pair of moonlets to be identified. Here, we describe a simplified, three-dimensional, mesh-based version of their algorithm able to detect collisions in \(\mathcal{O}(N)\) time.
For a real number \(\gamma>0\), we build a \(\gamma\)-mesh. At each timestep, we only look for collisions between moonlets that are in each other's neighbourhood, and we only compute the gravitational interactions between moonlets in each other's neighbourhood. In Fig. 1, we provide a schema of a \(\gamma\)-mesh and the definition of neighbourhood. If \(\gamma\) is chosen as a function of \(N\) and such that, on average, each moonlet has \(\mathcal{O}(1)\) moonlets in its neighbourhood, then the algorithm runs in \(\mathcal{O}(N)\) time.
This procedure disregards gravitational interactions between moonlets that are not in each other's neighbourhood, and while the mesh algorithm is very efficient in detecting collisions, it only poorly approximates mutual gravity. In order to improve on this, Ncorpi\(\mathcal{O}\)N also computes the mutual gravity between every moonlet and each of the three largest moonlets. Unless the three largest moonlets account for the majority of the total moonlet mass8, the mesh algorithm is poorly adapted to mutual gravity computation.
Figure 1: Schematic representation of a two-dimensional \(\gamma\)-mesh around the Earth. The neighbourhood of the red moonlet, defined as the cell containing it plus all the adjacent cells, is shown with a red square.
In practice in Ncorpi\(\mathcal{O}\)N, when the mesh algorithm is used to treat mutual interactions, moonlets are put in the mesh-grid one after the other, and moonlets already populating their neighbourhood are identified. A hash table of chains is used to remember which moonlets occupy which cells of the grid. This procedure ensures that pairs are only treated once.
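The insertion procedure can be sketched as follows in Python; the actual C implementation of Ncorpi\(\mathcal{O}\)N relies on a hash table of chains and differs in its details.

```
from collections import defaultdict
from itertools import product

def cell_of(position, gamma):
    # Index of the gamma-mesh cell containing a position (3-tuple)
    return tuple(int(x // gamma) for x in position)

def mesh_pairs(positions, gamma):
    # Insert moonlets one after the other and yield every pair sharing a
    # neighbourhood (same cell or an adjacent cell); each pair is reported once.
    grid = defaultdict(list)                       # cell index -> list of moonlet ids
    for i, pos in enumerate(positions):
        cx, cy, cz = cell_of(pos, gamma)
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                yield (j, i)
        grid[(cx, cy, cz)].append(i)

# Example: only the first two moonlets share a neighbourhood
pts = [(0.2, 0.2, 0.2), (0.8, 0.3, 0.1), (5.0, 5.0, 5.0)]
print(list(mesh_pairs(pts, 1.0)))                  # [(0, 1)]
```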
In order to choose the mesh-size \(\gamma\), let us assume that initially, all the moonlets are located in a disk of constant aspect ratio \(h/r\), at a radius \(r\leq R_{\text{max}}\). Then they occupy a volume
\[\mathcal{V}=\frac{4}{3}\pi R_{\text{max}}^{3}\sin\varsigma=\frac{4}{3}\pi R_{ \text{max}}^{3}\sqrt{\frac{h^{2}/r^{2}}{1+h^{2}/r^{2}}}, \tag{3}\]
where \(\tan\varsigma=h/r\). In order for each moonlet to have, on average, \(x\) moonlets in its neighborhood, the mesh size must verify \(\left(3\gamma\right)^{3}\leq x\mathcal{V}/N\), that is
\[\gamma\leq\left(\frac{4\pi x}{81N}\right)^{1/3}\left(\frac{h^{2}/r^{2}}{1+h^{ 2}/r^{2}}\right)^{1/6}R_{\text{max}}. \tag{4}\]
With \(h/r=0.05\), \(R_{\text{max}}=10R_{\oplus}\), \(N=10^{5}\) and \(x=8\), this gives \(\gamma=0.08526R_{\oplus}\), or \(\gamma=543.2\) km. If the \(N\) moonlets have, say, a total mass equal to that of the Moon, then their average radius is \(R=R_{\mathrm{Moon}}/N^{1/3}\). For \(N=10^{5}\) this gives \(R=37.4\) km. The condition that the moonlets are smaller than \(\gamma\) is \(2R\leq\gamma\). Choosing \(x=8\) and \(R_{\text{max}}=10R_{\oplus}\), this gives
\[\sin\varsigma\geq\frac{162}{\pi x}\left(\frac{R_{\mathrm{Moon}}}{R_{\text{max}}}\right)^{3}\approx 0.0001307, \tag{5}\]
that is, \(h/r\gtrsim 1.3\;10^{-4}\). Choosing for \(\gamma\) the critical value given by Eq. (4), and for \(h/r\) a value much larger than that predicted by Eq. (5) ensures that most of the moonlets are smaller than the mesh-size.
In the parameter file of Ncorpi\(\mathcal{O}\)N, the user indicates the desired number \(x\) of neighbours for the simulation and Eq. (4) is used at the first time-step to estimate a suitable value of the mesh-size \(\gamma\). Then, at each time-step, the value of \(\gamma\) is updated according to the expected number \(x^{\prime}\) of neighbours computed at the previous timestep, in order to match the user's requirement. More precisely, if \(\gamma^{\prime}\) denotes the mesh-size at the previous timestep, then the new value of \(\gamma\) for the current timestep is given by
\[\gamma=\gamma^{\prime}\left(\frac{x}{x^{\prime}}\right)^{1/3}. \tag{6}\]
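Eqs. (4) and (6) translate directly into the two small helpers sketched below (Python, for illustration only; the constants reproduce the numerical example given above).

```
import math

def initial_mesh_size(N, x, aspect_ratio, R_max):
    # Mesh size gamma from Eq. (4), giving on average x neighbours per moonlet
    h2 = aspect_ratio ** 2
    return (4.0 * math.pi * x / (81.0 * N)) ** (1 / 3) * (h2 / (1.0 + h2)) ** (1 / 6) * R_max

def updated_mesh_size(gamma_prev, x_wanted, x_measured):
    # Mesh size update of Eq. (6), matching the requested number of neighbours
    return gamma_prev * (x_wanted / x_measured) ** (1 / 3)

R_EARTH = 6371.0                                   # km
print(initial_mesh_size(100_000, 8, 0.05, 10 * R_EARTH))   # about 543 km, as quoted above
```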
The largest moonlets of the simulation can sometimes be larger than \(\gamma\). When this happens, the corresponding moonlet is not put in the hash table but instead, mutual interactions between that moonlet and any other moonlet are treated. The user indicates in the parameter file of Ncorpi\(\mathcal{O}\)N the number of cells along each axis9 and the minimal sidelength of the total mesh-grid, which is translated into a minimal value for the mesh-size \(\gamma\).
#### 3.4.1 Tree building
Both algorithms use an octree, whose building procedure is not detailed here, but described in Barnes and Hut [11] and schematically represented in Fig. 2. Cells containing at most \(s\) moonlets are not divided into children cells (\(s=1\) in Barnes and Hut [11]). As the same tree is used for both collision search and mutual gravity computation, it can be built only once per timestep to reduce the computational effort10.
Footnote 9: Care must be taken to ensure that the hash table fits into the RAM.
Footnote 10: The tree-based algorithms have a different optimal value for \(s\) according to whether they are used to detect collisions or to compute mutual gravity. It could be interesting to build distinct trees with a different \(s\) for collisions and gravity. Time is lost by building two trees but also saved by using the optimal \(s\). We did not investigate what was best.
Hereafter, we adopt the naming convention of Dehnen [8] where a node, or cell, is a cubic subdivision of the space, a child refers to a direct subnode of a node, and a descendant refers to any subcell of a cell. A leaf is a childless node and we abusively refer to the moonlets contained by a leaf as children nodes of that leaf. On the left panel of Fig. 2, we provide a schematic representation of a two-dimensional tree (quadtree) around the Earth with \(s=5\).
#### 3.4.2 Tree climbing
The tree climbing procedure consists in computing several quantities for all cells of the tree, recursively from the children nodes. The tree climbing procedure differs slightly for collision search or mutual gravity computation.
_Collision search._ When searching for collisions, we define the center \(\bar{\mathbf{s}}_{A}\) of cell \(A\) as the average position of the \(N_{A}\) children nodes it contains
\[\bar{\mathbf{s}}_{A}=\sum_{\text{child node $a$ of $A$}}\frac{\bar{\mathbf{s}}_{a}}{N_{A}}, \tag{7}\]
Figure 2: _Left_ : Schematic representation of a quadtree around the Earth built with \(s=5\). The root cell is shown with a thick black square, whereas its descendants are shown with colored lines whose thicknesses decrease with their depth into the tree. The children of the root cell are blue, the grandchildren are red, etc... _Right_ : A zoom-in image showing the South-East child of the root cell, its maximal radius \(r_{\text{max}}\), and the maximal radii of its descendants. Cells with one moonlet have a zero maximal radius, and those are not shown. For clarity, the maximal circles are shown with the color and thickness of their corresponding cell, and diagonal crosses show their centers \(\bar{\mathbf{s}}\).
and its maximal and critical radii recursively as
\[r_{\max,A}=\max_{\text{child node $a$ of $A$}}\left(r_{\max,a}+|\bar{\mathbf{s}}_{a}- \bar{\mathbf{s}}_{A}|\right), \tag{8}\]
and
\[r_{\text{crit},A}=r_{\max,A}+\max_{\text{child node $a$ of $A$}}\left(r_{\text{ crit},a}-r_{\max,a}\right). \tag{9}\]
If a child node \(a\) is a moonlet (that is, if \(A\) is a leaf), then \(\bar{\mathbf{s}}_{a}=\mathbf{r}_{a}\) is the moonlet's position, \(r_{\max,a}=0\) and \(r_{\text{crit},a}=R_{a}+dt\,v_{a}\), where \(R_{a}\) is the moonlet's radius, \(dt\) the timestep (common to all moonlets in Ncorpi\(\mathcal{O}\)N), and \(v_{a}\) the moonlet's scalar velocity. Starting from the leaf cells, we go up the tree and use Eqs. (7), (8) and (9) to compute recursively from the children nodes the center \(\bar{\mathbf{s}}\), the maximal radius \(r_{\text{max}}\) and the critical radius \(r_{\text{crit}}\) of each cell. On the right panel of Fig. 2, we show the centers and maximal radii of the South-East child of the root cell and of its descendants.
_Mutual gravity computation._ When computing mutual gravity, we define the expansion center \(\mathbf{s}_{A}\) of cell \(A\) of mass \(M_{A}\) as the center of mass of the children nodes it contains
\[\mathbf{s}_{A}=\frac{1}{M_{A}}\sum_{\text{child node $a$ of $A$}}M_{a}\mathbf{s}_{a}. \tag{10}\]
The maximal radius \(r_{\max,A}\) is defined recursively in the same way as in collision search
\[r_{\max,A}=\max_{\text{child node $a$ of $A$}}\left(r_{\max,a}+|\mathbf{s}_{a}- \mathbf{s}_{A}|\right). \tag{11}\]
However, the critical radius is defined differently as
\[r_{\text{crit},A}=r_{\max,A}/\theta(M_{A}), \tag{12}\]
where the opening angle \(\theta(M_{A})\) is given by Eq. (13) of Dehnen [8]. If a child node \(a\) is a moonlet (that is, if \(A\) is a leaf), then \(\mathbf{s}_{a}=\mathbf{r}_{a}\) is the moonlet's position and \(r_{\max,a}=0\). Starting from the leaf cells, we go up the tree and use Eqs. (10), (11) and (12) to compute recursively from the children nodes the expansion center \(\mathbf{s}\), the maximal radius \(r_{\max}\) and the critical radius \(r_{\text{crit}}\) of each cell.
For both collision search and mutual gravity computation, if the distance from the center (or expansion center) of a cell to its farthest corner is smaller than \(r_{\max}\), then \(r_{\max}\) is replaced by this distance. Two cells are said to be well-separated if the distance between their centers (or expansion centers) is larger than the sum of their critical radii, that is
\[A\text{ and }B\text{ well-separated }\Leftrightarrow r_{\text{crit},A}+r_{ \text{crit},B}\leq|\bar{\mathbf{s}}_{A}-\bar{\mathbf{s}}_{B}|\,. \tag{13}\]
The same definition applies with \(\mathbf{s}\) instead of \(\bar{\mathbf{s}}\) for mutual gravity computation. When treating mutual gravity, the multipole moments \(\mathbf{M}^{(n)}\) are also computed for all cells recursively from the children cells during the tree climbing. More details on doing so are provided in C.
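For the collision-search variant, Eqs. (7)–(9) and the well-separation criterion of Eq. (13) can be sketched as follows; the data layout (and the plain average used for the centre) is our own illustration, whereas Ncorpi\(\mathcal{O}\)N performs this computation in C on its own octree and flattree structures.

```
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Moonlet:
    r: Tuple[float, float, float]                  # position
    v: float                                       # scalar velocity
    R: float                                       # radius

@dataclass
class Cell:
    children: List["Cell"] = field(default_factory=list)    # empty for a leaf
    moonlets: List[Moonlet] = field(default_factory=list)   # used only by (non-empty) leaves
    s: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    r_max: float = 0.0
    r_crit: float = 0.0

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def climb(cell, dt):
    # Tree climbing for collision search: compute the centre (Eq. (7)), the maximal
    # radius (Eq. (8)) and the critical radius (Eq. (9)) of every cell, recursively
    # from its children. The children of a leaf are its moonlets.
    if cell.children:
        kids = [climb(c, dt) for c in cell.children]
    else:       # leaf: a moonlet a has s = r_a, r_max = 0 and r_crit = R_a + dt * v_a
        kids = [Cell(s=m.r, r_max=0.0, r_crit=m.R + dt * m.v) for m in cell.moonlets]
    n = len(kids)
    cell.s = tuple(sum(k.s[i] for k in kids) / n for i in range(3))
    cell.r_max = max(k.r_max + dist(k.s, cell.s) for k in kids)
    cell.r_crit = cell.r_max + max(k.r_crit - k.r_max for k in kids)
    return cell

def well_separated(A, B):
    # Eq. (13): no moonlet of A can collide with a moonlet of B in the next timestep
    return A.r_crit + B.r_crit <= dist(A.s, B.s)
```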
#### 3.4.3 Multipole expansion
How the octree can be used to look for collisions is trivial : Given the definition of \(r_{\text{crit}}\), it is straightforward to verify that moonlets of cell \(A\) will not collide with moonlets of cell \(B\) in the upcoming timestep if \(A\) and \(B\) are well-separated. However, computing mutual gravitational interactions with the octree relies on multipole expansions. The existing literature provides only little insight on the mathematical framework, and we recall the theory here. In Fig. 3, let us say that we want to compute the acceleration of moonlets of \(A\) due to their gravitational interaction with moonlets of \(B\). Since the critical circles do not intersect, these two cells are well-separated. From the point of view of moonlets of \(A\), moonlets of \(B\) can thus be seen as a
whole, and at lowest order, it is as if they were all reunited at their center of mass11\(\mathbf{s}_{B}\). At higher order, the mass distribution inside cell \(B\) can be taken into account through the multipole moments \(\mathbf{M}^{(n)}\) of the cell (Eq. (20)) to reach a better precision. The gravitational potential at location \(\mathbf{x}\) in cell \(A\) due to the gravity of cell B is given by12
Footnote 11: This is the approximation made by Barnes and Hut [11] in their original description of the standard tree code, corresponding to \(p=1\) in Eq. (25).
Footnote 12: We use here the sign convention \(\ddot{\mathbf{x}}=\nabla\phi(\mathbf{x})\), in order to have \(\nabla_{\mathbf{s}_{A}}\mathbf{C}^{(m-1)}=\mathbf{C}^{(m)}\) in Eq. (25).
\[\phi(\mathbf{x})=\sum_{i\in B}\mu_{i}\,g(\mathbf{x}-\mathbf{x}_{i}), \tag{14}\]
where \(\mu_{i}=\mathcal{G}m_{i}\), \(m_{i}\) and \(\mathbf{x}_{i}\) are the mass and position of the \(i^{\rm th}\) moonlet of \(B\), and \(g(\mathbf{x})=1/x\) is the Green function of the Laplace operator. As suggested in Fig. 3, we write
\[\mathbf{x}-\mathbf{x}_{i}=\mathbf{\Delta}+\mathbf{R}+\mathbf{s}_{B}-\mathbf{x}_{i}, \tag{15}\]
and we Taylor expand the Green function \(g(\mathbf{x}-\mathbf{x}_{i})\) around the cell separation \(\mathbf{R}\) up to a certain order \(p\).
\[g(\mathbf{x}-\mathbf{x}_{i})=\sum_{n=0}^{p}\frac{(-1)^{n}}{n!}\mathbf{\nabla}^{(n)}g(\mathbf{ R})\odot(\mathbf{x}_{i}-\mathbf{s}_{B}-\mathbf{\Delta})^{(n)}+\mathcal{O}\left(\frac{r_{ \max,A+B}}{R}\right)^{p+1}. \tag{16}\]
In this expression, the \(n^{\rm th}\) order tensor \(\mathbf{\nabla}^{(n)}g(\mathbf{R})\) is the \(n^{\rm th}\) gradient of \(1/R\) defined recursively as
\[\nabla^{(0)}g(\mathbf{R})=g(\mathbf{R})\quad\text{ and }\quad\left(\mathbf{\nabla}^{(n)}g( \mathbf{R})\right)^{i_{1},i_{2},\cdots,i_{n}}=\frac{\partial}{\partial R_{i_{n}}} \left(\mathbf{\nabla}^{(n-1)}g(\mathbf{R})\right)^{i_{1},i_{2},\cdots,i_{n-1}}, \tag{17}\]
with \((i_{1},i_{2},\cdots,i_{n})\in\left\{1,2,3\right\}^{n}\) and \(\mathbf{R}=(R_{1},R_{2},R_{3})\). The quantity \((\mathbf{x}_{i}-\mathbf{s}_{B}-\mathbf{\Delta})^{(n)}\) is the \(n\)-fold outer product of the vector \(\mathbf{x}_{i}-\mathbf{s}_{B}-\mathbf{\Delta}\) with itself. The inner and outer product between two tensors are defined respectively as
\[\left(\mathbf{T}_{1}^{(n)}\odot\mathbf{T}_{2}^{(n-k)}\right)^{i_{1},i_{2},\cdots,i_{ k}}=\sum_{1\leq j_{1},\cdots,j_{n-k}\leq 3}T_{1}^{i_{1},\cdots,i_{k},j_{1}, \cdots,j_{n-k}}T_{2}^{j_{1},\cdots,j_{n-k}}, \tag{18}\]
Figure 3: Multipole expansion between two interacting cells \(A\) and \(B\).
\[\left(\mathbf{T}_{1}^{(k)}\otimes\mathbf{T}_{2}^{(n-k)}\right)^{i_{1},i_{2},\cdots,i_{n}}= T_{1}^{i_{1},\cdots,i_{k}}T_{2}^{i_{k+1},\cdots,i_{n}}. \tag{19}\]
Hereafter, we will disregard the remainder \(\mathcal{O}(r_{\max,A+B}/R)^{p+1}\), where \(r_{\max,A+B}=r_{\max,A}+r_{\max,B}\). If we define the \(n^{\text{th}}\) multipole moment of cell \(B\) as the \(n^{\text{th}}\) order tensor
\[\mathbf{M}_{B}^{(n)}(\mathbf{s}_{B})=\sum_{i\in B}\mu_{i}\left(\mathbf{x}_{i}-\mathbf{s}_{B} \right)^{(n)}, \tag{20}\]
then Eqs. (14) and (16) yield
\[\phi(\mathbf{x})=\sum_{n=0}^{p}\frac{(-1)^{n}}{n!}\mathbf{\nabla}^{(n)}g(\mathbf{R})\odot \mathbf{M}_{B}^{(n)}(\mathbf{s}_{B}+\mathbf{\Delta}). \tag{21}\]
The idea is to expand \(\mathbf{M}_{B}^{(n)}(\mathbf{s}_{B}+\mathbf{\Delta})\) as to make appear the multipole moments \(\mathbf{M}_{B}^{(n)}(\mathbf{s}_{B})\). However, since \(\mathbf{s}_{B}\) and \(\mathbf{\Delta}\) do not commute (\(\mathbf{s}_{B}\otimes\mathbf{\Delta}\neq\mathbf{\Delta}\otimes\mathbf{s}_{B}\) in general), such expansion is not given by Newton's binomial. We can however use the symmetry of tensor \(\mathbf{\nabla}^{(n)}g(\mathbf{R})\) to get around this difficulty. We say that a tensor \(\mathbf{T}^{(n)}\) is symmetrical if for any permutation \(\sigma\) of \(\{1,2,\cdots,n\}\), we have
\[T^{i_{1},\cdots,i_{n}}=T^{i_{\sigma(1)},\cdots,i_{\sigma(n)}}. \tag{22}\]
Due to Schwarz rule, \(\mathbf{\nabla}^{(n)}g(\mathbf{R})\) is symmetrical and we have (this is easily verified from Eqs. (18) and (19))
\[\mathbf{\nabla}^{(n)}g(\mathbf{R})\odot\mathbf{M}_{B}^{(n)}(\mathbf{s}_{B}+\mathbf{\Delta})=\mathbf{ \nabla}^{(n)}g(\mathbf{R})\odot\left(\sum_{m=0}^{n}(-1)^{m}\binom{n}{m}\mathbf{\Delta} ^{(m)}\otimes\mathbf{M}_{B}^{(n-m)}(\mathbf{s}_{B})\right), \tag{23}\]
although Eq. (23) cannot be "simplified" by \(\mathbf{\nabla}^{(n)}g(\mathbf{R})\). Using Eq. (23) and the equality
\[\mathbf{T}_{1}^{(n)}\odot\left(\mathbf{T}_{2}^{(m)}\otimes\mathbf{T}_{3}^{(n-m)}\right)= \mathbf{T}_{1}^{(n)}\odot\mathbf{T}_{2}^{(m)}\odot\mathbf{T}_{3}^{(n-m)} \tag{24}\]
for any symmetrical tensors \(\mathbf{T}_{1}^{(n)}\), \(\mathbf{T}_{2}^{(m)}\) and \(\mathbf{T}_{3}^{(n-m)}\), Eq. (14) can be written (Warren and Salmon [13])
\[\begin{split}&\phi(\mathbf{x})=\sum_{m=0}^{p}\frac{1}{m!}\mathbf{\Delta}^{ (m)}\odot\mathbf{C}^{(m)}(\mathbf{s}_{A}),\\ &\mathbf{C}^{(m)}(\mathbf{s}_{A})=\sum_{n=0}^{p-m}\frac{(-1)^{n}}{n!}\mathbf{ \nabla}^{(n+m)}g(\mathbf{R})\odot\mathbf{M}_{B}^{(n)}(\mathbf{s}_{B}).\end{split} \tag{25}\]
The tensors \(\mathbf{C}^{(0)}\), \(\mathbf{C}^{(1)}\) and \(\mathbf{C}^{(2)}\) are respectively the gravitational potential, the acceleration and the tidal tensor at \(\mathbf{s}_{A}\) due to the gravity of cell \(B\). More generally, the \(\mathbf{C}^{(m)}\) are the interaction tensors due to the gravity of cell \(B\) on the center of mass \(\mathbf{s}_{A}\) of cell \(A\).
In the standard tree code (Sect. 3.4.4), instead of computing interactions between cells, we compute interactions between a cell \(B\) and a moonlet well-separated13 from \(B\) at location \(\mathbf{x}\). In that case, instead of \(\mathbf{R}=\mathbf{s}_{A}-\mathbf{s}_{B}\), we take \(\mathbf{R}=\mathbf{x}-\mathbf{s}_{B}\) and we only compute \(\mathbf{C}^{(1)}\) in Eq. (25), since we are only interested in the acceleration of the moonlet.
Footnote 13: The well-separation in that case is defined as \(|\mathbf{x}-\mathbf{s}_{B}|\geq r_{\text{crit},B}\).
In falcON, once the \(\mathbf{C}^{(m)}\) due to interactions between cells of the tree have been accumulated by the tree walk (described in Sect. 3.4.5), they are passed down and accumulated by the descendants until reaching the moonlets. Since the children are not located at the expansion center \(\mathbf{s}_{0}\) of their parent, their parent's
are translated to their own expansion center \(\mathbf{s}_{1}\) (or position if the child is a moonlet). Using the equality \(\nabla_{\mathbf{s}_{0}}\mathbf{C}^{(m-1)}(\mathbf{s}_{0})=\mathbf{C}^{(m)}(\mathbf{s}_{0})\), this is done via a \(p^{\text{th}}\) order Taylor expansion14
Footnote 14: There is a sign error in Eq. (8) of Dehnen [8], where \(\mathbf{s}_{0}-\mathbf{s}_{1}\) is written instead of \(\mathbf{s}_{1}-\mathbf{s}_{0}\).
\[\mathbf{C}^{(m)}(\mathbf{s}_{1})=\sum_{n=0}^{p-m}\frac{1}{n!}\nabla_{\mathbf{s}_{0}}^{(n)} \mathbf{C}^{(m)}(\mathbf{s}_{0})\odot(\mathbf{s}_{1}-\mathbf{s}_{0})^{(n)}=\sum_{n=0}^{p-m} \frac{1}{n!}\mathbf{C}^{(m+n)}(\mathbf{s}_{0})\odot(\mathbf{s}_{1}-\mathbf{s}_{0})^{(n)}\,. \tag{26}\]
This tree descent is performed from the root cell. Once a cell has accumulated the \(\mathbf{C}^{(m)}\) of its parent, it transmits its own \(\mathbf{C}^{(m)}\) to its children using Eq. (26), until the leaves have received the \(\mathbf{C}^{(m)}\) from all of their ancestors. Then the accelerations \(\mathbf{C}^{(1)}\) of the moonlets are computed from the \(\mathbf{C}^{(m)}\) of their parent leaf using Eq. (26) (Dehnen [s], Sect. 3.2.2).
In the parameter file of Ncorpi\(\mathcal{O}\)N, the user chooses the desired expansion order \(p\) used for the multipole expansion in falcON or in the standard tree code (if the user wants to use a tree-based method for mutual interactions). In the original description of falcON by Dehnen [8], \(p\) was three, whereas \(p\) was one in the original description of the standard tree code by Barnes and Hut [11]. Ncorpi\(\mathcal{O}\)N allows expansion orders up to \(p=6\). Since the expansion center is the center of mass, the dipole \(\mathbf{M}^{(1)}\) vanishes by construction. Therefore, orders \(p=1\) and \(p=2\) are identical for the standard tree code, since only \(\mathbf{C}^{(1)}\) is ever computed in this case.
In practice in Ncorpi\(\mathcal{O}\)N, when using falcON, we treat the interaction of cell \(B\) on cell \(A\) at the same time as we treat the interaction of cell \(A\) on cell \(B\). The advantages of doing so are twofold. First, we can take advantage of the relations
\[\begin{split} M_{A}\mathbf{C}_{B\to A}^{(p)}&=(-1)^{p}M_ {B}\mathbf{C}_{A\to B}^{(p)},\\ M_{A}\mathbf{C}_{B\to A}^{(p-1)}&=(-1)^{p-1}M_{B}\mathbf{C}_ {A\to B}^{(p-1)},\end{split} \tag{27}\]
to speed up the algorithm. Second, doing so ensures that the total momentum is preserved up to machine precision, since Newton's third law is verified up to machine precision, which is not true with the standard tree code. A speed up is also achieved by noticing that the highest order multipole moment \(\mathbf{M}^{(p)}\) only affects \(\mathbf{C}^{(0)}\) in Eq. (25). Furthermore, in Eq. (26), \(\mathbf{C}^{(m)}(\mathbf{s}_{1})\) is only affected by \(\mathbf{C}^{(k)}(\mathbf{s}_{0})\) for \(k\geq m\). Since we are only interested in computing the accelerations of the moonlets, \(\mathbf{C}^{(0)}\) never has to be computed for any cell, and as a consequence, the highest order multipole moment \(\mathbf{M}^{(p)}\) is never used and does not have to be computed when climbing the tree.
Another significant speed up comes from the fact that all the manipulated tensors are symmetrical in the sense of Eq. (22). In three-dimensional space, a symmetrical tensor of order \(n\) has only \(\left(n+1\right)\left(n+2\right)/2\) independent components out of the possible \(3^{n}\). In Ncorpi\(\mathcal{O}\)N, order \(n\) tensors are therefore stored in an array of size \(\left(n+1\right)\left(n+2\right)/2\), and to compute them, we only compute \(\left(n+1\right)\left(n+2\right)/2\) distinct quantities. Similarly, when computing the inner product \(\mathbf{T}_{1}^{(n)}\odot\mathbf{T}_{2}^{(n-k)}\), the total number of multiplications can be reduced from \(3^{n}\) down to only \(\frac{1}{4}\left(k+1\right)\left(k+2\right)\left(n-k+1\right)\left(n-k+2\right)\) using the symmetry of the tensors.
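As an illustration of this bookkeeping, here is a small Python sketch of packed storage for symmetric tensors in three dimensions; the component ordering is our own choice and is not necessarily the one used in the Ncorpi\(\mathcal{O}\)N source.

```
def sym_size(n):
    # Number of independent components of a symmetric order-n tensor in 3D
    return (n + 1) * (n + 2) // 2

def packed_index(nx, ny, nz):
    # Flat index of the component with exponents (nx, ny, nz), nx + ny + nz = n,
    # enumerating components by decreasing nx, then by decreasing ny
    k = ny + nz                                    # how far nx is below its maximum n
    return k * (k + 1) // 2 + (k - ny)

# A symmetric order-2 tensor (e.g. the quadrupole) needs 6 numbers instead of 3**2 = 9
print(sym_size(2))                                                            # 6
print([packed_index(*e) for e in
       [(2, 0, 0), (1, 1, 0), (1, 0, 1), (0, 2, 0), (0, 1, 1), (0, 0, 2)]])   # [0, 1, 2, 3, 4, 5]
```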
The choice of the expansion order \(p\) is an obvious parameter affecting the precision of the expansion. Another parameter is how large the critical radius \(r_{\text{crit}}\) of a cell is, determined by Eq. (12). In the parameter file of Ncorpi\(\mathcal{O}\)N, the user chooses the value of \(\theta_{\text{min}}\), corresponding to the ratio \(r_{\text{max}}/r_{\text{crit}}\) of the root cell. Then, this same ratio for the descendants of the root cell is determined by Eq. (13) of Dehnen [8]. Sensible values are \(0.3\leq\theta_{\text{min}}\leq 0.8\), and the highest precisions are achieved with small values. In the standard tree code, a common practice is to consider the same \(\theta\) for all cells, but here, we consider a \(\theta\) dependent on the cell's mass for both falcON and the standard tree code.
In Fig. 4, we plot the relative error \(\left|a-\bar{a}\right|/\bar{a}\), where \(a\) is the scalar acceleration computed with our implementation of falcON, whereas \(\bar{a}\) is the true scalar acceleration, computed in a brute-force way. We show the probability density of the relative error, for various choices of \(\theta_{\text{min}}\) and \(p\) going from \(0\) to \(6\). With thick lines, we plot the relative error excluding the Earth, whereas the relative error including the Earth is shown with thin lines. Since the acceleration due to the Earth is computed without error15, including it
decreases greatly the relative error. Choosing \(p=0\) is equivalent to neglecting the mutual gravity between the moonlets, and the corresponding error distribution, plotted with a thin black line in Fig. 4, makes sense only if the Earth is included.
#### 3.4.4 Standard tree code
We provide here our implementation of the standard tree code first described by Barnes and Hut [11]. When an instruction differs according to whether the algorithm is used for mutual gravity computation or collision search, the instruction relative to gravity is given first in regular font, followed by the instruction relative to collision search in italic font. To treat the interactions between all the moonlets, the procedure StandardTree is called with the root cell in a for loop going over all the moonlets once. Each call to the function is resolved in time \(\mathcal{O}(\ln N)\), hence the overall \(\mathcal{O}(N\ln N)\) time complexity. The thresholds \(N_{\mathrm{cb,pre}}\) and \(N_{\mathrm{cb,post}}\) are parameters given by the user. Possible values are discussed in Sect. 4.
Figure 4: Distribution of the relative errors of the moonlets’ accelerations with our implementation of falcON. These probability densities were computed with \(N=400\,000\) moonlets around the Earth distributed between \(r=R_{\oplus}\) and \(r=10R_{\oplus}\), each with a random mass, for a total mass \(0.024M_{\oplus}\). The relative errors are similar with the standard tree code.
#### 3.4.5 FalcON : An efficient tree walk
We give here the tree walk procedure of the falcON algorithm, used after the tree climbing and before the tree descent. Like in Sect. 3.4.4, the instruction relative to gravity computation is given first in regular font, followed by the instruction relative to collision search in italic font.
```
procedure TreeWalk(cell \(A\), cell \(B\))                  \(\triangleright\) Called once on (root, root)
    \((N_{a},N_{b})\leftarrow\) number of moonlets in cells \(A\) and \(B\)
    if \(A=B\) then                                         \(\triangleright\) Self interaction
        if \(N_{a}\leq N_{\mathrm{cs}}\) or \(A\) is a leaf then
            Treat the interaction of cell \(A\) with itself brute-forcely
        else
            for all pairs \((a,b)\) of children of \(A\) do  \(\triangleright\) Up to 36 such pairs
                TreeWalk(\(a\), \(b\))
    else                                                    \(\triangleright\) Interaction between different cells
        if \(N_{a}N_{b}<N_{\mathrm{cc,pre}}\) then
            Treat the interaction between cell \(A\) and cell \(B\) brute-forcely
        else if \(A\) and \(B\) are well-separated then
            For \(1\leq n\leq p\), accumulate \(\mathbf{C}^{(n)}\) for cells \(A\) and \(B\) (Eq. (25)) or Do nothing.
        else if \(N_{a}N_{b}<N_{\mathrm{cc,post}}\) or both \(A\) and \(B\) are leaves then
            Treat the interaction between cell \(A\) and cell \(B\) brute-forcely
        else if \(r_{\mathrm{crit},A}>r_{\mathrm{crit},B}\) or \(B\) is a leaf then  \(\triangleright\) Subdividing \(A\)
            for all child node \(a\) of \(A\) do
                TreeWalk(\(a\), \(B\))
        else
            for all child node \(b\) of \(B\) do
                TreeWalk(\(A\), \(b\))
```
```
procedure StandardTree(moonlet \(a\), cell \(B\))          \(\triangleright\) Called on (moonlet \(a\), root)
    \(N_{b}\leftarrow\) number of moonlets in cell \(B\)
    if \(N_{b}<N_{\mathrm{cb,pre}}\) then
        Treat the interaction between moonlet \(a\) and cell \(B\) brute-forcely
    else if moonlet \(a\) and cell \(B\) are well-separated then
        Compute the acceleration of moonlet \(a\) from the multipole moments of cell \(B\) or Do nothing.
    else if \(N_{b}<N_{\mathrm{cb,post}}\) or \(B\) is a leaf then
        Treat the interaction between moonlet \(a\) and cell \(B\) brute-forcely
    else
        for all child node \(b\) of \(B\) do
            StandardTree(\(a\), \(b\))
```
#### 3.4.6 Peano-Hilbert order and cache efficiency
The practical construction of a tree generally involves a structure containing relevant information for the current cell (number of children, mass, multipole moments, etc.), and pointers towards the child nodes, which can be either NULL or contain the address in memory of a child. Such a construction is easy to implement but yields poor memory locality (children have no reason to be next to each other in memory) and travelling in the tree requires multiple pointer dereferences. These issues are responsible for many cache misses, and the processor wastes a lot of clock cycles waiting for data in memory.
A much better implementation can be achieved by storing the tree in a regular array, in such a way that children are contiguous in memory, and such that cells close in space are likely to be close in memory. To this end, we use the space-filling curve discovered by David Hilbert. At a given generation (or level) in the tree, cells are ordered according to a three-dimensional version of Hilbert's 1891 space-filling curve, skipping non-existing cells. For illustration purposes, we show this order in two dimensions in Fig. 5 for the tree presented in Fig. 2. For two cells \(A\) and \(B\), the order that we define verifies the following properties
* If \(\text{level}(A)>\text{level}(B)\) then \(\text{order}(A)>\text{order}(B)\).
* If \(\text{order}(A)>\text{order}(B)\) then for all child \(a\) of \(A\) and \(b\) of \(B\), \(\text{order}(a)>\text{order}(b)\).
However, when building the tree, its final structure as well as the number of cells it contains are still unknown, and it is not possible to build the tree directly in a regular array. Therefore, we use the general representation based on pointers to build the tree. Then, the final tree is copied into an array indexed by Hilbert order, hereafter called the flat tree, and the pointer-based tree is freed. The falcON algorithm (climbing, walk and descent) and the standard tree code (climbing and standard tree) are performed on the flat tree.
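For illustration, here is a minimal sketch of such a flat layout, in which the children of a cell are contiguous and addressed through the index of the first child; the structure and field names are invented for this example and do not reproduce Ncorpi\(\mathcal{O}\)N's actual data layout.

```c
#include <stdio.h>

/* Illustrative flat-tree cell: instead of eight child pointers, each cell
 * stores the index of its first child in the flat array together with its
 * number of children, so that siblings are contiguous in memory.           */
typedef struct {
    double com[3];      /* expansion centre of the cell                     */
    double mass;        /* total mass contained in the cell                 */
    int    first_child; /* index of the first child, or -1 for a leaf       */
    int    n_children;  /* number of existing children (0 to 8)             */
} FlatCell;

/* Traversing the flat tree only needs a loop over a contiguous index range,
 * which is far more cache friendly than dereferencing eight pointers.      */
static int count_cells(const FlatCell *tree, int cell)
{
    int n = 1;
    for (int i = 0; i < tree[cell].n_children; i++)
        n += count_cells(tree, tree[cell].first_child + i);
    return n;
}

int main(void)
{
    /* Root cell with two children stored right after it. */
    FlatCell tree[3] = {
        { { 0.0, 0.0, 0.0}, 3.0,  1, 2 },
        { {-1.0, 0.0, 0.0}, 1.0, -1, 0 },
        { { 1.0, 0.0, 0.0}, 2.0, -1, 0 },
    };
    printf("cells in the tree: %d\n", count_cells(tree, 0));
    return 0;
}
```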
Instead of putting the moonlets in the tree in a random order, an impressive speed-up for the tree building can be achieved by putting the moonlets in the Hilbert order of the previous timestep. We define
Figure 5: Hilbert order of all the cells of the tree of Fig. 2. The Hilbert order of a cell is written at its centre. Cells containing no moonlet do not exist and are not assigned an order. In practice, this tree would be stored in an array indexed from 0 to 100.
the Hilbert order of a moonlet as the Hilbert order of its parent leaf. When the moonlets are put in the tree in the Hilbert order of the previous timestep, a spatial coherence is maintained during the tree construction, increasing the probability that the data needed by the processor are already loaded in the cache, and reducing cache misses. In Table 1, we give the time taken by our CPU17 to build the tree when the moonlets are added in the tree in random order and when they are added in the Hilbert order of the previous timestep. The procedure to build the tree is exactly the same in both cases, yet cache efficiency makes the building procedure two to three times faster.
In their implementation of the standard tree code in REBOUND (Rein and Liu [12]), the authors do not rebuild the tree from scratch at each timestep, but instead update it by locating moonlets that left their parent leaf. In Ncorpi\(\mathcal{O}\)N, we prefer to build the tree from scratch at each timestep, but subsequent builds are two to three times faster than the first build thanks to the Hilbert order. The authors of REBOUND do not mention the speed-up they achieved with their update procedure, and it is unknown which method is best.
## 4 Numerical performances of Ncorpi\(\mathcal{O}\)N
### Numerical integration
In the parameter file of Ncorpi\(\mathcal{O}\)N, the user chooses how moonlets interact (through collisions, mutual gravity, both of them or none of them). In case of interactions, the user also chooses how interactions should be treated (either brute-forcely, with the mesh algorithm, with falcON, or with the standard tree code). If the mesh algorithm is used, then only mutual gravity with the neighbouring moonlets and with the three largest moonlets is taken into account. All other long-range gravitational interactions between moonlets are discarded. This is generally a poor approximation, unless the three largest moonlets account for the majority of the total moonlet mass. If either falcON or the standard tree code is used, then long-range mutual gravity is considered, with a precision depending on \(p\) and \(\theta_{\min}\) (Fig. 4).
We use a leapfrog integrator to run the numerical simulations. Depending on the method chosen for mutual interaction treatment, Ncorpi\(\mathcal{O}\)N uses either a \(SABA_{1}\) (_half drift_\(+\)_kick_\(+\)_half drift_) or \(SBAB_{1}\) (_half kick_\(+\)_drift_\(+\)_half kick_) symplectic integrator16 (Laskar and Robutel [14]). When outputs do not occur at every timestep (this is generally the case for a long simulation), time is saved by combining the last step of a timestep with the first step of the next timestep, since they are identical. For example, the \(SABA_{1}\) integrator takes in that case the form _half drift_\(+\)_kick_\(+\)_drift_\(+\)_kick_\(+\)_drift_\(+\)\(\cdots\), until an output has to occur. When an output occurs, half of the last drift is undone (on a copy of the simulation, so as not to interfere with it) and the simulation's state is written to a file. Similar considerations are valid for the \(SBAB_{1}\) integrator. Collisions are searched and resolved during the drift phase, whereas mutual gravity is computed during the kick phase.
Footnote 16: Whichever is faster for the given mutual interaction management method.
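For concreteness, here is a minimal sketch of a \(SABA_{1}\) step (_half drift_ + _kick_ + _half drift_) for a single moonlet around a point-mass Earth, written in the dimensionless units described in Appendix B; it is a standalone illustration and not the integrator implemented in Ncorpi\(\mathcal{O}\)N, which also handles collisions and mutual gravity.

```c
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846
#define G  (4.0 * PI * PI)   /* dimensionless units: R_Earth = M_Earth = 1 */

typedef struct { double r[3], v[3]; } Moonlet;

/* Keplerian acceleration from the Earth's centre of mass (Eq. (B.3)). */
static void kepler_acc(const double r[3], double a[3])
{
    double d = sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
    double f = -G / (d * d * d);
    for (int i = 0; i < 3; i++) a[i] = f * r[i];
}

/* One SABA1 step (half drift + kick + half drift) for a single moonlet.
 * In a full simulation, collisions would be searched during the drifts and
 * mutual gravity would be added to the kick.                               */
static void saba1_step(Moonlet *m, double dt)
{
    double a[3];
    for (int i = 0; i < 3; i++) m->r[i] += 0.5 * dt * m->v[i];  /* half drift */
    kepler_acc(m->r, a);
    for (int i = 0; i < 3; i++) m->v[i] += dt * a[i];           /* kick       */
    for (int i = 0; i < 3; i++) m->r[i] += 0.5 * dt * m->v[i];  /* half drift */
}

int main(void)
{
    /* Circular orbit at 3 Earth radii: one orbital period lasts 3^{3/2}. */
    Moonlet m = { {3.0, 0.0, 0.0}, {0.0, 2.0 * PI / sqrt(3.0), 0.0} };
    for (int k = 0; k < 5196; k++) saba1_step(&m, 1.0e-3); /* about one period */
    printf("r = (%.4f, %.4f, %.4f)\n", m.r[0], m.r[1], m.r[2]);
    return 0;
}
```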
### Performances
In order to test the performance of Ncorpi\(\mathcal{O}\)N, we ran numerical simulations with both collisions and mutual gravity, for different values of the number of moonlets \(N\). In order for \(N\) to remain constant during a
\begin{table}
\begin{tabular}{l c c c c c} \hline \(N\) & \(2^{10}\) & \(2^{14}\) & \(2^{18}\) & \(2^{22}\) & \(2^{26}\) \\ \hline Random order & \(75\times 10^{-6}\) & \(19\times 10^{-4}\) & 0.049 & 1.5 & 45 \\ Hilbert order & \(28\times 10^{-6}\) & \(7.1\times 10^{-4}\) & 0.033 & 0.65 & 13 \\ Speed-up factor & 2.7 & 2.7 & 1.5 & 2.3 & 3.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Time (in seconds) needed to build the tree as a function of the number of moonlets \(N\). When moonlets are added in random order, the tree building takes up to a factor 3.5 longer than when they are added in the Hilbert order of the previous timestep. The subdivision threshold is \(s=26\) and the moonlets are distributed between \(r=2.9R_{\oplus}\) and \(r=12R_{\oplus}\).
simulation, we resolved the collisions elastically. We measured the time taken by our CPU17 to run one timestep (averaged over the first eight timesteps) with each of the four mutual interaction management methods (brute-force, falcON, standard tree code and mesh algorithm, each with the exact same initial conditions for a given \(N\)). We also ran the same simulations with Rein and Liu's REBOUND software (using brute-force and standard tree) in order to compare Ncorpi\(\mathcal{O}\)N with REBOUND. In Fig. 6, we show the results of our tests for \(2^{7}\leq N\leq 2^{25}\).
Footnote 17: Clock : \(\sim 4.5\) GHz. Cache L\({}_{1}\), L\({}_{2}\), L\({}_{3}\) : 80 KB, 1.25 MB, 24 MB. RAM : 32 GB DDR5 4800 MT/s
The runs with a tree-based method (falcON or the standard tree code) were performed with \(\theta_{\rm min}=0.5\) (\(\theta=0.5\) for REBOUND, which uses a constant \(\theta\)), leading to a relative error in the acceleration of the moonlets of the order of \(\sim 10^{-5}\) when \(p=3\) and \(\sim 10^{-7}\) when \(p=6\) (Fig. 4). The subdivision threshold \(s\) is the main parameter (not precision altering) influencing the speed of the tree-based methods. For falcON, we chose \(s=17\) with \(p=3\) and \(s=52\) with \(p=6\), whereas we used \(s=102\) for the standard tree code with \(p=3\), as these values were optimal18 with our material17. For collision search, we used the thresholds \(N_{\rm cs}=12\), \(N_{\rm cc,pre}=0\), \(N_{\rm cc,post}=16\), \(N_{\rm cb,pre}=0\) and \(N_{\rm cb,post}=16\), whereas we used \(N_{\rm cs}=64\), \(N_{\rm cc,pre}=8\), \(N_{\rm cc,post}=64\), \(N_{\rm cb,pre}=6\) and \(N_{\rm cb,post}=16\) for mutual gravity computation (\(p=3\)). For falcON with \(p=6\), we used \(N_{\rm cs}=N_{\rm cc,pre}=256\) and \(N_{\rm cc,post}=512\). These thresholds influence the performances much less than the subdivision threshold \(s\), and the values we report here, while among the most efficient, should not be considered as optimal.
Footnote 18: Optimal only if the same tree is used for collision detection and mutual gravity computation. See footnote 10.
With the brute-force method, Ncorpi\(\mathcal{O}\)N and REBOUND turn out to run almost equally fast (solid black and dashed purple curves in Fig. 6). REBOUND being slightly slower than Ncorpi\(\mathcal{O}\)N can easily be attributed to the versatility of REBOUND, which requires larger data structures and increases the likelihood of a cache miss. In both codes, the brute-force method is slower than any other method for \(N\geq 2^{8}\).
Figure 6: Time (in seconds) taken by our CPU to run one timestep (_kick_ + _drift_ or _drift_ + _kick_), as a function of the number \(N\) of moonlets. For clarity, the times are divided by \(N\). The \(N\) moonlets were given initial semi-major axes between \(r=2.9R_{\oplus}\) and \(r=12R_{\oplus}\), eccentricities between 0 and 0.2, and inclinations between \(0^{\circ}\) and \(10^{\circ}\). The mean anomaly, argument of pericenter and longitude of the ascending node were distributed uniformly between 0 and \(2\pi\). All simulations were run with the same material17, in the same conditions (all processor cores were idle except the one running).
FalcON on Ncorpi\(\mathcal{O}\)N turns out to be three to ten times faster than the standard tree code (blue and green curves in Fig. 6). Even with \(p=6\) (red curve), falcON is still faster than the standard tree code with \(p=3\), while also being two orders of magnitude more precise. Our implementation of the standard tree code is also faster than that of REBOUND (2.76 times faster for \(N=2^{23}\)), which can be attributed to Ncorpi\(\mathcal{O}\)N using a mass-dependent opening angle \(\theta\) (Dehnen [8], Eq. (13)), whereas REBOUND uses a constant \(\theta\). For the same precision, Ncorpi\(\mathcal{O}\)N with falcON runs 25 times faster than REBOUND with the standard tree code when \(N=2^{23}\) (solid blue and dashed yellow curves).
Unsurprisingly, the mesh algorithm turns out to be the fastest method of all in Ncorpi\(\mathcal{O}\)N for \(N\leq 2^{24}\), mainly due to the simplicity of its implementation. This comes at the cost of a much worse precision on the acceleration of the moonlets, since long-range gravity is ignored (except with one of the three largest moonlets). The dramatic increase in the running time of the method for \(N\geq 2^{22}\) is due to the fact that it is impossible to keep constant the average number of neighbours past a certain value of \(N\). Indeed, the mesh-size \(\gamma\) is given a minimal value to prevent the whole mesh grid (whose number of cells is constant and chosen by the user) from shrinking below a certain threshold (also chosen by the user). Above this value for \(N\), the average number of neighbours increases linearly instead of being constant, and the mesh algorithm behaves in \(\mathcal{O}\left(N^{2}\right)\). Therefore, falcON should be preferred to the mesh algorithm if no moonlet accounts for the majority of the moonlet mass, or if \(N\) is too large. FalcON should always be preferred to the standard tree code. Although the mesh algorithm is faster than falcON over a full timestep due to its simplistic way of handling gravity, falcON outperforms the mesh algorithm for the drift phase, since collision search is faster with falcON.
In Fig. 6, an \(\mathcal{O}(N)\) algorithm would have a constant curve. Therefore, none of the four mutual interaction management modules of Ncorpi\(\mathcal{O}\)N is strictly \(\mathcal{O}(N)\) (although falcON is really close to it for \(N\geq 2^{19}\), especially at order 6). Indeed, even if an algorithm is \(\mathcal{O}(N)\) in the total number of operations, when implemented on an actual CPU, the limited size of the cache is such that the proportion of cache misses increases with \(N\). As a consequence, the proportion of clock cycles that the CPU spends waiting for data increases with \(N\) and the time complexity ends up being slightly worse than \(\mathcal{O}(N)\).
## 5 Resolving collisions
Ncorpi\(\mathcal{O}\)N provides several built-in ways in which collisions can be resolved. In the parameter file, the user can decide that all collisions are resolved elastically (hard-sphere collision without loss of energy), inelastically (hard-sphere collision with loss of energy), by merging the colliding moonlets together, or with the fragmentation model of Ncorpi\(\mathcal{O}\)N, detailed in Sect. 5.3.4.
In this section, we consider the collision between two moonlets of masses \(m_{1}\) and \(m_{2}\) and radii \(R_{1}\) and \(R_{2}\). The positions and velocities of the moonlets, at the instant of the impact, are denoted by \(\mathbf{r}_{1}\), \(\mathbf{r}_{2}\), \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\). We also denote
\[\Delta\mathbf{r}=\mathbf{r}_{1}-\mathbf{r}_{2}\quad\text{ and }\quad\Delta\mathbf{v}=\mathbf{v}_{1}- \mathbf{v}_{2}. \tag{28}\]
In a general fashion, we refer to the largest moonlet as the target (hereafter moonlet 2) and to the smallest one as the impactor (hereafter moonlet 1). The impact angle is defined as19
Footnote 19: \(\mathcal{Q}\) is an archaic Greek letter called qoppa.
\[\mathfrak{V}=\arcsin\left(\frac{b}{R_{1}+R_{2}}\right)=\arccos\sqrt{1-\frac{b^{2}}{\left(R_{1}+R_{2}\right)^{2}}}, \tag{29}\]
where \(b\leq R_{1}+R_{2}\) is the impact parameter. We denote \(M=m_{1}+m_{2}\). In Ncorpi\(\mathcal{O}\)N, all moonlets share the same density \(\rho\), chosen by the user in the parameter file.
### Elastic collisions
We say that a collision is elastic if it conserves both energy and momentum. Let \(\mathbf{v}_{1}^{\prime}\) and \(\mathbf{v}_{2}^{\prime}\) be the moonlets' velocities after the impact. If we write
\[\mathbf{v}_{1}^{\prime}-\mathbf{v}_{1}=-\frac{\mathbf{J}}{m_{1}}\quad\text{ and }\quad\mathbf{v}_{2}^{ \prime}-\mathbf{v}_{2}=\frac{\mathbf{J}}{m_{2}}, \tag{30}\]
then it is immediate to verify that the total momentum is conserved, whatever the vector \(\mathbf{J}\). Let us write
\[\mathbf{J}=\alpha_{\text{el}}\left(\Delta\mathbf{r}\cdot\Delta\mathbf{v}\right)\Delta\mathbf{ r}, \tag{31}\]
where \(\alpha_{\text{el}}\) is a real number. The scalar product \(\Delta\mathbf{r}\cdot\Delta\mathbf{v}\) quantifies the violence of the impact, in the sense that, for a grazing collision, \(\Delta\mathbf{r}\cdot\Delta\mathbf{v}=0\), while for a frontal collision, it reaches an extremum \(\Delta\mathbf{r}\cdot\Delta\mathbf{v}=-\Delta r\Delta v\). The variation of kinetic energy \(\Delta E\) at the impact reads
\[\Delta E=\alpha_{\text{el}}\left(\Delta\mathbf{r}\cdot\Delta\mathbf{v}\right)^{2} \left(\frac{m_{1}+m_{2}}{2m_{1}m_{2}}\alpha_{\text{el}}\Delta r^{2}-1\right). \tag{32}\]
At the impact, we have \(\Delta r=R_{1}+R_{2}\) and the elasticity of the collision reads
\[\alpha_{\text{el}}=\frac{2m_{1}m_{2}}{\left(m_{1}+m_{2}\right)\left(R_{1}+R_{ 2}\right)^{2}}. \tag{33}\]
### Inelastic collisions
The results of Sect. 5.1 suggest a very straightforward model for non-elastic collisions. We simply write \(\mathbf{J}=\alpha\left(\Delta\mathbf{r}\cdot\Delta\mathbf{v}\right)\Delta\mathbf{r}\), and if we choose for \(\alpha\) a non-zero value different from \(\alpha_{\text{el}}\), then the collision is inelastic. Let us write
\[\alpha=\frac{fm_{1}m_{2}}{\left(m_{1}+m_{2}\right)\left(R_{1}+R_{2}\right)^{2 }}=\frac{f}{2}\alpha_{\text{el}}, \tag{34}\]
where \(f\in\mathbb{R}\). Then the variation in kinetic energy due to the impact reads
\[\Delta E=2f\left(f-2\right)\frac{m_{1}m_{2}}{m_{1}+m_{2}}\cos^{2}\theta\Delta v ^{2}. \tag{35}\]
To prevent an energy increase, we must consider \(0\leq f\leq 2\). The condition that the two moonlets get farther away from each other after the impact reads \(\Delta\mathbf{v}^{\prime}\cdot\Delta\mathbf{r}\geq 0\). We have
\[\Delta\mathbf{v}^{\prime}\cdot\Delta\mathbf{r}=\left(1-f\right)\left(\Delta\mathbf{v} \cdot\Delta\mathbf{r}\right), \tag{36}\]
and so we take \(f\geq 1\) to prevent the moonlets from getting closer after the collision. Ncorpi\(\mathcal{O}\)N's model for non-merging and non-fragmenting collisions thus relies on the parameter \(f\) (indicated by the user in the parameter file of Ncorpi\(\mathcal{O}\)N), bounded by \(1\leq f\leq 2\), such that values of \(f\) close to \(2\) correspond to almost elastic collisions, whereas values close to \(1\) correspond to very inelastic collisions.
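As an illustration of Eqs. (30), (31) and (34), the sketch below resolves a hard-sphere bounce for a user-chosen \(f\) (with \(f=2\) recovering the elastic case); it is a standalone example rather than Ncorpi\(\mathcal{O}\)N's actual collision routine.

```c
#include <stdio.h>

/* Resolve a hard-sphere collision between two moonlets following
 * J = (f/2) * alpha_el * (dr . dv) * dr  (Eqs. (30), (31) and (34)),
 * where f = 2 is elastic and f close to 1 is very inelastic.              */
static void bounce(double m1, double m2, double R1, double R2, double f,
                   double v1[3], double v2[3],
                   const double r1[3], const double r2[3])
{
    double dr[3], dv[3], drdv = 0.0, sumR = R1 + R2;
    for (int i = 0; i < 3; i++) {
        dr[i] = r1[i] - r2[i];
        dv[i] = v1[i] - v2[i];
        drdv += dr[i] * dv[i];
    }
    double alpha = f * m1 * m2 / ((m1 + m2) * sumR * sumR);  /* Eq. (34) */
    for (int i = 0; i < 3; i++) {
        double J = alpha * drdv * dr[i];   /* impulse component, Eq. (31) */
        v1[i] -= J / m1;                   /* Eq. (30)                    */
        v2[i] += J / m2;
    }
}

int main(void)
{
    /* Head-on collision of two equal moonlets, elastic case (f = 2):
     * the velocities are simply exchanged.                               */
    double r1[3] = {-1.0, 0.0, 0.0}, v1[3] = { 1.0, 0.0, 0.0};
    double r2[3] = { 1.0, 0.0, 0.0}, v2[3] = {-1.0, 0.0, 0.0};
    bounce(1.0, 1.0, 1.0, 1.0, 2.0, v1, v2, r1, r2);
    printf("v1x = %g, v2x = %g\n", v1[0], v2[0]);
    return 0;
}
```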
### Fragmentation and merging
Previous studies of Moon formation (_e.g._ Ida et al. [1], Salmon and Canup [2]) disregard the fact that, upon a violent collision, moonlets may fragment instead of just merging or bouncing back. We rely here on the existing literature about impacts and crater scaling for the velocities and sizes of the fragments in order to achieve a realistic model of fragmentation.
#### 5.3.1 Velocity distribution
We first follow the impact model of Holsapple and Housen [3] and Housen and Holsapple [5], based on dimensional analysis, to constrain the velocity distribution of the fragments resulting from the impact. Let \(M^{\star}(v):=M^{\star}\) be the mass of fragments ejected with a velocity greater than \(v\) (relative to the largest fragment). We make the following two hypotheses:
* The region of the target where material is ejected due to the impact is large enough for the impactor to be considered point-mass20. Footnote 20: This assumption is not verified for low-velocity impacts, but the moonlets merge instead of fragmenting in this case.
* The impact is violent enough to overcome both the gravity of the target and the strength of its material.
The first hypothesis clearly implies that the target is much larger than the impactor21, and as a consequence, the outcome of the collision does not depend on the target radius \(R_{2}\). The second hypothesis implies that \(M^{\star}\) does not depend on the surface gravity of the impactor, nor on the strength of its material. Another consequence of the first hypothesis is that the outcome of the impact depends on the impactor through a unique scalar quantity, called coupling parameter and defined as22
Footnote 21: We stress that fragmentations are poorly resolved in Ncorpi\(\mathcal{O}\)N when the target and impactor are roughly of the same size.
Footnote 22: Suetsugu et al. [7] consider oblique impacts by replacing the usual \(\Delta v\) by \(\Delta v\cos\mathfrak{V}\), where \(\mathfrak{V}\) is the impact angle.
\[C=R_{1}\left(\Delta v\cos\mathfrak{V}\right)^{\mu}\rho^{\nu}. \tag{37}\]
The exponent \(\mu\) was constrained for a wide range of material assuming the accepted value23\(\nu=0.4\) and is given in Table 3 of Housen and Holsapple [5]. For a non-porous target, we have \(\mu=0.55\) whether it is liquid or solid, whereas \(\mu=0.41\) for a rubble-pile or sand-covered target. The value of \(\mu\) is to be indicated by the user of Ncorpi\(\mathcal{O}\)N if the built-in fragmentation model is used. According to these assumptions, there exists a functional dependency of the form
Footnote 23: See footnote 5 of Housen and Holsapple [5].
\[M^{\star}=f(C,\rho,v), \tag{38}\]
that is re-written using the \(\pi\)-theorem as
\[\frac{M^{\star}v^{3\mu}\rho^{3\nu-1}}{C^{3}}=kC_{1}^{3\mu}, \tag{39}\]
or equivalently using Eq. (37) (Suetsugu et al. [7], Sect. 5)
\[\frac{M^{\star}(v)}{m_{1}}=\frac{3k}{4\pi}\left(\frac{C_{1}\Delta v\cos \mathfrak{V}}{v}\right)^{3\mu}. \tag{40}\]
The constants \(k\) and \(C_{1}\) are given by Table 3 of Housen and Holsapple [5] and are chosen in the parameter file of Ncorpi\(\mathcal{O}\)N. For a non-porous target, we have \(C_{1}=1.5\) (solid or liquid) and \(k=0.2\) (_resp._\(k=0.3\)) for a liquid (_resp._ solid) target. For a rubble-pile or sand-covered target, \(C_{1}=0.55\) and \(k=0.3\).
#### 5.3.2 Mass of the largest fragment
Following Suetsugu et al. [7], we define the ejected mass \(\bar{m}\) as the mass unbound from the largest fragment (or target). That is, we write
\[\bar{m}=M^{\star}(v_{\mathrm{esc}}),\text{ with }v_{\mathrm{esc}}=\sqrt{\frac{2\mathcal{G}M}{R}}\text{ and }R=\left(\frac{3M}{4\pi\rho}\right)^{1/3}. \tag{41}\]
This yields
\[\frac{\bar{m}}{m_{1}}=\frac{3k}{4\pi}\left(\frac{C_{1}\Delta v\cos\mathfrak{V}}{v_{\mathrm{esc}}}\right)^{3\mu}, \tag{42}\]
and the mass \(\tilde{m}\) of the largest fragment is simply given by \(\tilde{m}=M-\bar{m}\). For a super-catastrophic collision (defined as \(\tilde{m}<M/10\)), Eq. (42) is not valid and we use instead (Leinhardt and Stewart (2000), Eq. (44))
\[\frac{\tilde{m}}{M}=\frac{1}{10}\left(\frac{5k}{6\pi}\frac{m_{1}}{M}\right)^{-3 /2}\left(\frac{C_{1}\Delta v\cos\mathfrak{V}}{v_{\rm esc}}\right)^{-9\mu/2}. \tag{43}\]
When a super-catastrophic collision occurs, \(\mbox{Ncorpi}\mathcal{O}\!\mbox{N}\) discards the ejected mass from the simulation (assumed vaporized), and uses Eq. (43) to determine the mass of the remaining moonlet.
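The following sketch evaluates Eqs. (41) and (42) for an illustrative impact; the constants \(k\), \(C_{1}\) and \(\mu\) are the non-porous solid values quoted above, while all the other numbers are arbitrary choices made for this example.

```c
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

/* Ejected mass of an impact from the crater-scaling law of Eqs. (41)-(42).
 * SI units are used here for readability, whereas NcorpiON works in
 * dimensionless units; all the numbers below are illustrative only.        */
int main(void)
{
    const double G   = 6.674e-11;  /* m^3 kg^-1 s^-2                        */
    const double rho = 3300.0;     /* common moonlet density (kg/m^3)       */
    const double k = 0.3, C1 = 1.5, mu = 0.55;  /* non-porous solid target  */

    double R1 = 50.0e3, R2 = 300.0e3;           /* impactor / target radii  */
    double m1 = 4.0 / 3.0 * PI * rho * R1 * R1 * R1;
    double m2 = 4.0 / 3.0 * PI * rho * R2 * R2 * R2;
    double M  = m1 + m2;
    double dv    = 500.0;                       /* impact speed (m/s)       */
    double qoppa = 30.0 * PI / 180.0;           /* impact angle             */

    double R     = cbrt(3.0 * M / (4.0 * PI * rho));
    double v_esc = sqrt(2.0 * G * M / R);                        /* Eq. (41) */
    double m_ej  = m1 * 3.0 * k / (4.0 * PI)
                 * pow(C1 * dv * cos(qoppa) / v_esc, 3.0 * mu);  /* Eq. (42) */
    double m_lrg = M - m_ej;            /* mass of the largest fragment      */

    printf("ejected mass fraction : %.3e\n", m_ej / M);
    printf("super-catastrophic    : %s\n", m_lrg < M / 10.0 ? "yes" : "no");
    return 0;
}
```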
#### 5.3.3 Mass of successive fragments
Equation (42) determines the mass of the largest fragment; in this section, we estimate the masses of the remaining fragments. Hereafter, the tail designates the set of all the fragments, the largest excluded. Leinhardt and Stewart (2000) fit the size distribution of the remaining fragments with
\[n(r)=Kr^{-(\beta+1)}, \tag{44}\]
where \(n(r)dr\) is the total number of fragments with radii between \(r\) and \(r+dr\), and \(K\) and \(\beta\) are constants. Let \(\tilde{m}_{n}\) and \(r_{n}\) be the mass and radius of the \(n^{\rm th}\) largest fragment. We assume that all fragments are spherical with density \(\rho\) and we write \(\tilde{m}_{1}:=\tilde{m}\). The total number of fragments larger than the \(n^{\rm th}\) largest fragment is
\[n=\int_{r_{n}}^{+\infty}n(r)dr=\frac{K}{\beta}r_{n}^{-\beta},\quad\mbox{which gives}\quad r_{n}=\left(\frac{n\beta}{K}\right)^{-1/\beta}. \tag{45}\]
The total mass contained in the \(n^{\rm th}\) largest fragment and in all smaller fragments is given by
\[M-\sum_{k=1}^{n-1}\tilde{m}_{k}=\int_{0}^{r_{n}}\frac{4}{3}\pi\rho r^{3}n(r)dr =\frac{4\pi\rho Kr_{n}^{3-\beta}}{3\left(3-\beta\right)}. \tag{46}\]
Equations (45) and (46) show that a realistic description verifies \(0<\beta<3\). Combining them, we obtain, for \(n\geq 2\), the mass of the \(n^{\rm th}\) largest fragment from the recursive expression
\[\tilde{m}_{n}=\frac{3-\beta}{n\beta}\left(M-\sum_{k=1}^{n-1}\tilde{m}_{k} \right). \tag{47}\]
This approach predicts an infinite number of fragments, and the partial mass \(\tilde{m}_{1}+\cdots+\tilde{m}_{n}\) slowly tends to \(M\) as \(n\) goes to infinity. Some truncation rule on the fragment sizes has to be defined to prevent an excessively large number of fragments. Equation (47) gives for the mass of the second largest fragment
\[\tilde{m}_{2}=\frac{3-\beta}{2\beta}\bar{m}:=\frac{\bar{m}}{\tilde{N}}. \tag{48}\]
Assuming that the tail is made up only of fragments of mass \(\tilde{m}_{2}\), \(\tilde{N}\) is defined as the number of fragments in the tail. From SPH simulations in the gravity regime, Leinhardt and Stewart (2000) fit \(\beta=2.85\), which yields \(\tilde{N}=38\). In order not to overcomplicate the model, we assume for \(\mbox{Ncorpi}\mathcal{O}\!\mbox{N}\) that all the fragments of the tail have a mass \(\tilde{m}_{2}\). The user chooses \(\tilde{N}\) and Eq. (48) is used to determine \(\tilde{m}_{2}\) and the exponent \(\beta\) of the power law.
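A minimal sketch of the relation between \(\tilde{N}\) and \(\beta\) implied by Eq. (48), together with the recursion of Eq. (47), is given below; the masses used in the example are illustrative only.

```c
#include <stdio.h>

/* Relation between the number of tail fragments N_tilde and the power-law
 * exponent beta implied by Eq. (48): (3 - beta)/(2 beta) = 1/N_tilde,
 * i.e. beta = 3 N_tilde / (N_tilde + 2).                                   */
static double beta_from_Ntilde(double N_tilde)
{
    return 3.0 * N_tilde / (N_tilde + 2.0);
}

int main(void)
{
    printf("N_tilde = 38 -> beta = %.4f\n", beta_from_Ntilde(38.0)); /* 2.85  */
    printf("N_tilde = 15 -> beta = %.4f\n", beta_from_Ntilde(15.0)); /* 45/17 */

    /* Recursive fragment masses of Eq. (47) for a total mass M = 1 and a
     * largest fragment of mass 0.6 (illustrative values).                  */
    double beta = beta_from_Ntilde(38.0), M = 1.0, partial = 0.6;
    for (int n = 2; n <= 5; n++) {
        double m_n = (3.0 - beta) / (n * beta) * (M - partial);
        partial += m_n;
        printf("m_%d = %.4e\n", n, m_n);
    }
    return 0;
}
```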
#### 5.3.4 The fragmentation model of \(\mbox{Ncorpi}\mathcal{O}\!\mbox{N}\)
The built-in fragmentation model of \(\mbox{Ncorpi}\mathcal{O}\!\mbox{N}\) proceeds as follows (a minimal sketch of this decision logic is given after the list). In the parameter file, the user defines a mass threshold \(m^{(0)}\) (we use \(m^{(0)}=5\times 10^{-9}M_{\oplus}\) for our work on Moon formation), such that:
* If \(\tilde{m}_{2}\geq m^{(0)}\), then the two moonlets are broken into \(\tilde{N}+1\) pieces, where the largest fragment has a mass \(\tilde{m}=M-\bar{m}\), with \(\bar{m}\) given by Eq. (42), and the \(\tilde{N}\) other fragments each have a mass \(\tilde{m}_{2}\) given by Eq. (48).
* If \(\tilde{m}_{2}<m^{(0)}\leq\bar{m}\), then the tail is made up of one unique fragment of mass \(\tilde{N}\tilde{m}_{2}=\bar{m}\).
* If \(\bar{m}<m^{(0)}\), then the collision results in a merger.
* If \(\tilde{m}<M/10\), then the impact is super-catastrophic. Equation (43) is used and the tail is discarded.
* The largest fragment is given velocity \(\tilde{\mathbf{v}}\) and position \(\tilde{\mathbf{r}}\) determined in Sect. 5.3.6, whereas the \(\tilde{N}\) (_resp._ one) other fragments have velocities \(\tilde{\mathbf{v}}_{1},\ \tilde{\mathbf{v}}_{2},\ \cdots,\ \tilde{\mathbf{v}}_{\tilde{N}}\) (_resp._ \(\bar{\mathbf{v}}\)) determined in Sect. 5.3.5.
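A minimal sketch of this decision logic is given below; the function only classifies the outcome, the names and sample numbers are invented for this example, and the actual Ncorpi\(\mathcal{O}\)N implementation also builds the fragments themselves.

```c
#include <stdio.h>

typedef enum { FULL_FRAGMENTATION, PARTIAL_FRAGMENTATION,
               MERGER, SUPER_CATASTROPHIC } Outcome;

/* Classify a collision outcome from the ejected mass m_bar, the mass
 * m2_tilde of the tail fragments, the total mass M and the threshold m0,
 * following the bullets above. The super-catastrophic test uses the
 * largest fragment m_tilde = M - m_bar.                                    */
static Outcome classify(double m_bar, double m2_tilde, double M, double m0)
{
    double m_tilde = M - m_bar;
    if (m_tilde < M / 10.0) return SUPER_CATASTROPHIC;
    if (m_bar   < m0)       return MERGER;
    if (m2_tilde < m0)      return PARTIAL_FRAGMENTATION; /* one tail moonlet */
    return FULL_FRAGMENTATION;                    /* N_tilde + 1 fragments    */
}

int main(void)
{
    const char *names[] = { "full fragmentation", "partial fragmentation",
                            "merger", "super-catastrophic" };
    /* Illustrative numbers only (arbitrary mass units). */
    printf("%s\n", names[classify(0.30, 0.02,  1.0, 0.05)]);
    printf("%s\n", names[classify(0.02, 0.001, 1.0, 0.05)]);
    return 0;
}
```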
#### 5.3.5 Velocities of fragments
We now estimate the velocities of the fragments after the impact, using Eq. (40) for \(M^{\star}(v)\). We define
\[m^{\star}(v):=-\frac{dM^{\star}}{dv}=3\mu\bar{m}\frac{1}{v}\left(\frac{v_{\text{esc}}}{v}\right)^{3\mu}, \tag{49}\]
where \(m^{\star}(v)dv\) is the mass of fragments with speeds relative to the largest fragment in the range \([v,v+dv]\). Since all \(\tilde{N}\) fragments of the tail are unbound from the largest fragment, the slowest of these is made up of particles having velocities between \(v_{\text{esc}}\) and some velocity \(u_{1}\). More generally, the \(k^{\text{th}}\) fastest fragment of the tail has a velocity \(\tilde{v}_{k}^{\prime}\) with respect to the largest fragment given by
\[\tilde{m}_{2}\tilde{v}_{k}^{\prime}=\int_{u_{k-1}}^{u_{k}}m^{\star}(v)\,v\,dv, \tag{50}\]
where \(u_{0}=v_{\text{esc}}\), \(u_{\tilde{N}}=+\infty\) and for all \(k\leq\tilde{N}\), \(u_{k-1}<\tilde{v}_{k}^{\prime}<u_{k}\). The speeds \(u_{k}\) are found by writing
\[\tilde{m}_{2}=\int_{u_{k-1}}^{u_{k}}m^{\star}(v)\,dv=\bar{m}\left[\left(\frac{v_{\text{esc}}}{u_{k-1}}\right)^{3\mu}-\left(\frac{v_{\text{esc}}}{u_{k}}\right)^{3\mu}\right]. \tag{51}\]
If we define \(z_{k}=\left(v_{\text{esc}}/u_{k}\right)^{3\mu}\), then Eq. (51) yields \(z_{k-1}-z_{k}=\tilde{m}_{2}/\bar{m}\), that is
\[z_{k}=1-\frac{k}{\tilde{N}},\quad\text{for }0\leq k\leq\tilde{N}. \tag{52}\]
Injecting Eq. (52) into Eq. (50), we obtain the scalar velocity of the \(k^{\text{th}}\) fastest fragment of the tail
\[\frac{\tilde{v}_{k}^{\prime}}{v_{\text{esc}}}=\frac{\tilde{N}}{\tau}\left[ \left(1-\frac{k-1}{\tilde{N}}\right)^{\tau}-\left(1-\frac{k}{\tilde{N}}\right) ^{\tau}\right], \tag{53}\]
where24 \(\tau=\left(3\mu-1\right)/3\mu\). Surprisingly enough, these speeds are independent of \(\Delta v\), suggesting that a high impact velocity means more fragmentation but does not translate into faster ejecta. When the tail is made up of one unique fragment, its scalar velocity is given by
Footnote 24: \(\tau\) is an archaic Greek letter called stigma.
\[\bar{v}^{\prime}=\frac{1}{\bar{m}}\int_{v_{\text{esc}}}^{+\infty}m^{\star}(v)\,v\,dv=\frac{v_{\text{esc}}}{\tau}. \tag{54}\]
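The sketch below evaluates Eq. (53) for \(\tilde{N}=15\) and \(\mu=0.55\); it simply tabulates the fragment speeds in units of \(v_{\text{esc}}\) and is only meant to illustrate the formula.

```c
#include <math.h>
#include <stdio.h>

/* Scalar speeds of the tail fragments relative to the largest fragment,
 * Eq. (53), in units of v_esc, assuming mu = 0.55 (non-porous target).     */
int main(void)
{
    const double mu = 0.55;
    const double s  = (3.0 * mu - 1.0) / (3.0 * mu); /* exponent of Eq. (53) */
    const int N_tilde = 15;

    for (int k = 1; k <= N_tilde; k++) {
        double a = pow(1.0 - (double)(k - 1) / N_tilde, s);
        double b = pow(1.0 - (double)k       / N_tilde, s);
        double v = N_tilde / s * (a - b);
        printf("v'_%2d = %6.3f v_esc\n", k, v);
    }
    /* The slowest fragment comes out at about 1.02 v_esc and the fastest at
     * about 13.1 v_esc, independently of the impact velocity Delta v.      */
    return 0;
}
```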
The existing literature gives little insight into the directions of fragments following an impact (Suo et al. [15] give some constraints but their work is limited to impacts on granular media in an intermediate regime between gravity and strength), and our model here is arbitrary. The speeds of the tail's fragments are given a direction with respect to the largest fragment according to the following scheme.
We give to the \(k^{\text{th}}\) fragment of the tail the position
\[\tilde{\mathbf{r}}_{k}^{\prime}=\Delta\mathbf{r}+2p_{k}\tilde{R}_{2}\mathbf{u}+2q_{k}\tilde{ R}_{2}\mathbf{v}, \tag{55}\]
and the speed
\[\tilde{\mathbf{v}}_{k}^{\prime}=\tilde{v}_{k}^{\prime}\frac{\tilde{\mathbf{r}}_{k}^{ \prime}}{\tilde{r}_{k}^{\prime}}=\tilde{v}_{k}^{\prime}\frac{\tilde{\mathbf{r}}_{k }^{\prime}}{\sqrt{\left(R_{1}+R_{2}\right)^{2}+4\tilde{R}_{2}^{2}\left(p_{k}^ {2}+q_{k}^{2}\right)}}, \tag{56}\]
where \((p_{k},q_{k})\in\mathbb{Z}^{2}\) and \(\tilde{v}_{k}^{\prime}\) is given by Eq. (53). If the collision is nearly frontal, then the vector \(\mathbf{v}\) is ill-defined in this scheme. In that case we take for \(\mathbf{v}\) any unit vector orthogonal to \(\Delta\mathbf{r}\). With \(\tilde{N}=15\) (or \(\beta=45/17\)), \(-1\leq p_{k}\leq 3\) and25 \(-1\leq q_{k}\leq 1\), the fragmented moonlets would look like the configuration depicted in the accompanying figure.
Footnote 25: This choice ensures that more fragments are ejected forward than backward, which sounds intuitive.
While all fragments of the tail are unbound from the largest fragment, there is no reason why the fragments of the tail should be unbound from one another. Depending on their relative velocities, some pairs will be unbound while others will not. With \(\tilde{N}=15\) the scalar velocities range from \(\tilde{v}_{1}^{\prime}\approx 1.02\,v_{\text{esc}}\) to \(\tilde{v}_{15}^{\prime}\approx 13.1\,v_{\text{esc}}\), and considering that \(v_{\text{esc}}\) is the escape velocity at the surface of a complete merger, it is clear that most pairs, if not all, will be unbound.
#### 5.3.6 Conservation of momentum and angular momentum
Choosing the position \(\tilde{\mathbf{r}}\) and velocity \(\tilde{\mathbf{v}}\) of the largest fragment completes the definition of Ncorpi\(\mathcal{O}\)N's fragmentation model. Indeed, the positions and speeds of the tail's moonlets are then given in the geocentric reference frame \((\mathcal{O},\mathbf{i},\mathbf{j},\mathbf{k})\) by
\[\tilde{\mathbf{r}}_{k}=\tilde{\mathbf{r}}_{k}^{\prime}+\tilde{\mathbf{r}},\quad\tilde{\bm {v}}_{k}=\tilde{\mathbf{v}}_{k}^{\prime}+\tilde{\mathbf{v}}. \tag{57}\]
When the tail is reunited into a single moonlet, its position and speed are \(\bar{\mathbf{r}}=\bar{\mathbf{r}}^{\prime}+\tilde{\mathbf{r}}\) and \(\bar{\mathbf{v}}=\bar{\mathbf{v}}^{\prime}+\tilde{\mathbf{v}}\), where \(\bar{\mathbf{r}}^{\prime}\) and \(\bar{\mathbf{v}}^{\prime}\) are defined by Eqs. (55) and (56) with \(p_{1}=q_{1}=0\). Let
\[\mathbf{v}_{\text{cm}}=\frac{m_{1}}{M}\mathbf{v}_{1}+\frac{m_{2}}{M}\mathbf{v}_{2}\quad \text{and}\quad\mathbf{r}_{\text{cm}}=\frac{m_{1}}{M}\mathbf{r}_{1}+\frac{m_{2}}{M}\bm {r}_{2} \tag{58}\]
be the velocity and position of the center of mass of the colliding pair. We define
\[\mathbf{G}=m_{1}\mathbf{r}_{1}\times\mathbf{v}_{1}+m_{2}\mathbf{r}_{2}\times\mathbf{v}_{2} \tag{59}\]
the angular momentum of the pair at the collision. For a merger, the conservation of the angular momentum (_resp._ the momentum) reads \(M\tilde{\mathbf{r}}\times\tilde{\mathbf{v}}=\mathbf{G}\) (_resp._\(\tilde{\mathbf{v}}=\mathbf{v}_{\rm cm}\)). It is interesting to notice that it is impossible to preserve both the momentum and the angular momentum at the collision without considering the spin. Indeed, the conservation of the angular momentum implies that \(\tilde{\mathbf{v}}\) is orthogonal to \(\mathbf{G}\). However, from
\[M\tilde{\mathbf{v}}\cdot\mathbf{G}=M\mathbf{v}_{\rm cm}\cdot\mathbf{G}=m_{1}m_{2}\mathbf{v}_{2} \cdot\Delta\mathbf{r}\times\Delta\mathbf{v}, \tag{60}\]
we conclude that it is possible to conserve both the momentum and the angular momentum only if \(\Delta\mathbf{r}\times\Delta\mathbf{v}=\mathbf{0}\), or equivalently, only if the collision is frontal (\(\mathfrak{V}=0\)). For oblique collisions, the only way to conserve both is to take into account the spin of the moonlets. However, taking into account the spin complicates the treatment of collisions as well as the numerical implementation and slows down the code. Therefore, Ncorpi\(\mathcal{O}\)N does not implement the spin, and if the user chooses to use the fragmentation model or to resolve all collisions by merging, then it must be decided whether the momentum or the angular momentum should be preserved upon impact. If falcON is used to treat mutual interactions, then it makes more sense to preserve the momentum upon collision, since by construction, falcON preserves the total momentum when computing mutual gravity, but does not preserve the total angular momentum.
When the colliding moonlets merge, the momentum is conserved simply by taking \(\tilde{\mathbf{v}}=\mathbf{v}_{\rm cm}\), whereas we achieve the conservation of the momentum with \(\tilde{\mathbf{v}}=\mathbf{v}_{\rm cm}-\bar{m}\bar{\mathbf{v}}^{\prime}/M\) when the tail is reunited into one unique moonlet. Finally, when a full fragmentation occurs, we conserve the total momentum with
\[\tilde{\mathbf{v}}=\mathbf{v}_{\rm cm}-\sum_{k=1}^{\tilde{N}}\frac{\tilde{m}_{2}}{M}\tilde{\mathbf{v}}^{\prime}_{k}. \tag{61}\]
Conserving the angular momentum is not as straightforward, and we present our model for doing so in Appendix D.
is found to be significantly slower than Ncorpi\(\mathcal{O}\)N on a single-core run, it would outperform Ncorpi\(\mathcal{O}\)N if heavily parallelized. Therefore, we plan to upgrade Ncorpi\(\mathcal{O}\)N to a parallelized version in the future, as well as to make it a multi-purpose \(N\)-body integrator (by removing the need for a massive central mass).
Ncorpi\(\mathcal{O}\)N has its own fragmentation module that relies on crater scaling and ejecta models to come up with a realistic outcome for violent collisions between moonlets. However, this model makes assumptions (_e.g._ impactor much smaller than target) that can be hard to reconcile with the reality of a simulation. Furthermore, the directions of the fragments are chosen arbitrarily after a fragmentation, and these issues could reduce the actual degree of realism of Ncorpi\(\mathcal{O}\)N's fragmentation model.
Beyond planetary or satellite formation, disks of debris are also observed by stellar occultation around some trans-Neptunian objects, such as the dwarf planet Haumea (Ortiz et al. (2016)) or the smaller body Quaoar (Morgado et al. (2017)). Both these objects feature rings located outside of their Roche radius, and Ncorpi\(\mathcal{O}\)N could be a relevant tool to understand what mechanisms prevent the rings' material from accreting.
## Acknowledgements
Author J.C. thanks Walter Dehnen for helpful transatlantic discussions about the intricacies of the falcON algorithm. This work was partly supported by NASA grants 80NSSC19K0514 and 80NSSC21K1184. Partial funding was also provided by the Center for Matter at Atomic Pressures (CMAP), the National Science Foundation (NSF) Physics Frontier Center under Award PHY-2020249, and by NSF grant EAR-2237730. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the National Science Foundation. This work was also supported in part by the Alfred P. Sloan Foundation under grant number G202114194.
## Author contributions and competing interests statement
J.C. had the idea of the Ncorpi\(\mathcal{O}\)N software and wrote its code, whereas A.Q. and M.N. provided ideas and insight. The authors declare that they do not have competing interests.
## Appendix A Notations
We gather for convenience all the notations used throughout this work in Table 2.
## Appendix B General orbital dynamics
This appendix focuses on aspects of orbital dynamics that are not \(\mathrm{moonlet}-\mathrm{moonlet}\) interactions (treated in Sect. 3).
### Interactions with the center of mass of the Earth
We consider here the gravitational interactions between the moonlets and the center of mass of the Earth. Let \(\mathbf{r}\) be the position of a moonlet in the geocentric reference frame. Its gravitational potential per unit mass reads
\[V=-\frac{\mathcal{G}M_{\oplus}}{r}, \tag{B.1}\]
where \(\mathcal{G}\) is the gravitational constant. The moonlet's acceleration is given by
\[\ddot{\mathbf{r}}=-\nabla_{\mathbf{r}}V, \tag{B.2}\]
that is,
\[\ddot{\mathbf{r}}=-\frac{\mathcal{G}M_{\oplus}}{r^{3}}\mathbf{r}. \tag{B.3}\]
Ncorpi\(\mathcal{O}\)N uses dimensionless units such that \(R_{\oplus}=M_{\oplus}=1\) and \(\mathcal{G}=4\pi^{2}\). The choice \(\mathcal{G}=4\pi^{2}\), instead of the more common \(\mathcal{G}=1\), ensures that the unit of time is the orbital period at the Earth's surface. The user can change this in the parameter file.
### Earth flattening and interactions with the equatorial bulge
The Earth is not exactly a sphere, and under its own rotation, it tends to take an ellipsoidal shape. The subsequent redistribution of mass modifies its gravitational field, affecting the moonlets. Let \(\zeta(\theta,\varphi)\) be the altitude of the geoid of the Earth, where
\[\begin{split} X&=r\sin\theta\cos\varphi,\\ Y&=r\sin\theta\sin\varphi,\\ Z&=r\cos\theta,\end{split} \tag{B.4}\]
is the relation between the cartesian and spherical coordinates of \((\mathcal{O},\mathbf{I},\mathbf{J},\mathbf{K})\). If \(R_{\oplus}\) denotes the mean radius of the Earth, then the geoid is generally defined as the only equipotential surface such that
\[\frac{1}{4\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\zeta(\theta,\varphi)\sin\theta\, d\theta\, d\varphi=R_{\oplus}, \tag{B.5}\]
that is, as the only equipotential surface whose average height is the mean radius. Expanding the geoid over the spherical harmonics as
\[\begin{split}\zeta(\theta,\varphi)&=R_{\oplus}\left[1+h(\theta,\varphi)\right],\text{ and}\\ h(\theta,\varphi)&=\sum_{l=2}^{+\infty}\sum_{m=-l}^{l}\epsilon_{lm}Y_{lm}(\theta,\varphi)\end{split} \tag{B.6}\]
satisfies Eq. (B.5). For reference, the definition of the spherical harmonics used here is given in Appendix A of Couturier [18]. If the Earth is spherical, then its potential is radial and we take \(\zeta(\theta,\varphi)=R_{\oplus}\), that is, \(\epsilon_{lm}=0\) for all \(l\) and \(m\).
\begin{table}
\begin{tabular}{l c|l c|l c} \hline Notation & Definition & Notation & Definition & Notation & Definition \\ \hline bold & vector or tensor & & \(d/dt\) & \(N\) & number of moonlets \\ \(R_{\oplus}\) & Earth radius & \(M_{\oplus}\) & Earth mass & \(M_{\odot}\) & Sun mass \\ \(R_{\mathcal{U}}\) & Moon radius & \(m_{j}\) & moonlet mass & \(\mathbf{r}\) & moonlet position \\ \(\mathbf{v}\) & moonlet speed & \(\mathcal{G}\) & gravitational constant & \(\zeta\) & geoid altitude \\ \(Y_{lm}\) & spherical harmonic & \(\mathbf{\Omega}\) & Earth rotation & \(\Omega_{\mathrm{c}}\) & \(\left(\mathcal{G}M_{\oplus}/R_{\oplus}^{3}\right)^{1/2}\) \\ \(J_{2}\) & \(2^{\mathrm{nd}}\) zonal harmonic & \(P_{2}(z)\) & \(\frac{1}{2}\left(3z^{2}-1\right)\) & \(\gamma\) & mesh-size \\ \(x\) & number of neighbours & \(s\) & subdivision threshold & \(\bar{\mathbf{s}}_{A}\) & Eq. (7) \\ \(r_{\mathrm{max}}\) & Eqs. (8) \& (11) & \(r_{\mathrm{crit}}\) & Eqs. (9) \& (12) & \(\mathbf{s}_{A}\) & Eq. (10) \\ \(\theta\) & opening angle & \(\mathbf{M}^{(n)}\) & Eq. (20) & \(\mu_{i}\) & \(\mathcal{G}m_{i}\) \\ \(\mathbf{\Delta},\mathbf{R}\) & Fig. 3 & \(g(\mathbf{z})\) & \(1/z\) & \(\mathbf{\nabla}^{(n)}g(\mathbf{R})\) & Eq. (17) \\ \(p\) & Expansion order & \(\odot\) & Eq. (18) & \(\otimes\) & Eq. (19) \\ \(\mathbf{C}^{(n)}\) & Eq. (25) & \(\mathbf{T}^{(n)}\) & Arbitrary tensor & \(N_{\mathrm{cc,cb,cs}}\) & Sect. 3.4.4 \& 3.4.5 \\ \(\Delta\mathbf{r},\Delta\mathbf{v}\) & Eq. (28) & \(\mathfrak{V}\) & Eq. (29) & \(b\) & impact parameter \\ \(\rho\) & moonlet density & \(\alpha\) & Eqs. (33) \& (34) & \(f\) & Eq. (34) \\ \(C,\mu,\nu\) & Eq. (37) & \(M^{\star}(v)\) & Eq. (40) & \(k,C_{1}\) & below Eq. (40) \\ \(v_{\mathrm{esc}}\) & Eq. (41) & \(\bar{m}\) & Eq. (42) & \(\tilde{m}\) & below Eq. (42) \\ \(\beta\) & Eq. (44) & \(\tilde{m}_{j}\) & Eq. (47) & \(\tilde{N}\) & Eq. (48) \\ \(m^{(0)}\) & Sect. 5.3.4 & \(\tilde{\mathbf{r}},\tilde{\mathbf{v}}\) & Appendix D & \(m^{\star}(v)\) & Eq. (49) \\ \(\tau\) & \(\left(3\mu-1\right)/3\mu\) & \(\tilde{\mathbf{r}}_{k}^{\prime},\tilde{\mathbf{v}}_{k}^{\prime}\) & Eqs. (55) \& (56) & \(\tilde{\mathbf{r}}_{k},\tilde{\mathbf{v}}_{k}\) & Eq. (57) \\ \(\mathbf{r}_{\mathrm{cm}},\mathbf{v}_{\mathrm{cm}}\) & Eq. (58) & \(\mathbf{G}\) & Eq. (59) & \(\mathbf{\Lambda}\) & \(\mathbf{s}_{B}-\mathbf{s}_{b}\) \\ \(M\) & \(m_{1}+m_{2}\) & \(\tilde{\mathbf{g}},\tilde{\mathbf{s}},\tilde{\mathbf{u}}\) & Eq. (12) & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Notations used in this paper, ordered roughly by first appearance. Notations that are only used right after their definition are not included here. \(\tau\) (stigma) and \(\mathfrak{V}\) (qoppa) are archaic Greek letters.
As for the geoid, we write the potential raised by the redistribution of mass within the Earth as (_e.g._ Boue et al. [19])
\[\begin{split}& V(r,\theta,\varphi)=-\frac{\mathcal{G}M_{\oplus}}{r} \left[1+\hat{v}(r,\theta,\varphi)\right]+V_{\Omega}(r,\theta),\\ &\hat{v}(r,\theta,\varphi)=\sum_{l=2}^{+\infty}\sum_{m=-l}^{l} \left(\frac{R_{\oplus}}{r}\right)^{l}\hat{V}_{lm}Y_{lm}(\theta,\varphi),\end{split}\] (B.7)
where \(V_{\Omega}(r,\theta)=\Omega^{2}r^{2}\left(P_{2}(\cos\theta)-1\right)/3\) is the potential raised by the rotation itself. We denote \(\Omega_{\mathrm{c}}=\left(\mathcal{G}M_{\oplus}/R_{\oplus}^{3}\right)^{1/2}\) the Keplerian frequency at Earth's surface. With this notation, the potential raised by the Earth deformed by its own rotation can be rewritten
\[\begin{split}& V(r,\theta,\varphi)=-\frac{\mathcal{G}M_{\oplus}}{r} \left[1+v(r,\theta,\varphi)\right]-\frac{1}{3}\Omega^{2}r^{2},\\ & v(r,\theta,\varphi)=\sum_{l=2}^{+\infty}\sum_{m=-l}^{l}\left( \frac{R_{\oplus}}{r}\right)^{l}V_{lm}Y_{lm}(\theta,\varphi),\end{split}\] (B.8)
where \(V_{lm}=\hat{V}_{lm}\) if \((l,m)\neq(2,0)\) and
\[V_{20}=\hat{V}_{20}-\frac{1}{3}\frac{\Omega^{2}}{\Omega_{\mathrm{c}}^{2}}\frac {r^{5}}{R_{\oplus}^{5}}.\] (B.9)
If we assume \(h\ll 1\) and \(v\ll 1\) (this is equivalent to \(\Omega^{2}\ll\Omega_{\mathrm{c}}^{2}\)), then it is easy to verify from the definition of the geoid that (Wahr [20], Sect. 4.3.1)
\[\epsilon_{lm}=V_{lm}\big{|}_{r=R_{\oplus}}.\] (B.10)
This gives a relation between the figure of the Earth (the geoid) and the potential raised by the redistribution of mass. If we limit ourselves to the quadrupolar order and if we assume that the problem does not depend on \(\varphi\) (axisymmetry), then all the \(V_{lm}\) and \(\epsilon_{lm}\) vanish for \((l,m)\neq(2,0)\). For the fluid Earth, it can be shown that (Couturier [18], Sect. 5.2.1; Wahr [20], Eq. (4.24))
\[\epsilon_{20}=-\frac{5}{6}\frac{\Omega^{2}}{\Omega_{\mathrm{c}}^{2}}.\] (B.11)
The \(J_{2}\) coefficient is defined as \(J_{2}=-\hat{V}_{20}\) (with the convention of Appendix A of Couturier [18] for the spherical harmonics). For the fluid Earth, Eqs. (B.9), (B.10) and (B.11) yield
\[J_{2}=\frac{1}{2}\frac{\Omega^{2}}{\Omega_{\mathrm{c}}^{2}}.\] (B.12)
According to Eq. (B.7), a moonlet orbiting the Earth at position \(\mathbf{r}\) in the geocentric reference frame, feels, from the equatorial bulge, the potential per unit mass27
Footnote 27: Due to the axisymmetry, we can go to the geocentric reference frame by simply removing \(V_{\Omega}\) in Eq. (B.7).
\[V_{J_{2}}=\frac{\mathcal{G}M_{\oplus}R_{\oplus}^{2}}{r^{3}}J_{2}P_{2}(\cos \theta)=-\frac{\mathcal{G}M_{\oplus}R_{\oplus}^{2}}{2r^{5}}J_{2}\left(r^{2}-3 \left(\mathbf{k}\cdot\mathbf{r}\right)^{2}\right).\] (B.13)
Writing \(\mathbf{r}=x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\), we have \(\mathbf{k}\cdot\mathbf{r}=z\), and then using Eq. (B.2), the contribution of the equatorial bulge of the Earth to the acceleration of a moonlet takes the form
\[\ddot{\mathbf{r}}=\frac{\mathcal{G}M_{\oplus}R_{\oplus}^{2}J_{2}}{r^{5}}\left[\frac{15z^{2}-3r^{2}}{2r^{2}}\mathbf{r}-3z\mathbf{k}\right].\] (B.14)
The user chooses the sidereal rotation period of the Earth (or central body) in the parameter file of Ncorpi\(\mathcal{O}\)N. Then, Eq. (B.12) and the fluid approximation are used to determine the \(J_{2}\) of the central body.
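As an illustration, the sketch below evaluates Eq. (B.14) in the dimensionless units of Ncorpi\(\mathcal{O}\)N, with \(J_{2}\) obtained from the spin period through Eq. (B.12); the function and the numbers in the example are illustrative and do not reproduce the actual implementation.

```c
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846
#define G  (4.0 * PI * PI)   /* dimensionless units: R_Earth = M_Earth = 1 */

/* Acceleration of a moonlet due to the equatorial bulge, Eq. (B.14). J2 is
 * obtained from the sidereal spin period T (in units of the surface orbital
 * period) through Eq. (B.12), i.e. J2 = (Omega/Omega_c)^2 / 2 = 1/(2 T^2). */
static void j2_acc(const double r[3], double T, double acc[3])
{
    double J2  = 0.5 / (T * T);
    double d2  = r[0]*r[0] + r[1]*r[1] + r[2]*r[2];
    double d   = sqrt(d2), z = r[2];
    double pre = G * J2 / pow(d, 5.0);             /* R_Earth = M_Earth = 1 */
    double rad = (15.0 * z * z - 3.0 * d2) / (2.0 * d2);
    for (int i = 0; i < 3; i++) acc[i] = pre * rad * r[i];
    acc[2] -= pre * 3.0 * z;
}

int main(void)
{
    /* Moonlet at 3 Earth radii, 45 degrees above the equatorial plane, with
     * the central body spinning in 5 surface orbital periods (illustrative). */
    double r[3] = { 3.0 / sqrt(2.0), 0.0, 3.0 / sqrt(2.0) }, a[3];
    j2_acc(r, 5.0, a);
    printf("a = (%g, %g, %g)\n", a[0], a[1], a[2]);
    return 0;
}
```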
### Interaction with the Sun
The interaction between a moonlet, located at \(\mathbf{r}\), and the Sun, located at \(\mathbf{r}_{\odot}\) in the geocentric reference frame can be taken into account in the model by adding to the moonlet the potential per unit mass
\[V_{\odot}=-\mathcal{G}M_{\odot}\left(\frac{1}{\left|\mathbf{r}-\mathbf{r}_{\odot}\right|}-\frac{\mathbf{r}\cdot\mathbf{r}_{\odot}}{r_{\odot}^{3}}\right). \tag{B.15}\]
To the quadrupolar order, this gives
\[V_{\odot}=-\frac{\mathcal{G}M_{\odot}}{2r_{\odot}^{3}}\left(3\frac{\left(\mathbf{r}\cdot\mathbf{r}_{\odot}\right)^{2}}{r_{\odot}^{2}}-r^{2}\right). \tag{B.16}\]
Equation (B.2) yields, for the acceleration of the moonlet,
\[\ddot{\mathbf{r}}=\frac{\mathcal{G}M_{\odot}}{r_{\odot}^{3}}\left(3\frac{\mathbf{r}\cdot\mathbf{r}_{\odot}}{r_{\odot}^{2}}\mathbf{r}_{\odot}-\mathbf{r}\right). \tag{B.17}\]
For simplicity, we assume that the Earth orbits the Sun on a circular trajectory. Without loss of generality, we can assume that the Earth's longitude of the ascending node is \(0\) (this can be achieved by simply pointing the unit vector \(\mathbf{i}\) of the geocentric frame towards the ascending node). Similarly, for a circular orbit, we can arbitrarily choose \(\omega_{\odot}=0\). We denote \(a_{\odot}\) the semi-major axis of the Earth orbit and \(\epsilon\) the obliquity of the Earth. With these notations, the vector \(\mathbf{r}_{\odot}\) reads
\[\mathbf{r}_{\odot}=\begin{pmatrix}x_{\odot}\\ y_{\odot}\\ z_{\odot}\end{pmatrix}=\begin{pmatrix}1&0&0\\ 0&\cos\epsilon&-\sin\epsilon\\ 0&\sin\epsilon&\cos\epsilon\end{pmatrix}\begin{pmatrix}a_{\odot}\cos\lambda_{\odot}\\ a_{\odot}\sin\lambda_{\odot}\\ 0\end{pmatrix}=\begin{pmatrix}a_{\odot}\cos\lambda_{\odot}\\ a_{\odot}\sin\lambda_{\odot}\cos\epsilon\\ a_{\odot}\sin\lambda_{\odot}\sin\epsilon\end{pmatrix}, \tag{B.18}\]
where \(\lambda_{\odot}=n_{\odot}t=t\sqrt{\mathcal{G}M_{\odot}/a_{\odot}^{3}}\).
## Appendix C Multipole moment \(\mathbf{M^{(n)}(s_{B})}\) of a cell from those of its children
We give here for \(n\leq 5\) the expression of the multipole moment \(\mathbf{M^{(n)}(s_{B})}\) of a parent cell (Eq. 20) from the multipole moments \(\mathbf{m}^{(n)}(s_{b})\) of its children cells. The notations \(\mathbf{s}_{B}\) and \(\mathbf{s}_{b}\) are the expansion centers of the parent and of one of its children. We denote \(\mathbf{\Lambda}=\mathbf{s}_{B}-\mathbf{s}_{b}=(\Lambda_{1},\Lambda_{2},\Lambda_{3})\) and we assume that the expansion centers are the barycentres, leading to the simplification \(\mathbf{M}^{(1)}(\mathbf{s}_{B})=\mathbf{m}^{(1)}(\mathbf{s}_{b})=\mathbf{0}\). The contribution \(\mathbf{M}^{(n)}_{b\to B}\) from child \(b\) to the multipole moment of its parent \(B\) is given by
\[M^{(0)}_{b\to B}=m^{(0)}, \tag{C.1}\]
\[\mathbf{M}^{(2)}_{b\to B}=\mathbf{m}^{(2)}+m^{(0)}\mathbf{\Lambda}\otimes\mathbf{\Lambda}, \tag{C.2}\]
\[\left[\mathbf{M}^{(3)}_{b\to B}\right]_{ijk}=m^{(3)}_{ijk}-\left(m^{(2)}_{ij}\Lambda_{k}+m^{(2)}_{ik}\Lambda_{j}+m^{(2)}_{jk}\Lambda_{i}\right)-m^{(0)}\Lambda_{i}\Lambda_{j}\Lambda_{k}, \tag{C.3}\]
\[\begin{split}\left[\mathbf{M}^{(4)}_{b\to B}\right]_{ijkl}&=m^{(4)}_{ijkl}-\left(m^{(3)}_{ijk}\Lambda_{l}+m^{(3)}_{ijl}\Lambda_{k}+m^{(3)}_{ikl}\Lambda_{j}+m^{(3)}_{jkl}\Lambda_{i}\right)+m^{(0)}\Lambda_{i}\Lambda_{j}\Lambda_{k}\Lambda_{l}\\ &+\left(m^{(2)}_{ij}\Lambda_{k}\Lambda_{l}+m^{(2)}_{ik}\Lambda_{j}\Lambda_{l}+m^{(2)}_{il}\Lambda_{j}\Lambda_{k}+m^{(2)}_{jk}\Lambda_{i}\Lambda_{l}+m^{(2)}_{jl}\Lambda_{i}\Lambda_{k}+m^{(2)}_{kl}\Lambda_{i}\Lambda_{j}\right),\end{split} \tag{C.4}\]
\[\begin{split}\left[\mathbf{M}^{(5)}_{b\to B}\right]_{ijklm}&=m^{(5)}_{ijklm}-\left(m^{(4)}_{ijkl}\Lambda_{m}+m^{(4)}_{ijkm}\Lambda_{l}+m^{(4)}_{ijlm}\Lambda_{k}+m^{(4)}_{iklm}\Lambda_{j}+m^{(4)}_{jklm}\Lambda_{i}\right)\\ &+\left(m^{(3)}_{ijk}\Lambda_{l}\Lambda_{m}+m^{(3)}_{ijl}\Lambda_{k}\Lambda_{m}+m^{(3)}_{ijm}\Lambda_{k}\Lambda_{l}+m^{(3)}_{ikl}\Lambda_{j}\Lambda_{m}+m^{(3)}_{ikm}\Lambda_{j}\Lambda_{l}+m^{(3)}_{ilm}\Lambda_{j}\Lambda_{k}\right.\\ &\left.+m^{(3)}_{jkl}\Lambda_{i}\Lambda_{m}+m^{(3)}_{jkm}\Lambda_{i}\Lambda_{l}+m^{(3)}_{jlm}\Lambda_{i}\Lambda_{k}+m^{(3)}_{klm}\Lambda_{i}\Lambda_{j}\right)-\left(m^{(2)}_{ij}\Lambda_{k}\Lambda_{l}\Lambda_{m}+m^{(2)}_{ik}\Lambda_{j}\Lambda_{l}\Lambda_{m}\right.\\ &\left.+m^{(2)}_{il}\Lambda_{j}\Lambda_{k}\Lambda_{m}+m^{(2)}_{im}\Lambda_{j}\Lambda_{k}\Lambda_{l}+m^{(2)}_{jk}\Lambda_{i}\Lambda_{l}\Lambda_{m}+m^{(2)}_{jl}\Lambda_{i}\Lambda_{k}\Lambda_{m}+m^{(2)}_{jm}\Lambda_{i}\Lambda_{k}\Lambda_{l}\right.\\ &\left.+m^{(2)}_{kl}\Lambda_{i}\Lambda_{j}\Lambda_{m}+m^{(2)}_{km}\Lambda_{i}\Lambda_{j}\Lambda_{l}+m^{(2)}_{lm}\Lambda_{i}\Lambda_{j}\Lambda_{k}\right)-m^{(0)}\Lambda_{i}\Lambda_{j}\Lambda_{k}\Lambda_{l}\Lambda_{m},\end{split} \tag{C.5}\]
where \((i,j,k,l,m)\in\{1,2,3\}\). The multipole moment of the parent is then obtained by summing over its children
\[\mathbf{M}^{(n)}(\mathbf{s}_{B})=\sum_{\text{children }b}\mathbf{M}_{b\to B}^{(n)}. \tag{120}\]
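To make the bookkeeping concrete, the sketch below transcribes Eqs. (17)-(19) and the child sum of Eq. (120) for the orders \(n\leq 3\) only, assuming that the dipole moments vanish (barycentric expansion centres). It is an illustrative numpy sketch rather than the Ncorpi\(\mathcal{O}\)N implementation; the order-4 and order-5 terms follow the same pattern.

```
import numpy as np

def child_to_parent(m0, m2, m3, s_b, s_B):
    """Shift the moments of orders 0, 2 and 3 of a child cell centred at s_b
    to the parent centre s_B (Eqs. 17-19), assuming vanishing dipoles."""
    L = s_B - s_b                                    # Lambda = s_B - s_b
    M0 = m0
    M2 = m2 + m0 * np.outer(L, L)
    M3 = (m3
          - np.einsum('ij,k->ijk', m2, L)
          - np.einsum('ik,j->ijk', m2, L)
          - np.einsum('jk,i->ijk', m2, L)
          - m0 * np.einsum('i,j,k->ijk', L, L, L))
    return M0, M2, M3

def parent_moments(children):
    """Sum the shifted moments of all children (Eq. 120).
    `children` is a list of (m0, m2, m3, s_b) tuples."""
    total_mass = sum(c[0] for c in children)
    s_B = sum(c[0] * c[3] for c in children) / total_mass   # parent barycentre
    M0, M2, M3 = 0.0, np.zeros((3, 3)), np.zeros((3, 3, 3))
    for m0, m2, m3, s_b in children:
        c0, c2, c3 = child_to_parent(m0, m2, m3, s_b, s_B)
        M0, M2, M3 = M0 + c0, M2 + c2, M3 + c3
    return s_B, M0, M2, M3
```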
## Appendix D Conservation of the angular momentum upon fragmenting or merging impact
We present here the method that Ncorpi\(\mathcal{O}\)N uses to preserve the angular momentum up to machine precision, when the user requires it. We recall that
\[\mathbf{v}_{\text{cm}}=\frac{m_{1}}{M}\mathbf{v}_{\mathbf{1}}+\frac{m_{2}}{M}\mathbf{v}_{\mathbf{2 }}\quad\text{and}\quad\mathbf{r}_{\text{cm}}=\frac{m_{1}}{M}\mathbf{r}_{\mathbf{1}}+\frac {m_{2}}{M}\mathbf{r}_{\mathbf{2}}, \tag{121}\]
are the velocity and position of the center of mass of the colliding pair, whereas
\[\mathbf{G}=m_{1}\mathbf{r}_{1}\times\mathbf{v}_{1}+m_{2}\mathbf{r}_{2}\times\mathbf{v}_{2}, \tag{122}\]
is the angular momentum to be conserved.
### Case of a merger
If the collision results in a merger, then the outcome is a single moonlet of mass \(M=m_{1}+m_{2}\). The conservation of the total angular momentum reads
\[\mathbf{G}=M\tilde{\mathbf{r}}\times\tilde{\mathbf{v}}. \tag{123}\]
Equation (123) only has solutions if \(\tilde{\mathbf{r}}\) is perpendicular to \(\mathbf{G}\). Therefore, we write
\[\tilde{\mathbf{r}}=\mathbf{r}_{\text{cm}}+\delta\tilde{\mathbf{r}}, \tag{124}\]
and we choose the smallest possible value of \(\delta\tilde{\mathbf{r}}\) that verifies
\[\mathbf{G}\cdot\tilde{\mathbf{r}}=\mathbf{G}\cdot\delta\tilde{\mathbf{r}}+\frac{m_{1}m_{2}}{M} \mathbf{r}_{2}\cdot\Delta\mathbf{r}\times\Delta\mathbf{v}=0. \tag{125}\]
Equation (125) is of the form \(\mathbf{a}\cdot\mathbf{w}=\mathfrak{b}\) with unknown \(\mathbf{w}=\delta\tilde{\mathbf{r}}\). We are led to minimize \(\left|\mathbf{w}\right|\) under the constraint \(\mathbf{a}\cdot\mathbf{w}=\mathfrak{b}\). We write
\[\mathcal{L}(\lambda,\mathbf{w})=\left|\mathbf{w}\right|^{2}+\lambda\left(\mathbf{a}\cdot \mathbf{w}-\mathfrak{b}\right), \tag{126}\]
where \(\lambda\) is a Lagrange multiplier. The gradient of \(\mathcal{L}\) vanishes when \(\mathbf{w}=\mathfrak{ba}/\mathfrak{a}^{2}\), and therefore we take
\[\delta\tilde{\mathbf{r}}=\frac{m_{1}m_{2}}{M}\mathbf{r}_{2}\cdot\left(\Delta\mathbf{v} \times\Delta\mathbf{r}\right)\frac{\mathbf{G}}{G^{2}}. \tag{127}\]
Once \(\tilde{\mathbf{r}}\) is known, Eq. (123) has the form \(\mathbf{a}\times\mathbf{w}=\mathbf{b}\) with unknown \(\mathbf{w}=\tilde{\mathbf{v}}\). Since \(\mathbf{a}\cdot\mathbf{b}=0\), this equation has solutions given by28\(\mathbf{w}=\left(\mathbf{b}\times\mathbf{a}\right)/a^{2}+\alpha\mathbf{a}\) for any \(\alpha\in\mathbb{R}\). Therefore, we take
Footnote 28: This comes from \(\mathbf{a}\times\left(\mathbf{b}\times\mathbf{a}\right)=a^{2}\mathbf{b}-\left(\mathbf{a}\cdot\mathbf{ b}\right)\mathbf{a}\).
\[\tilde{\mathbf{v}}=\frac{1}{M\tilde{r}^{2}}\mathbf{G}\times\tilde{\mathbf{r}}+\alpha\tilde{ \mathbf{r}}, \tag{128}\]
where we choose the real number \(\alpha\) in order to minimize \(\left|\tilde{\mathbf{v}}-\mathbf{v}_{\text{cm}}\right|\). We have
\[\left|\tilde{\mathbf{v}}-\mathbf{v}_{\text{cm}}\right|^{2}=\alpha^{2}\tilde{r}^{2}-2 \alpha\tilde{\mathbf{r}}\cdot\mathbf{v}_{\text{cm}}+K, \tag{129}\]
where \(K\) does not depend on \(\alpha\), and the minimal value of \(\left|\tilde{\mathbf{v}}-\mathbf{v}_{\text{cm}}\right|\) is thus reached at \(\alpha=\left(\tilde{\mathbf{r}}\cdot\mathbf{v}_{\text{cm}}\right)/\tilde{r}^{2}\). Finally, we achieve the conservation of the total angular momentum by giving to the unique moonlet resulting from the merger the position and velocity
\[\begin{cases}\tilde{\mathbf{r}}=\mathbf{r}_{\text{cm}}+\frac{m_{1}m_{2}}{M}\mathbf{r}_{2} \cdot\left(\Delta\mathbf{v}\times\Delta\mathbf{r}\right)\frac{\mathbf{G}}{G^{2}},\\ \tilde{\mathbf{v}}=\frac{1}{M\tilde{r}^{2}}\mathbf{G}\times\tilde{\mathbf{r}}+\frac{ \tilde{\mathbf{r}}\cdot\mathbf{v}_{\text{cm}}}{\tilde{r}^{2}}\tilde{\mathbf{r}}.\end{cases} \tag{130}\]
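The two corrections of Eq. (130) translate almost line by line into code. The sketch below (plain numpy, not the actual Ncorpi\(\mathcal{O}\)N source) takes the masses, positions and velocities of the colliding pair and returns the position and velocity of the merged moonlet; note that the combination \(\mathbf{r}_{2}\cdot(\Delta\mathbf{v}\times\Delta\mathbf{r})\) is unchanged when both \(\Delta\mathbf{r}\) and \(\Delta\mathbf{v}\) flip sign, so the sign convention chosen for the relative coordinates is immaterial here.

```
import numpy as np

def merge_with_angular_momentum(m1, m2, r1, v1, r2, v2):
    """Position and velocity of the moonlet produced by a merger, chosen so that
    the total angular momentum G is conserved exactly (Eq. 130)."""
    M = m1 + m2
    r_cm = (m1 * r1 + m2 * r2) / M
    v_cm = (m1 * v1 + m2 * v2) / M
    G = m1 * np.cross(r1, v1) + m2 * np.cross(r2, v2)          # Eq. (122)
    dr, dv = r1 - r2, v1 - v2
    # Smallest shift of the centre of mass making r_tilde perpendicular to G (Eq. 127)
    delta_r = (m1 * m2 / M) * np.dot(r2, np.cross(dv, dr)) * G / np.dot(G, G)
    r_t = r_cm + delta_r
    # Velocity solving G = M r_tilde x v_tilde, closest to v_cm (Eq. 128, optimal alpha)
    alpha = np.dot(r_t, v_cm) / np.dot(r_t, r_t)
    v_t = np.cross(G, r_t) / (M * np.dot(r_t, r_t)) + alpha * r_t
    return r_t, v_t
```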
### Case of a fragmentation
If the collision results in a full fragmentation (\(\tilde{m}_{2}\geq m^{(0)}\)), then the conservation of the total angular momentum reads
\[\mathbf{G}=\tilde{m}\mathbf{\tilde{r}}\times\tilde{\mathbf{v}}+\tilde{m}_{2}\sum_{k=1}^{ \tilde{N}}\left(\mathbf{\tilde{r}}+\mathbf{\tilde{r}}_{k}^{\prime}\right)\times\left( \mathbf{\tilde{v}}+\mathbf{\tilde{v}}_{k}^{\prime}\right). \tag{11}\]
In the case of a partial fragmentation (\(\tilde{m}_{2}<m^{(0)}\leq\tilde{m}\)), the tail is reunited into a single moonlet and the sum in Eq. (11) has only one term. We define29
Footnote 29: For a partial fragmentation, the sums are reduced to one term and \(\tilde{m}_{2}\) has to be replaced by \(\tilde{m}\).
\[\tilde{\mathbf{g}}=\tilde{m}_{2}\sum_{k=1}^{\tilde{N}}\tilde{\mathbf{r}}_{k}^{\prime} \times\tilde{\mathbf{v}}_{k}^{\prime},\qquad\tilde{\mathbf{s}}=\tilde{m}_{2}\sum_{k=1 }^{\tilde{N}}\tilde{\mathbf{r}}_{k}^{\prime},\qquad\tilde{\mathbf{u}}=\tilde{m}_{2} \sum_{k=1}^{\tilde{N}}\tilde{\mathbf{v}}_{k}^{\prime}, \tag{12}\]
and Eq. (11) can be rewritten
\[\mathbf{G}=M\mathbf{\tilde{r}}\times\tilde{\mathbf{v}}+\mathbf{\tilde{r}}\times\tilde{\mathbf{u}} +\tilde{\mathbf{s}}\times\tilde{\mathbf{v}}+\tilde{\mathbf{g}}, \tag{13}\]
with unknowns \(\mathbf{\tilde{r}}\) and \(\tilde{\mathbf{v}}\). If \(\mathbf{\tilde{r}}\) is known, then \(\tilde{\mathbf{v}}\) is given by the equation
\[\mathbf{a}\times\tilde{\mathbf{v}}=\mathbf{b},\quad\text{where}\quad\mathbf{a}=M\mathbf{\tilde{r }}+\tilde{\mathbf{s}}\ \ \text{and}\ \ \mathbf{b}=\mathbf{G}-\mathbf{\tilde{r}}\times\tilde{\mathbf{u}}-\tilde{\mathbf{g}}. \tag{14}\]
Equation (14) only has solutions if \(\mathbf{a}\cdot\mathbf{b}=0\), and we first constrain \(\tilde{\mathbf{r}}\) with the equation \(\mathbf{a}\cdot\mathbf{b}=0\). Then, we obtain \(\tilde{\mathbf{v}}\) from Eq. (14). There are infinitely many choices for both \(\tilde{\mathbf{r}}\) and \(\tilde{\mathbf{v}}\), and in each case we choose them so as to be as close as possible to conserving the total momentum, that is, as close as possible to
\[\begin{split}&\tilde{m}\mathbf{\tilde{r}}+\tilde{m}_{2}\sum_{k=1}^{ \tilde{N}}\left(\mathbf{\tilde{r}}+\mathbf{\tilde{r}}_{k}^{\prime}\right)=M\mathbf{\tilde {r}}+\tilde{\mathbf{s}}=M\mathbf{r}_{\text{cm}},\\ &\tilde{m}\mathbf{\tilde{v}}+\tilde{m}_{2}\sum_{k=1}^{\tilde{N}} \left(\mathbf{\tilde{v}}+\mathbf{\tilde{v}}_{k}^{\prime}\right)=M\mathbf{\tilde{v}}+ \tilde{\mathbf{u}}=M\mathbf{v}_{\text{cm}}.\end{split} \tag{15}\]
In order to determine \(\tilde{\mathbf{r}}\), we thus write \(M\tilde{\mathbf{r}}+\tilde{\mathbf{s}}=M\left(\mathbf{r}_{\text{cm}}+\delta\tilde{\mathbf{r}}\right)\) and we choose the smallest \(\delta\tilde{\mathbf{r}}\) that verifies \(\mathbf{a}\cdot\mathbf{b}=0\). We have
\[\mathbf{a}\cdot\mathbf{b}=(M\mathbf{\tilde{r}}+\tilde{\mathbf{s}})\cdot(\mathbf{G}-\tilde{\mathbf{g}} )+\mathbf{\tilde{r}}\cdot(\tilde{\mathbf{s}}\times\tilde{\mathbf{u}})=(\mathbf{r}_{\text{cm}} +\delta\tilde{\mathbf{r}})\cdot(M\mathbf{G}-M\tilde{\mathbf{g}}+\tilde{\mathbf{s}}\times \tilde{\mathbf{u}})=0. \tag{16}\]
We are left to minimize \(|\delta\tilde{\mathbf{r}}|\) under a constraint of the form \(\mathbf{a}\cdot\delta\tilde{\mathbf{r}}=\mathfrak{b}\). This was already done in the merger case with the theory of Lagrange multipliers, and we have
\[\delta\tilde{\mathbf{r}}=\frac{\mathfrak{b}\mathbf{a}}{\mathfrak{a}^{2}}=-\frac{\left( \mathbf{r}_{\text{cm}}\cdot\mathbf{a}\right)\mathbf{a}}{\mathfrak{a}^{2}}\quad\text{where} \ \ \mathbf{a}=M\left(\mathbf{G}-\tilde{\mathbf{g}}\right)+\tilde{\mathbf{s}}\times\tilde{\mathbf{u}}. \tag{17}\]
Now that \(\tilde{\mathbf{r}}\) is known, we can obtain \(\tilde{\mathbf{v}}\) from Eq. (14). The solutions of Eq. (14) are given by28
Footnote 28: This comes from \(\mathbf{a}\times\left(\mathbf{b}\times\mathbf{a}\right)=a^{2}\mathbf{b}-\left(\mathbf{a}\cdot\mathbf{b}\right)\mathbf{a}\).
\[\tilde{\mathbf{v}}=\frac{\mathbf{b}\times\mathbf{a}}{a^{2}}+\alpha\mathbf{a}, \tag{18}\]
where \(\alpha\in\mathbb{R}\). We choose for the real number \(\alpha\) the value that is closest to preserving the total momentum, that is, we choose the value of \(\alpha\) that minimizes \(|M\left(\tilde{\mathbf{v}}-\mathbf{v}_{\text{cm}}\right)+\tilde{\mathbf{u}}|\) (see Eq. (15)). We have
\[\frac{1}{M^{2}}\left|M\left(\tilde{\mathbf{v}}-\mathbf{v}_{\text{cm}}\right)+\tilde{ \mathbf{u}}\right|^{2}=a^{2}\alpha^{2}-2\alpha\left(\mathbf{v}_{\text{cm}}-\frac{\tilde{ \mathbf{u}}}{M}\right)\cdot\mathbf{a}+K, \tag{19}\]
where \(K\) does not depend on \(\alpha\) and therefore, we choose
\[\alpha=\frac{\left(\mathbf{v}_{\text{cm}}-\tilde{\mathbf{u}}/M\right)\cdot\mathbf{a}}{a^{2}}. \tag{20}\]
We uniquely determined \(\tilde{\mathbf{r}}\) and \(\tilde{\mathbf{v}}\) in such a way that the total angular momentum is conserved upon impact up to machine precision, whether the collision results in a merger or in a fragmentation. |
2309.07870 | Agents: An Open-source Framework for Autonomous Language Agents | Recent advances on large language models (LLMs) enable researchers and
developers to build autonomous language agents that can automatically solve
various tasks and interact with environments, humans, and other agents using
natural language interfaces. We consider language agents as a promising
direction towards artificial general intelligence and release Agents, an
open-source library with the goal of opening up these advances to a wider
non-specialist audience. Agents is carefully engineered to support important
features including planning, memory, tool usage, multi-agent communication, and
fine-grained symbolic control. Agents is user-friendly as it enables
non-specialists to build, customize, test, tune, and deploy state-of-the-art
autonomous language agents without much coding. The library is also
research-friendly as its modularized design makes it easily extensible for
researchers. Agents is available at https://github.com/aiwaves-cn/agents. | Wangchunshu Zhou, Yuchen Eleanor Jiang, Long Li, Jialong Wu, Tiannan Wang, Shi Qiu, Jintian Zhang, Jing Chen, Ruipu Wu, Shuai Wang, Shiding Zhu, Jiyu Chen, Wentao Zhang, Xiangru Tang, Ningyu Zhang, Huajun Chen, Peng Cui, Mrinmaya Sachan | 2023-09-14T17:18:25Z | http://arxiv.org/abs/2309.07870v3 | # Agents: An Open-source Framework
###### Abstract
Recent advances on large language models (LLMs) enable researchers and developers to build autonomous language agents that can automatically solve various tasks and interact with environments, humans, and other agents using natural language interfaces. We consider language agents as a promising direction towards artificial general intelligence and release Agents, an open-source library with the goal of opening up these advances to a wider non-specialist audience. Agents is carefully engineered to support important features including _planning_, _memory_, _tool usage_, _multi-agent communication_, and _fine-grained symbolic control_. Agents is user-friendly as it enables non-specialists to build, customize, test, tune, and deploy state-of-the-art autonomous language agents without much coding. The library is also research-friendly as its modularized design makes it easily extensible for researchers. Agents is available at [https://github.com/aiwaves-cn/agents](https://github.com/aiwaves-cn/agents).
## 1 Introduction
Footnote †: _Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents [Franklin and Graesser, 1996]_
Large Language Models (LLMs) [Brown et al., 2020, Ouyang et al., 2022, OpenAI, 2023] such as ChatGPT make it possible to build autonomous agents that can automatically solve complicated tasks and interact with the environment, humans, or other agents by perceiving, reasoning, planning, and acting in the world [Weng, 2023]. Language agents are a promising step towards artificial general intelligence (AGI) and can help reduce human effort in certain roles such as customer service, consulting, programming, writing, teaching, etc. Some recent demos such as AutoGPT [Richards and et al., 2023] and BabyAGI [Nakajima, 2023] have demonstrated the potential of language agents and have gained massive interest from developers, researchers, as well as more non-technical audiences.
While intriguing, most of these demos or repositories are not friendly for _customizing_, _tuning_, and _deploying_ new agents even for experienced developers or researchers. This limitation comes from the fact that these demos are typically proof-of-concepts showcasing the possibility of language agents, instead of being larger frameworks that can be used to build and customize language agents over time. Moreover, most of these open-source repositories only cover a small portion of the core abilities of language agents including task decomposition [Nye et al., 2022], long-short term memory [Zhou
et al., 2023a], web navigation [Nakano et al., 2021], tool usage [Schick et al., 2023], and multi-agent communication [Foerster et al., 2016]. In addition, most (if not all) existing language agent frameworks solely depend on a short task description and rely completely on the abilities of LLMs to plan and act. This results in significant randomness and inconsistency across different runs, delivering an unsatisfactory user experience and making it hard to customize and tune language agents.
We believe the aforementioned limitations are important barriers for recent advances in language agents to reach a broader non-specialist audience and impact our society in a positive way. To this end, we release Agents, an open-source library and framework for language agents dedicated to supporting LLM-powered language agents. Agents's philosophy is to make customizing, tuning, and deploying language agents as simple as possible even for non-specialists while also remaining easily extensible for developers and researchers. In addition, the library also provides the following key features that make it a versatile framework for language agents:
Long-short term memory: According to Franklin and Graesser [1996], a key difference between autonomous agents and computer programs (or machine learning models) is that machine learning models only need to respond to a single input/query, while autonomous agents need to interact with environments or other agents over time. Therefore, the ability to maintain long-short term memory is very important for autonomous agents. Agents integrates the memory components in [Zhou et al., 2023a] and enables language agents to store and retrieve long-term memory with VectorDB and semantic search, and regularly update a short-term working memory with a scratchpad. Users can choose to equip an agent with long-term memory, short-term memory, or both of them by simply filling in a field in the config file.
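As a rough illustration of this toggle, a config fragment could look like the following; the field names are hypothetical and only meant to convey the idea, and the actual schema should be taken from the Agents documentation.

```
# Hypothetical config fragment (field names are illustrative only)
agent_config = {
    "name": "customer_service_agent",
    "memory": {
        "long_term": True,   # action history embedded in a VectorDB, retrieved by semantic search
        "short_term": True,  # scratchpad working memory, regularly updated by the LLM
    },
}
```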
Tool usage & Web navigation: Another important feature for autonomous agents is the ability to use external tools and surf the internet. This is especially important for language agents because they rely on the language interface and thus need to use external tools to interact with environments beyond language communication and navigate the web to gather useful information. Following [Patil et al., 2023], Agents supports a few commonly used external APIs and provides an abstract class that enables developers to integrate other tools with ease. We also enable agents to navigate the internet and gather information by defining web search and web navigation as specialized APIs.
Multi-agent communication: In addition to single-agent abilities, Agents also supports customizing multi-agent systems, which can be helpful for certain applications such as games [Park et al., 2023], social experiments [Li et al., 2023], software development [Qian et al., 2023], etc. One new feature for multi-agent communication in Agents is the "dynamic scheduling" feature. Instead of
Figure 1: Illustration of the Agents framework.
scheduling the order for the agents to act with hard-coded rules, dynamic scheduling provides an option to define a controller agent that acts as a "moderator" and decides which agent to perform the next action considering their roles and the current history. Dynamic scheduling has the potential to make communication between multiple agents more natural and flexible. Developers can easily customize the controller by specifying its rule in the config file using natural language.
Human-agent interaction: One limitation of existing agent frameworks is that while they enable agents, or multi-agents, to automatically solve tasks, it is difficult or even impossible for human users to interact with the agents, especially in the multi-agent scenario. Agents seamlessly supports human-agent interaction in both single-agent and multi-agent scenarios, making it possible for one or more humans to communicate and interact with language agents.
Controllability: Existing agent frameworks generally define and control the agents' behavior only using a system prompt and then let the agent plan and act on its own. In contrast, Agents provides a novel paradigm to build _controllable agents_ via a _symbolic plan_, also referred to as _standard operating procedures_ (SOPs). An SOP is a graph of multiple states that defines different situations an agent may encounter while accomplishing a task, and the transition rules between the states. Similar to SOPs in the real world, an SOP in Agents is a meticulously documented set of step-by-step instructions that outlines how a particular task or process should be performed by an agent or a group of agents. SOPs can be generated by an LLM and edited by the user when customizing and tuning the agent. After deployment, an agent will behave following specified instructions and guidelines for each state and dynamically adjust its current state according to its interaction with the environment, humans, or other agents. The introduction of the symbolic plan offers the opportunity to provide fine-grained control of an agent's behavior, making agents' behavior more stable/predictable and facilitating tuning/optimizing agents at the same time.
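To give a feeling for what such a symbolic plan contains, the sketch below outlines an SOP for a simple two-state customer-service task as a small state graph. The structure and key names here are hypothetical; the real config format is documented in the Agents repository.

```
# Hypothetical sketch of an SOP as a state graph (not the actual config schema)
sop_sketch = {
    "initial_state": "collect_issue",
    "states": {
        "collect_issue": {
            "instruction": "Ask the user to describe the problem and summarise it.",
            "tools": [],
        },
        "propose_solution": {
            "instruction": "Search the knowledge base and propose a fix step by step.",
            "tools": ["knowledge_base_search"],
        },
    },
    "transitions": {   # in the library, transitions can also be decided by an LLM controller
        "collect_issue": "propose_solution",
        "propose_solution": "END",
    },
}
```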
In addition, we propose an automated SOP generation pipeline to reduce human labor on writing detailed SOP and config files when customizing (multi-) agent systems. The automated SOP generation pipeline is a "meta agent" that can generate config files for language agents with retrieval-augmented generation given a short description of the task.
Agents is an ongoing effort maintained by researchers and engineers from AIWaves2. We look forward to support from community contributors on the project. The library and detailed documentation and tutorials are available on GitHub3.
Footnote 2: [https://www.aiwaves.org/](https://www.aiwaves.org/)
Footnote 3: [https://github.com/aiwaves-cn/agents](https://github.com/aiwaves-cn/agents)
## 2 Related Work
### Autonomous Language Agents
The concept of language agents has become very popular recently and a variety of language agents targeting different tasks have been proposed. For example, Generative Agents [14] developed language agents to mimic human social behavior, WebAgent [15] demonstrated the possibility to build language agents that can complete the tasks on real websites following natural language instructions, Qian et al. [2023] and MetaGPT [17] experimented with software development in multi-agent communication settings, and Zhou et al. [2023] built language agents that act as interactive writing assistants.
In addition to language agents that target specific tasks, recent open-source projects such as AutoGPT [13], BabyAGI [12], and SuperAGI [21] aim at the goal of building autonomous agents that do whatever users want, and have attracted massive interest from both developers and non-specialist audiences.
### Language Agents Frameworks
More recently, a few open-source frameworks for language agents have been proposed. For example, Transformers Agents [23] builds language agents that can automatically use tools to solve tasks described in natural language; LangChain [16] supports end-to-end
language agents that can automatically solve tasks specified in natural language; Camel (Li et al., 2023) and AgentVerse (Chen et al., 2023) are platforms tailored for building multi-agent systems; Gentopia (Xu et al., 2023) and XLang4 are libraries for building tool-augmented agents. We illustrate the key features supported by these platforms and Agents in Table 1. We can see that Agents is the only framework that supports tool usage, long-short term memory, and multi-agent communication at the same time. Agents also offers human-agent interaction and controllability through symbolic plans (SOPs) for the first time.
Footnote 4: [https://github.com/xlang-ai/xlang](https://github.com/xlang-ai/xlang)
## 3 Library Design
Agents is designed following the philosophy in Franklin and Graesser (1996): _"an **autonomous** agent is situated in an **environment**"_. Therefore, **agent** and **environment** are two major classes in the Agents framework. In addition to these two classes, we also include a class for symbolic plans, named **SOP** (short for Standard Operating Procedure), to make language agents more controllable. These main classes are all initialized from a config file which can be filled in plain text. In sum, a typical script for initializing and running a (multi) agent system with Agents is illustrated in Code 1. The config file not only defines these core objects but also factorizes complicated prompts into modularized prompt components. The factorization of prompts significantly reduces the expertise requirements and efforts for users to build (multi) agent systems. Using a single config file to define the agents, plan, and basic environment also facilitates the sharing of language agents (which will be discussed in the Agent Hub section). Each of these three core classes consists of standardized APIs that can be overwritten by experienced developers and researchers. We describe these classes in detail:
```
def run(agents, sop, environment):
    while not sop.finished:
        agent, state = sop.step(agents, environment)
        action = agent.step(state, environment)
        environment.update(agent, action)
        # optional, in case of dynamic planning
        # new_states = get_new_states(action)
        # sop.add_states(new_states)
```
Code 2: Exemplar code for the running loop of a (multi) agent system in Agents
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Framework** & **Tool Usage** & **Long-short Term Memory** & **Multi-Agent** & **Human-Agent Interaction** & **Symbolic Control** \\ \hline
Transformers Agents & ✓ & ✗ & ✗ & ✗ & ✗ \\
LangChain & ✓ & ✓ & ✗ & ✗ & ✗ \\
Auto-GPT & ✓ & ✗ & ✗ & ✗ & ✗ \\
Gentopia & ✓ & ✗ & ✗ & ✗ & ✗ \\
XLang & ✓ & ✗ & ✗ & ✗ & ✗ \\
Meta-GPT & ✓ & ✗ & ✓ & ✗ & ✗ \\
Camel & ✓ & ✗ & ✓ & ✗ & ✗ \\
AgentVerse & ✓ & ✓ & ✓ & ✓ & ✗ \\ \hline
Agents & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Comparison of Language Agent Frameworks
### Agent
The _Agent_ class abstracts a language agent. Its UML is illustrated in Figure 1. We can see that an agent maintains its long-short term memory and has methods to observe the environment (agent._observe(environment)), act according to its current state (agent._act()) and update its memory (agent._update_memory()). All these methods are wrapped in the agent.step() method. This factorization enables developers to customize agents with new functionalities easily. Unlike existing language agent frameworks that assume an agent must be based on an LLM, we include an "is_human" property in the agent class. If it is set to "True", agent._act() will opt to provide observations and memory information to a human user and wait for the human user to input the action. This design allows flexible human-agent interaction in both single-agent and multi-agent systems by allowing human users to take the role of one or more language agents. It makes it easy for developers to build various interesting applications such as allowing human users to act as members of a team in a debate and collaborate with (agent or human-based) teammates to beat another team, or act as CTO/engineers in a software company and collaborate with others for software development.
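The control flow described above can be paraphrased as follows. This is a self-contained sketch that mirrors the documented method names (step, _observe, _act, _update_memory, is_human) with placeholder bodies and reduces the environment to a plain dict; it is not the library source.

```
class SketchAgent:
    """Paraphrase of the documented Agent control flow (not the library source)."""

    def __init__(self, is_human=False):
        self.is_human = is_human
        self.long_term_memory = []    # e.g. embedded action history
        self.short_term_memory = ""   # e.g. natural-language scratchpad

    def _observe(self, environment):
        return environment.get("last_messages", [])   # placeholder observation

    def _act(self, state, observation):
        if self.is_human:
            return input(f"[{state}] your action: ")   # a human plays this agent's role
        return f"LLM response to {observation!r} in state {state}"  # placeholder for an LLM call

    def _update_memory(self, observation, action):
        self.long_term_memory.append((observation, action))

    def step(self, state, environment):
        observation = self._observe(environment)
        action = self._act(state, observation)
        self._update_memory(observation, action)
        return action
```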
### SOP
The SOP class contains a graph of the states of agents. Each state specifies a certain sub-task or sub-goal of all agents when accomplishing the task described by the SOP. States are abstracted into a State class. A State object contains modularized prompts for the agent to leverage an LLM and various tools or APIs that an agent can use in the state. We abstract everything an agent may use for action in a state into a "Component" class. The Component class consists of two subclasses corresponding to different parts of the prompt and tools or external APIs, named "PromptComponent" and "ToolComponent", respectively. PromptComponent includes modularized prompts that specify the task/goal, rules/constraints, (step-by-step) demonstrations for in-context learning, and the output format. ToolComponent supports more complex usage beyond modularized prompts, including external tools and APIs such as web search, knowledge bases, etc. The results of the tools are either included in the prompt or directly returned and processed afterward, according to the config file.
An SOP object also includes an LLM-based control function that decides the transition between different states and the next agent to act. The state transit function is named **sop._transit()** and the agent routing function is named **sop._route()**. Both of the functions are wrapped in an **sop.next()** function which is used in the main loop.
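A stripped-down paraphrase of this control logic is sketched below. In the library both decisions are made by an LLM-based controller; here a static transition table and a trivial router stand in for it, so the sketch only illustrates how _transit(), _route() and next() fit together, and is not the library source.

```
class SketchSOP:
    """Paraphrase of the documented SOP control functions (not the library source)."""

    def __init__(self, transitions, initial_state):
        self.transitions = transitions   # {state_name: next_state_name}
        self.current = initial_state
        self.finished = False

    def _transit(self, environment):
        # In the library this decision is LLM-based; a static table stands in for it here.
        self.current = self.transitions.get(self.current, "END")
        self.finished = self.current == "END"

    def _route(self, agents, environment):
        # Likewise, agent routing ("dynamic scheduling") is LLM-based in the library.
        return agents[0]

    def next(self, agents, environment):
        self._transit(environment)
        return self._route(agents, environment), self.current
```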
### Environment
The Environment class abstracts the environment in which the agents are situated. An environment consists of two main functions: environment._observed() and environment.update(). environment._observed() defines how the environment influences the agent's action (i.e., what information should be transferred to the agent upon observation), and environment.update() defines how the agent's action impacts the environment.
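A minimal stand-in for this interface, using a shared action history as the whole environment state, might look as follows (again a sketch, not the library source):

```
class SketchEnvironment:
    """Paraphrase of the documented Environment interface (not the library source)."""

    def __init__(self):
        self.history = []   # shared record of all actions taken so far

    def _observed(self, agent):
        # What an agent gets to see before acting: here simply the recent history.
        return self.history[-5:]

    def update(self, agent, action):
        # How an action changes the shared environment.
        self.history.append((getattr(agent, "name", "agent"), action))
```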
Figure 2: Customer service agent.
Figure 3: Sales agent.
The execution logic of a (multi) agent system based on Agents is very intuitive. As illustrated in Code 2, in each iteration, the SOP first decides the state transition and selects the next agent to act based on the agents and the environment. The agent then takes an action based on its state and the environment. Then the environment updates itself based on the new action. Finally, if a workflow requires dynamically adjusting the plan based on the intermediate execution results, one can parse the output from an action, define a new state and add it into the current SOP.
### Implementation Details of Core Features
Long-short Term Memory: Agents implements long-short term memories for language agents following Zhou et al. (2023). Specifically, long-term memories are action histories and are embedded by sentence-transformers Reimers and Gurevych (2019), stored in a VectorDB, and queried via semantic search. Short-term memories, or working memories, are in natural language form and updated by an LLM via a carefully tuned prompt.
Tool Usage & Web Navigation: Agents supports tool usage and web navigation via ToolComponents. For each external tool or API, developers can wrap the API call in the ToolComponent.func() method. For complicated tools for which the API call is context-dependent, Agents integrates the "Function-calling" feature of OpenAI's GPT APIs to let LLMs decide how to use the tools. Web navigation is achieved by implementing web search as a specialized tool.
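For illustration, a simple calculator tool wrapped in a func() method could look like the sketch below; the class shown is a hypothetical stand-in, and the exact ToolComponent base class and signature should be checked against the Agents documentation.

```
import ast
import operator as op

class CalculatorTool:
    """Hypothetical stand-in for a ToolComponent; check the actual base class
    and func() signature against the Agents documentation."""

    name = "calculator"
    _ops = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

    def func(self, expression: str) -> str:
        """Safely evaluate a simple arithmetic expression and return the result as text."""
        def _eval(node):
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in self._ops:
                return self._ops[type(node.op)](_eval(node.left), _eval(node.right))
            raise ValueError("unsupported expression")
        return str(_eval(ast.parse(expression, mode="eval").body))
```

Calling CalculatorTool().func("3*(2+5)") returns "21", which can then either be inserted into the prompt or returned and processed afterwards, as described above.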
Multi-Agent Communication: Different from most existing frameworks for multi-agent systems that use pre-defined rules (e.g., let each agent act in a sequential order) to control the order for agents' action, Agents includes a controller function that dynamically decides which agent will perform the next action using an LLM by considering the previous actions, the environment, and the target of the current states. This makes multi-agent communication more flexible.
Human-Agent Interaction: Agents supports human-agent interaction in multi-agent systems by allowing human users to change the "is_human" field for a certain agent in the config file to "True". In this case, the user can play the role of the agent by himself/herself and input his/her own actions and interact with other language agents in the environment.
Figure 4: Multi-Agent System: Fiction Studio.
### Deployment
Existing open-source frameworks for language agents focus on building proof-of-concept language agents that run either in the terminal or on Gradio [Abid et al., 2019]. In contrast, Agents supports deploying language agents as APIs with FastAPI5. This makes it much easier for developers to integrate language agents into real-world applications.
Footnote 5: [https://fastapi.tiangolo.com/](https://fastapi.tiangolo.com/)
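A minimal FastAPI wrapper around an agent could look like the following sketch; EchoAgent is a placeholder standing in for an agent built with the library, and the endpoint layout is illustrative rather than the framework's built-in deployment code.

```
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

class EchoAgent:
    """Placeholder standing in for an agent built with the library."""
    def respond(self, message: str) -> str:
        return f"(agent reply to) {message}"

my_agent = EchoAgent()

@app.post("/chat")
def chat(request: ChatRequest):
    return {"reply": my_agent.respond(request.message)}

# launch with: uvicorn this_module:app
```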
### The Agent Hub
Agents aims not only to facilitate the development, testing, and tuning of a language agent system but also to make the distribution and sharing of language agents easier. To this end, we introduce Agent Hub, a platform that allows users to share their fine-tuned language agents as well as search/download useful language agents that others share on the platform. In this way, one can easily customize language agents by starting from community agents and slightly modifying them. This greatly reduces the effort of designing, testing, and tuning language agents from scratch.
### Automatic Creation of Agent Systems
While an SOP provides fine-grained control over language agents, it can sometimes be laborious for users to manually specify the SOP from scratch since it requires setting different states, their connections, and the prompts and tools for each Component for all states. Therefore, we carefully implement a pipeline for automatic SOP generation. Our SOP generation framework is based on retrieval-augmented generation (RAG) [Lewis et al., 2020]. The SOP generation pipeline itself is also based on the Agents framework and has an SOP of first specifying the agents required, then planning the states and their connections, and finally generating the Components. Therefore, this pipeline can be regarded as a "meta agent" that can create other agents and multi-agent systems. A detailed description of the automatic agent creation framework is given in [Zhou et al., 2023b].
Figure 5: Human-Agent Interaction in a debate.
## 4 Case Studies
We then present a few case studies on different language agents built with the library, including single-agent systems, multi-agent systems, and systems that require human-agent interaction. All demos are available at [http://www.aiwaves-agents.com/](http://www.aiwaves-agents.com/).
### Single-agent systems
We implement a few single-agent systems with Agents including a chit-chat bot, two customer service agents based on knowledge bases and web search engines, a shopping assistant agent, and a sales agent. The agents demonstrate different features of the library and the possibility of building language agents of different use cases using Agents. We present a screenshot of the customer service agent and the sales agent in Figure 2 and 3, respectively.
### Multi-agent systems
We also demonstrate how one can build a multi-agent system consisting of multiple agents interacting with each other in an environment. We select three scenarios including a fiction studio, a debate, and a software company. These span both cooperative and competitive settings, the two main categories of multi-agent systems. All of the scenarios include multiple subtasks that are controlled through symbolic plans, i.e., SOPs. One can easily observe the language agents' behavior in each subtask and engineer the corresponding prompts to customize and improve the system. We present a system screenshot of the fiction studio system in Figure 4. We also showcase the human-agent interaction feature of the framework in a case study where a human user participates in a debate with language agents in Figure 5.
## 5 Conclusion
LLMs and language agents powered by them are playing increasingly important roles in both the NLP/AI community and our society in general. Agents is a unified framework and open-source library for language agents. Agents aims to help developers build applications with language agents, researchers conduct language agent research, and general non-technical audiences build and customize personalized language agents.
|
2309.09685 | Quantum Computed Green's Functions using a Cumulant Expansion of the
Lanczos Method | In this paper, we present a quantum computational method to calculate the
many-body Green's function matrix in a spin orbital basis. We apply our
approach to finite-sized fermionic Hubbard models and related impurity models
within Dynamical Mean Field Theory, and demonstrate the calculation of Green's
functions on Quantinuum's H1-1 trapped-ion quantum computer. Our approach
involves a cumulant expansion of the Lanczos method, using Hamiltonian moments
as measurable expectation values. This bypasses the need for a large overhead
in the number of measurements due to repeated applications of the variational
quantum eigensolver (VQE), and instead measures the expectation value of the
moments with one set of measurement circuits. From the measured moments, the
tridiagonalised Hamiltonian matrix can be computed, which in turn yields the
Green's function via continued fractions. While we use a variational algorithm
to prepare the ground state in this work, we note that the modularity of our
implementation allows for other (non-variational) approaches to be used for the
ground state. | Gabriel Greene-Diniz, David Zsolt Manrique, Kentaro Yamamoto, Evgeny Plekhanov, Nathan Fitzpatrick, Michal Krompiec, Rei Sakuma, David Muñoz Ramo | 2023-09-18T11:45:04Z | http://arxiv.org/abs/2309.09685v3 | # Quantum Computed Green's Functions using a Cumulant Expansion of the Lanczos Method
###### Abstract
In this paper, we present a quantum computational method to calculate the many-body Green's function matrix in a spin orbital basis. We apply our approach to finite-sized fermionic Hubbard models and related impurity models within Dynamical Mean Field Theory, and demonstrate the calculation of Green's functions on Quantinuum's H1-1 trapped-ion quantum computer. Our approach involves a cumulant expansion of the Lanczos method, using Hamiltonian moments as measurable expectation values. This bypasses the need for a large overhead in the number of measurements due to repeated applications of the variational quantum eigensolver (VQE), and instead measures the expectation value of the moments with one set of measurement circuits. From the measured moments, the tridiagonalised Hamiltonian matrix can be computed, which in turn yields the Green's function via continued fractions. While we use a variational algorithm to prepare the ground state in this work, we note that the modularity of our implementation allows for other (non-variational) approaches to be used for the ground state.
## 1 Introduction
The Green's function (GF) is a quantity that describes dynamical electronic responses, and is useful for calculating spectroscopic properties, such as photoemission, photoabsorption, and conductivity [1]. It can also be used to capture electronic correlations in condensed matter and quantum chemical simulations, in addition to being the central quantity in many quantum embedding methods such as cluster perturbation theory and Dynamical Mean Field Theory (DMFT) [2, 3, 4, 5, 6]. Despite its importance in these fields, the GF remains a difficult quantity to calculate from first principles on classical computers.
In a previous study [7], an approach based on the Lanczos method was proposed to obtain the GF on quantum computers. This approach relies on a parameterised VQE method to construct the Krylov space; hence the orthonormal Lanczos basis is calculated iteratively, with a VQE-like algorithm applied at each Lanczos iteration. This involves an overhead in the number of measurements and renders the approach more susceptible (relative to a typical ground state calculation) to the potential difficulties associated with variational optimization (such as barren plateaus) [8]. This is also the case for a recent proposal based on variational compilation [9]. A method based on quantum subspace expansion (QSE) has also been proposed [10], in which the Hamiltonian and overlap matrix elements are obtained in a basis for the ground state and in another basis for the Krylov states involved in the GF. The basis sets are prepared using time evolution, and the matrix elements are subsequently measured on a quantum computer. The Krylov basis and Lanczos coefficients are then calculated iteratively on a classical computer. While an interesting approach, we note QSE requires more resources than VQE, and the reliance on trotterised time evolution may lead to a vulnerability to trotter errors as a function of the time step. A number of other recent proposals also utilise time evolution to measure the real-time GF [11, 12], as well as variational methods to capture the imaginary-time GF [13, 14]. We also note the approach of Kosugi and Matsushita [15] in which (multi-)controlled operations between ancillas and state qubits are used to represent the ladder operators involved in the (off-)diagonal elements of the GF matrix, with a subsequent application of quantum phase estimation (QPE). Other recently proposed methods obtain the GF using an algorithm to sample a Fourier series expansion [16], or using a 3-qubit iToffoli gate [17]. Such approaches are prohibitive for the noisy intermediate-scale quantum (NISQ) era in terms of circuit depth scaling. Other algorithms aimed at greater suitability for near-term quantum computers have been proposed to obtain the GF in both the time and frequency domains [18].
As mentioned above, the GF is a central quantity in quantum embedding methods such as DMFT [4]. In recent years, a number of quantum computational approaches to DMFT have been published [19, 20, 21]. These works involve simplifications which increase tractability, yet decrease generalisability, such as a restriction to two sites via the single Anderson impurity model (SIAM). In this picture, the quasiparticle weight has an analytical form, to which the updated impurity-bath interaction is related in a simple way, by appealing to the limit of infinite dimensions [4]. Previous works have also measured the GF in the time-domain to solve the impurity model in DMFT, which can lead to a large overhead in the number of measurements per time step [22], or a reliance on fault tolerant schemes such as QPE [23], or the use of a simplified sinusoidal form of the GF, or other techniques [20, 21, 24, 25]. A recent approach considers low energy expansions of the GF for an alternative embedding scheme based on the rotationally invariant slave boson method, which does not require calculating the full GF [26]. We also note recent quantum computational approaches for the DMFT solution of the Hubbard-Holstein model [27], as well as novel spectral resolution techniques to measure the Anderson impurity GF [28]. Finally, a very interesting recent approach adopted fast-forwarding circuit protocols to time-evolve the impurity state and converge the DMFT loop on noisy hardware [29]. We add to these previous works by providing a method to quantum compute the GF in the frequency domain that can be utilised in a hybrid quantum-classical algorithm for DMFT appropriate for NISQ.
In this work, a cumulant expansion of the Lanczos coefficients is utilised to provide measurable quantum circuits for each moment of the Hamiltonian [30, 31]. These measurable expectation values can be used to calculate the elements of the tridiagonalised Hamiltonian matrix in the Krylov basis, from which the well-known continued fraction expression of the GF can be evaluated [4, 6]. Thus, we extend the work of Vallury _et al._[30] by utilising quantum computed moments to develop a quantum algorithm for calculating the full GF matrix. As a first example, we apply our method to the calculation of the GF of the fermionic Hubbard model. We note recent studies of the ground state properties of the Hubbard model using quantum algorithms [32, 33]. Rather than investigating efficient ways to find the ground state, we instead focus on a quantum approach to calculate GFs for finite chains of Hubbard sites. In addition, we show that this approach can be applied to DMFT in a straight-forward manner to iteratively calculate the quantum computed impurity GF to self-consistency. Our methodology is implemented in the InQuanto quantum computational chemistry package [34, 35].
Our results show that application of this approach to the H1-1 trapped ion quantum computer [36, 37] results in spectral features (derived from the imaginary part of the GF) that compare well with ideal classical simulations, particularly in terms of peak positions, even without error mitigation techniques. Hence, expectation values of Hamiltonian moments are relatively robust to measurement noise for low lying Lanczos roots, which is consistent with previous applications of the method to infimum estimates of the ground state energy [30]. The stability of this method also extends to iterative calculations of the impurity GF in the DMFT algorithm, even when (emulated) circuit based measurements are used to evaluate the Hamiltonian moments at every DMFT iteration.
This paper is organised as follows. In section 2 we describe the methods and show how the quantum computed moments can be used to obtain the GF. This is followed by a brief overview of DMFT applied to the Bethe lattice with multiple bath sites [5]. In section 3, we present our results. Starting with the Hubbard model, we show that our method can be applied to multiple sites and benchmark our results against the classical Lanczos method. We then demonstrate this method on quantum hardware. Following this, we utilise the quantum computed GF in a DMFT algorithm. Section 4 provides the concluding remarks.
## 2 Methods
### Quantum Computed Moments for Green's Functions
In this section, we give a brief overview of the cumulant expansion of the Lanczos method, and its application to computing the GF in a quantum computational setting. The Lanczos recursion method [38] can be viewed as an approximate diagonalisation scheme, resulting in a tridiagonal form of the Hamiltonian matrix which can be efficiently solved. In a typical classical Lanczos routine, the elements of the tridiagonalised matrix (referred to as Lanczos coefficients below) \(\alpha_{l}=\langle v_{l}|\hat{H}|v_{l}\rangle\) and \(\beta_{l}=\langle v_{l+1}|\hat{H}|v_{l}\rangle=\langle v_{l}|\hat{H}|v_{l+1}\rangle\) are used to numerically orthonormalise the vectors \(|v_{l+1}\rangle=\frac{(\hat{H}-\alpha_{l})|v_{l}\rangle-\beta_{l}|v_{l-1}\rangle}{\beta_{l+1}}\), where \(l=1,2,3,\ldots\), \(\beta_{0}=0\), and \(\hat{H}\) is the fermionic Hamiltonian operator in second quantization. Vallury _et al._[30] showed that this scheme can be re-expressed for a quantum computational context. The Lanczos coefficients can be expressed with the moments of the Hamiltonian operator \(\hat{H}\) with respect to some initial Lanczos vector, \(\langle\hat{H}^{n}\rangle=\langle v_{1}|\hat{H}^{n}|v_{1}\rangle\). Therefore, a quantum algorithm that can compute the moments can be used to generate the Lanczos coefficients. For example, for the upper \(2\times 2\) block of the tridiagonal matrix, i.e. \(\alpha_{1}\), \(\alpha_{2}\), \(\beta_{1}\), the quantities obtained from a quantum algorithm are the expectation
values \(\langle\hat{H}\rangle\), \(\langle\hat{H}^{2}\rangle\), and \(\langle\hat{H}^{3}\rangle\), and the coefficients are calculated as:
\[\alpha_{1} =\langle\hat{H}\rangle \tag{1}\] \[\beta_{1} =\sqrt{\langle\hat{H}^{2}\rangle-\langle\hat{H}\rangle^{2}}\] \[\alpha_{2} =\frac{\langle\hat{H}^{3}\rangle-2\langle\hat{H}\rangle\langle \hat{H}^{2}\rangle+\langle\hat{H}\rangle^{3}}{\langle\hat{H}^{2}\rangle- \langle\hat{H}\rangle^{2}}.\]
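The classical post-processing of Eq. (1) is straightforward; a minimal sketch (plain Python, assuming the three measured moments are already available as numbers) is given below.

```
import numpy as np

def lanczos_from_moments(moments):
    """First Lanczos coefficients alpha_1, beta_1, alpha_2 from the
    measured moments <H>, <H^2>, <H^3> (Eq. 1)."""
    h1, h2, h3 = moments[:3]
    alpha1 = h1
    beta1 = np.sqrt(h2 - h1**2)
    alpha2 = (h3 - 2.0 * h1 * h2 + h1**3) / (h2 - h1**2)
    return alpha1, beta1, alpha2
```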
In general, following the work of Hollenberg _et al._[39, 40], it can be shown that the Lanczos coefficients \(\alpha_{l}\) and \(\beta_{l}\) can be expressed as functions of \(\{\langle\hat{H}^{n}\rangle\}_{n=1..2l-1}\) and explicit expressions can be obtained via the cumulant expansion of the Lanczos coefficients [30, 39].
Once the Lanczos coefficients are obtained, they can be used to calculate the zero temperature GF matrix \(\mathbf{G}(\omega)\) in the continued fraction representation [41, 4]. In this representation, the diagonal elements of the single particle GF take the following form when decomposed into particle and hole parts [42]
\[G_{ii}^{\text{L}}(\omega)=g_{ii}^{\text{(p)}}(\omega)+g_{ii}^{\text{(h)}}(\omega) \tag{2}\] \[=\frac{n_{ii}^{\text{(p)}}}{\omega+E_{\text{GS}}-\alpha_{1}^{\text{(p)}}-\dfrac{\big(\beta_{1}^{\text{(p)}}\big)^{2}}{\omega+E_{\text{GS}}-\alpha_{2}^{\text{(p)}}-\cdots}}\] \[+\frac{n_{ii}^{\text{(h)}}}{\omega-E_{\text{GS}}+\alpha_{1}^{\text{(h)}}-\dfrac{\big(\beta_{1}^{\text{(h)}}\big)^{2}}{\omega-E_{\text{GS}}+\alpha_{2}^{\text{(h)}}-\cdots}},\]
where \(E_{\text{GS}}\) is the ground state energy, and the numerators \(n_{ii}\) (defined below) are related to normalised states obtained by operating on the ground state with the fermionic creation (\(\hat{f}_{i}^{\dagger}\)) and annihilation (\(\hat{f}_{i}\)) operators, applied to spin orbital (or qubit) \(i\) (Jordan-Wigner (JW) [43] encoding is assumed throughout this paper). These normalised states correspond precisely to the initial Lanczos vector for each diagonal GF element. Using the particle component \(g_{ii}^{\text{(p)}}(\omega)\) as an example, we can write
\[|v_{1}\rangle\equiv|\Psi_{ii}^{\text{(p)}}\rangle=\frac{\hat{f}_{i}^{\dagger} |\Psi_{\text{GS}}\rangle}{\sqrt{n_{ii}^{\text{(p)}}}}, \tag{3}\]
where \(n_{ii}^{\text{(p)}}=\langle\Psi_{\text{GS}}|\hat{f}_{i}\hat{f}_{i}^{\dagger} |\Psi_{\text{GS}}\rangle\), and \(|\Psi_{\text{GS}}\rangle\) is the ground state represented by a quantum circuit. In Eq. 2 the Lanczos coefficients \(\alpha_{1}^{\text{(p/h)}},\beta_{1}^{\text{(p/h)}}\) are labelled by their particle/hole contributions, according to whether the initial Lanczos vector is obtained by operating on \(|\Psi_{\text{GS}}\rangle\) with \(\hat{f}_{i}^{\dagger}\) (p) or with \(\hat{f}_{i}\) (h), respectively.
The off-diagonal elements are obtained in an analogous way using linear combinations of fermionic operators indexed by the matrix element. For example, this results in an initial Lanczos vector for \(g_{i\neq j}^{\text{(p)}}(\omega)\)
\[|\Psi_{i\neq j}^{\text{(p)}}\rangle=\frac{(\hat{f}_{i}^{\dagger}+\hat{f}_{j}^ {\dagger})|\Psi_{\text{GS}}\rangle}{\sqrt{n_{i\neq j}^{\text{(p)}}}}, \tag{4}\]
where \(n_{i\neq j}^{\text{(p)}}=\langle\Psi_{\text{GS}}|(\hat{f}_{i}+\hat{f}_{j})( \hat{f}_{i}^{\dagger}+\hat{f}_{j}^{\dagger})|\Psi_{\text{GS}}\rangle\). Following through as above, the resulting off-diagonal element \(G_{i\neq j}^{\text{L}}(\omega)\) is not the true off-diagonal element of the GF, but requires a modification which has a simple arithmetic form [6], shown in Eq. 5. Utilising the symmetry of the GF matrix, we finally obtain the GF off-diagonal element as
\[G_{i\neq j}(\omega)=\frac{1}{2}(G_{i\neq j}^{\text{L}}(\omega)-G_{ii}(\omega)- G_{jj}(\omega)), \tag{5}\]
with \(G_{ii}(\omega)=G_{ii}^{\text{L}}(\omega)\). Thus, a method to quantum compute the matrix \(\mathbf{G}(\omega)\) is obtained, without explicit reliance on multicontrolled ancilla qubits, time evolution, or phase estimation. On the other hand, relative to methods involving phase estimation, this approach in general requires a larger number of measurements due to the increase in the number of Hamiltonian moments with Lanczos root index \(l\); hence we note a trade-off between circuit depth and number of measurements when choosing between these approaches.
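Once the Lanczos coefficients and the norms have been measured, assembling \(\mathbf{G}(\omega)\) is a purely classical step. The sketch below evaluates one branch of the truncated continued fraction from the bottom up; the \(\beta\) coefficients enter squared, and a small imaginary broadening \(\eta\) (not part of Eq. (2)) is added so that spectral functions can be plotted.

```
import numpy as np

def continued_fraction(omega, e_gs, n0, alphas, betas, sign=+1, eta=0.05):
    """One branch of Eq. (2): particle part for sign=+1, hole part for sign=-1.
    `alphas` and `betas` are the truncated Lanczos coefficients, `n0` the norm."""
    z = omega + 1j * eta + sign * e_gs
    frac = 0.0
    for a, b in zip(reversed(alphas[1:]), reversed(betas)):
        frac = b**2 / (z - sign * a - frac)
    return n0 / (z - sign * alphas[0] - frac)

def diagonal_gf(omega, e_gs, particle, hole):
    """G_ii(omega) as the sum of the particle and hole branches;
    `particle` and `hole` are (n, alphas, betas) tuples of measured quantities."""
    return (continued_fraction(omega, e_gs, *particle, sign=+1)
            + continued_fraction(omega, e_gs, *hole, sign=-1))
```

The off-diagonal elements then follow from Eq. (5) by simple arithmetic on these diagonal-type quantities.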
In practical terms, the cumulant expansion of the Lanczos coefficients combined with the continued fraction representation of \(\mathbf{G}(\omega)\) implies at least two strategies for implementation in a quantum algorithm: _i)_ prepare the initial Lanczos vector explicitly with a state preparation circuit and measure the moments as expectations with respect to this circuit, or _ii)_ measure each moment as the expectation value of an operator built by sandwiching \(\hat{H}^{n}\) with ladder operators (or sums of ladder operators) indexed according to the initial Lanczos vector.
As an example, consider the \(n^{\text{th}}\) moment contributing to a Lanczos coefficient required for \(g_{ii}^{\text{(p)}}(\omega)\): Using _i)_ we would prepare \(|\Psi_{ii}^{\text{(p)}}\rangle\) and measure
\[\langle\hat{H}_{ii}^{\text{n,(p)}}\rangle=\langle\Psi_{ii}^{\text{(p)}}|\hat{H }^{n}|\Psi_{ii}^{\text{(p)}}\rangle. \tag{6}\]
Using _ii)_ we would prepare \(|\Psi_{\text{GS}}\rangle\) and measure
\[\langle\hat{H}_{ii}^{\text{n,(p)}}\rangle=(1/n_{ii}^{\text{(p)}})\langle\Psi_{ \text{GS}}|\hat{f}_{i}\hat{H}^{n}\hat{f}_{i}^{\dagger}|\Psi_{\text{GS}}\rangle. \tag{7}\]
Thus our methodology requires either \(|\Psi_{ii}^{\text{(p)}}\rangle\) or \(|\Psi_{\text{GS}}\rangle\) to be prepared on a quantum circuit, in addition to the measurement of Hamiltonian moments. While mathematically equivalent, _i)_ and _ii)_ can lead to considerable differences in the quantum circuit resources (depending on the state preparation methods), and catering for both approaches allows for a useful flexibility in the application of the quantum Lanczos approach to GFs. As discussed in Ref. [30], a naive counting of Hamiltonian terms results in an exponential growth in the number of Pauli strings with Hamiltonian power \(n\). However, this can be significantly improved by the existence of large commuting sets in the H moment expansions [44]. To mitigate
the rapid growth we measured the commuting Pauli terms with a single measurement circuit. However, we note there are other efficient techniques that could be also applied based on the tensor product basis (TPB) and qubit-wise commutativity [30, 45].
The following paragraphs provide further details on strategies _i)_ and _ii)_ for executing the quantum Lanczos approach to calculate a GF, followed by an outline of how the ground state can be prepared.
i) Initial Lanczos Vector Preparation
In the occupation number (ON) vector representation, the ground state is expressed as
\[|\Psi_{\text{GS}}\rangle=\sum_{x}c_{x}|x\rangle, \tag{8}\]
where \(x=\{x_{i}\}\) represents a set of occupations of spin orbitals \(i\) for a given number of particles, and each basis configuration can be written as the ON vector of \(N_{i}\) spin orbitals
\[|x\rangle=|x_{0},x_{1},\ldots,x_{i},\ldots,x_{N_{i}-1}\rangle. \tag{9}\]
Given a circuit representing the ground state, the coefficients of the ON vector (Eq. 8) can then be extracted by calculating the overlap (where \(\hat{U}_{\text{GS}}(\mathbf{\theta}_{\text{opt}})\) is a unitary applied to a reference state \(|\Psi_{\text{ref}}\rangle\), as is typical of VQE)
\[\begin{split} c_{x}&=\langle x|\Psi_{\text{GS}} \rangle\\ &=\langle x|\hat{U}_{\text{GS}}(\mathbf{\theta}_{\text{opt}})|\Psi_{ \text{ref}}\rangle\end{split} \tag{10}\]
for all \(x\), resulting in a set of \(c_{x}\) values, and the corresponding \(\{|x\rangle\}\), hence the ON representation (Eq. 8) is obtained. The ON representation of \(|v_{1}\rangle\equiv|\Psi_{ii}^{(\text{p})}\rangle\) or \(|v_{1}\rangle\equiv|\Psi_{ii}^{(\text{h})}\rangle\) is then obtained by applying \(\hat{f}_{i}^{\dagger}\) or \(\hat{f}_{i}\), respectively, to Eq. 8 and normalising (as in Eq. 3). The resulting expansion of \(|v_{1}\rangle\) as a linear combination of basis configurations with real coefficients can then be prepared on a quantum circuit using controlled Givens rotations as particle-conserving excitations [46]. While the procedure defined by Eqs. 8 - 10 in general exhibits an exponentially scaling overhead (due to the inclusion of all particle-number preserving configurations) for the representation of \(|v_{1}\rangle\) to be exact, it is used here to demonstrate the flexibility of our procedure to allow for arbitrary preparation of the initial Lanczos vector. Hence future preparation schemes with better scaling can be easily accommodated.
ii) Sandwiched Moment Expectation
As mentioned above, one can measure the Hamiltonian moment as the expectation with respect to the ground state of a sandwiched Hamiltonian moment operator, see Eq. 7. This requires the preparation of a quantum circuit representing the ground state, which can be achieved by a VQE-optimized variational ansatz. Following the JW transformation of \(\hat{f}_{i}\hat{H}^{n}\hat{f}_{i}^{\dagger}\) (and of \(\hat{f}_{i}^{\dagger}\hat{H}^{n}\hat{f}_{i}\) for the hole part), the corresponding Pauli strings can then be measured with respect to the ground state circuit. In section 3.1.2, we report Pauli operators representing the measurable sandwiched Hamiltonian moments for GF elements of the Hubbard dimer for Lanczos coefficients corresponding to \(l=2\).
### Ground State Preparation
To prepare the ground state, a number of approaches have been proposed in recent years based on hybrid quantum-classical algorithms such as imaginary time evolution [47, 48], or VQE [8]. These approaches generally involve the application of some \(\hat{U}_{\text{GS}}\) to a reference state, where \(\hat{U}_{\text{GS}}(\mathbf{\theta})\) is expressed as an ansatz which depends on parameters \(\mathbf{\theta}\) that are variationally optimized to \(\mathbf{\theta}_{\text{opt}}\). For the latter, a wide range of chemically intuitive (e.g. unitary coupled cluster (UCC) [49]) and hardware efficient [50] ansatze have been proposed. In addition, adaptive methods also exist in which the ansatz is constructed iteratively for a given problem [51, 52, 53]. In this work we focus on a quantum approach to the GF and assume an accurate ground state has been provided from a separate calculation. In practice, we prepared the ground state circuit using a VQE algorithm with a parameterised ansatz, but our approach also supports the classical calculation of the ground state (which would exemplify a procedure that combines classical computing for ground state properties, with quantum computing for excited state properties). This can be done as long as the classical calculation yields an expansion of the state vector in a basis of occupation configurations, since the latter in principle can always be prepared using controlled excitation gates [46], as outlined for strategy _i)_.
### Fermionic Hubbard model
To demonstrate our approach, we quantum compute the GF of the fermionic Hubbard model, the Hamiltonian of which can be written as
\[\begin{split}\hat{H}_{\text{Hub}}=&-\mu\sum_{s} \sum_{\sigma}\hat{f}_{s,\sigma}^{\dagger}\hat{f}_{s,\sigma}\\ &-t\sum_{<r,s>}\sum_{\sigma}\left(\hat{f}_{r,\sigma}^{\dagger} \hat{f}_{s,\sigma}+\hat{f}_{s,\sigma}^{\dagger}\hat{f}_{r,\sigma}\right)\\ &+U\sum_{s}\hat{f}_{s,\uparrow}^{\dagger}\hat{f}_{s,\uparrow}\hat {f}_{s,\downarrow}^{\dagger}\hat{f}_{s,\downarrow},\end{split} \tag{11}\]
where \(\mu\), \(t\), and \(U\) are the chemical potential, hopping amplitude, and on-site Coulomb interaction, respectively (\(t=1\) in these calculations, which sets the energy scale throughout this paper). \(s\) labels a site, \(<\!\!r,s\!\!>\) denotes nearest neighbor pairs of sites, and
spins are labelled by \(\sigma=\uparrow,\downarrow\). To establish consistency with the spin orbital index \(i\) of the GF matrix (see section 2.1), we use a linear mapping between the site-spin index and the spin orbital index \(s,\uparrow(s,\downarrow)\mapsto i\ (i+1)\) where \(s=0,1,2,\ldots,N_{s}-1\) and \(i=2s\), hence \(G_{s\sigma}\ _{s^{\prime}\sigma^{\prime}}\equiv G_{ij}\). There are \(N_{s}\times 1\times 1\) Hubbard sites arranged in a linear geometry.
Since we use JW encoding throughout, the spin orbital index \(i\) also corresponds to a qubit index on a quantum circuit. Fermionic operators are mapped according to JW encoding as
\[\begin{split}\hat{f}_{i}^{\dagger}&\mapsto\frac{X_{i}-\mathrm{i}Y_{i}}{2}\prod_{d=0}^{i-1}Z_{d}\\ \hat{f}_{i}&\mapsto\frac{X_{i}+\mathrm{i}Y_{i}}{2}\prod_{d=0}^{i-1}Z_{d}\end{split} \tag{12}\]
where \(\mathrm{i}^{2}=-1\) (referring to the non-italic symbol i), and \(P_{i}\in\{I_{i},X_{i},Y_{i},Z_{i}\}\) refers to a Pauli rotation gate applied to qubit \(i\) (for \(i=0\) the product of \(Z_{d}\) over \(d\) is omitted). This results in a qubit representation of the Hamiltonian as a sum of tensor products of Pauli operators.
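For small systems the JW-encoded operators can be built as dense matrices and used to cross-check measured moments against exact values. The sketch below (plain numpy, not part of the InQuanto workflow) constructs \(\hat{f}_{i}^{\dagger}\) as in Eq. (12) and assembles the half-filled Hubbard dimer of Eq. (11).

```
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def jw_creation(i, n_qubits):
    """Dense matrix of f_i^dagger under the JW mapping of Eq. (12):
    (X_i - iY_i)/2 preceded by a string of Z on qubits 0..i-1."""
    ops = [Z] * i + [(X - 1j * Y) / 2] + [I2] * (n_qubits - i - 1)
    return reduce(np.kron, ops)

def hubbard_dimer(t=1.0, U=4.0):
    """Half-filled Hubbard dimer (Eq. 11 with mu = U/2), spin orbitals ordered as
    (site 0 up, site 0 down, site 1 up, site 1 down)."""
    n_q, mu = 4, U / 2
    fdag = [jw_creation(i, n_q) for i in range(n_q)]
    f = [op.conj().T for op in fdag]
    H = np.zeros((2**n_q, 2**n_q), dtype=complex)
    for i in range(n_q):
        H -= mu * fdag[i] @ f[i]                       # chemical potential term
    for sigma in range(2):                             # spin-conserving hopping between sites
        i, j = sigma, 2 + sigma
        H -= t * (fdag[i] @ f[j] + fdag[j] @ f[i])
    for s in range(2):                                 # on-site repulsion
        up, dn = 2 * s, 2 * s + 1
        H += U * (fdag[up] @ f[up]) @ (fdag[dn] @ f[dn])
    return H

# e.g. exact spectrum for cross-checking measured moments:
# np.linalg.eigvalsh(hubbard_dimer(t=1.0, U=4.0))
```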
Given the qubit representation of \(\hat{H}_{\mathrm{Hub}}\), the corresponding GF matrix is obtained using the methodology outlined in section 2.1. We assume the half-filled regime (\(\mu=U/2\)), set the initial reference state to the half-filled singlet configuration, and obtain the ground state using VQE with a parameterized ansatz. For the latter, we utilise qubit excitations [52] in the Hard-core Boson representation [54] to obtain the ground state using significantly fewer 2-qubit gates than UCC. Consider the Hubbard dimer, corresponding to 4 qubits. The initial reference state of a spin-up electron occupying site 0 and a spin-down on site 1 (\(|\Psi_{\mathrm{ref}}\rangle=|1001\rangle\)) can be prepared with Pauli-\(X\) gates. The ground state can then be obtained by applying two qubit excitation operators, i.e. unitary operators corresponding to a single-excitation and a double-excitation which act directly on the qubit Hilbert space (hence obviating the need for Pauli-\(Z\) gates to maintain fermionic exchange antisymmetry [55]) \(|\Psi_{\mathrm{GS}}(\theta_{1},\theta_{2})\rangle=e^{\mathrm{i}\theta_{1}Y_{ 0}X_{2}}e^{\mathrm{i}\theta_{2}Y_{0}X_{1}X_{2}X_{3}}|\Psi_{\mathrm{ref}}\rangle\). The doubles excitation portion of this circuit is shown in Fig. 1.
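As a check of how the double qubit excitation generates the required configuration, the short statevector sketch below (independent of any particular quantum SDK) applies \(e^{\mathrm{i}\theta Y_{0}X_{1}X_{2}X_{3}}\) to \(|1001\rangle\); since the Pauli string squares to the identity, the exponential reduces to \(\cos\theta\,I+\mathrm{i}\sin\theta\,Y_{0}X_{1}X_{2}X_{3}\), and the result is a superposition of \(|1001\rangle\) and \(|0110\rangle\).

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def basis_state(bits):
    """Computational basis state |q0 q1 q2 q3> with q0 as the most significant bit."""
    idx = int("".join(map(str, bits)), 2)
    v = np.zeros(2 ** len(bits), dtype=complex)
    v[idx] = 1.0
    return v

P = kron_all([Y, X, X, X])                  # Pauli string Y0 X1 X2 X3
theta = 0.3
U_double = np.cos(theta) * np.eye(16) + 1j * np.sin(theta) * P   # exp(i*theta*P), since P^2 = I

psi = U_double @ basis_state([1, 0, 0, 1])  # reference state |1001>
amp_1001 = psi[int("1001", 2)]
amp_0110 = psi[int("0110", 2)]
print(amp_1001, amp_0110)                   # amplitudes cos(theta) and sin(theta)
assert np.isclose(abs(amp_1001), np.cos(theta))
assert np.isclose(abs(amp_0110), np.sin(theta))
```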
By utilising particle-hole symmetry, as well as to-
### Dynamical Mean Field Theory
We also apply our approach to quantum compute the impurity GF within the DMFT solution of the single band Hubbard model on a Bethe lattice [5]. In the limit of infinite connectivity, the GF of this model can be interpreted as the impurity-site GF of an Anderson model Hamiltonian [57, 4]
Figure 1: Quantum circuit representation of the double qubit excitation operator \(e^{\mathrm{i}\theta_{2}Y_{0}X_{1}X_{2}X_{3}}\). The \(V\), \(V^{\dagger}\), and \(H\) (Hadamard) gates are for rotation into the desired computational basis (such that \(HZH=X\) and \(V^{\dagger}ZV=Y\)), while the CNOTs and \(R_{z}\) represent the exponentiated Pauli strings corresponding to the qubit excitation. Here 6 CNOTs are needed for the double excitation (an extra 2 CNOTs are also needed for the singles excitation not shown here, so 8 CNOTs altogether are needed to express the ground state).
\[\begin{split}\hat{H}_{\rm And}&=\hat{H}_{\rm imp}\\ &+\sum_{b=N_{s}}^{N_{s}+N_{b}}\sum_{\sigma}\epsilon_{b,\sigma}\hat{ f}_{b,\sigma}^{\dagger}\hat{f}_{b,\sigma}\\ &+\sum_{b=N_{s}}^{N_{s}+N_{b}}\sum_{\sigma}V_{b,\sigma}\Big{(} \hat{f}_{\sigma}^{\dagger}\hat{f}_{b,\sigma}+\hat{f}_{b,\sigma}^{\dagger}\hat{ f}_{\sigma}\Big{)},\end{split} \tag{13}\]
in which \(\hat{H}_{\rm imp}=\hat{H}_{\rm Hub}(N_{s}=1)\), and bath sites are indexed by \(b\). \(\epsilon_{b,\sigma}\) and \(V_{b,\sigma}\) are variational parameters corresponding to the on-site bath energies and impurity-bath interactions, respectively. Here, we set \(\epsilon_{b,\uparrow}=\epsilon_{b,\downarrow}\) and \(V_{b,\uparrow}=V_{b,\downarrow}\). We use the same topology for the Anderson impurity model as [22].
Solving for the eigenspectrum of \(\hat{H}_{\rm And}\) yields the GF of the impurity+bath system, the upper \(2\times 2\) block of which corresponds to the impurity-site GF matrix \({\bf G}^{\rm imp}\) (with elements \(G_{ij}^{\rm imp}\)). Since there is only 1 impurity site and no spin-flipping terms, impurity-related quantities such as \({\bf G}^{\rm imp}\) are \(2\times 2\) diagonals and reduce to scalar functions of frequency in this case, however we write these as matrices in a basis of impurity spin orbitals \(i,j\) for consistency in notation and to emphasise generalisability. We solve for the \(G_{ij}^{\rm imp}\) elements using the methodology of section 2.1.
The dynamical interaction between the impurity and bath is governed by the \(U=0\) GF of the Anderson impurity model [5], the inverse of which is called hybridisation \(\mathbf{\Delta}\), with elements
\[\Delta_{ij}({\rm i}\omega_{k})=\Bigg{(}{\rm i}\omega_{k}+\mu-\sum_{\sigma}\sum _{b}^{N_{\rm bath}}\frac{V_{b,\sigma}^{2}}{{\rm i}\omega_{k}-\epsilon_{b, \sigma}}\Bigg{)}I_{ij}, \tag{14}\]
where \(I_{ij}\) is an element of the identity matrix, and \({\rm i}\omega_{k}\) is the \(k^{\rm th}\) Matsubara frequency. The Anderson model can also be related to \({\bf G}^{\rm imp}\) by a self-consistency condition [4, 5]
\[\Delta_{ij}^{\rm sc}({\rm i}\omega_{k})=\Big{(}{\rm i}\omega_{k}+\mu\Big{)}I_ {ij}-\frac{G_{ij}^{\rm imp}({\rm i}\omega_{k})}{2} \tag{15}\]
(with corresponding matrix \(\mathbf{\Delta}^{\rm sc}\)). We note that strict self-consistency can only be obtained for \(N_{\rm bath}=\infty\). This limit can be numerically approximated by a finite bath by varying the bath parameters to minimise the cost function
\[\begin{split}&{\mathfrak{C}}(\{\epsilon_{b,\sigma}\},\{V_{b, \sigma}\})\\ &=\frac{1}{N_{\omega}+1}\sum_{k=1}^{N_{\omega}}|\Delta_{ij}^{\rm sc }({\rm i}\omega_{k})-\Delta_{ij}({\rm i}\omega_{k})|^{2},\end{split} \tag{16}\]
thereby fitting \(\mathbf{\Delta}\) to \(\mathbf{\Delta}^{\rm sc}\). Here, \(N_{\omega}\) is the number of fitting frequencies, and we note that all DMFT fitting is performed on a grid of imaginary Matsubara frequencies \({\rm i}\omega_{k}=\frac{{\rm i}\pi(2k+1)}{\beta}\) where \(\beta\) is an inverse temperature which sets a frequency grid cutoff above \({\rm i}\omega_{k}=0\). In our applications, we run the DMFT algorithm on 1, 2, and 3 bath sites, corresponding to 4, 8, and 12 qubits, respectively. For 1 bath site, bath fitting is performed on 26 imaginary frequencies and \(\beta=8\). While for 2 and 3 bath sites, bath fitting is performed on 64 imaginary frequencies with \(\beta=16\). Numerical minimisation of \({\mathfrak{C}}\) results in a new set of bath parameters \(\{\epsilon_{b,\sigma}\},\{V_{b,\sigma}\}\), which in turn define a new \(\hat{H}_{\rm And}\), which leads to a new \({\bf G}^{\rm imp}\) via the approach described in section 2.1. Eqs. 13 - 16 therefore suggest an iterative algorithm, which can be terminated at a threshold value of the convergence error \(\tau\). For \(\tau\), we use the absolute change in \(\mathbf{\Delta}^{\rm sc}\) between iterations \(m\) and \(m+1\).
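For illustration, a numpy/scipy sketch of a single bath-fitting step (Eqs. 14–16) for one spin-degenerate impurity orbital is given below; the impurity GF on the Matsubara grid is represented by a placeholder, and the optimiser and starting values are arbitrary choices rather than those used in our calculations.

```python
import numpy as np
from scipy.optimize import minimize

def matsubara_grid(n_freq, beta):
    k = np.arange(n_freq)
    return 1j * np.pi * (2 * k + 1) / beta

def hybridisation(params, iw, mu):
    """Delta(iw) of Eq. 14 for one impurity orbital; params = [eps_1..eps_Nb, V_1..V_Nb]."""
    n_bath = len(params) // 2
    eps, V = params[:n_bath], params[n_bath:]
    delta = iw + mu
    for e, v in zip(eps, V):
        delta = delta - 2 * v ** 2 / (iw - e)   # factor 2: spin-degenerate bath parameters
    return delta

def delta_sc(g_imp, iw, mu):
    """Self-consistency condition of Eq. 15 (Bethe lattice)."""
    return iw + mu - g_imp / 2

def cost(params, g_imp, iw, mu):
    """Bath-fitting cost of Eq. 16."""
    diff = delta_sc(g_imp, iw, mu) - hybridisation(params, iw, mu)
    return np.sum(np.abs(diff) ** 2) / (len(iw) + 1)

# illustrative usage with one bath site (eps, V) and a placeholder impurity GF
beta, mu = 8.0, 1.0
iw = matsubara_grid(26, beta)
g_imp = 1.0 / (iw + mu - 1.0 / iw)       # placeholder; in practice from the quantum Lanczos GF
x0 = np.array([0.0, 0.5])                # initial guess [eps_1, V_1]
res = minimize(cost, x0, args=(g_imp, iw, mu), method="Nelder-Mead")
print("fitted bath parameters:", res.x)
```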
We assume the half-filled regime, hence \(\mu\) is set to \(U/2\) which corresponds to 1 electron on the impurity site in the ground state. To find the number of electrons in the bath, the following is performed: after finding the ground state of \(\hat{H}_{\rm And}\) for a given set of bath parameters, the total number of electrons \(N_{e}\) is obtained from the expectation of the total number operator. Bath sites are then filled according to the number of bath electrons = \(N_{e}-1\). The resulting occupation state is then used to initialise the Lanczos
Figure 2: Parameterised circuit for \(e^{\beta_{1}Y_{0}X_{2}}e^{\beta_{2}Y_{0}X_{1}X_{2}X_{3}}|\Psi_{\rm mf}\rangle\) used to obtain the ground state of the Hubbard dimer, with an optimized 2-body qubit excitation. The \(X\) gate on qubit 0 corresponds to Hubbard site 0 occupied by an even-indexed electron. The \(S\), \(S^{\dagger}\), \(V\), \(V^{\dagger}\), and \(H\) gates are for rotation into the desired computational basis. The CNOTs with qubit 0 (2) as targets (controls) bracketing the \(R_{z}\) represent the exponentiated Pauli strings corresponding to the qubit excitations. The 3\({}^{\rm rd}\) and 4\({}^{\rm th}\) CNOTs translate the action of previous gates to odd-indexed qubits, resulting in a double excitation \(e^{\beta_{2}Y_{0}X_{1}X_{2}X_{3}}\) applied to the initial reference state \(|\Psi_{\rm mf}\rangle=|1001\rangle\), followed by the single excitation \(e^{\beta_{1}Y_{0}X_{2}}\). Hence in this circuit, 6 CNOTs are needed to represent the ground state.
procedure to solve for the impurity GF using the approach outlined in section 2.1. This procedure can be summarised by noting that we consider the Anderson impurity model to be in thermodynamic equilibrium with a reservoir of particles, hence it is treated as a grand canonical ensemble with the number of electrons in the Hubbard impurity set by particle-hole symmetry. In order to find the ground state of \(\hat{H}_{\text{And}}\) at each DMFT iteration, the UCCSD [49] ansatz is optimized using the VQE [8].
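Schematically, the self-consistency cycle can be summarised by the following skeleton, in which the impurity solver and the bath-fitting routine are supplied as callables; they stand for the quantum Lanczos GF and the minimisation of Eq. 16, respectively, and are not part of any particular library.

```python
import numpy as np

def dmft_loop(impurity_gf, fit_bath, bath0, iw, mu, tol=1e-4, max_iter=20):
    """Skeleton of the DMFT self-consistency cycle described in the text.

    impurity_gf(bath) -> G_imp(iw)   : impurity solver (here: quantum Lanczos GF)
    fit_bath(delta_sc, bath) -> bath : minimisation of the cost function of Eq. 16
    Both callables are supplied by the user; this sketch only fixes the control flow.
    """
    bath = bath0
    delta_sc_old = None
    g_imp = None
    for it in range(max_iter):
        g_imp = impurity_gf(bath)                    # solve the Anderson model (Eq. 13)
        delta_sc = iw + mu - g_imp / 2               # self-consistency condition (Eq. 15)
        bath = fit_bath(delta_sc, bath)              # update bath parameters (Eqs. 14, 16)
        if delta_sc_old is not None:
            err = np.max(np.abs(delta_sc - delta_sc_old))   # convergence error tau
            if err < tol:
                break
        delta_sc_old = delta_sc
    return bath, g_imp
```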
## 3 Results
### Green's Functions of Hubbard Chains
In Fig. 3 we compare a statevector simulated (noiseless) quantum GF to the classically calculated GF (throughout this work, results labelled 'classical ED' correspond to classical exact diagonalisation used to obtain the GF in the Lehmann representation at zero temperature [6]), for the non-interacting (\(U=0\)), intermediate (\(\left|\frac{U}{t}\right|=2\)), and strongly correlated (\(\left|\frac{U}{t}\right|=8\)) regimes of the Hubbard dimer. In Fig. 3, the spectral function and real part of a diagonal element of the GF are plotted. The spectral function (which counts the number of states per energy normalised by \(\pi\)) is obtained from the imaginary part of the diagonal elements of the GF matrix
\[A(\omega)=-\frac{1}{\pi}\,\mathfrak{Im}\left[\mathrm{Tr}\,\mathbf{G}(\omega+\mathrm{i}\delta)\right], \tag{17}\]
where \(\delta\) is a broadening term, and we set \(\delta=0.01\). Both strategies _i)_ and _ii)_ (described in section 2.1) are used in Fig. 3, showing their equivalence in terms of physical results. In terms of the circuit resources used in either case, interesting differences arise due to the differences in state preparation and operator expectation measurement. In this case, the exact solution is obtained (i.e. the continued fraction GF matches the GF calculated from exact diagonalisation (ED)) for maximum \(l=2\). This corresponds to a maximum Hamiltonian moment of \(n=3\), as in Eq. 1. In Fig. 4 the spectral function is calculated for a 4-site Hubbard model, with an increasing number of Lanczos roots (i.e. increasing dimensionality of the Krylov space). This shows that at 8 Lanczos roots, reasonable accuracy is obtained in the position of spectral peaks (energies which exhibit poles of the GF), with most of the deviation relative to the exact result exhibited in the weights (rather than positions) of the peaks near \(\omega\sim\pm 2\) (throughout this paper, \(\omega\) is in units of \(t\), and we set \(t=1\)).
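For reference, the 'classical ED' curves used throughout for comparison follow the standard zero-temperature Lehmann construction; a self-contained numpy sketch, using a single fermionic level purely as a runnable placeholder for the Hamiltonian and ladder operator, is given below together with Eq. 17.

```python
import numpy as np

def lehmann_gf_diag(H, f_i, omega, delta=0.01):
    """Zero-temperature Lehmann representation of the diagonal element G_ii(omega + i*delta).

    H   : Hamiltonian matrix
    f_i : annihilation operator for spin orbital i (same basis as H)
    """
    evals, evecs = np.linalg.eigh(H)
    gs, e0 = evecs[:, 0], evals[0]
    fd = f_i.conj().T
    z = omega + 1j * delta
    g = np.zeros_like(z, dtype=complex)
    for m in range(len(evals)):
        em = evals[m]
        a_p = np.vdot(evecs[:, m], fd @ gs)   # particle (electron addition) amplitude
        a_h = np.vdot(evecs[:, m], f_i @ gs)  # hole (electron removal) amplitude
        g += abs(a_p) ** 2 / (z - (em - e0)) + abs(a_h) ** 2 / (z + (em - e0))
    return g

def spectral_function(g):
    """A(omega) = -Im G / pi, Eq. 17 applied to a diagonal element."""
    return -g.imag / np.pi

# runnable placeholder: a single level at energy eps, giving a peak of A at omega = eps
eps = 0.7
H = np.diag([0.0, eps])                       # basis {|0>, |1>}
f = np.array([[0.0, 1.0], [0.0, 0.0]])        # annihilation operator
omega = np.linspace(-2, 2, 801)
A = spectral_function(lehmann_gf_diag(H, f, omega))
print("peak near omega =", omega[np.argmax(A)])
```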
Figure 3: Noiseless simulations of the Quantum Computed Green’s function for the Hubbard dimer when \(U=0\) (top panels), \(\left|\frac{U}{t}\right|=2\) (middle panels), and when \(\left|\frac{U}{t}\right|=8\) (bottom panels), using two different initialisation strategies for the quantum Lanczos routine: blue ‘\(+\)’ corresponds to strategy _i)_, while green ‘\(\times\)’ represents strategy _ii)_. In the legend, spin orbital indexes \(ij\) have been omitted and \(\hat{H}^{n,(p)}=\hat{f}\hat{H}^{n}\hat{f}^{\dagger}\), \(\hat{H}^{n,(h)}=\hat{f}^{\dagger}\hat{H}^{n}\hat{f}\). Left panels show the spectral function (see text above Eq. 17 for units), and right panels show the real part of the \(G_{00}\) element (in units of \(\frac{1}{t}\) where we use \(t=1\)). We note that in this case the Lanczos routine reproduces the GF obtained from ED.
#### 3.1.1 Strategy i) Initial Lanczos Vector Preparation: Hubbard Model
Following the noiseless VQE optimization of the ground state ansatz (Fig. 2), the ground state of the interacting Hubbard dimer exhibits the following expansion in terms of ON basis vectors
\[\begin{split}|\Psi_{\text{GS}}\rangle&=c_{1100}|1100 \rangle+c_{1001}|1001\rangle\\ &+c_{0110}|0110\rangle+c_{0011}|0011\rangle,\end{split} \tag{18}\]
where the coefficients depend on the Hubbard parameters \(U\) and \(t\), with \(c_{1100}=c_{0011},c_{1001}=-c_{0110}\) due to symmetry and fermionic exchange, and for the non-interacting \(U=0,t=1\) case, \(|c_{1100}|=|c_{0011}|=|c_{1001}|=|c_{0110}|=1/2\). Taking the diagonal elements of the particle and hole GFs \(g_{00}^{\text{(p)}}\) and \(g_{00}^{\text{(h)}}\) as an example, which require preparation of the Lanczos vectors \(|\Psi_{00}^{\text{(p)}}\rangle=\frac{\hat{f}_{0}^{\dagger}|\Psi_{\text{GS}}\rangle}{\sqrt{n_{00}^{\text{(p)}}}}\) and \(|\Psi_{00}^{\text{(h)}}\rangle=\frac{\hat{f}_{0}|\Psi_{\text{GS}}\rangle}{\sqrt{n_{00}^{\text{(h)}}}}\) to measure \(\langle\hat{H}_{ii}^{\text{n,(p)}}\rangle\) and \(\langle\hat{H}_{ii}^{\text{n,(h)}}\rangle\), respectively, the following expansions for these Lanczos vectors are obtained after applying the ladder operators to the ground state
\[\begin{split}|\Psi_{00}^{\text{(p)}}\rangle&=\frac{ c_{0110}}{\sqrt{n_{00}^{\text{(p)}}}}|1110\rangle+\frac{c_{0011}}{\sqrt{n_{00}^{ \text{(p)}}}}|1011\rangle,\end{split} \tag{19}\]
\[\begin{split}|\Psi_{00}^{\text{(h)}}\rangle&=\frac{ c_{1100}}{\sqrt{n_{00}^{\text{(h)}}}}|0100\rangle+\frac{c_{1001}}{\sqrt{n_{00}^{ \text{(h)}}}}|0001\rangle.\end{split} \tag{20}\]
The quantum circuits representing Eqs. 19 and 20 (in addition to the off-diagonal terms) can then be prepared using multicontrolled Givens rotations to obtain arbitrary (particle-conserving) linear combinations of basis states [46]. In strategy _i)_, in contrast to strategy _ii)_, we do not measure the expectation of sandwiched moments with respect to the ground state, but rather take the expectation of the moments \(\hat{H}^{n}\) with respect to the Lanczos vectors. Hence, the Pauli strings to be measured for each GF element do not change with the element index of the GF matrix. The Pauli strings take the following form after JW encoding the fermionic Hamiltonian moments (using Hubbard parameters \(U=2,t=1\) as an example)
Figure 4: Spectral function of the 4-site Hubbard model when \(\left|\frac{U}{t}\right|=2\), using noiseless simulations of Quantum Computed Green’s functions, using strategy _ii)_ for the quantum Lanczos routine. The convergence of the spectral function with respect to the number of Lanczos roots (maximum value of \(l\)) is observed. Peak positions are reasonably converged for \(\geq\) 8 Lanczos roots. Dashed black line shows the result from exact diagonalisation. See text above Eq. 17 for units.
\[\begin{split}&\hat{H}^{1}\mapsto\frac{1}{2}\bigg{(}Z_{0}Z_{1}+Z_{2}Z_{3}-X _{0}Z_{1}X_{2}-Y_{0}Z_{1}Y_{2}\\ &-X_{1}Z_{2}X_{3}-Y_{1}Z_{2}Y_{3}\bigg{)},\end{split} \tag{21}\]
\[\begin{split}&\hat{H}^{2}\mapsto\frac{1}{2}\bigg{(}3I-X_{0}X_{1}Y_{2}Y_{3}+X_{0}Y_{1}Y_{2}X_{3}+Y_{0}X_{1}X_{2}Y_{3}\\ &-Y_{0}Y_{1}X_{2}X_{3}+Z_{0}Z_{1}Z_{2}Z_{3}-Z_{0}Z_{2}-Z_{1}Z_{3}\bigg{)},\end{split} \tag{22}\]
\[\begin{split}&\hat{H}^{3}\mapsto\frac{1}{2}\bigg{(}X_{0}X_{1}X_{2}X_{3 }+X_{0}Y_{1}X_{2}Y_{3}-3X_{0}Z_{1}X_{2}\\ &+2X_{0}X_{2}Z_{3}+Y_{0}X_{1}Y_{2}X_{3}+Y_{0}Y_{1}Y_{2}Y_{3}-3Y_ {0}Z_{1}Y_{2}\\ &+2Y_{0}Y_{2}Z_{3}+2Z_{0}X_{1}X_{3}+2Z_{0}Y_{1}Y_{3}+2Z_{0}Z_{1} \\ &-Z_{0}Z_{3}-3X_{1}Z_{2}X_{3}-3Y_{1}Z_{2}Y_{3}-Z_{1}Z_{2}+2Z_{2}Z _{3}\bigg{)}.\end{split} \tag{23}\]
(with \(I\equiv I_{0}I_{1}I_{2}I_{3}\)). However, the circuits corresponding to the Lanczos vectors do change with GF matrix element index, and grow rapidly with the size of the system. Considering the scaling of the number of terms in Eq. 18 with respect to the number of qubits, the number of terms required to exactly represent the Lanczos vectors (Eqs. 19 and 20) scales exponentially. Hence, the feasibility of strategy _i)_ largely depends on how the Lanczos vectors are prepared on the quantum circuit.
A well known issue with the Lanczos method is the degradation of the Lanczos basis due to floating point precision [42]. Typically, this degradation occurs for large values of \(l\), where \(\beta_{l}\) tends to become small and hence numerical noise is amplified when dividing by \(\beta_{l}\) to normalise \(|v_{l}\rangle\). In classical schemes, this is typically treated by re-orthogonalising the current \(|v_{l}\rangle\) to the set of previously calculated vectors \(\{|v_{k<l}\rangle\}\). However, quantum noise and the statistical nature of circuit measurements required for the evaluation of Hamiltonian moments can also affect the Lanczos basis and associated Lanczos coefficients calculated from the measured moments.
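For context, a purely classical numpy sketch of the Lanczos recursion with optional full re-orthogonalisation is shown below; in the quantum workflow the coefficients are instead assembled from measured Hamiltonian moments, but the sketch makes explicit where an error in \(\beta_{l}\) enters the normalisation of the next Lanczos vector.

```python
import numpy as np

def lanczos(H, v0, n_roots, reorthogonalise=True):
    """Three-term Lanczos recursion producing coefficients alpha_l, beta_l.

    H       : Hermitian matrix
    v0      : starting vector, e.g. f_i^dag |GS> / sqrt(n)
    n_roots : number of Lanczos roots (maximum l)
    """
    v = v0 / np.linalg.norm(v0)
    basis = [v]
    alphas, betas = [], []
    w = H @ v
    for l in range(n_roots):
        alpha = np.vdot(v, w).real          # alpha_l = <v_l|H|v_l>
        w = w - alpha * v
        if l > 0:
            w = w - betas[-1] * basis[-2]
        if reorthogonalise:                 # guard against loss of orthogonality
            for u in basis:
                w = w - np.vdot(u, w) * u
        alphas.append(alpha)
        beta = np.linalg.norm(w)            # beta_l sets the norm of the next vector
        if beta < 1e-12 or l == n_roots - 1:
            break
        betas.append(beta)
        v = w / beta
        basis.append(v)
        w = H @ v
    return np.array(alphas), np.array(betas)

# small runnable example on a random Hermitian matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
a, b = lanczos((A + A.T) / 2, rng.normal(size=8), n_roots=4)
print(a, b)
```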
To study this, the imaginary part of a diagonal element of \(\mathbf{G}\) for the Hubbard dimer is obtained from measurements of expectations of Hamiltonian moments corresponding to the particle (\(g_{00}^{\rm(p)}\)) and hole (\(g_{00}^{\rm(h)}\)) contributions to \(G_{00}\). The results are obtained from the Quantinuum H1-1 emulator, a classical device emulator with a noise model corresponding to the noise profile of the H1-1 device [37]. These emulated experiments correspond to one of the 14 measurable circuits to be described in section 3.2, representing the upper diagonal elements of the particle and hole GF matrices, both with 23 (7) total (2-qubit) gates and depth 13. These are plotted alongside the normalised absolute errors in \(\alpha_{l}\) and \(\beta_{l}\) resulting from measurements, and the diagonal and off-diagonal components of the overlap matrix elements calculated classically from the Lanczos vectors which in turn are built from the measured \(\alpha_{l}\) and \(\beta_{l}\). Since the ED result for the Hubbard dimer GF is recovered for maximum \(l=2\), only \(\alpha_{1},\alpha_{2}\), and \(\beta_{1}\) are used in this case, and we average the error in \(\alpha_{l}\). Hence the errors in the Lanczos coefficients are calculated as
\[|\Delta\alpha^{\rm(p/h)}|=\frac{1}{2}\bigg{(}\bigg{|}\frac{\alpha_{1}^{\rm(p/ h)}-\alpha_{1,\rm exact}^{\rm(p/h)}}{\alpha_{1,\rm exact}^{\rm(p/h)}} \bigg{|}+\bigg{|}\frac{\alpha_{2}^{\rm(p/h)}-\alpha_{2,\rm exact}^{\rm(p/h)} }{\alpha_{2,\rm exact}^{\rm(p/h)}}\bigg{|}\bigg{)} \tag{24}\]
\[|\Delta\beta^{\rm(p/h)}|=\bigg{(}\bigg{|}\frac{\beta_{1}^{\rm(p/h)}-\beta_{1, \rm exact}^{\rm(p/h)}}{\beta_{1,\rm exact}^{\rm(p/h)}}\bigg{|}\bigg{)} \tag{25}\]
where \(\alpha_{l,\rm exact}^{\rm(p/h)},\beta_{l,\rm exact}^{\rm(p/h)}\) are noiseless ideal values of the Lanczos coefficients. The next Lanczos vector \(|v_{2}\rangle\) is then constructed from these measured Lanczos coefficients, and its norm \(\langle v_{2}|v_{2}\rangle\) and overlap with \(v_{1}\) are obtained classically (since the ground state parameters are obtained from an ideal noise free calculation, and the vector norms are calculated classically, the first Lanczos vector \(|v_{1}^{\rm(p/h)}\rangle=|\Psi_{00}^{\rm(p/h)}\rangle\) has unity norm by construction). The results of 4 separate emulator runs, corresponding to 4 separate sets of measurements, are shown in Fig. 5. It can be observed that, for a given set of measurements, the error in \(\beta_{l}\) resulting from the quantum noise correlates to the deviation from unity norm of \(v_{2}\), which is not unexpected since \(\beta_{l}\) governs the normalisation of the Lanczos vectors. This error in normalisation also translates to the deviation of spectral peak heights from the exact result, which gives a qualitative understanding of the effect of quantum noise in \(\beta_{l}\) on the resulting GF: within each plot of Fig. 5, a larger \(|\Delta\beta^{\rm(p/h)}|\) results in a worse relative peak height (compared to ED) for \(g_{00}^{\rm(p/h)}\), where superscript "p" ("h") denotes contribution to the positive (negative) frequency spectral peaks.
Also, quantum noise at the circuit level can have a cumulative effect on the orthogonalisation of the Lanczos vectors at higher roots, the study of which requires measurements of corresponding higher moments of the Hamiltonian with respect to the ground state for models larger than the Hubbard dimer. Such measurements require circuits too deep for the current NISQ era, as errors due to gate infidelities and qubit decoherence would accumulate and dominate the measurements of associated Pauli operators. Hence, we leave the effect of quantum noise on the orthogonalisation of Lanczos vectors for higher lying roots as an open question for future studies, although we note the investigation of Vallury _et al._ [30] into the effect of noise and device errors on the arithmetical operations involved in the assembly of cumulants, in which it was found that infimum estimates of the ground state energy are improved in accuracy and more robust to device noise relative to variational approaches to optimizing \(\langle\hat{H}(\mathbf{\theta})\rangle\) [31]. The relation between numerical errors in the Lanczos basis and quantum noise could have interesting implications for error mitigation, which we briefly discuss in the conclusion.
#### 3.1.2 Strategy ii) Sandwiched Moment Expectation: Hubbard Model
Using the particle GF with \(U=2,t=1\) as an example, the first, second, and third powers of the Hamiltonian, sandwiched between ladder operators, and contributing to the diagonal element \(g_{00}^{(p)}\), are mapped to the following sums of Pauli operator strings via JW encoding.
\[\begin{split}\hat{f}_{0}\hat{H}^{1}\hat{f}_{0}^{\dagger}\mapsto\bigg{(}&-\frac{1}{4}Z_{0}X_{1}Z_{2}X_{3}-\frac{1}{4}Z_{0}Y_{1}Z_{2}Y_{3}-\frac{1}{4}Z_{0}Z_{1}\\ &+\frac{1}{4}Z_{0}Z_{2}Z_{3}-\frac{1}{4}X_{1}Z_{2}X_{3}-\frac{1}{4}Y_{1}Z_{2}Y_{3}-\frac{1}{4}Z_{1}+\frac{1}{4}Z_{2}Z_{3}\bigg{)},\end{split} \tag{26}\]
Figure 5: Imaginary part of \(G_{00}(\omega)\) obtained from the Quantinuum H1-1 emulator, from 4 separate runs. For each run, the associated errors in Lanczos coefficients are shown, along with the norms of the Lanczos vector constructed from the (moment-)measured coefficients.
\[\hat{f}_{0}\hat{H}^{2}\hat{f}_{0}^{\dagger}\mapsto\bigg{(}\frac{3}{4}I+\frac{3}{4}Z_{0}-\frac{1}{4}Z_{0}Z_{1}Z_{2}Z_{3}-\frac{1}{4}Z_{0}Z_{1}Z_{3}+\frac{1}{4}Z_{0}Z_{2}-\frac{1}{4}Z_{1}Z_{2}Z_{3}-\frac{1}{4}Z_{1}Z_{3}+\frac{1}{4}Z_{2}\bigg{)}, \tag{27}\]
\[\hat{f}_{0}\hat{H}^{3}\hat{f}_{0}^{\dagger}\mapsto\bigg{(}-\frac{3 }{4}Z_{0}X_{1}Z_{2}X_{3}-\frac{1}{2}Z_{0}X_{1}X_{3}-\frac{3}{4}Z_{0}Y_{1}Z_{2}Y _{3} \tag{28}\] \[-\frac{1}{2}Z_{0}Y_{1}Y_{3}-\frac{1}{2}Z_{0}Z_{1}-\frac{1}{4}Z_{0 }Z_{1}Z_{2}+\frac{1}{2}Z_{0}Z_{2}Z_{3}+\frac{1}{4}Z_{0}Z_{3}\] \[-\frac{3}{4}X_{1}Z_{2}X_{3}-\frac{1}{2}X_{1}X_{3}-\frac{3}{4}Y_{1 }Z_{2}Y_{3}-\frac{1}{2}Y_{1}Y_{3}-\frac{1}{2}Z_{1}\] \[-\frac{1}{4}Z_{1}Z_{2}+\frac{1}{2}Z_{2}Z_{3}+\frac{1}{4}Z_{3} \bigg{)}\]
In contrast to strategy _i)_, in strategy _ii)_ the measurable Pauli strings contributing to each GF element change with the matrix element index. This is exemplified for 2, 3, and 4 Hubbard sites (4, 6, and 8 qubits, respectively) in Fig. 6, in which the number of Pauli strings in \(\hat{f}_{0}\hat{H}^{n}\hat{f}_{0}^{\dagger}\), and in \((\hat{f}_{0}+\hat{f}_{2})\hat{H}^{n}(\hat{f}_{0}^{\dagger}+\hat{f}_{2}^{\dagger})\) (contributing to the off-diagonal element \(g_{02}^{(\mathrm{p})}\)), are plotted as a function of \(n\).
Interestingly, the total number of individual Pauli terms does not necessarily increase with the Hamiltonian power. This effect manifests in two ways: _1)_ The concatenation of Pauli strings that occurs when taking the power can result in the collapse of products into smaller equivalent strings; this is evident for smaller numbers of qubits where the number of Pauli strings can oscillate for even and odd powers of \(\hat{H}\), however the importance of this decreases rapidly as the system size grows (see Fig. 6). _2)_ In general, the maximum number of Pauli strings for \(N\) qubits is \(4^{N}\), however the Hamiltonian will have less terms than this (depending on the interactions of the model) for a given \(N\) which will affect the number of individual strings that result from \(n\) powers of \(\hat{H}\); our results show that the number of Pauli strings for the Hubbard Hamiltonian saturates for values of \(n\) large enough for the Lanczos procedure to sufficiently cover the Hilbert (sub)space of the model (in general of lower dimensionality of the full Hilbert space of \(N\) qubits due to symmetry), which occurs at large enough values of the Lanczos index \(l\) defined in section 2.1.
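The counts shown in Fig. 6 can be reproduced for small registers by brute force: expand the operator in the Pauli basis via Hilbert–Schmidt projections and count the strings with non-negligible weight. The numpy sketch below does this for powers of the Hubbard dimer Hamiltonian written in terms of the Pauli terms of Eq. 21; the sandwiched operators \(\hat{f}_{i}\hat{H}^{n}\hat{f}_{i}^{\dagger}\) can be treated identically by first multiplying by the JW matrices of the ladder operators.

```python
import numpy as np
from itertools import product

PAULI = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def string_to_matrix(s):
    m = np.array([[1.0 + 0j]])
    for p in s:
        m = np.kron(m, PAULI[p])
    return m

def count_pauli_strings(H, n_qubits, tol=1e-10):
    """Number of Pauli strings with non-negligible weight in the Hilbert-Schmidt
    expansion H = sum_P c_P P, with c_P = Tr(P H) / 2^n."""
    dim = 2 ** n_qubits
    count = 0
    for s in product("IXYZ", repeat=n_qubits):
        c = np.trace(string_to_matrix(s) @ H) / dim
        if abs(c) > tol:
            count += 1
    return count

# Hubbard dimer Hamiltonian in its qubit form (the Pauli terms of Eq. 21)
terms = [("ZZII", 0.5), ("IIZZ", 0.5), ("XZXI", -0.5),
         ("YZYI", -0.5), ("IXZX", -0.5), ("IYZY", -0.5)]
H = sum(c * string_to_matrix(s) for s, c in terms)

for n in range(1, 5):
    print(n, count_pauli_strings(np.linalg.matrix_power(H, n), 4))
```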
The resulting manageable number of Pauli terms of small models, in addition to the optimized qubit excitation ansatz circuit for the ground state presented in section 2.2 (in this strategy, the circuits corresponding to the _bra_ and _ket_ of \(\langle\hat{H}_{ii}^{\mathrm{n,(p/h)}}\rangle\) do not change with GF matrix index, unlike strategy _i)_), facilitate the quantum calculation of the full GF matrix of the Hubbard dimer on Quantinuum's H1 ion-trap machine, the results of which are presented in the next section.
### Green's Functions Calculated from Hardware Experiments
#### 3.2.1 Hubbard Model
In Fig. 7, the spectral function of the Hubbard dimer is obtained from the H1-1 trapped ion quantum computer, and the Quantinuum H1-1 emulator (H1-1E), with its noise model calibrated to the noise profile of H1-1 [37]. Both the particle and hole GF matrices contain 16 elements, which immediately reduces to 10 elements each due to symmetry about the diagonal [6]. This would naively result in 20 measurable circuits for the full GF (10 each for the particle and hole GFs): one circuit per matrix element for particle and hole matrices (despite particle-hole symmetry, we explicitly calculate both the particle and hole matrices to more comprehensively test the performance on hardware). Due to certain elements within the particle and hole GF matrices being identical for the Hubbard dimer in this regime, the final total number of measurable circuits for the Hubbard dimer is 14. The total number of gates (2-qubit ZZ gates) per circuit ranges from 23 to 27 (7 to 9), with depth ranging from 13 to 18. Measurements of each circuit are performed with 8192 shots. Excellent agreement is observed in the spectral peak positions, indicating an accurate description of the excitation energies involved in the particle/hole transitions, when compared to the ideal classical simulation. This is consistent with recent investigations into the robustness of this method to quantum measurement noise and device errors for infimum estimates of the ground state energy [30]; our results indicate a similar robustness for low lying excitation energies. The quality of the results is also indicative of low device errors on the H1-1 trapped ion quantum computer, since measurements of Hamiltonian moment expectations were performed without error mitigation. We also note the excellent agreement between H1-1 and its emulator, which is relevant for later sections involving larger models with circuits too deep for real hardware.
The heights of the two central peaks in Fig. 7 are underestimated by an average (between positive and negative peaks) of \(\sim\)18% (17%) for H1-1 (H1-1E). Repeated runs on the emulator H1-1E show this is roughly consistent (Fig. 8), yielding a mean underestimation of 15% over the 5 runs with a standard deviation of approximately 2.5%. As discussed in section 3.1.1, quantum noise resulting from Hamiltonian moment evaluation affects the normalisation of Lanczos vectors via errors in the \(\beta_{l}\) Lanczos coefficients, which can translate into errors in the spectral peak heights. Hence, while accurate in the position of low energy peaks, the GF obtained from quantum computed moments presents small quantitative errors. Whether these errors are large enough to prevent application
Figure 6: Number of Pauli strings in sandwiched moment operators (for diagonal and off-diagonal elements) versus Hamiltonian moment index \(n\), using 4, 6, and 8 qubits, for non-interacting \(U=0\) (left panels) and interacting \(\left|\frac{U}{t}\right|=2\) (right panels) Hubbard models. For the interacting case with 4 qubits, oscillations in the number of Pauli strings are observed for even and odd \(n\); the magnitude of these oscillations rapidly decreases as the system size increases. For larger numbers of qubits, the shape of the curve (semilog plot) is characteristic of polynomial scaling, and indicates a roughly convergent number of Pauli strings as a function of \(n\) as the Lanczos procedure approaches the full dimensionality of the Hilbert space (or symmetry-reduced subspace thereof). For 8 qubits, dotted lines are shown to indicate saturated numbers of Pauli strings.
of this quantum GF approach to more elaborate simulations, such as quantum embedding within DMFT, remains an interesting question. In the next section, we address this issue and demonstrate the usefulness of this technique for DMFT.
#### 3.2.2 DMFT
The quantum computed moments approach is also used to obtain the GF of the impurity site in a DMFT algorithm. In Fig. 9, the impurity GF corresponding to 1 bath site is quantum computed at the final DMFT step in which the Hamiltonian corresponds to converged bath parameters (where convergence was obtained using ideal noiseless simulations of the quantum GF for previous iterations), using measurements of expectations of Hamiltonian moments performed on the H1-1 trapped ion quantum computer (real and emulated hardware) [37]. In this case 2 Lanczos roots were used, which reproduces exact diagonalisation (consistent with the Hubbard dimer). For 1 bath site, in each DMFT iteration in which the GF is obtained from the quantum circuit, 14 circuits were measured (corresponding to 14 impurity+bath GF matrix elements, similar to the Hubbard dimer), ranging from
Figure 8: Quantum computed GF obtained from the classical emulator of the H1-1 trapped ion quantum computer, with a noise model calibrated to current hardware data. Solid black line shows the result from classical ED. See text above Eq. 17 for units.
Figure 7: Quantum computed GF obtained from the H1-1 trapped ion quantum computer. Solid black line shows the result from classical ED. See text above Eq. 17 for units.
58 (22) to 67 (25) gates (2-qubit gates), with depths ranging from 44 to 51. Each circuit is measured with 8192 shots. The larger number of gates compared to the Hubbard dimer is a result of the reduced symmetry of the impurity-bath Hamiltonian for DMFT. Excellent agreement is observed between the hardware, emulator, and noiseless simulations.
As a step further, a hardware emulator in which the noise model is calibrated to H1-1 was also used to quantum compute the GF at each iteration of DMFT, for 20 iterations. The resulting spectral function is shown in Fig. 10. We observe that the error in \(\mathbf{\Delta}^{\text{sc}}\) is approximately 0.000237, a value sufficiently low to obtain a reasonably accurate solution to the impurity problem. Hence the error due to quantum noise from the H1-1 emulator was sufficiently low to run the DMFT algorithm applied to 1 impurity site and 1 bath site for multiple iterations.
We also compute the impurity GF corresponding to converged bath parameters for 2 bath sites in the DMFT algorithm, in which 8 Lanczos roots are calculated. This is estimated to be reasonably accurate for 2 and 3 bath sites, in accordance with Fig. 4 in which it was shown that 8 Lanczos roots provide a good approximation to exact diagonalisation for 4 Hubbard sites. For 2 bath sites, 65 circuits were measured (emulated hardware) at the final DMFT iteration to obtain the impurity+bath GF, ranging from 193 (81) to 204 (86) total gates (2-qubit gates), with depths of 212 to 214. The spectral function is shown in Fig. 11. In this case 3 separate applications of the quantum method are applied to the emulator H1-1E, in order to simulate 3 separate hardware runs and assess the variation of results. We observe that the positions of low lying spectral peaks generally agree well with the exact result, and are also stable against statistical variations in measurements. Higher lying spectral peaks differ in position from the exact result more significantly, while peak heights (associated with normalisation of Lanczos vectors) are also less accurate and are more sensitive to variations due to measurement statistics.
Figure 10: Quantum computed impurity GF after 20 DMFT iterations, in which the emulator of the H1-1 trapped ion quantum computer is used to compute the impurity GF at each iteration. Black line shows the final impurity GF obtained from classical ED. See text above Eq. 17 for units.
Figure 9: Quantum computed impurity GF, in which the H1-1 trapped ion quantum computer is used to compute the impurity GF at the final DMFT iteration, following classical impurity GF computations for the previous iterations. Solid black line shows the final impurity GF obtained from classical ED. See text above Eq. 17 for units.
Finally, ideal noiseless simulations in which the Hamiltonian moment expectations are evaluated classically are performed for 3 bath sites, and the quantum computed moments approach is used to compute the impurity GF at all DMFT iterations. The resulting spectral function is shown in Fig. 12. Disagreements arise in the total number of spectral peaks, and in the weight of low energy poles (consistent with observations made for the spectral function of the 4-site Hubbard model, see section 3.1 and Fig. 4). In this case, disagreements with classical ED are due to the continued fraction representation of the GF in the Lanczos method: the Lehmann representation of the GF used in the classical ED is exact for all temperatures T, including the limit T \(\to 0\), whereas this is not the case for the Lanczos calculated GF. In the latter, the exponential factor containing the inverse temperature \(\upbeta\) is discarded. Hence, despite the common practice, it is strictly an approximation to use a finite \(\upbeta\) in the fitting of the Lanczos-calculated impurity GF. However, this practice is justified by the common observation that the discrepancies between the Lanczos and ED impurity GFs vanish at sufficiently high \(\upbeta\) (sufficiently low T). We note that full DMFT convergence is achieved for the quantum Lanczos computed impurity GF, as well as for the classical ED GF; the DMFT algorithm simply finds slightly different solutions corresponding to the differences arising from approximations in the Lanczos computed GF. We choose a relatively low value of \(\upbeta\) for quicker convergence of the impurity GF, however we expect the differences between Lanczos and ED impurity GFs to vanish as \(\upbeta\to\infty\). Hence, the discrepancies here are not due to errors from quantum noise or measurements, and these results show that this quantum approach in principle works for multiple bath sites in DMFT.
## 4 Conclusions
In this paper, a quantum computational approach has been presented to calculate the single particle Green's function in a spin orbital basis, using a cumulant expansion of the Lanczos procedure involving quantum computed moments. This is implemented in the InQuanto package [34, 35], and is an extension of a recent work which presented the quantum computed moments approach to obtain infimum estimates of the ground state energy [30]. In our work, we utilise quantum computed moments to obtain the Lanczos coefficients.
Figure 11: Quantum computed impurity GF of DMFT with 1 impurity and 2 bath sites, in which the emulator of the H1-1 trapped ion quantum computer is used to compute the impurity GF at the final DMFT iteration, following classical impurity GF computations for the previous iterations. Solid black line shows the final impurity GF obtained from classical ED. 3 separate applications of the quantum method are applied to the emulator H1-1E, in order to simulate 3 separate hardware runs and assess the variation of results. See text above Eq. 17 for units.
Figure 12: Quantum computed impurity GF corresponding to 3 bath sites, calculated to full convergence of the DMFT algorithm. Hamiltonian moment expectation values correspond to noiseless calculations, yielding a corresponding noiseless GF. Solid black line represents classical ED. See text above Eq. 17 for units.
These coefficients can in turn be used to obtain the Green's function in the continued fraction representation.
Following a brief outline of the method, we show that our implementation allows for multiple strategies to initialise and run the Lanczos procedure; we present two separate strategies, one involving the explicit preparation of the first Lanczos vector on a quantum circuit (strategy _i_)), the other involving measurements of Pauli terms representing the Hamiltonian moments sandwiched between ladder operators indexed by the corresponding Green's function matrix element (strategy _ii_)). While strategy _i_) allows for a flexibility in the choice of representation of the Lanczos vector, we focus on strategy _ii_) for application of the method on trapped ion quantum computers due to the "NISQ-friendly" number of measurable Pauli terms for small sized Hubbard models.
Using our approach, we computed the GF matrix of the Hubbard dimer on the H1-1 trapped ion quantum computer, showing excellent agreement with the ideal noiseless result in terms of spectral peak positions. We also note the good agreement between H1-1 and H1-1E in Fig. 7, which also indicates the accurate approximation of hardware results when applying the emulator to larger models in section 2.3. Following the GF of the Hubbard Hamiltonian, we then apply our approach to obtain the impurity GF in a DMFT algorithm with up to 3 bath sites. Hardware results again show good agreement with the ideal noiseless result when applied to the final DMFT iteration, and emulated hardware results indicate that errors in the GF due to quantum noise do not prevent convergence of the DMFT algorithm when the quantum computed GF is applied to all DMFT iterations. We also note that no error mitigation has been applied to the hardware or emulator results in this paper.
We also studied the scaling of the number of measurable Pauli terms with respect to the Hamiltonian moment index, for the sandwiched moment operators required for strategy _ii_), indicating polynomial scaling and a saturation for high-lying moments. Finally, we mention our investigation of errors in the Lanczos basis and corresponding coefficients arising from quantum noise, and how these can impact elements of the GF matrix. As mentioned in section 3.1.1, an accurate description of the relation between quantum noise and errors in the Lanczos basis could lead to very useful techniques to correct for these errors. For example, knowledge of this relation could be used to design error mitigation protocols in which noisy measurements of Hamiltonian moments, or the resulting errors in the Lanczos coefficients, could be corrected/mitigated by known calibration data of a particular device. This could potentially widen the application domain of quantum computed Green's functions by allowing for larger system sizes, represented by larger, more error prone circuits. We consider this a fruitful direction for future work.
## Acknowledgements
We acknowledge Yu-ya Ohnishi for the productive discussions throughout this work. We also gratefully acknowledge Georgia Prokopiou and Ramil Nigmatullin for reading the manuscript and providing useful suggestions. We thank Andrew Tranter for helpful discussions on commuting sets of measurable Pauli terms. Finally, we thank Cono Di Paola for useful comments and suggestions during the implementation of the method.
|
2305.19743 | Towards Monocular Shape from Refraction | Refraction is a common physical phenomenon and has long been researched in
computer vision. Objects imaged through a refractive object appear distorted in
the image as a function of the shape of the interface between the media. This
hinders many computer vision applications, but can be utilized for obtaining
the geometry of the refractive interface. Previous approaches for refractive
surface recovery largely relied on various priors or additional information
like multiple images of the analyzed surface. In contrast, we claim that a
simple energy function based on Snell's law enables the reconstruction of an
arbitrary refractive surface geometry using just a single image and known
background texture and geometry. In the case of a single point, Snell's law has
two degrees of freedom, therefore to estimate a surface depth, we need
additional information. We show that solving for an entire surface at once
introduces implicit parameter-free spatial regularization and yields convincing
results when an intelligent initial guess is provided. We demonstrate our
approach through simulations and real-world experiments, where the
reconstruction shows encouraging results in the single-frame monocular setting. | Antonin Sulc, Imari Sato, Bastian Goldluecke, Tali Treibitz | 2023-05-31T11:09:37Z | http://arxiv.org/abs/2305.19743v1 | # Towards Monocular Shape from Refraction
###### Abstract
Refraction is a common physical phenomenon and has long been researched in computer vision. Objects imaged through a refractive object appear distorted in the image as a function of the shape of the interface between the media. This hinders many computer vision applications, but can be utilized for obtaining the geometry of the refractive interface. Previous approaches for refractive surface recovery largely relied on various priors or additional information like multiple images of the analyzed surface. In contrast, we claim that a simple energy function based on Snell's law enables the reconstruction of an arbitrary refractive surface geometry using just a single image and known background texture and geometry. In the case of a single point, Snell's law has two degrees of freedom, therefore to estimate a surface depth, we need additional information. We show that solving for an entire surface at once introduces implicit parameter-free spatial regularization and yields convincing results when an intelligent initial guess is provided. We demonstrate our approach through simulations and real-world experiments, where the reconstruction shows encouraging results in the single-frame monocular setting.
## 1 Introduction
When we look at objects through a refractive surface like water or glass, the object changes appearance, and its image is distorted (see Fig. 1). This distortion is caused by refraction that happens when light propagates to a medium that transmits light at a different speed. When the ray's path is obstructed with a refractive material, it causes a change of the light ray's trajectory on the interface based on its surface geometry.
This situation happens, for example, when looking into a body of water from above with a drone, or as a lifeguard at a pool. Reconstructing the surface can help compensate for its effects to see through it, and several applications can exploit the surface shape itself, such as sea condition monitoring or refractive object inspection.
While refraction can confuse many computer vision algorithms, the bending of the light also provides hints towards the shape of the surface. In the problem of reconstruction of such surfaces, each surface point has two unknowns: its location and its normal. Previously it was mostly solved in settings with multiple views [13, 16, 18] or multiple frames [21].
We consider a simpler setup with a single frame monocular camera with a known arbitrary background.
We ask, given an object point and its refracted image, can we determine the light path between them without knowing anything about the surface or its location? While for a single point this problem is ill-posed, we show that solving this problem for all points at once imposes implicit spatial regularization and yields a reconstruction of the full 3D shape of the refractive surface.
We propose an energy formulation such that we seek a surface that explains the refracted projection of the background object using Snell's law. To overcome the ambiguity, we optimize the whole surface in a single energy, which allows us to compute the necessary normal field. Furthermore, our energy works directly in world coordinates and minimizes the geometric errors such that we can obtain a full 3D reconstruction of the sought surface.
**Our contributions are:**
* We present a method that can estimate a complete 3D geometry of a refractive surface from a single distorted image with knowledge of its background texture and arbitrary depth using a standard off-the-shelf camera.
* Our method relies only on the physics of refraction, and the energy formulation is parameter-free.
The rest of the paper is organized as follows: In Sec. 2 we summarize existing works. In Sec. 3, we show the preliminaries necessary to understand our approach. In Sec. 4 we show how we optimize the unknown refractive surface via our proposed energy and in Sec. 5 we demonstrate several experiments and evaluations on synthetic and real-world scenes.
## 2 Related Work
A comprehensive analysis of the problem of refractive geometry in computer vision can be found in [11]. Here we review works related to our method.
**Multi-view Approaches.** Ben-Ezra _et al_. [4] presented a parametric method for shape and pose recovery of transparent objects from a known observer's motion. The seminal work of Kutulakos _et al_. [13] formulates conditions about the number of views and reference points under which an arbitrarily shaped specular or refractive scene can be reconstructed. They show that a surface of a refractive object can be retrieved from two viewpoints and one
Figure 1: [Left] A simulated scene with a refractive surface. We can observe a significant distortion of the background from the refraction, which is a function of the surface geometry. We exploit this relation and estimate the 3D geometry of the refractive surface from its image, knowing the background. [Right] An example of reconstruction using our approach.
reference point. Chang _et al_. [7] model the distortion from refraction as a depth-dependent function and reconstruct a refracted scene from multiple views. Han _et al_. [9] showed a shape estimation method based on altering a background pattern of an object immersed in water. The work of Morris _et al_. [14] introduces the reconstruction of dynamic refractive surfaces with an unknown refractive index by exploiting the refractive disparity in the multi-frame stereo setting. In [1] refraction through a dynamic surface was treated as a random event happening between the object of interest and the camera. An optimization framework for 3D reconstruction of a transparent object viewed from different viewpoints on a turntable is shown in [23].
A multi-view position-normal consistency was suggested in [16, 17, 18]. The constraint imposes consistency on the normal field between the different views. In [17] this constraint was used to estimate a refractive surface in a stereo setting with a known background texture and depth. In [18] this constraint was used for 3D shape reconstruction of a wavy water surface together with the refracted underwater scene from multiple views.
**Radiometric Approaches.** Chari _et al_. [8] showed that radiometric clues in addition to the geometric distortion can help for refractive object reconstruction. The method of Ihrke _et al_. [10] used a level set approach to reconstruct a dynamic dyed liquid using a fluorescent substance. In [2] spectral dependency of water is used to reveal object geometry immersed in water from the absorption coefficient by applying Beer-Lambert law.
**Monocular Reconstruction.** Pioneering work in monocular refractive surface reconstruction was introduced in [15]. They presented an approach for non-rigid transparent shape reconstruction by statistical and geometrical clues from surface motion over a known pattern using tens of frames. The work of Wetzstein _et al_. [22] uses specially designed patterns that can encode angular information of the scene and estimate surface geometry from a single image with the additional angular information provided by the pattern.
A recent learning-based method [20] uses a large set of synthetic images to train a model which can reconstruct complex refractive objects from a single image with a known distant background. Our method recovers the surface using the physics of refraction and does not rely on any training data. Thapa _et al_. [21] use a series of images under orthographic projection to learn the fluid dynamics of water and estimate a height map of a dynamic surface from images with a known background.
Similarly to our method, [19] presents an optimization approach in a monocular setting. However, it reconstructs a height field under an orthographic projection and requires a known flat-patterned background. The parallel viewpoint direction in the orthographic projection eliminates the need to estimate the absolute distance of the surface, since moving the surface away from the camera does not influence refraction. Unlike [19], our method estimates the absolute surface depth directly by optimizing reconstruction error on refracted light paths. Our energy optimizes over world coordinates instead of reprojection error in the image plane. This enables us to cope with the realistic setup of an arbitrary background shape, unlike the formulation of [19] which is limited solely to a flat background.
## 3 Refraction Through a Surface
Consider a camera whose reference frame is in the origin and pointing towards a positive \(Z\) coordinate. The matrix of the intrinsic parameters \(K\) of the camera is known. Furthermore,
consider a known background point \(B\), see Fig. 2 [left]. When the medium between point \(B\) and the camera is homogeneous, the path of the ray from the point to the observer (camera) forms a straight line, as is usually assumed in computer vision. Then we denote the image coordinate of \(B\) as \(\mathbf{x}_{B}\).
The situation changes when an object with a different refraction index is placed between the camera and the background. Light passing through the interface between the two media is refracted and the image of the background is distorted. We denote the image coordinate of the refracted ray as \(\mathbf{x}\), which is a projection of the 3D surface point \(X\) located on the refractive surface, see Fig. 2 [center].
In our setting, we assume only two homogeneous media: one medium where the observer is located, with an index of refraction (IOR) \(n_{1}\), and the other where the background is located, with IOR \(n_{2}\). With known IORs, we can apply Snell's law [12] which defines how refraction happens between two media. Formally, let us denote by \(\mathbf{l}\) the direction of the ray coming out of the camera and hitting the refractive surface at a point that has a normal \(\mathbf{n}\). Then, a ray exits the interface at direction \(\mathbf{s}\), given by Snell's law [12]
\[\mathbf{s}=\text{snell}\left(\mathbf{l},\mathbf{n},\mu\right):=-\mu\mathbf{l}+\mathbf{n}\cdot \left(\mu\alpha-\sqrt{1-\mu^{2}\left(1-\alpha^{2}\right)}\right)\ \, \tag{1}\]
where \(\mu=n_{1}/n_{2}\) is the ratio of the IORs and \(\alpha=\mathbf{l}^{T}\mathbf{n}\). By knowing the point \(\mathbf{x}\), we can calculate the line-of-sight (LOS) on which the point \(X\) must lie. The LOS is an unprojection of a point \(\mathbf{x}\),
\[\mathbf{l}\doteq K^{+}\mathbf{x}\ . \tag{2}\]
The critical yet straightforward fact is that the point \(X\) must lie on its corresponding observed LOS \(\mathbf{l}\). Given \(\mathbf{l}\) we know that for a distance \(d\) we can calculate a point \(X\)
\[X(d)\ =\ d\mathbf{l}\ \, \tag{3}\]
i.e., \(X\) lies on the ray \(\mathbf{l}\) at a distance of \(d\) from the camera origin where \(X=[x,y,z]^{T}\) is a surface point with a depth \(z\). Combining Eq. (1) and Eq. (3) yields the expression for the
Figure 2: The geometry of monocular imaging through a surface. [Left] Imaging without refraction. A background point at \(B\) is projected onto the image plane at \(\mathbf{x}_{B}\) with color \(I(\mathbf{x}_{B})\). [Center] Imaging through a homogeneous refractive material between \(B\) and the camera. The light emanating from the background point \(B\) is refracted and projected to the coordinate \(\mathbf{x}\). In this case, the light path consists of two line segments that go from \(\mathbf{x}\) through \(X\) and meet with the background point \(B\). [Right] Light path when the estimated refractive surface (red) is different from the correct one (blue). We measure the energy of a proposed solution \(E\) as the shortest distance between the estimated light path (\(\hat{X}+t\hat{\mathbf{s}}\), see Eq. (4)) and \(B\), which should (ideally) be intersected, in which case \(E=0\). The direction of the refracted ray \(\hat{\mathbf{s}}\) is calculated from the normal \(\hat{\mathbf{n}}\) and the line-of-sight \(\mathbf{l}\), and we know that the correct refracted surface point lies on \(\mathbf{l}\).
ray's light path that connects \(X\) and \(B\)
\[B=X+t\boldsymbol{s}\ \, \tag{4}\]
where \(t\) is the distance from \(X\) to the background point \(B=\left[x^{B},y^{B},z^{B}\right]^{T}\), see Fig. 2 [center].
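The forward model of Eqs. (1)–(4) is compact enough to state directly in code. The numpy sketch below (an illustration written for this exposition, not the authors' released implementation) unprojects a pixel, places a candidate surface point at distance \(d\), refracts the line of sight at a given normal, and returns a point on the refracted ray; the camera matrix and normal in the usage example are arbitrary.

```python
import numpy as np

def snell(l, n, mu):
    """Refracted direction s for incident direction l, surface normal n and
    IOR ratio mu = n1 / n2 (vector form of Snell's law, Eq. 1)."""
    l = l / np.linalg.norm(l)
    n = n / np.linalg.norm(n)
    alpha = l @ n
    return -mu * l + n * (mu * alpha - np.sqrt(1.0 - mu ** 2 * (1.0 - alpha ** 2)))

def line_of_sight(x_pix, K):
    """Unprojection of a pixel (Eq. 2): direction of the viewing ray in camera coordinates."""
    return np.linalg.pinv(K) @ np.array([x_pix[0], x_pix[1], 1.0])

def light_path_point(x_pix, K, d, n, mu, t):
    """Surface point X = d*l (Eq. 3) and a point X + t*s on the refracted ray (Eq. 4)."""
    l = line_of_sight(x_pix, K)
    X = d * l
    s = snell(l, n, mu)
    return X, X + t * s

# illustrative usage with a simple pinhole camera and an arbitrary surface normal
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
X, B_hat = light_path_point(x_pix=(300.0, 260.0), K=K, d=2.0,
                            n=np.array([0.0, 0.0, -1.0]), mu=1.0 / 1.33, t=1.0)
print(X, B_hat)
```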
## 4 Optimization
The input to the method is \(I_{B}\), an image of the background object acquired through a homogeneous medium (i.e., air), together with its depth \(B\) for each pixel, and \(I_{R}\), an image of the background object acquired through the refractive surface from the same viewpoint. Then, dense matches are found between these two images, which is done by estimating the flow field \(\boldsymbol{u}\) such that \(I_{B}(\boldsymbol{x}_{B})=I_{R}(\boldsymbol{x}_{B}+\boldsymbol{u})\).
Once the dense matches \(\boldsymbol{u}\) are found, the optimization solves for the refractive surface by finding the distance \(\boldsymbol{d}\) along the known LOS \(\boldsymbol{l}\) for each object point. The optimization aims to find the values \(\boldsymbol{d}\) for all points such that their light paths will pass through their respective background points \(B\) (see Fig. 2 [center]).
In each iteration of the optimization, we perform the following steps: 1) Calculating the surface point cloud (PC) using Eq. (3) with the current estimate of the distances \(\boldsymbol{d}\). 2) Calculating the normals \(\boldsymbol{n}\) from the entire PC. 3) Calculating \(\boldsymbol{s}\) using Eq. (1) and reconstructing the estimated light path in Eq. (4).
The steps are performed in each update of \(\boldsymbol{d}\) until a stop criterion on the value of the energy is met.
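Since the surface points are indexed on the pixel grid, the normals needed in step 2 can be obtained from finite differences of the point cloud. The sketch below shows one simple choice (the cross product of the two grid tangents); the exact scheme used in our implementation may differ.

```python
import numpy as np

def grid_normals(P):
    """Normals of a surface sampled on a pixel grid.

    P : (H, W, 3) array of 3D surface points X (one per pixel).
    Returns an (H, W, 3) array of unit normals from central differences.
    """
    du = np.gradient(P, axis=1)            # tangent along image x
    dv = np.gradient(P, axis=0)            # tangent along image y
    n = np.cross(du, dv)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(norm, 1e-12, None)

# example: a gently curved depth map on a 64x64 grid
H, W = 64, 64
xs, ys = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))
zs = 2.0 + 0.05 * np.cos(np.pi * xs) * np.cos(np.pi * ys)
P = np.stack([xs * zs, ys * zs, zs], axis=-1)   # points along each line of sight
normals = grid_normals(P)
print(normals.shape, np.linalg.norm(normals[32, 32]))
```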
### The Energy
In this part, we describe the energy we optimize to obtain surface geometry. We first show how we measure the error of the depth map when it does not form a correct light path.
In Sec. 3 we said that \(B\) has to lie on the trajectory defined in Eq. (4) (see Fig. 2 [center]). During the estimation of the surface, we reconstruct light paths using Eq. (4), which should ideally intersect with \(B\). For an incorrect \(d\), the path does not intersect with its \(B\), and we
Figure 3: A minimal synthetic example of the proposed energy \(E\) for a two-point surface \(d_{1},d_{2}\). The background plane is placed at \(B^{z}=3\) and the shown surfaces are energies of different combinations of \(d_{1},d_{2}\) with their ground truth location shown above. The values are the logarithm of \(E\) for better visibility. All energies show that the ground truth solution yields a global minimum. However, notice that the magnitude of the gradient gets very small around the location of the correct solution. Similarly, there is a ridge along a scalar multiple of the depth. The peak in the energy slightly deviated from the ground truth, because the normals are calculated from the numerical approximation of surface derivatives.
penalize the estimate by its smallest distance of the light path to its \(B\). Note that we have to reconstruct light paths for the entire image since normals are needed to calculate \(\mathbf{s}\) in Eq. (1).
The situation is schematically shown in Fig. 2 [right], where the light path does not intersect with the background point \(B\). In this case, the energy of a point is calculated as the shortest distance between the light path in Eq. (4) and its background point, \(B\). This point-line distance has the following closed form,
\[E(X,\mathbf{s},B):=\left\|B-[X+[\mathbf{s}\cdot(B-X)]\mathbf{s}]\right\|_{2} \tag{5}\]
where \(\cdot\) is a dot product and \(X\) is given by Eq. (3). The total energy we optimize for all points jointly is
\[\mathbf{d}^{*}=\arg\min_{\mathbf{d}}\sum_{i\in\Omega}E(X_{i},\mathbf{s}\left(\mathbf{l}_{i}, \mathbf{n}_{i},\mu\right),B_{i})+\max\{0,z_{i}-z_{i}^{B}\}\enspace, \tag{6}\]
where the normals \(\mathbf{n}_{i}\) are calculated from the point cloud \(\Omega\) of all surface points \(X_{i}(\mathbf{d})\) given by Eq. (3). The second term prevents the estimated depth \(z_{i}\) from falling behind the background depth \(z_{i}^{B}\).
Note that the normals and the PC are recalculated in each iteration of the optimization, but only \(\mathbf{d}\) is updated.
**Initialization.** The energy is non-convex and relies on a good initial guess, such as coarse information about the surface distance. For this purpose, we developed an initialization scheme in which we optimize the location of a plane perpendicular to the camera's principal axis. The plane location is defined by a single variable, a shift, which we can easily and quickly optimize with the energy, i.e., we optimize for \(c\) after plugging \(\mathbf{d}_{\text{flat}}=c\) as the depth into the energy in Eq. (6). The optimized plane is usually located roughly at the mean position of the ground-truth surface and serves as a good initial guess.
## 5 Experiments
To showcase our method, we compare it quantitatively to benchmarks with available ground truth [18, 21] and carry out a series of real-world experiments to estimate the surface of a liquid with a standard off-the-shelf camera. We have chosen a CPU Matlab implementation of L-BFGS [24], which does not require the calculation of the memory-prohibitive Jacobian or Hessian, as the number of variables equals the number of surface points. The run time of one energy evaluation is on average 2.06 ms for \(64\times 64\) px images, 7.26 ms for \(128\times 128\) px and 25.32 ms for \(256\times 256\) px images. For dense matches \(\mathbf{u}\) between the background and the refracted surface we used optical flow [6]. We optimize the energy for the entire input surface
simultaneously, but alternatively, the optimization step can be performed patch-wisely. The patch-wise computation can however hurt the quality in non-overlapping regions.
**Synthetic Experiments.** We compare our method to two methods, a multiview method [18] and a monocular method [21].
In the first experiment, we use the same benchmark functions presented in [18] with a perspective projection. The benchmarks are denoted _wave1_ and _wave2_. The surface _wave1_ is defined as \(z_{1}(x,y,t)=2+0.1\cos[\pi(t+50)\sqrt{(x-1)^{2}+(y-0.5)^{2}}/80]\). It can be intuitively seen as a diagonally rolling wave. The second benchmark _wave2_, \(z_{2}(x,y,t)=2-0.1\cos[\pi(t+60)\sqrt{(x+0.05)^{2}+(y+0.05)^{2}}/75]\) can be seen as a wave growing in the center. Since surfaces are smooth, a resolution of \(64\times 64\) px is sufficient to represent the surface. Scenes are rendered with Blender [5] with Cycles Engine. The rendering scripts and input images will be made available with the paper. Both functions are evaluated on 100 surfaces in \(t\in(0,99)\). We tested both benchmark functions with two background shapes to show that background geometry can be arbitrary. The first, a plane at \(z_{\text{flat}}=2.5\) and the second, a more complex shape, \(z_{\text{func}}(x,y)=2.5+0.05[\sin(2\pi x)+\cos(2\pi y)]\).
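The following short sketch reproduces the two benchmark surfaces and the two backgrounds from the formulas above; the sampling domain of \((x,y)\) is an assumption, as it is not specified here.

```python
import numpy as np

def wave1(x, y, t):
    # z1(x, y, t) = 2 + 0.1 cos[pi (t + 50) sqrt((x - 1)^2 + (y - 0.5)^2) / 80]
    return 2 + 0.1 * np.cos(np.pi * (t + 50) * np.sqrt((x - 1) ** 2 + (y - 0.5) ** 2) / 80)

def wave2(x, y, t):
    # z2(x, y, t) = 2 - 0.1 cos[pi (t + 60) sqrt((x + 0.05)^2 + (y + 0.05)^2) / 75]
    return 2 - 0.1 * np.cos(np.pi * (t + 60) * np.sqrt((x + 0.05) ** 2 + (y + 0.05) ** 2) / 75)

def background(x, y, flat=True):
    # z_flat = 2.5   or   z_func(x, y) = 2.5 + 0.05 [sin(2 pi x) + cos(2 pi y)]
    if flat:
        return np.full_like(x, 2.5)
    return 2.5 + 0.05 * (np.sin(2 * np.pi * x) + np.cos(2 * np.pi * y))

# 64 x 64 samples of the t = 0 surface; the unit-square (x, y) domain is an assumption
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
z = wave1(x, y, t=0)
```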
The results of the first experiment are shown in the upper part of Tab. 1 and in Fig. 4. We show results for three initialization schemes: the independent initialization is done for each frame \(t\) separately, and the sequential initialization uses the previous estimate as the initial value, which is similar to the technique used in [18]. Lastly, to demonstrate that our method works well when an initial surface location is provided, we initialized the method with the surface location at \(\boldsymbol{d}_{0}=2\), which roughly corresponds to the mean location of the sought surface. Error is reported both as the Root Mean Square Error (RMSE) of the reconstructed depth map and as the Mean Angular Error (MAE) of the estimated normals. We used the same calibration as [18], therefore the RMSE is measured in the same (arbitrary) units.
The errors vary with time as the surfaces become more complex, but the shape of the background surface has little influence on the results. We see that our method is sensitive to
| | Ours - indep. init (RMSE / MAE) | Ours - seq. init (RMSE / MAE) | Ours - \(\boldsymbol{d}_{0}=2\) (RMSE / MAE) | 9 Cameras [18] (RMSE / MAE) | 4 Cameras [18] (RMSE / MAE) |
|---|---|---|---|---|---|
| _wave1_-flat | 0.15 / 5.89° | 0.11 / 4.29° | 0.05 / 5.06° | 0.006 / 0.76° | 0.014 / 4° |
| _wave1_-func | 0.14 / 6.06° | 0.10 / 4.79° | 0.05 / 4.80° | | |
| _wave2_-flat | 0.23 / 11.16° | 0.45 / 13.28° | 0.04 / 5.89° | | |
| _wave2_-func | 0.23 / 10.85° | 0.43 / 13.18° | 0.04 / 5.92° | | |

| | Ours - indep. init (RMSE / MAE) | Full [21] (RMSE / MAE) | Single-input [21] (RMSE / MAE) |
|---|---|---|---|
| _ocean_ | 0.230 / 1.30° | | |
| _ripple_ | 0.217 / 0.649° | | |
| _tian_ | 0.248 / 1.36° | | |

Table 1: Quantitative comparison of reconstructed surfaces with a multi-view [18] and a monocular method [21]. RMSE is given in (arbitrary) units and MAE in degrees. The upper part of the table shows results on the two benchmark functions _wave1_ and _wave2_ in the perspective setting. To demonstrate that our method can handle different background depths, we conducted experiments with two different background depths defined by \(z_{\text{flat}}\) and \(z_{\text{func}}\). The table shows three different initialization schemes: the first column shows results when each frame is initialized separately, the second the sequential initialization where only the first frame is initialized and consecutive frames use the previous result, and the third where all frames use the same initial \(\boldsymbol{d}_{0}=2\). The lower part shows the evaluation of three different surfaces (_ocean_, _ripple_ and _tian_) compared to [21] (monocular setting under orthographic projection). Each of these surfaces consists of a ten-frame sequence generated with the code provided by [21].
initialization. There is a noticeable difference in RMSE when a good initial value is provided (\(\mathbf{d}_{0}=2\), roughly the location of the correct surface). Results differ particularly on \(wave2\), where the RMSE is approximately six times better when a good initialization is used. A less noticeable gap is apparent on \(wave1\), where the MAE is even better with sequential initialization than with the good initial guess (\(\mathbf{d}_{0}=2\)), but the RMSE is still worse. Note that we rely on a monocular setting with just a predefined background. In comparison with the nine-camera setting of [18], our method performs worse both in RMSE and MAE. Since the benchmark surfaces are largely non-flat, our method sometimes estimated the mean location of the surface slightly off, but recovered the normals with accuracy comparable to the four-camera setting of [18] on the \(wave1\) benchmark. In the reduced setting with four cameras [18], our method is thus on par in MAE, but still worse in RMSE.
The second experiment compares our method to [21] in a monocular setting with orthographic projection. We used the code provided by [21] to generate the surfaces and their optical flow. We rendered ten frames of the three available scenarios \(ocean\), \(ripple\) and \(tian\) with \(128\times 128\) px under orthographic projection. Note that there is a height-ambiguity im
Figure 4: Results of the \(wave1\) and \(wave2\) with \(z_{\text{func}}\) background at times \(t\in\{25,50,75\}\). Initialization was done for each frame independently. Estimated [top] and ground truth surfaces [middle] with colour-coded [3] optical flow [6]. The bottom row shows a colour coded error of the estimated depth map.
Figure 5: Estimated surfaces of \(ocean\), \(ripple\), \(tian\) generated from [21] under an orthographic projection. Estimated [top] and ground truth surfaces [middle] with colour-coded [3] given optical flow [6]. The bottom row shows a colour-coded error of the estimated depth map. Due to the height-ambiguity imposed by the problem, the comparison is performed with surfaces normalized to have a zero mean.
posed by the orthographic projection, as a shift of the entire surface does not change the refracted ray \(\mathbf{s}\): the angle subtended between \(\mathbf{l}\) and \(\mathbf{n}\) does not change when the surface is moved along the \(Z\) coordinate. Therefore, the estimated surfaces are normalized to have zero mean before their RMSE is calculated. For the same reason, all frames were initialized identically with an arbitrary flat surface at \(\mathbf{d}_{0}=1\).
Our method quantitatively performs better in RMSE than [21] in a single frame setting. For the full multi-frame FSRN-RNN [21] the RMSE error is half compared to ours but this method takes into account fluid dynamics between subsequent frames, unlike our method which works independently on each frame. It is important to stress that in [21] a flat background at \(z=0\) is required, while in our case the background depth can be arbitrary. Furthermore, [19, 21] are limited only to orthographic projection and thus can only estimate height-field, while our approach is more general and can estimate a 3D surface under the perspective setting.
In both experimental setups, our method is efficient in finding the correct surface shape as is evident in the low MAE. Most of the RMSE error stems from errors in the shift and scale of the surface, as discussed in Sec. 4.
**Real-World Experiments.** We further evaluated our approach on real-world scenes using a water tank and a flat surface with a printed complex texture as the background (shown in Fig. 6 [left]). We used a Nikon D810 camera with a Nikon AF-S Nikkor 105 mm lens. Camera calibration was performed with the MATLAB Camera Calibration Toolbox. We also used the depth of the calibration pattern obtained during the intrinsic calibration to measure the depth of the flat background surface. The optical flow was estimated with [6]. We fixed the camera to a custom-made steel structure approximately 630 mm away from the background (the averaged distance of the flat calibration pattern), with the background aligned perpendicular to the camera's principal axis. The textures contain many fine details, which are particularly well suited to finding dense matches between the background image and the images distorted by refraction. We show the background and one refracted input image in Fig. 6, and the estimated surfaces in Fig. 7.
We perturbed the surface by hand to make different types of waves with varying smoothness. Frames 1 and 2 in Fig. 7 show reconstructions of smooth surfaces with rather larger waves. Frame 4 showcases a situation with a slightly larger depth range where we shook the surface. Lastly, frames 5 and 6, in contrast, show waves with rather sharper edges of smaller depths. Since the real-world surface is mostly flat, the initial estimate is often accurate enough and produces convincing depths.
Figure 6: [Left] The input background texture we used in our real-world experiment. [Center] One of the input images of a refracted surface (denoted as Frame 6 in Fig. 7). [Right] Optical flow between these frames.
## 6 Conclusions
A refractive object can hinder many conventional computer vision methods. Nevertheless, refraction can teach us about the physics of the scene and poses interesting challenges. Here we looked at the task of reconstructing a refractive surface over a known background using just a single view. This setup had not been tackled before in the general setting of a perspective camera and an arbitrary background shape.
Unlike previous papers that used multiple views, frames, or additional constraints, we showed that a simple energy function based on Snell's law can estimate a surface in a monocular setup when the background depth is given. Working directly in world coordinates, we express our energy as the distance between the original background point and the expected ray trajectory. The energy looks for the surface that best explains the refracted image as a whole. By optimizing for the entire surface, implicit smoothness is imposed and the surface shape can be reconstructed.
We demonstrate our method on numerous synthetic and real-world scenes of varying complexity. We achieve an MAE on par with a previous method that used four views, and an RMSE similar to a deep-learning single-frame monocular method under orthographic projection.
Since our method indirectly optimizes the depth through normal fields, it is highly sensitive to the initial depth and performs better at reconstructing surface normals than at reconstructing absolute depth. For initialization, we developed a simple scheme which works robustly on mostly flat surfaces. Our method is based on calculating the optical flow between the refracted image and the original one. Therefore, it is limited to scenarios where the optical flow can be reliably calculated, i.e., enough texture and moderate distortions from the refraction.
Figure 7: Estimated surfaces of our real-world water experiment. Fig. 6 shows input images. Surfaces are shown with their corresponding colour-coded [3] optical flow [6]. The experiment was made with a water tank with a fixed camera acquiring the scene from above. Apart from making artificial waves on the surface, the amount of liquid was also changed to vary the overall depth. Because the magnitude of depth is significantly smaller than surface width and height, the shown surface depths are re-scaled 20 times for better visibility.
## Acknowledgement
This work was supported by the grant "Internationalisierung Exzellenzstrategie" from University of Konstanz, "SFB Transregio 161 - Quantitative Methods for Visual Computing (project B5)", Leona M. and Harry B. Helmsley Charitable Trust, The Maurice Hatter Foundation, Israel Science Foundation grant \(\#680/18\) and the Technion Ollendorff Minerva Center for Vision and Image Sciences.
|
2308.16498 | Generalised Winograd Schema and its Contextuality | Ambiguities in natural language give rise to probability distributions over
interpretations. The distributions are often over multiple ambiguous words at a
time; a multiplicity which makes them a suitable topic for sheaf-theoretic
models of quantum contextuality. Previous research showed that different
quantitative measures of contextuality correlate well with Psycholinguistic
research on lexical ambiguities. In this work, we focus on coreference
ambiguities and investigate the Winograd Schema Challenge (WSC), a test
proposed by Levesque in 2011 to evaluate the intelligence of machines. The WSC
consists of a collection of multiple-choice questions that require
disambiguating pronouns in sentences structured according to the Winograd
schema, in a way that makes it difficult for machines to determine the correct
referents but remains intuitive for human comprehension. In this study, we
propose an approach that analogously models the Winograd schema as an
experiment in quantum physics. However, we argue that the original Winograd
Schema is inherently too simplistic to facilitate contextuality. We introduce a
novel mechanism for generalising the schema, rendering it analogous to a
Bell-CHSH measurement scenario. We report an instance of this generalised
schema, complemented by the human judgements we gathered via a crowdsourcing
platform. The resulting model violates the Bell-CHSH inequality by 0.192, thus
exhibiting contextuality in a coreference resolution setting. | Kin Ian Lo, Mehrnoosh Sadrzadeh, Shane Mansfield | 2023-08-31T07:00:21Z | http://arxiv.org/abs/2308.16498v1 | # Generalised Winograd Schema and its Contextuality
###### Abstract
Ambiguities in natural language give rise to probability distributions over interpretations. The distributions are often over multiple ambiguous words at a time; a multiplicity which makes them a suitable topic for sheaf-theoretic models of quantum contextuality. Previous research showed that different quantitative measures of contextuality correlate well with Psycholinguistic research on lexical ambiguities. In this work, we focus on coreference ambiguities and investigate the Winograd Schema Challenge (WSC), a test proposed by Levesque in 2011 to evaluate the intelligence of machines. The WSC consists of a collection of multiple-choice questions that require disambiguating pronouns in sentences structured according to the Winograd schema, in a way that makes it difficult for machines to determine the correct referents but remains intuitive for human comprehension. In this study, we propose an approach that analogously models the Winograd schema as an experiment in quantum physics. However, we argue that the original Winograd Schema is inherently too simplistic to facilitate contextuality. We introduce a novel mechanism for generalising the schema, rendering it analogous to a Bell-CHSH measurement scenario. We report an instance of this generalised schema, complemented by the human judgements we gathered via a crowdsourcing platform. The resulting model violates the Bell-CHSH inequality by 0.192, thus exhibiting contextuality in a coreference resolution setting.
## 1 Introduction
The Winograd Schema Challenge (WSC) originated from the ideas of the American computer scientist Terry Winograd in the 1970s. Winograd was interested in situations where machine understanding could fall behind human understanding. He constructed hypothetical experiments where humans and machines would read a given description, and then answer some questions about it. The descriptions would provide humans with enough context and thus they could answer the questions correctly. However, machine understanding would fall short, as machines did not learn from the context in the same way as humans did. An example description is the sentence "The city councilmen refused the demonstrators a permit because they feared violence.". The question following it is "Who feared violence?" and the correct answer is "The city councilmen". If we change the word "feared" to "advocated", the question will have the opposite answer, namely "the demonstrators". Winograd's examples were picked up by the Canadian AI scientist Hector Levesque in 2011. He created a suite of descriptions and questions, proposing them as a test of machine intelligence - an alternative to the Turing Test [27]. Later, the AI company Nuance put forward a cash prize of USD 25,000 for any AI that could solve the challenge with an accuracy close to that of humans, 92-96%. No AI system managed to achieve the target, and as a result, the prize was withdrawn in 2018. It was not until the 2020s that large pre-trained language models, employing transformer architectures, eventually reached a performance level comparable to human accuracy [25]. Despite these advancements, the WSC continues to present significant challenges for AI systems lacking extensive data resources and computational power.
In previous work, we showed how natural language notions of context can be modelled by the mathematics of quantum contextuality [37, 38, 35]. In particular, we modelled anaphoric context in [29]. Inspired by the reliance of the WSC on anaphoric context, we decided to explore whether quantum contextuality could potentially provide a solution to the challenge.
Our initial examination found that the WSC in its original form lacked the complexity required to be of interest from a quantum contextuality standpoint. Upon modelling the WSC within the sheaf theoretic framework, it became evident that the scenario was too simplistic to exhibit contextuality, as the models derived from it were deterministic.
This motivated us to extend the schema and allow it to be non-deterministic such that it can, in principle, host contextuality. This was achieved by introducing additional linguistic context, namely, (1) two special words rather than one and (2) two ambiguous pronouns instead of one. Consequently, we obtained more observables and more measurement contexts, leading to a scenario that resembles the Bell-CHSH scenario.
The above outlines the first contribution of this paper. Our second contribution lies in the instantiation of our generalized Winograd Schema and the collection of human judgments via a crowdsourcing platform. This allowed us to calculate the violation of the Bell-CHSH inequality and thereby establish the contextuality of our model, which was constructed based on human judgments. We also modelled the data using the Contextuality-by-Default (CbD) framework of contextuality and calculated a corresponding CbD degree of contextuality. It was found that our probabilistic model exhibited contextuality in both the Bell-CHSH and CbD models.
## 2 Contextuality
The origins of contextuality research can be traced back to 1935, with the work of Einstein, Podolsky, and Rosen (EPR) [16]. In their work, they posited that the quantum mechanical description of physics was incomplete when two spatially separated parties were permitted to make measurements on an entangled system. A way of formalising such theories is in terms of hidden variables, which, if known, might fully determine the outcome that would result from any given measurement. Bell's theorem [8, 7] in the 1960s showed that no hidden-variable theory exists for quantum mechanics unless the measurement outcomes were allowed to be dependent on which other measurements are performed simultaneously. Around the same time, Kochen and Specker [24] independently demonstrated that there exists a set of measurements in a 3-dimensional Hilbert space such that a non-contextual hidden-variable theory cannot exist, regardless of the state of the system. These two results, collectively known as the Bell-Kochen-Specker theorem, showed that a hidden-variable theory for quantum mechanics must be contextual, providing some clarity to the debate on a more fundamental theory conforming to certain classical intuitions for quantum mechanics. The first attempt at experimentally verifying Bell's inequality was performed by Aspect et al. [6], with the most recent ones closing all known loopholes in the earlier experiments [19, 21, 33]. Thus it has been established that quantum physics is vastly different from classical physics - a description of quantum physics that agrees with our classical intuition must be contextual.
Other than the philosophical implications, contextuality has been shown to possess computational power through non-classical correlations. Anders and Browne first showed that certain measurements on GHZ states can be used to lift a linear classical computation into a universal classical computation [5]; Raussendorf later showed that the probability of success of such computation is bounded by the degree of contextuality [31], as measured by the contextual fraction [3, 2]. Subsequent work by Howard et al. revealed that contextuality is an essential ingredient for _magic state distillation_, a process that yields
specific quantum states known as _magic states_[22]. The current most promising fault-tolerant quantum computing scheme, the surface code [23], only permits fault-tolerant computation with a subset of quantum operations which can be efficiently simulated by classical computers. Via state injection, these magic states can be used with surface code to allow for fully fault-tolerant universal quantum computation. Thus, one might argue that contextuality carries an intrinsic computational power that is absent in non-contextual systems.
A variety of frameworks for modelling contextuality have been developed. These include the sheaf-theoretic framework [3, 4, 2], the Contextuality-by-Default (CbD) framework [12, 15, 13], the graph-theoretic framework [9], and a framework based on simplicial sets [30]. Generally speaking, these frameworks enable the formalisation of the notion of measurement through the use of various mathematical structures. Bell's inequalities, or in general inequalities that witness contextuality, can be derived systematically within these frameworks. Although we will mainly use the terminology of the sheaf-theoretic framework to describe our examples, our results are framework-agnostic.
### Sheaf Theoretic Framework
Here, we provide a concise overview of the sheaf-theoretic framework of contextuality proposed by Abramsky and Brandenburger [3].
A measurement scenario is defined as a triplet \(\langle X,\mathcal{M},O\rangle\), where \(X\) is a collection of observables, \(O\) is the set of possible outcomes, and \(\mathcal{M}\) denotes an abstract simplicial complex composed of subsets of \(X\).
Every element in \(X\) is an observable of the system under consideration. Upon measurement, each observable yields one of the outcomes contained in \(O\). The characterization of \(\mathcal{M}\) as an abstract simplicial complex implies a particular structural feature: if a subset \(C\) belongs to \(\mathcal{M}\), then every subset nested within \(C\) must also be an element of \(\mathcal{M}\).
A necessity of contextuality is that one cannot measure all the observables in \(X\) simultaneously, at least not without altering the state of the system. Thus, every framework for contextuality must provide a description of the compatibility between observables. Within the sheaf-theoretic framework, each simplex in the simplicial complex \(\mathcal{M}\) constitutes a subset of observables in \(X\) that are measurable simultaneously, i.e. they are mutually compatible. A _measurement context_, or simply _context_, is defined as a maximal simplex in \(\mathcal{M}\), which is not a proper subset of any other simplex in \(\mathcal{M}\).
For instance, the measurement scenario in the Bell-CHSH settings is specified by \(X=\{a_{1},a_{2},b_{1},b_{2}\}\); \(\mathcal{M}=\{\{a_{1},b_{1}\},\{a_{1},b_{2}\},\{a_{2},b_{1}\},\{a_{2},b_{2}\}\}\); \(O=\{0,1\}\). The simplicial complex \(\mathcal{M}\) can be geometrically realized as the boundary of a square, where each vertex corresponds to an observable and each edge represents a context (see Figure 1(a)). Two parties are involved in this scenario: Alice is allowed to measure either \(a_{1}\) or \(a_{2}\), and Bob is allowed to measure either \(b_{1}\) or \(b_{2}\). The measurements are dichotomic, i.e. the outcomes are either \(0\) or \(1\).
Every subset of observables which is a context in \(\mathcal{M}\) can be measured jointly. Thus we can define a (local) joint probability distribution over the observables in the context. Such a joint probability distribution can either be estimated by performing the measurements in an experiment, or be calculated according to a theory of the system under consideration. A collection of all such joint probability distributions is called an _empirical model_, or simply _model_, of the system. For instance, using a set of appropriately chosen measurement bases, the Bell state \(|\Psi\rangle=\big{(}|00\rangle+|11\rangle\big{)}/\sqrt{2}\) produces the empirical model depicted in Figure 1(b). This state exhibits the highest violation of the Bell-CHSH inequality among all quantum states.
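As an illustration, the probabilities of such a quantum empirical model can be computed directly from the state and the chosen measurements. The sketch below uses the standard CHSH-optimal measurement angles (an assumption, since the exact bases behind Figure 1(b) are not spelled out here) and reports the resulting CHSH value of \(2\sqrt{2}\); outcomes are written as \(\pm 1\) instead of \(\{0,1\}\).

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def projectors(theta):
    """Projectors onto the +1/-1 eigenspaces of cos(theta) Z + sin(theta) X."""
    O = np.cos(theta) * Z + np.sin(theta) * X
    return {+1: (I2 + O) / 2, -1: (I2 - O) / 2}

# Bell state (|00> + |11>) / sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Measurement angles (assumed: the standard CHSH-optimal settings)
alice = {"a1": 0.0, "a2": np.pi / 2}
bob = {"b1": np.pi / 4, "b2": -np.pi / 4}

def context_distribution(theta_a, theta_b):
    """Joint outcome distribution for one measurement context."""
    Pa, Pb = projectors(theta_a), projectors(theta_b)
    return {(oa, ob): float(psi @ np.kron(Pa[oa], Pb[ob]) @ psi)
            for oa in (+1, -1) for ob in (+1, -1)}

def correlation(dist):
    return sum(oa * ob * p for (oa, ob), p in dist.items())

E = {(a, b): correlation(context_distribution(alice[a], bob[b]))
     for a in alice for b in bob}
chsh = E[("a1", "b1")] + E[("a1", "b2")] + E[("a2", "b1")] - E[("a2", "b2")]
print(f"CHSH value: {chsh:.3f}")   # ~2.828 > 2, the Tsirelson bound
```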
An empirical model is said to be _signalling_ if the marginalised distribution of a set of observables
differs from one context to another. In contrast, non-signalling implies that the observed probabilities remain invariant under different contexts, thereby preventing the transmission of information through the choice of context.
A prevalent misconception is a belief that _signalling is contextuality_, often based on the incorrect reasoning that the _probabilities_ in a signalling model are generally context-dependent, leading to the conclusion that the model is contextual. However, it is essential to recognize a fundamental distinction between the two concepts: signalling pertains to the observed probabilities, while contextuality relates to the underlying hidden-variable theories of the model.
The qualitative criterion for contextuality of a model in the sheaf-theoretic framework is based on Fine's theorem [18], which states that a model is non-contextual if and only if there exists a global probability distribution that marginalises to every local probability distribution in the model.
The quantitative degree of contextuality of a model is measured by the _contextual fraction_ CF[2]. Given an empirical model \(e\), the contextual fraction \(\mathsf{CF}(e)\) is defined as the minimum \(\lambda\) such that \(e\) admits a convex decomposition1:
Footnote 1: Here, we represent the empirical models as empirical tables. Addition and scalar multiplication are then interpreted as standard matrix operations, where the empirical tables are treated as matrices.
\[e=(1-\lambda)e^{NC}+\lambda e^{C}, \tag{1}\]
where \(e^{NC}\) is a non-contextual (and non-signalling) empirical model and \(e^{C}\) is an empirical model that may be contextual.
Suppose a given model \(e\) is non-contextual, then \(\lambda\) can be set to zero by choosing \(e^{NC}=e\). Otherwise, \(\lambda\) must be taken to be greater than zero to make the decomposition valid. Therefore, for non-signalling models, the sheaf-theoretic criterion of contextuality is
\[\mathsf{CF}(e)>0. \tag{2}\]
The calculation of \(\mathsf{CF}\) can be reduced to solving a linear program, for which numerical solvers are readily available. The \(\mathsf{CF}\) of a model has a nice interpretation as the maximum amount of _normalised violation_ over all possible generalised Bell inequalities [2].
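A sketch of this linear program is given below for the Bell-CHSH scenario: the non-contextual fraction is the largest total weight of a sub-distribution over deterministic global assignments whose marginals are dominated by the empirical probabilities, and \(\mathsf{CF}\) is its complement. The PR-box table used as input is purely illustrative; any empirical table can be plugged in.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

observables = ["a1", "a2", "b1", "b2"]
contexts = [("a1", "b1"), ("a1", "b2"), ("a2", "b1"), ("a2", "b2")]
outcomes = (0, 1)

# Input empirical model: context -> {joint outcome: probability}
corr = {(0, 0): 0.5, (1, 1): 0.5, (0, 1): 0.0, (1, 0): 0.0}
anti = {(0, 1): 0.5, (1, 0): 0.5, (0, 0): 0.0, (1, 1): 0.0}
model = dict(zip(contexts, [corr, corr, corr, anti]))

# All deterministic global assignments g : observables -> outcomes
global_assignments = [dict(zip(observables, vals))
                      for vals in itertools.product(outcomes, repeat=len(observables))]

# Constraint: the mass of global assignments consistent with each
# (context, joint outcome) cell must not exceed its empirical probability.
A_ub, b_ub = [], []
for ctx in contexts:
    for joint in itertools.product(outcomes, repeat=len(ctx)):
        A_ub.append([1.0 if tuple(g[o] for o in ctx) == joint else 0.0
                     for g in global_assignments])
        b_ub.append(model[ctx][joint])

# Maximise the total non-contextual weight (linprog minimises, hence the minus sign)
res = linprog(c=-np.ones(len(global_assignments)), A_ub=np.array(A_ub),
              b_ub=np.array(b_ub), bounds=[(0, None)] * len(global_assignments))
non_contextual_fraction = -res.fun
print(f"CF = {1 - non_contextual_fraction:.3f}")   # 1.0 for the PR box
```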
In the case of signalling models, the above decomposition cannot hold because \(e^{NC}\) and \(e^{C}\) are, by definition, non-signalling. We could consider allowing \(e^{C}\) to be signalling. However, this adjustment would lead to the misleading conclusion that all signalling models are contextual, assuming we maintain our interpretation of \(\mathsf{CF}\) as a measure of contextuality for these models.
Figure 1: (a) The simplicial complex \(\mathcal{M}\) in the Bell-CHSH scenario. Every vertex represents an observable and every edge represents a context. Alice chooses between \(a_{1}\) and \(a_{2}\); Bob chooses between \(b_{1}\) and \(b_{2}\). The absence of edges between \(a_{1}\) and \(a_{2}\), and between \(b_{1}\) and \(b_{2}\), indicates their incompatibility. (b) An empirical model of the Bell-CHSH scenario. Each row represents a joint probability distribution over the observables in the context. For example, the bottom-right entry \(1/8\) is the probability of observing \(a_{2}=1\) and \(b_{2}=1\) when measuring the observables in the context \((a_{2},b_{2})\).
### Contextuality by Default
In the setting of Contextuality-by-Default (CbD), there are two important notions: _contents_, denoted by \(q_{i}\), which are measurements, or more generally, questions about the system; and _contexts_, denoted by \(c^{j}\), which represent the conditions under which the questions are asked, e.g. their ordering. Every \(q_{i}\) in a \(c^{j}\) gives rise to a random variable \(R_{i}^{j}\) taking values in \(\{\pm 1\}\), and representing possible answers and their probabilities. All random variables in a given context are jointly distributed.
A well-studied class of CbD systems are the cyclic systems [12, 13, 14], where each context has exactly 2 contents and every content is in exactly 2 contexts. The rank of a cyclic system is the number of contents, or equivalently, the number of contexts.
A cyclic system of rank \(n\) is contextual if and only if \(\mathsf{CNT}_{1}\) is positive, where \(\mathsf{CNT}_{1}\) is defined as:
\[\mathsf{CNT}_{1}:=s_{odd}\left(\left\{\left\langle R_{i_{j}}^{j}R_{i_{j}^{\prime}}^{j}\right\rangle\right\}_{j=1,\ldots,n}\right)-\Delta-n+2 \tag{3}\]
where \(i_{j}\neq i_{j}^{\prime}\) for all \(j\) and \(R_{i_{j}}^{j},R_{i_{j}^{\prime}}^{j}\) are well-defined for all \(j\). Quantities \(s_{odd}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) and \(\Delta\) are defined as follows:
\[s_{odd}\left(\underline{x}\right)=\max_{\begin{subarray}{c}\underline{\sigma}\in\{\pm 1\}^{n}:\\ \mathsf{p}(\underline{\sigma})=-1\end{subarray}}\underline{\sigma}\cdot\underline{x}\;;\qquad\Delta=\sum_{i=1}^{n}\left|\left\langle R_{i}^{j_{i}}\right\rangle-\left\langle R_{i}^{j_{i}^{\prime}}\right\rangle\right| \tag{4}\]
where \(\mathsf{p}(\underline{\sigma})=\prod_{i=1}^{n}\sigma_{i}\) (\(\mathsf{p}\) is the parity function of \(\underline{\sigma}\)). The quantity \(\Delta\) measures the degree of signalling in the system. Thus, a non-signalling system has \(\Delta=0\).
For a rank 4 cyclic system, i.e. the Bell-CHSH scenario, the above inequality reduces to the maximum violation of the Bell-CHSH inequalities over the choices of the four signs:
\[\mathsf{CNT}_{1}=\pm\left\langle R_{0}^{0}\ R_{1}^{0}\right\rangle\pm\left \langle R_{1}^{1}\ R_{2}^{1}\right\rangle\pm\left\langle R_{2}^{2}\ R_{3}^{2} \right\rangle\pm\left\langle R_{3}^{3}\ R_{0}^{3}\right\rangle-2 \tag{5}\]
where the number of minus signs has to be taken odd. Therefore, the CbD criterion of contextuality coincides with the Bell-CHSH inequalities for the Bell-CHSH scenario.
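The quantities above admit a direct brute-force implementation for cyclic systems; in the sketch below the signalling term \(\Delta\) is supplied as an argument (it vanishes for consistently connected, i.e. non-signalling, systems), and the example values are the quantum correlations \(\pm 1/\sqrt{2}\) of the Bell state.

```python
import itertools

def s_odd(xs):
    """Maximum of the signed sum over sign vectors containing an odd number of -1."""
    return max(sum(s * x for s, x in zip(signs, xs))
               for signs in itertools.product((+1, -1), repeat=len(xs))
               if signs.count(-1) % 2 == 1)

def cnt1(correlations, delta=0.0):
    """CNT_1 for a cyclic system of rank n = len(correlations).
    `correlations` holds one <R R> product expectation per context;
    `delta` is the signalling term of Eq. (4)."""
    n = len(correlations)
    return s_odd(correlations) - delta - n + 2

# Rank-4 example with the Bell-state correlations (+-1/sqrt(2)):
print(cnt1([0.7071, 0.7071, 0.7071, -0.7071]))   # ~0.828 > 0, hence contextual
```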
### Ambiguous words as observables
Ambiguities in natural language have posed a challenge to natural language processing. Lexical ambiguity, where a word has multiple meanings, is one of the most common types of ambiguity in natural language. For instance, the word _produce_ has two possible meanings: _to give birth_ and _to make something_.
Without any context, it is not possible to determine which of the two meanings is intended. Another type of ambiguity is _coreference ambiguity_, where a word can potentially refer to different entities. For instance, the pronoun _it_ can refer to the _dog_ or the _cat_ in the sentence _The dog chased the cat. It barked._. In this paper, we focus on the latter type of ambiguity.
A method to formalise the notion of contextuality in natural language is to view an ambiguous word as an observable, with its interpretations as possible outcomes. For instance, the word _produce_ has (at least) two possible interpretations: _to give birth_ and _to make something_. Measuring the word _produce_ amounts to a reader selecting one of these interpretations.
We can assign probabilities to these interpretations based on the frequency of the interpretations in an experiment where a group of readers is asked to interpret the word _produce_, or a single reader is asked to assign a probability to each of the interpretations. The first approach is more costly as it requires a
large number of readers to be involved in the experiment. However, the latter approach is better suited to machine learning models since they can be trained to assign probabilities to different interpretations.
This way of treating ambiguous words as observables was first proposed by Wang et al. [37, 38]. The authors considered subject-verb and verb-object phrases where each word carries at least two possible interpretations. Measurement contexts were constructed by selecting different pairs of nouns and verbs, in a way similar to how Alice and Bob select their measurements in the Bell-CHSH scenario. The probabilities in the results were estimated from a group of crowd workers who were asked to assign a score to the different interpretations.
## 3 Winograd Schema Challenge
Commonsense reasoning, the inherent human capacity to logically comprehend the world around us, has long been a focal point in the field of artificial intelligence, with the aim to cultivate this ability in machines.
The Winograd Schema Challenge (WSC) emerged as a measure of this commonsense reasoning capability. The challenge was inspired by Terry Winograd's seminal paper [39], wherein he contended that syntax alone falls short in the interpretation of natural language, necessitating commonsense or world knowledge as well. The challenge presents a collection of sentences, each with an ambiguous pronoun whose meaning can be clarified via the context. A machine is deemed to have passed the test if it can disambiguate the pronoun with an accuracy on par with human performance.
The classic example of a Winograd schema, originally constructed by Winograd himself, is the following pair of sentences:
(1) a. The city councilmen refused the demonstrators a permit because **they** _feared_ violence.
b. The city councilmen refused the demonstrators a permit because **they** _advocated_ violence.
Note that the two sentences differ only in the words _feared_ and _advocated_. In both sentences, there is an ambiguous pronoun **they** which can either refer to the _city councilmen_ or the _demonstrators_. In the first sentence, it can be inferred through commonsense reasoning that the pronoun **they** refers to the _city councilmen_, as it is within our common sense that city councilmen are the ones who tend to prevent violence in demonstrations. In the second sentence, the pronoun _they_ refers to the _demonstrators_, as it is within our common sense (stereotype) that demonstrators tend to advocate violence and that doing so would lead to the refusal of a permit for a demonstration.
Another classic example of a Winograd schema is the following pair of sentences:
(2) The trophy doesn't fit into the suitcase because it's too [_small / large_].
Here we adopt a compact notation in which the pair of square brackets encloses the two possible word choices, each leading to a different sentence. This notation will be employed throughout the paper.
In a WSC, the participant is asked to identify the correct interpretation of the ambiguous pronoun. Success in the test is defined by the participant's accuracy equalling or approximating human performance. The evaluation of responses to a WSC question is straightforward, either the correct referent of the ambiguous pronoun is identified or not.
In contrast, the Turing Test has been criticised for being too difficult to evaluate. Originating as the imitation game proposed by Turing [34], the test involves a human judge interrogating a machine via a textual interface. The conversation between the judge and the machine is unrestricted. If the judge or a panel of judges cannot distinguish the machine from a human based on the conversation, the machine is deemed
to have passed the test. However, this unrestricted nature of the Turing Test opens doors to potential deception. In fact, for a machine to pass the test, it must deceive, as machines lack physical bodies. If questioned about its physical attributes, like height or weight, the machine must lie to successfully pose as a human. Owing to this advantage in ease of evaluation over the Turing Test, the WSC was proposed as a replacement for the Turing Test.
Unlike the Turing Test, the WSC is a structured binary-choice test. The major issue with the WSC is that it is over-constrained - it is unexpectedly difficult to construct examples of it, due to the numerous requirements that must be satisfied. A valid Winograd schema must satisfy the following requirements:
1. A Winograd Schema comprises a pair of sentences that differ slightly from each other. The first sentence includes a _special_ word which, when replaced by an _alternate_ word, yields the second sentence. For instance, in the _trophy-suitcase_ example, _small_ is the _special_ word, and _large_ is its _alternate_.
2. The sentences should contain two noun phrases. In the _trophy-suitcase_ example, _the trophy_ and _the suitcase_ serve as the two noun phrases.
3. A pronoun, which agrees with the two noun phrases in number and gender, must be present in the sentences. For example, in the _trophy-suitcase_ scenario, the pronoun _it_ aligns with both _the trophy_ and _the suitcase_ regarding number and gender.
4. The pronoun's referent should be easily identifiable from a natural reading of the sentence, and the correct referent should differ between the two sentences.
5. Each sentence in the pair should be fluid and natural to read, to the extent that they could feasibly appear in regular text sources like news articles or Wikipedia pages.
The outlined requirements ensure the preservation of both linguistic structure and the test's integrity:
1. The first requirement ensures grammatical consistency across the pair of sentences.
2. The fourth requirement necessitates a change in the correct referent of the pronoun when the special word is replaced with the alternate. This stipulation indicates that grammatical structure alone does not determine the correct pronoun referent.
3. The fifth requirement safeguards the authenticity of the language used in the test, ensuring it remains aligned with naturally occurring language.
Crafting valid examples of the Winograd schema is a complex task due to the set restrictions and requirements. The challenge of creating such schemas is evidenced by the limited number of examples in the original Winograd Schema Challenge set, which includes only 285 instances2.
Footnote 2: Available at [https://cs.nyu.edu/advise/papers/WinogradSchemas/WS.html](https://cs.nyu.edu/advise/papers/WinogradSchemas/WS.html).
In 2018, the first system achieved a better-than-chance accuracy of 57.1% [17] on the original 285 examples of the WSC. In 2019, a fine-tuned RoBERTa [28] model achieved a human-like accuracy of 90.1% [32]. The WSC has suffered from the same problem that plagued the Turing Test - there are weaknesses in the test that can be exploited without having to demonstrate the desired human-level intelligence. Simply put, the WSC has been defeated [25].
This is even more of an issue for the WSC, precisely because of its ease of evaluation. Proposals to increase the difficulty of the WSC, such as requiring the test-taker to select a correct explanation for their answer from a list of options [40, 20], emerged as potential solutions. However, these suggestions further complicate the already challenging task of question set construction. An alternative could involve requiring free-form explanations from the test-taker, though this would likely introduce additional ambiguity and make the evaluation process more difficult.
## 4 Generalised Winograd Schema
In this section, we present our approach for the generalisation of the Winograd Schema, enabling the potential observation of contextuality. We will first discuss why the original Winograd Schema is insufficiently complex to exhibit contextuality, and then propose a generalised Winograd Schema that is sophisticated enough to host contextuality.
### Modelling Winograd Schemas as measurement scenarios
To study the contextuality in the Winograd Schema, we model it with a measurement scenario in the sheaf-theoretic framework. This way of treating ambiguity in language is akin to the way ambiguous phrases are treated in [37], where an ambiguous word is considered an observable in a measurement scenario.
However, the same ambiguous word, i.e. the ambiguous pronoun, is shared across the twin pair of sentences in a Winograd Schema. Thus, if we follow the approach of "words as observables" strictly, then we will end up with a trivial measurement scenario, where there is only one observable, i.e. the ambiguous pronoun. Moreover, this naive approach deviates from the spirit of the Winograd Schema, which is to disambiguate a pronoun by considering the linguistic context. Instead, we argue that there should be exactly two contexts in the measurement scenario, one for each sentence in the twin pair. Recall that in the original Winograd Schema, the twin pair of sentences are identical except for the special word and the alternate word. In a rough sense, the special word and the alternate word provide the _linguistic context_ for disambiguating the pronoun. This way of defining the measurement contexts provides a concrete link between _context in language_ and _contextuality in quantum mechanics_.
Following from the above discussion, we define an observable as a tuple, (**pronoun**, _special word_) or (**pronoun**, _alternate word_), to distinguish between the two occurrences of the pronoun in different linguistic contexts. The possible outcomes of each of the two observables are the candidate referents of the pronoun.
**Definition 1** (Winograd Schema scenario): _Given a Winograd Schema with two noun phrases \(A\) and \(B\); an ambiguous pronoun \(\boldsymbol{p}\) which refers to either \(A\) or \(B\); a special word (s) and an alternate word (a), the corresponding measurement scenario is defined by the data:_
* _observables_ \(X=\{(\boldsymbol{p},s),(\boldsymbol{p},a)\}\)_;_
* _contexts_ \(\mathcal{M}=\big{\{}\{(\boldsymbol{p},s)\},\{(\boldsymbol{p},a)\}\big{\}}\)_;_
* _outcomes_ \(O=\{A,B\}\)_._
_We call such a measurement scenario a Winograd Schema scenario, or a WS scenario in short._
With the _councilmen-demonstrators_ example, the measurement scenario would be given by the data:
* observables \(X=\{(\textbf{they},\,\textit{feared}),\,(\textbf{they},\,\textit{advocated})\}\);
* contexts \(\mathcal{M}=\big{\{}\{(\textbf{they},\,\textit{feared})\},\,\{(\textbf{they}, \,\textit{advocated})\}\big{\}}\);
* outcomes \(O=\{\text{city councilmen},\,\text{demonstrators}\}\).
It becomes apparent that any Winograd Schema scenario is too simplistic to accommodate any contextual model due to the absence of overlapping contexts. One can always construct a compatible global distribution by taking the product of the local distributions.
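The following small sketch makes this explicit for a hypothetical WS scenario (the probabilities are invented placeholders): since the two contexts are disjoint, the product of the local distributions is a global distribution whose marginals reproduce both of them, so no obstruction to non-contextuality can arise.

```python
# Hypothetical WS-scenario model: two disjoint single-observable contexts
model = {
    ("they", "feared"):    {"councilmen": 0.9, "demonstrators": 0.1},
    ("they", "advocated"): {"councilmen": 0.2, "demonstrators": 0.8},
}
obs = list(model)

# Product of the local distributions: a global distribution over both observables
global_dist = {(u, v): model[obs[0]][u] * model[obs[1]][v]
               for u in model[obs[0]] for v in model[obs[1]]}

# It marginalises back to every local distribution, hence the model is non-contextual
for idx, o in enumerate(obs):
    for value, p in model[o].items():
        marginal = sum(q for joint, q in global_dist.items() if joint[idx] == value)
        assert abs(marginal - p) < 1e-12
```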
### Generalising the Winograd Schema scenario
Before proceeding to the generalisation of Winograd Schema, we point out an interpretation of the WS scenario as an analogy to an experiment in quantum physics. Consider an imaginary experimenter, Alice, who decides whether to measure the pronoun with the special word, or with the alternate word. That is, Alice chooses between the two observables: \((\mathbf{p},s)\) and \((\mathbf{p},a)\). This is exactly analogous to Alice choosing between two projection axes in an experiment measuring a spin-1/2 particle.
A natural and obvious way to generalise the WS scenario would be to add one more experimenter, Bob. This results in the Bell-CHSH scenario, which is well-known to be able to host contextual models. That amounts to introducing one more pronoun, one more special word and its alternate word, to the original Winograd Schema. We use the subscript 1 to denote objects relating to the first pronoun and the subscript 2 to denote objects relating to the second pronoun.
Here we give a set of requirements for the generalised Winograd Schema, in the style of the original WSC:
1. A generalised schema consists of four slightly differing sentences. The first sentence contains two special words \(s_{1}\) and \(s_{2}\). Similar to the original Winograd Schema, \(s_{1}\) can be replaced by an alternate word \(a_{1}\) and \(s_{2}\) by an alternate word \(a_{2}\). The possibility of replacing the special words with the alternate words generates the remaining three sentences.
2. There is a pair of noun phrases.
3. There are two pronouns in the sentences, each of which refers to one of the two noun phrases.
4. All four sentences should be natural to read.
In short, a generalised Winograd Schema is two Winograd Schemas put together in a single discourse.
**Definition 2** (Generalised Winograd Schema scenario): _Given a Generalised Winograd Schema with two noun phrases A and B; two ambiguous pronouns \(\boldsymbol{p}_{1}\) and \(\boldsymbol{p}_{2}\), each of which can refer to either A or B; two special words (\(s_{1}\)) and (\(s_{2}\)); and two alternate words (\(a_{1}\)) and (\(a_{2}\)), the corresponding measurement scenario is defined by the data:_
* _observables_ \(X=\{(\boldsymbol{p}_{1},s_{1}),(\boldsymbol{p}_{1},a_{1}),(\boldsymbol{p}_{2 },s_{2}),(\boldsymbol{p}_{2},a_{2})\}\)__
* _contexts_ \(\mathcal{M}=\big{\{}\{(\boldsymbol{p}_{1},s_{1}),(\boldsymbol{p}_{2},s_{2}) \},\{(\boldsymbol{p}_{1},s_{1}),(\boldsymbol{p}_{2},a_{2})\},\{(\boldsymbol{ p}_{1},a_{1}),(\boldsymbol{p}_{2},s_{2})\},\{(\boldsymbol{p}_{1},a_{1}),( \boldsymbol{p}_{2},a_{2})\}\big{\}}\)_;_
* _outcomes_ \(O=\{A,B\}\)_._
_Such a measurement scenario is called a Generalised Winograd Schema scenario, or a generalised WS scenario in short._
The generalised WS scenario is isomorphic, i.e. identical upon relabelling, to the Bell-CHSH scenario shown in Figure 1. It has long been known that the Bell-CHSH scenario can host contextual models [7, 10]. Thus a carefully designed generalised Winograd Schema would be able to demonstrate contextuality.
Here we provide a straightforward example of a generalized Winograd Schema scenario, built upon the original _trophy-suitcase_ example:
(3) The trophy doesn't fit into the suitcase because \(\mathbf{it}_{1}\) is too [\(s_{1}\) = _small_ / \(a_{1}\) = _large_]. Nonetheless, \(\mathbf{it}_{2}\) is [\(s_{2}\) = _light_ / \(a_{2}\) = _heavy_].
The corresponding generalised WS scenario is given by:
* observables \(X=\{(\mathbf{it_{1}},small),(\mathbf{it_{1}},large),(\mathbf{it_{2}},light),( \mathbf{it_{2}},heavy)\}\)
* contexts \(\mathcal{M}=\big\{\{(\mathbf{it_{1}},small),(\mathbf{it_{2}},light)\},\{(\mathbf{it_{1}},small),(\mathbf{it_{2}},heavy)\},\{(\mathbf{it_{1}},large),(\mathbf{it_{2}},light)\},\{(\mathbf{it_{1}},large),(\mathbf{it_{2}},heavy)\}\big\}\);
* outcomes \(\mathcal{O}=\{\text{trophy},\text{suitcase}\}\).
Interestingly, it was in the original set of Winograd Schemas (WSC285) that Davis designed a special example making use of two pronouns:
(4) Sid explained his theory to Mark but **he** couldn't [_convince / understand_] **him**.
The author deemed this example a "Winograd schema in the broad sense" since using more than one pronoun violates the requirements of the original Winograd Schema. Yet, this example is not a proper generalised Winograd Schema defined in this paper, as it only employs one special word and one alternate word.
Other than the fact that its scenario is too simple, there is another reason why the original Winograd Schema is not contextual: the intended referent of the pronoun should be obvious to a human reader. That means an empirical model constructed with judgement data collected from human subjects on the original Winograd Schema would be deterministic or nearly deterministic. It is known that deterministic systems are not contextual [11]. On the other extreme, a completely random model is trivially non-contextual. Intriguingly, it seems that only a system with a moderate level of intelligence, in between that of humans and that of complete randomness, would have the possibility of being contextual.
There are two directions to where we could take the generalised Winograd Schema: (1) to continue its mission to be a test of intelligence or commonsense reasoning; (2) to become a well-structured linguistic setting under which contextual models could be found.
Recent results from large language models have demonstrated human-like accuracies in solving the Winograd Schema Challenge. The introduction of one more pronoun might increase the difficulty of the challenge, possibly stimulating advancements in the field of natural language processing. However, it is our goal to find bridges between natural language and contextuality. Therefore the second direction will be the focus of this paper.
### An example of the generalised Winograd Schema
As our goal is to uncover contextual models in natural language, we need to gather judgment data from human participants to build empirical models for generalized Winograd Schema instances. Crucially, deterministic systems lack contextuality. Therefore, our generalized Winograd Schema examples should be inherently ambiguous to human readers, unlike the original Winograd Schema where humans can easily resolve the pronoun.
Due to the requirement of having two almost identical pairs of naturally-sounding sentences, it is a difficult task to come up with examples of the original Winograd Schema. The extra requirements we put forward for the generalised Winograd Schema make it even harder to come up with naturally-sounding examples. Here we report an example of the generalised Winograd Schema3:
Footnote 3: It was pointed out by one of the reviewers that the original version of the example contains several incorrect uses of English. Here we provide the corrected version of the example.
(5) A and B belong to the same [_cannibalistic / herbivorous_]\({}_{1}\) species of animal. On a hot afternoon in the south Sahara, **one of them**\({}_{1}\) was very hungry. They noticed each other when they were roaming in the field. After a while, **one of them**\({}_{2}\) is no longer [_hungry / alive_]\({}_{2}\).
Note that we had to violate the requirement of having a single sentence because it is difficult to come up with a naturally-sounding sentence that contains every ingredient of the generalised Winograd Schema. We also decided to use the referring phrase **one of them** instead of the third-person pronoun **it** to improve the naturalness of the example.
We used the alphabetic symbols A and B as the two noun phrases as we wanted to make the two symmetric. That is, any empirical model of the scenario is invariant under the interchange of A and B. It turns out that all symmetric models are non-signalling, at least for cyclic scenarios such as the Bell-CHSH scenario. Dealing with signalling models carries two disadvantages: (1) it is more difficult to assert the contextuality of a signalling model; (2) the sheaf-theoretic criterion of contextuality applies to non-signalling models only. By considering only symmetric models, we thereby avoid the complications of dealing with signalling models.
### Human judgements on the example
We collected human judgments on this example on the crowd-sourcing platform Amazon Mechanical Turk in the form of a questionnaire. There were four versions of the questionnaire, each corresponding to one of the four contexts in the generalised WS scenario. The respondents were asked to read the example and answer a question about the correct referents, A or B, of the two referring phrases **one of them\({}_{1}\)** and **one of them\({}_{2}\)**. A screenshot of the questionnaire is shown in Figure 2.
Since each referring phrase can be interpreted in two ways, there are 4 possible combinations of interpretations, (A, A), (A, B), (B, A), (B, B), of the two referring phrases. The symmetry between A and B in the example ensures that the combinations (A, A) and (B, B) are equally plausible and (A, B) and (B, A) are also equally plausible. Therefore we asked the respondents to pick two out of the four combinations. This design choice also allows the detection of invalid answers, that is, those that do not
Figure 2: A screenshot of the template of the questionnaire. The placement holders ${word1} and ${word2} are instantiated with the two special words or the alternate words of the generalised Winograd Schema. In this example, ${word1} can be either _cannibalistic_ or _herbivorous_ and ${word2} can be either _hungry_ or _alive_. Four versions of the questionnaire were created, each corresponding to one of the four contexts in the generalised WS scenario. _Note that the story contains several incorrect uses of English. Unfortunately, we did not notice these until a reviewer pointed them out, after data collection._
respect the symmetry between A and B.
A total of 410 responses were collected on Amazon Mechanical Turk separately on two dates: 20th Oct 2022 and 23rd Nov 2022. Out of the 410 responses, 110 were to the context (_cannibalistic_, _hungry_) and 100 each were to the rest of the three contexts. Out of all the responses, 348 were valid, i.e. their responses respected the symmetry between A and B. The respondents were each financially rewarded USD 1.00, regardless of the validity of their responses.
The collected valid data were used to build an estimated probability distribution for each of the four contexts. The resulting empirical model is shown in Table 1. The model violates the Bell-CHSH inequality by 0.192 with a standard deviation of 0.176. Since the model is symmetric in the outcomes by construction, it is non-signalling and thus the measure of contextuality \(\mathsf{CNT}_{1}\) in the CbD framework coincides with the degree of violation [26]. The symmetry in the outcomes also allows the violation to saturate the bound defined by \(\mathsf{CF}\) in the sheaf-theoretic framework [2], i.e. the following equality is attained
\[\max\left\{0,\frac{1}{2}\ \text{violation of Bell-CHSH inequality}\right\}=\mathsf{CF}. \tag{6}\]
Thus, our model is considered contextual in both the sheaf-theoretic framework and the CbD framework.
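To make the degree of violation concrete, the following minimal sketch (ours, not part of the original study; it only assumes the probabilities reported in Table 1(a)) recomputes the Bell-CHSH statistic: each context contributes a correlator \(E = p(A,A)+p(B,B)-p(A,B)-p(B,A)\), and the violation is the amount by which the largest CHSH combination exceeds the classical bound of 2.

```python
# Sketch: recompute the Bell-CHSH violation from the empirical model in Table 1(a).
# Probabilities are ordered as (A, A), (A, B), (B, A), (B, B) for each context.
model = {
    ("canni", "hungry"): [0.402, 0.097, 0.097, 0.402],
    ("canni", "alive"):  [0.044, 0.455, 0.455, 0.044],
    ("herbi", "hungry"): [0.345, 0.154, 0.154, 0.345],
    ("herbi", "alive"):  [0.344, 0.155, 0.155, 0.344],
}

def correlator(p):
    # E = p(A,A) + p(B,B) - p(A,B) - p(B,A)
    return p[0] + p[3] - p[1] - p[2]

E = {ctx: correlator(p) for ctx, p in model.items()}

# CHSH statistic: flip the sign of one context at a time and keep the largest value.
contexts = list(E)
chsh = max(
    abs(sum(E[c] if c != flipped else -E[c] for c in contexts))
    for flipped in contexts
)
violation = max(0.0, chsh - 2)
print(f"CHSH = {chsh:.3f}, violation = {violation:.3f}")  # reproduces the reported 0.192
```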
To establish the significance of the contextuality result, we conducted bootstrap resampling to estimate the spread of the violation to the Bell-CHSH inequality. Simulated datasets were generated by random sampling with replacement from the original dataset. The resulting distribution of violations is depicted in Figure 3. Among the resampled datasets, 87% of them exhibited a positive violation, indicating that our experimental model demonstrates contextuality with a significance level of 87%.
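A minimal, self-contained sketch of this resampling procedure is given below. The per-context sample size (87 valid responses per context, i.e. roughly 348/4) is an assumption of ours for illustration, not the exact counts of the study, and resampling is approximated by multinomial draws from the estimated context distributions.

```python
# Sketch: bootstrap the spread of the Bell-CHSH violation by resampling, within each
# context, responses with replacement and recomputing the (signed) violation each time.
import numpy as np

model = {  # probabilities of (A,A), (A,B), (B,A), (B,B) per context, from Table 1(a)
    ("canni", "hungry"): [0.402, 0.097, 0.097, 0.402],
    ("canni", "alive"):  [0.044, 0.455, 0.455, 0.044],
    ("herbi", "hungry"): [0.345, 0.154, 0.154, 0.345],
    ("herbi", "alive"):  [0.344, 0.155, 0.155, 0.344],
}

def signed_violation(probs):
    E = {c: p[0] + p[3] - p[1] - p[2] for c, p in probs.items()}
    ctxs = list(E)
    chsh = max(abs(sum(E[c] if c != f else -E[c] for c in ctxs)) for f in ctxs)
    return chsh - 2  # positive values indicate contextuality

rng = np.random.default_rng(0)
n_per_context, n_boot = 87, 100_000   # assumed sample size; 100,000 resamples as in Figure 3
violations = np.empty(n_boot)
for b in range(n_boot):
    resampled = {}
    for ctx, p in model.items():
        p = np.asarray(p) / np.sum(p)               # renormalise the rounded probabilities
        counts = rng.multinomial(n_per_context, p)  # resample responses with replacement
        resampled[ctx] = counts / n_per_context
    violations[b] = signed_violation(resampled)

print("std of violation:", violations.std())         # compare with the reported 0.176
print("fraction positive:", (violations > 0).mean())  # compare with the reported 87%
```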
## 5 Conclusions and Future Work
In this work, we employed the sheaf-theoretic framework for contextuality to model the Winograd Schema, originally formulated as an ambiguous coreference resolution task. Our findings revealed that the original Winograd Schema scenario lacked the necessary complexity to exhibit contextuality. To address this limitation, we introduced an additional ambiguous pronoun and a new pair of special and alternate words, creating a generalized Winograd Schema reminiscent of the Bell-CHSH scenario. Through crowdsourcing, we collected human judgments on an example of the generalized Winograd Schema and observed a contextual empirical model with a significance level of 87%.
An intriguing direction for future research involves constructing a comprehensive set of examples based on the proposed generalized Winograd Schema, thereby establishing it as a new challenge in the field of natural language processing. One potential approach is to leverage state-of-the-art generative language models such as GPT-4 to systematically generate examples of the schema with minimal human intervention. Careful prompt engineering would be needed to ensure that the generated examples are of high quality.

(a) The empirical model estimated from the valid human judgments:

| Context | (A, A) | (A, B) | (B, A) | (B, B) |
|---|---|---|---|---|
| (_canni_, _hungry_) | 0.402 | 0.097 | 0.097 | 0.402 |
| (_canni_, _alive_) | 0.044 | 0.455 | 0.455 | 0.044 |
| (_herbi_, _hungry_) | 0.345 | 0.154 | 0.154 | 0.345 |
| (_herbi_, _alive_) | 0.344 | 0.155 | 0.155 | 0.344 |

(b) The PR model:

| Context | (A, A) | (A, B) | (B, A) | (B, B) |
|---|---|---|---|---|
| (_canni_, _hungry_) | 1/2 | 0 | 0 | 1/2 |
| (_canni_, _alive_) | 0 | 1/2 | 1/2 | 0 |
| (_herbi_, _hungry_) | 1/2 | 0 | 0 | 1/2 |
| (_herbi_, _alive_) | 1/2 | 0 | 0 | 1/2 |

Table 1: (a) The empirical model constructed from the valid human judgments collected from Amazon Mechanical Turk. The violation of the Bell-CHSH inequality of the model is 0.192 \(\pm\) 0.176. For brevity, the special word _cannibalistic_ is shortened to _canni_ and the alternate word _herbivorous_ is shortened to _herbi_. The model generally resembles the PR model shown in (b).
As collecting human judgments is costly and time-consuming, another alternative approach for constructing empirical models of the generalized Winograd Schema involves utilizing generative language models to generate responses to examples. This approach also offers an opportunity to explore the extent to which the responses generated by language models align with human responses. By comparing and analysing the correspondence between model-generated responses and human responses, one could gain insights into the capabilities and limitations of language models in capturing the way human beings understand language.
This paper presents an approach that consists of deliberately constructing sentences that exhibit contextuality. This strategy of "detecting contextuality in natural language" may invite criticism for its contrived nature.
An alternative approach could involve the application of mathematical frameworks designed for contextuality to analyze pre-existing natural language data, moving away from the intentional construction of examples with distinct features [36]. The aim of this strategy would not be to pursue contextuality within natural language. Instead, it would focus on developing novel methods for modelling natural language phenomena from a different perspective.
## Acknowledgements
We are grateful to Daphne Wang for insightful discussions and the anonymous reviewers for their constructive comments. KL is supported by the Engineering and Physical Sciences Research Council [grant number EP/S021582/1]. MS is supported by the Royal Academy of Engineering research chair RCSRF2122-14-152 on Engineered Mathematics for Modelling Typed Structures.
Figure 3: A normalised histogram of the Bell-CHSH inequality violation for 100,000 bootstrap samples from the model shown in Table 1. A positive violation, indicative of contextuality, is observed in 87% of the resampled models. The standard deviation of the distribution is 0.176. |
2309.11960 | A Comprehensive Review on Financial Explainable AI | The success of artificial intelligence (AI), and deep learning models in
particular, has led to their widespread adoption across various industries due
to their ability to process huge amounts of data and learn complex patterns.
However, due to their lack of explainability, there are significant concerns
regarding their use in critical sectors, such as finance and healthcare, where
decision-making transparency is of paramount importance. In this paper, we
provide a comparative survey of methods that aim to improve the explainability
of deep learning models within the context of finance. We categorize the
collection of explainable AI methods according to their corresponding
characteristics, and we review the concerns and challenges of adopting
explainable AI methods, together with future directions we deemed appropriate
and important. | Wei Jie Yeo, Wihan van der Heever, Rui Mao, Erik Cambria, Ranjan Satapathy, Gianmarco Mengaldo | 2023-09-21T10:30:49Z | http://arxiv.org/abs/2309.11960v1 | # A Comprehensive Review on Financial Explainable AI
###### Abstract.
The success of artificial intelligence (AI), and deep learning models in particular, has led to their widespread adoption across various industries due to their ability to process huge amounts of data and learn complex patterns. However, due to their lack of explainability, there are significant concerns regarding their use in critical sectors, such as finance and healthcare, where decision-making transparency is of paramount importance. In this paper, we provide a comparative survey of methods that aim to improve the explainability of deep learning models within the context of finance. We categorize the collection of explainable AI methods according to their corresponding characteristics, and we review the concerns and challenges of adopting explainable AI methods, together with future directions we deemed appropriate and important.
## 1. Introduction

The practice of finance can be traced back to 5000 years ago, in the agrarian societies that had been established and developed for some thousands of years at the time. Indeed, one of the first examples of banking, a central institution within finance, can be attributed to the Babylonian empire. Since then, societal development and technological advances have pushed the field to undergo several changes. In the past two decades, these changes have been particularly marked, due to the accelerating pace of technological development, especially in the context of AI. The latter has started spreading across multiple segments of finance, from digital transactions to investment management, risk management, algorithmic trading, and more [114]. The use of novel AI- and non-AI technologies to automate and improve financial processes is now known as FinTech (Financial Technology), and its growth in the past two decades has been remarkable [89]. In this review, we focus on AI-based technologies and machine learning for financial applications.
Financial researchers and practitioners have been relying on supervised, unsupervised, and semi-supervised machine learning methods as well as reinforcement learning for tackling many different problems. Some examples include credit evaluation, fraud detection, algorithmic trading, and wealth management. In supervised machine learning, it is common to use, e.g., neural networks to identify complex relationships hidden in the available labeled data. The labels are usually provided by domain experts. For instance, one can think of building a stock-picking system, where a domain expert labels periods of positive and negative returns. The machine is then tasked to learn the relationship between (possibly) high-dimensional data and the positive and negative returns of a given stock (or multiple stocks), and to generalize to unseen data, e.g., to predict the stock's future behavior. In unsupervised machine learning, the task is instead to identify data with similar characteristics that can therefore be clustered together, without domain-expert labeling. For example, one can think of grouping all stocks that have similar characteristics into clusters using some similarity metrics, such as valuation, profitability, and risk. Semi-supervised learning is a middle ground between supervised and unsupervised learning, where only a portion of the data is labeled. Finally, reinforcement learning aims to maximize, through a set of actions, the cumulative reward specified by the practitioners. Reinforcement learning is used in finance for, e.g., portfolio construction. Reinforcement learning is closely related to Markov decision processes and differs substantially from both supervised and unsupervised learning.
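As a small illustration of the unsupervised case described above, the sketch below clusters stocks by valuation, profitability, and risk using k-means; the data are synthetic and the feature choices are ours, not taken from any of the reviewed studies.

```python
# Sketch: clustering stocks with similar characteristics (unsupervised learning).
# The data are synthetic; in practice the features could be fundamentals such as
# price-to-earnings (valuation), return on equity (profitability), volatility (risk).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_stocks = 200
X = np.column_stack([
    rng.normal(15, 5, n_stocks),       # valuation, e.g. P/E ratio
    rng.normal(0.10, 0.05, n_stocks),  # profitability, e.g. ROE
    rng.normal(0.20, 0.08, n_stocks),  # risk, e.g. annualised volatility
])

# Standardise so that no single feature dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)
print("stocks per cluster:", np.bincount(labels))
```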
Among supervised, unsupervised, and reinforcement learning methods, there is vast heterogeneity in terms of complexity. Some methods are considered easier to understand, hence to interpret, by practitioners (also referred to as white-box methods), while others are considered not interpretable (also referred to as black-box methods). To this end, neural networks and deep learning strategies, which underpin the majority (albeit not the entirety) of recent machine learning methods for financial applications, are considered black-box methods, i.e., the reason for a given prediction is not easily accessible, when it is available at all. This constitutes a critical issue, especially in risky and highly regulated sectors, such as healthcare and finance, where a wrong decision may lead to catastrophic loss of life (healthcare) or capital (finance). Hence, it is important to understand the reasons (i.e., the data and patterns) the machine used to make a given decision. This aspect encompasses the broad field of AI transparency. The latter is composed of three pillars: (i) AI awareness, (ii) AI model explainability, and (iii) AI outcome explainability. The first is tasked with determining whether AI is involved in a given product. The second is responsible for providing a detailed explanation of the AI model, including its inputs and outputs. The third is responsible for providing a granular explanation of the inputs' contributions to the AI model's outcomes. In this last category, we find a vast array of post-hoc interpretability methods. In this review, we assume that AI awareness is achieved, i.e., we know that AI is involved in a given financial process, and focus on AI explainability, also referred to as eXplainable AI or simply XAI. A further distinction commonly made is between interpretability and explainability of an AI model. These two terms, frequently used interchangeably, have subtle differences. Interpretability refers to how and why a model works. Explainability refers to the ability to explain the results in human terms.
While deep learning methods are considered black boxes, many other methods in finance are considered white-box methods. The trade-off between complexity and interpretability is perhaps one of the most debated aspects in the field of financial AI. On the one hand, white-box methods are highly interpretable but lack the ability to grasp complex relationships, frequently failing to meet the desired performance. On the other hand, black-box methods are not interpretable but usually (although not always) meet the desired performance. Therefore, it is not surprising that significant efforts have been made in recent years to render black-box methods more interpretable, the primary example being the field of deep learning. In this paper, we provide an extensive review of XAI methods for the financial field, which we name FinXAI. Although there have been a number of surveys on XAI methods [7, 19, 49, 88, 90, 102, 105], these papers are targeted towards general XAI and are not specific to finance. Hence, we conduct a review of explainability techniques exclusively related to financial use cases.
To compile this review, we took into account 69 papers, focusing mainly, though not exclusively, on the third pillar, i.e., the explainability of the inputs' contributions to the AI model's outcomes. To this end, we considered both post-hoc interpretability methods applied to black-box deep learning models, and inherently transparent models that do not require further post-hoc interpretability. Despite the relatively small number of collected papers in the field of XAI, it is important to note that our main objective is to focus specifically on XAI techniques applicable to the financial industry. This targeted approach will provide valuable insights for researchers in related fields and will ultimately help drive innovation and progress in the financial industry. With the growing need for transparency and accountability of deep learning, the XAI community has seen a growing number of published works; here, we focus only on works concerning financial use cases. Notably, FinXAI is but a small subset of the general field of XAI and, thus, we take a holistic approach to assembling existing studies with the goal of keeping up to date with the current approaches. The papers were queried from both Google Scholar and Scopus, where we searched using a set of keywords relating to works that have applied explainable AI techniques in financial use cases; the set of keywords includes _"XAI, explainable AI, finance, financial sector, financial field, explainable ML"_. We tried to collect a diverse set of papers that covers each category sufficiently well, summarized in Tables 1, 2, and 3. We also noticed that the majority of explanation types were limited to fact-based explanations, hence we explicitly searched for techniques that explain in the form of counterfactuals. Counterfactual explanations are deemed a desirable form of explanation, as the receiver tends to prefer understanding why a certain prediction was made instead of the opposing one.

The main _contributions_ of our work are as follows:
* We provide an extensive study on consolidating XAI methods in the field of finance (FinXAI), for researchers interested in prioritizing transparency in their solutions.
* We frame the FinXAI process as a sequential flow of decision-making processes (see Fig 4), where we place importance on aligning the XAI technique with the target audience. The objective of this framework is to produce explanations that are both goal-oriented and audience-centric.
* We review current FinXAI techniques, analyze their technical contributions to ethical goals, and list a number of key challenges faced in implementing XAI as well as important directions for future improvement.
The remainder of the review paper is organized as follows: Section 2 describes the definitions, reasons, and a brief overview of FinXAI. Subsequently, we explain the methodology of FinXAI, starting from numerical features in Section 3, textual features in Section 4, and hybrid analysis in Section 5, and ending with transparent models in Section 6. In Section 7, we analyze how the reviewed FinXAI methods contribute to ethical goals. Section 8 discusses key challenges of adopting explainable models and future directions for research. Finally, Section 9 offers concluding remarks.
## 2. FinXAI: Definition, Reason, Approach
This section details the definition, purpose, and approaches that have been taken in improving the transparency of AI models. A collection of existing literature reviews is analyzed and collated to give the reader a better understanding of commonly used terminologies in the XAI field, as well as the interlink between AI, linguistics, and social sciences, which is essential to provide a solid understanding of the subject.
### Definition of FinXAI
FinXAI, or explainable AI methods in finance, is inherently linked to the broader concept of AI transparency. As mentioned in Section 1, this term encompasses three key steps under a single framework: AI awareness, AI model explainability, and AI outcome explainability. In this review, we focus on the latter two aspects, model and outcome explainability. Model explainability means that the inner workings of a given AI solution are interpretable, and therefore the results may be interpreted by humans. This is typically the case for models with reduced complexity (i.e., white-box models), such as linear and logistic regression, and decision trees.
Outcome explainability means that the inner workings of a given AI solution are not fully interpretable, and therefore the results may not be fully understandable by humans, unless some interpretability tools are applied to explain the AI outcomes. This is the case for complex models (i.e., black-box models), such as deep neural networks. In these cases, it is common to apply model agnostic post-hoc (and other) interpretability tools to understand the results the AI provided in human terms.
Correspondingly, XAI models may be cast into two broad categories: intrinsically explainable, due to their highly interpretable nature (e.g., linear and logistic regression), and extrinsically explainable, hence requiring an external tool to make them interpretable. In turn, these two categories of models lead to different classes of model transparency: _simulatability_, _decomposability_, and _algorithmic transparency_ (Deng et al., 2017). Each of these three classes inherits the preceding class' properties; that is, if a model is decomposable, it is also simulatable, and if a model is algorithmically transparent, it is also decomposable and simulatable. In simple terms, _simulatability_ refers to the model's ability to allow a human observer to simulate a thought process over the inner workings of the model. _Decomposability_ entails that interpretability is available at every segment of the model, including inputs, outputs, as well as the model's inner workings and parameters. _Algorithmic transparency_ largely deals with the human user being able to understand how the model reacts to varying inputs and, more importantly, the ability to reason about errors the model produces.
A transparent model is an interpretable model which exhibits the ability to provide human-understandable explanations. Areas of transparency not only lie within the model itself but also within the data and the design process of the end-product (Kang et al., 2018). The EU High-Level Expert Group on AI (Han et al., 2018) states that the data the model interacted with should be traceable by human users at any given time. In addition, the design process of the system must be clear and explainable in a manner comprehensible to related stakeholders. The list of information types regarded as explainable can even be extended to include principles and guidelines in the development of the AI system, as well as personnel involved in the implementation and development process (Kang et al., 2018).
A key rationale for explainability is to gain the trust of affected stakeholders. Examples of such stakeholders include regulators, board members, auditors, end-users and developers (Kang et al., 2018). To this end, the format and degree of explanation vary among audiences. The key message is usually conveyed in reports customized to the suitability of the receiving audience. It is common knowledge that financial service providers are regularly audited by supervisory authorities to ensure adherence to regulations and to prevent potential fraud from taking place. The level of scrutiny expected of the authorities is much higher than what the service providers expect. A study conducted by (Kang et al., 2018) involves a preliminary investigation to identify the types of information that are deemed necessary from the perspective of banks and supervisory authorities. The result was that supervisory authorities identify all forms of information types as relevant, while banks only consider a subset of them. As such, there
exists a gap between each organization's understanding of necessity, more often than not leading to the delay in approving the deployment of financial services.
As mentioned, the constitution of a good explanation is largely subjective. The amount of required information usually increases in a hierarchical manner starting from the audience to the regulatory authorities, as depicted in Figure 1. Here, scrutiny refers to the amount of information regarded as essential. On the one hand, end-users typically require the least amount of explanations (cause of outcome, data security) since they are usually only interested in resolving their practical concerns. On the other hand, external regulators require explanations about the end-product from head-to-toe (overall guideline in design process, accountable and involved personnel, deployment process, training structure of organization), including the end-users requirements.
Proximity refers to the region of explanation provided by the XAI technique and can be classified under local (reasons about a particular outcome) and global (view of the underlying reasoning and mechanics of the AI model). End-users tend to be concerned with how the outcome affecting them is provided (local proximity). For example, a person whose credit card application was rejected would want to know the underlying reason behind it. In contrast, the solution providers and regulators tend to focus on the internal operations and design workflow of the product, for reasons related to performance enhancement, fairness in the model's sense of judgment, and identification of biases in the prediction (global proximity).
Figure 1: Levels of explanation requirements by different audiences, categorized by explanation proximity and ordered by scrutiny level. Local proximity refers to explanations concerned with a specific outcome. Global proximity refers to the underlying reasoning and mechanics of an AI model. End-users are typically satisfied with local-proximity explanations, and the level of scrutiny is low. Developers, domain experts and regulatory authorities require global-proximity explanations instead, and the level of scrutiny is much higher.
### Reasons for FinXAI
As mentioned in the previous section, various stakeholders lean towards different forms of explanation, naturally leading to different sets of goals the explanation can serve. A paramount reason for adopting explainable models is to ensure that financial solutions adhere to ethical standards set forth in the financial sector. The Monetary Authority of Singapore (MAS) stipulates that AI solutions should be developed in accordance with the Fairness, Ethics, Accountability and Transparency (FEAT) principles [95]. The EU's General Data Protection Regulation (GDPR) [45], which came into force in 2018, includes a provision referred to as the "right to explanation", dictating that individuals affected by automated decision-making solutions have a right to ask for an explanation of the outcome made for them.
The rising call for explainable models is mainly influenced by the rapid advancement of AI solutions and the increasing complexity surrounding them. More importantly, public cases of AI models displaying biases in their predictions magnify the urge for explainable solutions. A famous example is Google's image recognition software, which mistakenly labeled dark-skinned people as gorillas [120]. Such biases can damage a company's reputation and lead to profit losses.
The financial sector has its own set of ethics that should be upheld along with the desirable principles of AI ethics. These sets of financial ethics often overlap with AI principles. An experiment involving 8 financial experts to investigate the relationship between the aforementioned sets was carried out in [101]. The results show that financial ethics (integrity, objectivity, competence, fairness, confidentiality, professionalism, diligence) has significant similarities with AI ethics (growth and sustainable development, human-centered values and fairness, transparency and explainability, safety and accountability). The strength of the links between each element was assessed, with integrity and fairness having the strongest relationship with AI ethics. Indeed, this is understandable given that AI solutions should naturally embody these qualities, regardless of the industry taken into account. As mentioned, the ethical goals set forth by XAI solutions differ among audiences, similar to the explanation types desired. Each audience is likely to be more affected by one than the other [7, 87]. Figure 2 lists the ethical goals supported for each set of audiences, and shows that there are some overlapping ethical goals across audiences. Referring to [7], we provide a brief explanation of each ethical goal reported in Figure 2, taking a financial perspective.
* _Trustworthiness_: Defined as instilling trust into users affected by the decisions of the AI model. Trustworthiness can be achieved when there is a high level of confidence in the model to constantly behave in the intended manner [100]. Trust is also sustained if the services provided are transparent and enable affected users to maintain their faith in the service providers. However, trust is a highly subjective quality and hence difficult to quantify. Judging if an explanation instills trust is mostly subject to the affected users' opinions.
* _Fairness_: Refers to delivering AI solutions and explanations to every user and stakeholder equally, removing possible biases. Indeed, bias mitigation is a key constituent of fairness. The transparency of the AI model allows for a fair and socially ethical analysis, where any form of biases existing in the product chain are eliminated. In the financial markets, users tend to use the services provided by a firm if they are assured of fair and unbiased treatment.
* _Informativeness_: One important objective of AI models is to provide assistance to human counterparts in making decisions. Therefore, it is vital that the problem statement is made clear at all times. By providing explanations, the model benefits both from a social perspective as well as a performance standpoint, since knowing what is being done opens up opportunities for further refinement. Most of the papers in the literature dealing with this aspect aim at identifying relevant features which equates to highlighting parts of the input data the model is paying attention to. This can assist in debugging and allows pruning of unnecessary features which may cause overfitting.
* _Accessibility_: The main personnel interacting with algorithms are usually restricted to AI developers or domain experts, providing accessibility could allow for non-experts to get involved. This can be seen as an important stepping stone for making AI prevalent and well-accepted by the general society. Likewise, complicated algorithms deter financial companies from adopting such solutions, since extensive training is required while having to fear potential repercussions in the case of any unintended wrongdoings. If a model is able to relate its mechanisms in easily understandable terms, it can ease the fear of users and encourage more organizations to adopt such practices.
* _Privacy Awareness_: Not knowing the full limits of accessibility in the data can result in a breach of privacy. Likewise, such an issue triggers concerns within the overall design workflow. Accountable personnel in the designing process should ensure third parties are only allowed restricted access to the end-users data and prevent any misuse which can disrupt data integrity. Privacy awareness is especially important in the financial sector due to the amount and sensitivity of the information being captured.
* _Confidence_: The AI model should provide not only an outcome but also the confidence it has in the decision-making process, allowing domain experts to identify uncertainty both in the model's results and in the region of data captured. Stability in the prediction can be used to assess a model's confidence, while explanations provided by the model should only be trusted if it produces results that are consistent across different data inputs.
Figure 2: Ethical goals are classified under three broad audiences: end-users, developers/domain experts, and internal/external regulatory authorities. Some ethical goals are shared by the three different audiences considered, such as informativeness. [7]
* _Causality_: It is usually in the interest of developers or experts to understand the causality between data features. However, proving it is a difficult task that requires extensive experimenting. Correlation can be involved in assessing causality, though it is frequently not representative of causality. Since AI models only discover correlations among the data they learn from, domain experts are usually required to perform a deeper analysis of causal relationships.
* _Transferability_: Allowing for the distillation of knowledge learned from AI models is an extensive area of research, a notable benefit is that it allows for the reusability of different models and averts endless hours of re-training. However, the complexity of the algorithms limits experts from deploying trained models in different domains. For example, a model trained to forecast future stock prices can likely be used to predict other financial variables such as bond price, market volatility, or creditworthiness, if the model behavior in these circumstances is known. Delivering an intuition of the inner workings can ease the burden of experts to facilitate adapting the knowledge learned, reducing the effort required for fine-tuning. Transferability is arguably one of the essential properties for the improvement of future AI models.
### Approach of XAI
The review provided in this paper aims to give the readers an overall view of the XAI methodologies developed thus far in the financial industry. We note that explainability can be injected across different stages of the development cycle. These stages include _pre-modeling, modeling, and post-modeling_ [83]. The pre-modeling stage refers to the process chain before the designing stage of the AI model; this can include preliminary procedures which focus on identifying salient features by accessing readily available domain knowledge [57]. The modeling phase includes any adjustment to the model's architecture or optimization objective. As a start, simpler transparent models should be preferred over complex black-box models if the problem at hand is not too complicated. Most of the papers in the review focus on the post-modeling stage, mainly due to the flexibility and ease of designing explainability techniques. Since the model's outcome is already available, developers have more information to design an explanation method appropriate to the form of data involved (see Figure 3). Most XAI techniques tend to focus on one stage of the modeling process, though it is possible to address two or more.

Figure 3: Different stages where interpretability can be injected into the design workflow. [83]
The application areas of finance covered here can be broadly categorized under three headings [10]: _credit evaluation_ (peer-to-peer lending, credit assessment, credit risk management, credit scoring, accounting anomalies), _financial prediction_ (asset allocation, stock index prediction, market condition forecasting, volatility forecasting, algorithmic trading, financial growth rate, economic crisis forecast, bankruptcy prediction, fraud detection, mortgage default) and _financial analytics_ (financial text classification, spending behavior, financial corporate social responsibility
(CSR), customer satisfaction). Following the task classification, we further differentiate the studies based on the underlying characteristics of the XAI technique, as shown in Tables 1, 2, and 3. Specifically, we seek to answer questions such as "_What form of explanation is provided?_" (explanation procedure), "_Who is the explanation intended for?_" (audience), and "_What kind of explanation is provided?_" (proximity, explanation type).
Table 2: Classification of papers relating to financial prediction. The papers reviewed are split by task category and subsequently categorized by entailed properties. Missing options are either not stated or non-applicable.
* _Transparency_: As mentioned in Section 2.3, interpretability of the model is either derived by interpreting the internal mechanisms of the AI model or through external techniques aimed at delivering some form of visualization or intuition of how the model works. Most of the reviewed papers focus on post-hoc explainability techniques, which we believe are preferred for a number of reasons. Intrinsic models usually under-perform complex networks and, as such, producing explanations for an inaccurate prediction is pointless. We additionally note that the method of conveying explanations for intrinsic models is by definition model-specific, meaning the same method cannot be reused for a different model, whereas post-hoc techniques can be either agnostic or specific towards any single model.
* _Proximity_: The explanations provided by XAI tools can seek to explain either the derivation of an outcome, known as local explanation, or how the model behaves on a global scale, referred to as global explanation. Global explanations tend to provide information on how the model makes decisions globally based on the learned weights, data features, and structure of the network. Producing an acceptable global explanation tends to be difficult in most cases [88], compared to explaining just a region of the input data. On the other hand, local explanations focus on a specific region of the dataset and seek to assist the receiver in understanding how a particular prediction is made. Local explanations are more accurate for unique cases in which a dependency on input features is rarely captured by the AI model, which can cause global explanations to ignore such dependency. End-users tend to prefer local explanations as their concern lies with the explanation surrounding their outcome. Regulators and financial experts, on the other hand, prefer global explanations in order to have a complete understanding of the model.
* _Explanation Procedure_: According to [7], the various forms of post-hoc XAI techniques can be divided into several sections: text explanation (TE), visual explanation (VE), explanation by example (EE), explanation
by simplification (ES) and feature relevance (FR). TE provides an explanation via text generation. Natural language tends to be easily understood by non-experts and is a common source of information in human society. VE enables visual understanding of the model's behavior, which may be preferable for image features [106]; such methods comprise graphical plots for both local and global explainability. EE captures a smaller subset of examples which represents, at a high level, the correlations modeled by the black-box model. ES techniques build a simpler surrogate model that approximates the underlying black-box model with high fidelity yet remains interpretable. FR techniques aim to identify features deemed relevant for the model's prediction by computing a relevance score for each feature. FR can account for explainability at both local and global levels and constitutes the largest share among the papers reviewed here.
* _Audience_: Since the quality of explanations is subjective, it is very difficult to derive a one-size-fits-all explanation and hence, explanations should be customized towards one's needs. The examples of audiences are referenced from Figure 1, while we further merge internal and external regulators together. We highlight that aligning the objective of the explanation with the audience receiving it is important [115]. Determining whether an explanation is considered meaningful depends on the target goals of each audience. Financial regulators, for example, would not be very concerned with understanding what sort of AI model or ML technique is used, but rather with aspects of data privacy, model biases, or unfair treatment of affected end-users. It is uncommon for a single explanation to be deemed acceptable to audiences holding different positions in a financial company. For example, the explanation produced for the developer tends to require additional customization before being submitted to the immediate superior, and the same applies to higher levels of management and to external end-users.
* _Data Type_: The most commonly used forms of input data among the reviewed papers consist of text, images, and numerical values. In terms of frequency among the forms of available data, numerical features are the most common source of information used in the financial industry. Images represent the least utilized source, as they tend to be storage intensive and contain a large amount of redundant information, or are not applicable for most use cases. We only found a single work using image features: [24] performs classification of eight different candlestick patterns, and the explanation is delivered by monitoring changes in prediction after applying adversarial attacks. Surprisingly, textual information is not used as frequently as expected, albeit being a valuable source of information for deriving market sentiment or understanding consumers' emotions towards certain aspects of the business product. It is also possible to unify multiple sources of information, otherwise known as multi-modal data. A boost in performance can be achieved, for instance, by combining the patterns learned from time-series features and the sentiment from textual features.
* _Explanation Type_: A single explanation can be conveyed in various forms, including factual, contrastive, and counterfactual explanations [84]. Factual explanations deliver straightforward answers to the question _"Why does X lead to Y"_, as opposed to contrastive ones, which answer _"Why does X lead to Y instead of Z"_. Counterfactual explanations instead reason about how the consequent can be changed with respect to the antecedent, answering the question _"how to achieve Z by changing X"_. Humans tend to prefer contrastive rather than factual explanations, since the latter can have multiple answers and, as noted in [84], explanations are selective: humans tend to ignore a large portion of the explanations except for the most important ones due to cognitive bias. For example, if Person A's loan application was rejected, there could be numerous reasons for this, such as _"Person A's income was too low for the past 6 months"_, _"Person A only has 1 existing credit card"_, or _"Person A had a credit default 3 months ago"_. A contrastive explanation can instead involve comparing against another applicant whose outcome contrasts with the target applicant's, and an explanation can be made highlighting the most significant factor. As argued by [70], contrastive explanations are easier to deliver, as one does not have to investigate the entire region of causes but rather a subset of it. Counterfactual explanations then seek to provide solutions for the contrastive explanation, commonly done by identifying the smallest changes to the input features such that the outcome can be altered towards the alternative (a minimal counterfactual-search sketch is given after this list).
* _Explanation Evaluation_: Despite the extensive studies carried out to investigate what defines a good explanation, it is difficult to compare interpretations qualitatively. The quality of an explanation is mostly subjective, as a single explanation can be perceived with varying opinions among audiences. Nonetheless, there exist a number of studies that provide a quantitative approach to evaluating explanations. These measurements can be derived from human experts (Krishnan et al., 2017), by referencing financial ethical goals (Bahdan et al., 2017), or through statistical methods (Krishnan et al., 2017). (Krishnan et al., 2017) conducted a comparison between feature importance techniques on time-series data and proposed a multivariate dataset for assessing whether such techniques are able to identify salient time-series features. A vast majority of the reviewed papers focused only on evaluating the performance of the prediction model and consider it a proxy for the quality of the explanation. We argue that such evaluation does not fully represent the quality of the explanation and, even if it did, it may not be suitable for every form of explanation procedure.
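To make the counterfactual idea from the explanation-type discussion above concrete, the following minimal sketch (ours; the features, step sizes, and model are illustrative assumptions, not taken from any reviewed paper) greedily perturbs one feature of a rejected loan application at a time until a toy classifier's decision flips.

```python
# Sketch: a greedy counterfactual search for a toy credit-approval classifier.
# All features, step sizes, and the model itself are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Synthetic applicants: [monthly income, number of credit cards, defaults in the last year]
X = np.column_stack([rng.normal(4000, 1500, 500),
                     rng.integers(0, 6, 500),
                     rng.integers(0, 3, 500)])
y = ((X[:, 0] > 3500) & (X[:, 2] == 0)).astype(int)  # 1 = approved
clf = LogisticRegression(max_iter=1000).fit(X, y)

def counterfactual(x, steps, max_iter=50):
    """Greedily apply the step that most increases P(approved) until the decision flips."""
    x = x.astype(float).copy()
    for _ in range(max_iter):
        if clf.predict(x.reshape(1, -1))[0] == 1:
            return x                                          # decision flipped
        candidates = [np.maximum(x + s, 0.0) for s in steps]  # keep features non-negative
        probs = [clf.predict_proba(c.reshape(1, -1))[0, 1] for c in candidates]
        x = candidates[int(np.argmax(probs))]
    return None                                               # no counterfactual found

applicant = np.array([2500.0, 1.0, 1.0])                      # a rejected applicant
steps = [np.array([250.0, 0.0, 0.0]),                         # raise monthly income by 250
         np.array([0.0, 0.0, -1.0])]                          # remove one recorded default
print("counterfactual application:", counterfactual(applicant, steps))
```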
_Selection Procedure_: We design a framework, shown in Figure 4, framing the design of the XAI solution as a sequential decision-making process. The selection categories can be referenced from Tables 1, 2, and 3. The sequential structure of the framework ensures that the explanation provided is tailored to the audience's needs while achieving the goal set out for the target audience. We note that certain properties of the XAI technique have inner dependencies with each other, such as the relationship between explanation proximity and target audience. The quality of the explanation is evaluated and serves as feedback for any necessary adjustment, resulting in an audience-centric explanation.
## 3. XAI With Numerical Features
Numerical features are a common source of information across all aspects of data-driven methodologies. Financial tasks such as credit scoring of individuals/firms and financial market forecasts commonly use a collection of historical numerical features, such as stock price, trade volume, and volatility, and apply various forms of data-driven models to make predictions. These data-driven models may include supervised learning approaches, e.g., classification and regression tasks, and unsupervised learning approaches, e.g., clustering tasks. The use of numerical features within the context of finance is well established, hence it is not surprising that the majority of reviewed studies focus on this area. In the following, we outline the main approaches used for explainability in this context, namely visual explanation, explanation by simplification, feature relevance, and explanation by example, and conclude with a brief summary.
### Visual Explanation
Visual explanatory (VE) techniques generate explanations of the underlying model in the form of visuals. VE techniques can be both model-specific and model-agnostic. The model-specific techniques reviewed are mainly constructed to interpret image-based networks such as convolutional neural networks (CNN). (Song et al., 2017) proposes to perform a deconvolution on the last layer preceding the output to extract a visual attentive map. The approach, named CLass Enhanced Attentive Response (CLEAR), generates a graphical plot denoting the timeframe to which the stock-picking agent pays the most attention, along with a separate plot corresponding to the sentiment class of the stock. (Song et al., 2017) implements a CNN to identify 8 common candlestick patterns which are widely used for technical analysis in stock market trading. The authors then perform an adversarial attack on regions of the feature space to demonstrate that the model is focusing on regions similar to how a human would process the candlesticks. (Krishnan et al., 2017) employs a reinforcement learning (RL) agent to optimize a portfolio of equities while using a temporal CNN as a feature extractor. The dynamic asset allocation is interpreted with Gradient-weighted Class Activation Mapping (Grad-CAM) (Krishnan et al., 2017), which improves over simple deconvolutions by producing class-discriminative explanations and is applicable to any deep neural network. The proposed technique outputs a localization map using gradients corresponding to the target label. The right plot in Figure 5 depicts a global map highlighting each asset's importance across the trading period. Interestingly, in the left plot, the agent focuses most on the worst-performing stock, GILD, rather than on the high-performing stocks. Here, the agent predicts the stock's decline and reduces its allocation proportion, indirectly increasing the weights of high-performing stocks, which in this case is the target stock, NVDA. [2] proposes to use an attention mechanism [119] to compute similarity scores of possibly fraudulent transactions on both feature and temporal levels, which in turn allows for visualization of the top contributing features accounting for the model's prediction.
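As an illustration of how such class-discriminative maps are obtained, the sketch below applies Grad-CAM to a small temporal CNN over windows of asset returns; the architecture, data, and PyTorch usage are placeholders of ours, not the setup of the reviewed work.

```python
# Sketch: Grad-CAM for a small 1D (temporal) CNN over windows of asset returns.
# The architecture and input below are placeholders.
import torch
import torch.nn as nn

class TemporalCNN(nn.Module):
    def __init__(self, n_assets=4, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_assets, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                       # x: (batch, assets, time)
        feats = self.conv(x)                    # (batch, 32, time)
        logits = self.head(feats.mean(dim=-1))  # global average pooling over time
        return logits, feats

model = TemporalCNN().eval()
x = torch.randn(1, 4, 60)                       # one 60-day window of 4 assets

logits, feats = model(x)
feats.retain_grad()                             # keep the gradient of the conv feature map
target = logits.argmax(dim=1)
logits[0, target].sum().backward()              # back-propagate the target-class score

# Grad-CAM: weight each channel by its average gradient, combine, and rectify.
weights = feats.grad.mean(dim=-1, keepdim=True)                      # (1, 32, 1)
cam = torch.relu((weights * feats).sum(dim=1)).squeeze(0).detach()   # saliency over time
cam = cam / (cam.max() + 1e-8)                                       # normalise to [0, 1]
print("most influential time steps:", cam.topk(5).indices.tolist())
```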
Model-agnostic VE techniques can be integrated with any form of model architecture and bear a close resemblance to feature relevance techniques. Both investigate the effects on the model's output of adjusting the input features. [13, 40, 134, 139] employ Partial Dependence Plots (PDP) to visualize the marginal effects of features relating to corporate distress, credit scoring, and the detection of mortgage loan defaults. The generated plots enable inferring whether the underlying input-output relationship is linear or complex. However, PDP has often been criticized for assuming independence between features, evaluating unrealistic inputs, and concealing any heterogeneous effects of the input features. Accumulated Local Effects (ALE) [6] addresses the concern of feature correlation by considering the conditional distribution rather than the marginal one. In particular, it accumulates differences between intervals within the feature set to account for individual feature effects. [30] employs ALE on top of a tree ensemble model, XGBoost [25], as well as global Shapley values [109] for better scrutability. This work deduces that increases in profit margin and solvency ratio lead to lower debt default
Figure 4: XAI framework depicting a sequential flow of decision-making events. The proximity (local/global) and explanation type (factual/counterfactual) should be chosen in accordance with the target audience and data type available. The choice of explanation property is assessed by an iterative evaluation under the appropriate metric for both performance and explanation conveyed.
rates of small enterprises. [134] evaluates an arsenal of XAI techniques, encompassing the aforementioned, and also includes Individual Conditional Expectation (ICE) for financial auditing purposes. ICE differs subtly from PDP in that it considers instance-based effects rather than averaging across all instances, making it a local approach (see Figure 6). [136] generates counterfactual explanations on credit loan applications by coupling an unsupervised VAE with a supervised probit regression. The combined model yields a discriminative latent state corresponding to class labels of either delinquency or non-delinquency. The counterfactual is subsequently produced by a stepwise manipulation function towards the opposite class label. The authors evaluate the generated counterfactuals quantitatively using maximum mean discrepancy (MMD) [137], measuring the number of successfully flipped class labels as well as the minimality of feature changes.
Figure 5: [left] shows a heatmap denoting the global attentiveness of individual stocks in the overall portfolio. [right] correspondingly presents a heatmap of individual assets. The agent chooses to allocate the most weight of the portfolio to NVDA, while surprisingly focusing most on a declining stock, GILD. The agent reduces the weightage of GILD and allocates to NVDA. Adapted from [110].
Figure 6: [left] shows a PDP on averaged marginal effects of total assets on the probability of statement restatement and [right] displays ICE, which considers instance-level relationship. Both show a negative relationship. [134].
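The snippet below is a minimal sketch, using scikit-learn, of how PDP and ICE curves of the kind discussed above could be produced; the gradient-boosting model, synthetic data, and credit-style feature names are illustrative assumptions rather than the setups used in the cited works.

```python
# Minimal sketch: PDP and ICE curves for a hypothetical credit-risk model.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

# Hypothetical stand-in for a credit-scoring dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["total_assets", "solvency_ratio", "profit_margin", "leverage", "liquidity"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# kind="both" overlays the averaged PDP (thick line) on the instance-level ICE curves.
PartialDependenceDisplay.from_estimator(
    model, X, features=[0, 1], feature_names=feature_names, kind="both"
)
plt.tight_layout()
plt.show()
```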
### Explanation by Simplification
The idea of Explanation by Simplification (ES) techniques is to introduce a surrogate model performing uncomplicated operations. The purpose is to allow the machine learning developer to formulate a mental model of the AI model's behavior. The surrogate model has to be interpretable and, more importantly, capture the behavior of the black-box model with high fidelity. The latter property should be given higher priority, since there is little use in interpreting a low-fidelity solution. ML techniques that apply linear operations and rule extraction are applicable as surrogate models in place of uninterpretable neural networks. These include decision trees (DT) with limited depth, linear/logistic regression, K-Nearest Neighbors (KNN), and generalized linear models (GLM).
_Local Interpretable Model-agnostic Explanations (LIME)_ [100]: is perhaps one of the most popular explanation techniques across various use cases, including finance. LIME is a model-agnostic method used to provide insight into why a certain prediction was made and can be regarded as an outcome explanation technique. Since LIME is a local technique, it only has to approximate the data points within a defined neighborhood, a much more attainable goal than capturing an interpretable representation of the entire dataset. At a high level, LIME can be implemented as follows (see Figure 7):
1. The target instance to be explained is denoted as \(x\in\mathbb{R}^{d}\). Uniformly sample \(n\) random subsets of nonzero elements of \(x\) to form local training points, \(z\in z_{1},z_{2},...,z_{n}\), where \(z_{i}\in\mathbb{R}^{d}\) for \(1\leq i\leq n\).
Figure 7. LIME process: Predictions of the black-box model are uninterpretable. The local instance in the red box is the target to be explained. Subsets of nonzero elements of the target instance are uniformly drawn to form a local dataset on which the transparent surrogate model is trained. The prediction from the linear transparent model can then be interpreted by the user. [100].
2. Derive labels \(f(z_{i})\) for each sampled point using the black-box model \(f\), forming the derived dataset \(\{z,f(z)\}\in Z^{n}\).
3. Choose a transparent surrogate model, \(g\) and train it on the dataset, \(Z^{n}\) via Equation 1.
4. Interpret the outputs of the transparent model on the target instance, \(g(x)\).
LIME minimizes the following loss function to jointly optimize the fidelity of the local model and keep its complexity minimal.
\[\text{explanation}(x)=\underset{g\in G}{\text{argmin}}[\ L(f,g,\pi_{x})+\Omega(g)] \tag{1}\]
\(L\) represents the loss of the surrogate model \(g\) with respect to the labels produced by \(f\), weighted by the proximity \(\pi_{x}\). \(\Omega\) represents the complexity, or number of features, of the surrogate model. \(G\) is the set of all locally fitted models, where each explanation is produced by an individual local model. The authors additionally propose Submodular Pick LIME (SP-LIME), which selects a sparse, diverse set of explanations within an allocated budget to present the observer with a global view while omitting redundancy. [85; 107] use LIME on top of tree ensembles to identify the contributions of individual features pushing towards predicting a specific borrower as defaulting or successfully paying off the loan. Such explanations can be useful in preventing social bias by discovering any socially discriminative features on which the model may be focusing, thereby instilling trust in the model's usability. [127] extends LIME towards financial regulators requiring commercial banks to adhere to a set of financial factors, proposing a method named LIMER (R stands for Regtech). The authors of LIMER argue that high acceptance of financial solutions can be achieved if such factors are integrated into the explainability design of the AI model. [28] implements model simplification by extracting logical rules from a random forest and selecting the most relevant rules. The decision rules are extracted from a local dataset, derived similarly to LIME but without weighting the proximity of each drawn sample. [81] trains a recurrent neural network (RNN) to classify customer spending into five categories, and an interpretable linear regression model is subsequently trained to predict the nodes formed by the RNN model. The authors then perform inverse regression, which provides a mapping from output space to state space where the features responsible for categorizing customer spending can be identified.
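As a concrete illustration, the following is a minimal sketch of obtaining a local LIME explanation with the `lime` package; the random-forest model, synthetic data, and loan-style feature names are assumptions made for the example, not the setups of the cited works.

```python
# Minimal sketch: a local LIME explanation for one loan applicant (hypothetical data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_to_income", "prior_default", "loan_amount"]  # hypothetical
model = RandomForestClassifier(random_state=0).fit(X, y)  # black-box model f

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["repaid", "default"], mode="classification"
)

# Explain a single target instance: LIME samples a local neighbourhood around it,
# labels the samples with the black-box model, and fits a weighted linear surrogate g.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())  # feature contributions of the local surrogate
```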
### Feature Relevance
Feature relevance (FR) techniques account for the majority of the explanation methodologies we reviewed. FR techniques revolve around computing a relevance score for each feature, highlighting the respective contribution of the target feature at either a global or local scale. _SHapley Additive exPlanations (SHAP)_ [74], motivated by the fair distribution of payoffs among players in cooperative game theory [109], is a highly popular FR technique that seeks to estimate the fair value of each feature's contribution towards the outcome, \(f(x)\). These fair values, otherwise known as Shapley values, are determined by estimating the difference between the black-box function over a feature subset \(S\) with and without the target feature, \(f(x_{S\cup\{i\}})\) and \(f(x_{S})\) respectively. The difference is then averaged across all possible coalitions within the feature set \(F\).
\[f(x)=g(x^{\prime})=\phi_{0}+\sum_{i=1}^{M}\phi_{i}x^{\prime}_{i} \tag{2}\]
The final outcome is intuitively derived as an aggregate over all non-zero Shapley values. SHAP's popularity stems from three attractive properties: guaranteeing a complete approximation of the original model \(f(x)\) through additive feature attribution (see Equation 2), ensuring that non-contributing features have no impact on the model output, and ensuring that feature attributions consistently track their contribution to the outcome. We notice that a large subset of the papers reviewed have utilized SHAP as an explanation approach, likely given its flexibility in explaining the model at both local and global scales (see Figure 8). [38] incorporates SHAP with additional credit knowledge
for the layperson to assess the logic of XGBoost's decision in a peer-to-peer lending scenario. [91] introduces RESHAPE, designed for unsupervised deep learning networks, which provides explanations at the attribute level. Such explanations can assist auditors in understanding why an accounting statement is flagged as anomalous. The authors evaluated RESHAPE against other variants of SHAP, based on metrics measuring fidelity, stability, and robustness. Owing to the recent frenzy around cryptocurrency, which has led to a number of studies attempting to predict movements in the cryptocurrency market, [41] proposes an interactive dashboard providing multiple SHAP-based graphical tools for financial experts. [8] applies SHAP to explain predictions generated by the popular mean-variance Markowitz model [82], an optimization model for establishing the optimal portfolio balance between returns and risk. The generated explanation provides regulators with a means of asserting the compliance of algorithmic automated traders, otherwise known as robo-advisors, with established rules and regulations. [36] incorporates Global Interpretation via Recursive Partitioning (GIRP) with SHAP as a global interpretability technique. GIRP uses the importance values generated by SHAP to further extract meaningful insights from tree models, and the method is compared against a boolean rule technique in a credit scoring use case. [17] constructs a tree-like visual explanation with TreeSHAP [73], specifically designed for tree ensembles with improved computational efficiency. The produced structure allows users to visualize clusters of similar outcomes describing company default risk. [131] compares TreeSHAP against impurity metrics using information gain on ensemble tree models for investment quality prediction. [46] identifies relevant features leading to consumers' decisions on purchasing insurance and further clusters them into least to most likely groups with Shapley values. [16] similarly implements SHAP to explain XGBoost's classification of credit risk, while comparing it against an interpretable logistic regression model. Other studies include discovering the relationship between corporate social responsibility and financial performance [66], customer satisfaction [99], GDP growth rates [97], stock trading [12, 65], financial distress [116], market volatility forecasts [124], and credit evaluation [15, 42, 101]. [122] performs K-means clustering on historical S&P 500 stock information to identify dominant sector correlations which describe the state of the market. This work applies Layer-wise Relevance Propagation (LRP) [9] after transforming the clustering classifier into a neural network, since LRP is designed to
Figure 8: [left] Feature importance values at a global level for the ML model’s decision in credit card approval. [right] An instance-level example: \(E(f(X))\) represents the model’s base prediction if no features were considered, and \(f(x)\) represents the final prediction after summing the feature contributions (\(\phi_{i}\)) [101].
work specifically with neural network architectures. [20] prunes unimportant technical indicators using different configurations of a permutation importance technique, before implementing decision tree techniques for stock market forecasting. The proposed technique was compared with LIME and demonstrated better reliability. [14] introduces a set of feature relevance techniques, Quantitative Input Influence (QII) [32] to compute interaction effects between influential features and additionally for each multi-class label. The authors additionally evaluated the ability of the XAI technique with five questions relating to each individual audience class. All of the XAI methods shown thus far are implemented in the post-modeling stage, while [57] is an example pertaining to pre-modeling where the identification of relevant features takes place before constructing the black-box model. This work explores the set of features relating to mortgage bankruptcy and performs feature mapping against a set of widely-used credit concepts. The utility of such an approach is confirmed through empirical evaluations.
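The following is a minimal sketch of how such Shapley-value attributions are commonly computed for a tree ensemble with the `shap` package; the XGBoost model, synthetic data, and the choice to report mean absolute values are illustrative assumptions rather than a reproduction of any cited study.

```python
# Minimal sketch: SHAP attributions for a tree-ensemble credit model (hypothetical data).
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = xgboost.XGBClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles (TreeSHAP).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: mean absolute contribution of each feature (cf. Figure 8, left).
print(np.abs(shap_values).mean(axis=0))

# Local view: base value plus per-feature contributions for one instance (cf. Figure 8, right).
print(explainer.expected_value, shap_values[0])
```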
As pointed out before, contrastive explanations are usually preferred. End-users subjected to an unfavorable decision from an AI model would prefer a solution to the problem rather than a fact-based explanation, which may present multiple possible reasons and be of little use to the explanation receiver. An explanation providing the changes to be made such that the outcome can be reversed towards the favorable one is referred to as a counterfactual explanation. Counterfactuals are derived by computing small changes to the input features continuously until the outcome is altered to the target class. [26] first identifies significant features contributing to bankruptcy through SHAP, and subsequently generates an optimal set of counterfactuals using a Genetic Algorithm (GA). The loss optimized by the GA comprises objectives describing desirable properties of a good counterfactual outcome, including minimizing the number of altered features and maximizing the feasibility of the outcome. [48] additionally provides positive counterfactual explanations, describing the required changes to the current inputs that would instead reverse the loan approval to rejection. Such explanations can provide some form of safety margin for the user to be mindful of. [121] uses various techniques from DiCE [34] to generate counterfactuals under five different experimental conditions. The experiment aims to identify and study the effects of the causal variables in the fraud detection of ATM transactions.
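In the same spirit, the following is a minimal, simplified sketch of a greedy counterfactual search against a linear credit model; it is not the GA- or DiCE-based procedures cited above, and the logistic-regression model, step size, and synthetic data are assumptions made purely for illustration.

```python
# Minimal sketch: greedy counterfactual search for a rejected loan application.
# A simplified illustration, not the GA- or DiCE-based methods cited above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=3, random_state=0)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, target=1, step=0.05, max_iter=200):
    """Nudge the features of x until the predicted class flips to `target`."""
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf
        # Move along the direction implied by the model coefficients,
        # i.e. a small change that pushes the score towards the target class.
        x_cf += step * np.sign(model.coef_[0])
    return None

# Pick an instance currently predicted as the unfavorable class (0).
x = X[np.where(model.predict(X) == 0)[0][0]]
x_cf = counterfactual(x, model)
if x_cf is not None:
    print("required feature changes:", x_cf - x)
```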
### Explanation by Example
Apart from techniques that identify feature relevance at varying scales or approximate the model with a surrogate, another form of explanation selects representative samples to illustrate the model's behavior. Such techniques can be classified as Explanation by Example (EE). One such technique is prototype-based explanation. Prototypes can be regarded as representatives of the entire dataset, chosen based on similarity and importance in the overall decision-making of the model. [36] implements protodash [50], a gradient-based algorithm, in a credit loan application to select the top \(m\) prototypes (with \(m\) set to 6), of which the top two are retained. The resulting outcome is a set of representative prototypes, and each instance can be represented by one of the generated prototypes in the clusters. In this case, the proportions of allocated instances were balanced between both prototypes. The number of prototypes is a hyperparameter to be fine-tuned. A higher value of \(m\) is frequently used where the complexity of the problem is a concern, albeit raising the risk of overfitting, while a lower value is used in simpler scenarios but incurs the risk of underfitting. For example, in the credit loan dataset, the selection of two prototypes was considered too few by domain experts, who instead preferred 3-4 as being sufficiently representative of the evaluated dataset. [33] similarly extracts representative instances using KNN and generates insights on out-of-sample instances by looking for similarities with the representative points. Additionally, the data points used for computing the distance are replaced with Shapley values, taking into account the importance of input features. Representative samples are generally suitable if the user is interested in determining the types of patterns or behavior found in the dataset, while being relatively fast and straightforward to implement.
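As a rough illustration of the idea, the sketch below selects \(m\) prototypes by clustering the data and then picking the real instance closest to each cluster center; this is a simplified stand-in for gradient-based selection such as protodash, and the synthetic data and choice of \(m=6\) are assumptions.

```python
# Minimal sketch: selecting m representative prototypes from a dataset via clustering.
# A simplified stand-in for gradient-based selection such as protodash.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import pairwise_distances_argmin

X, _ = make_classification(n_samples=500, n_features=4, random_state=0)

m = 6  # number of prototypes; a hyperparameter to tune against over-/underfitting
kmeans = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X)

# Use the actual data point closest to each cluster centre as a prototype,
# so every explanation refers to a real, inspectable instance.
prototype_idx = pairwise_distances_argmin(kmeans.cluster_centers_, X)
prototypes = X[prototype_idx]
print("prototype indices:", prototype_idx)
```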
### Summary of numerical features
The above-mentioned approaches should be chosen according to the task at hand and the target audience. VE techniques, such as deconvolution and Grad-CAM, are less commonly used in the financial industry due to their limited applicability to networks other than CNNs. However, ALE, PDP, and ICE can be suitable approaches for financial analysts who might want to study the relationship between individual features and the model's outcome. ES is a straightforward approach that delegates the interpretability problem to a less complex surrogate model, though it incurs the additional cost of ensuring the faithfulness of the surrogate model. FR techniques allow users to observe each feature's contribution to the black-box prediction. Both global and local explanations serve different purposes for individual audiences. However, in situations where each feature equally contributes to the model outcome, such explanations might not be very helpful depending on the objective of the explanation. For example, a declined credit loan approval may have multiple features such as prior default, debt-to-income, and household capital contributing equivalently to the outcome. Such an explanation does not offer an obvious course of action for the applicant. EE techniques are particularly useful when the user wants a small set of representative samples to explain the model's outcome. This can provide a fast and straightforward explanation, but it has limited usefulness.
## 4. XAI for Textual Information
In this section, we review models operating with textual information. We note that the papers pertaining to this area represent a minority in the overall literature. When it comes to data preparation, textual data generally require additional pre-processing work such as stop-word removal, stemming, lemmatizing, and tokenization. In terms of feature extraction, the semantics and syntactic structure of text are important and have to be learned to fully capture the information conveyed, unlike numerical features, which are readily usable. Models suitable for training on textual data are also limited to a smaller subset of available techniques. Nevertheless, unstructured data such as text are in abundance. If properly processed, textual data can be used to derive informative signals such as market sentiment and emerging trends (Shi et al., 2019). Textual data are commonly classified under alternative information, which comes from a wide variety of sources including social media, online reviews, blog posts, and news headlines (Shi et al., 2019), in contrast to non-alternative information, which refers to data commonly utilized for financial analysis. Fortunately, a wide variety of explanation techniques exist which are compatible with textual information. Conveniently, textual information lends itself to XAI techniques delivering explanations via text generation, which can be preferable for the layperson, as natural language provides an easier form of interpretation compared to statistical graphs.
### Text Explanation
Text explanation techniques provide clarity by generating informative textual statements to assist in the understanding of the model's behavior. The generated text can either be produced anew using some form of generative model or obtained by replacing selected words in the original sentence. (Shi et al., 2019) utilizes Generative Adversarial Networks (GAN) to produce text statements that seek to align with user-defined inputs. Specifically, the explanation can take two different objectives: converting actionable text to educational text or vice versa. The actionable text briefs audiences on optimal actions to consider, based on real-world responses from humans across multiple loan application scenarios, while the latter seeks to educate the audience on the reasons behind the response. The transfer of objectives from one to another is analogous to style transfer on images, a popular application of GANs that translates the style of an instance onto the target image while retaining the content. Figure 9 shows a snippet of an example: the proposed method can identify the semantics behind the statement and relay the relationship between consistency and time, while knowing if the current income is above or below the required threshold. (Shi et al., 2019) generates plausible counterfactual text
sentences with a transformer architecture, trained under contextual decomposition. The explanation technique, derived from Sampling and Contextual Decomposition (SCD) (Sendle et al., 2017), performs different actions, including inserting, removing, or replacing words that are representative of the context in the statement, based on the target objective. The high-level idea of counterfactual generation involves identifying the most relevant word, replacing it with an antonym from a reference dictionary, and continuing until the outcome is reversed. The proposed transformer outperforms even human experts in classifying financial articles on merger & acquisition event outcomes. (Kumar et al., 2019) generates text explanations using a state-of-the-art natural language generation transformer decoder, GPT-2 (Sendle et al., 2018), while fulfilling soft constraints of including keywords. The proposed technique, soft-constrained dynamic beam allocation (SC-DBA), extracts keywords corresponding to various levels of predicted market volatility using a separate network on harvested news titles. The technique is evaluated quantitatively based on the fluency and utility of the explanations produced.
### Visual Explanation for Text
Besides interpreting through text, users can gain understanding through visuals, which makes the use of attention a particularly attractive option. Attention was first introduced to consider correlations between words in a sentence in a parallel fashion and is a primary component of the transformer architecture (Kumar et al., 2019). Transformers are notably suitable for processing long sequences of text and, through the use of attention, are computationally efficient compared to RNN-based models. Conveniently, computing attention scores for each word serves as a natural form of interpretation, allowing users to visualize how the network captures information from the input text (Sendle et al., 2018). Representative works in this area employ attention to highlight regions of text sentences that are deemed relevant for the output. (Kumar et al., 2019) utilizes dual-level attention with Gated Recurrent Units (GRU) (Sendle et al., 2018), processing both inter-day and intra-day embeddings of news titles relating to S&P 500 companies. The attention module assigns a relevance score to each news article, and the authors additionally construct a knowledge graph conducting concept mapping between relevant entities as a visual explanation. Corresponding
Figure 9. [Top] Transfer of educating statement to actionable statements advising applicant on actions to take such that the subsequent loan application can be approved. [Bottom] Transfer of original statement highlighting actions to educating statement conveying the reason for rejected loan application. (Kumar et al., 2019).
to dual-level attention, [69; 75] propose a hierarchical attention model at both the word and sentence levels and produce explanations in the form of a heatmap highlighting relevant text. The proposed method, FISHQA, was trained to detect loan arrears from financial text statements, similar to the compared baselines. The uniqueness of the proposed method lies in providing FISHQA with additional user queries. The model was able to highlight regions of the statement corresponding to the set of expert-defined concepts. This form of explanation allows users to verify whether the model is focusing on the correct terms relating to the concept at hand (refer to Figure 10). Along the lines of hierarchical attention, [69] introduces a quantitative measure to evaluate the precision and recall of captured words against various lexicon dictionaries and expert-annotated lists. The approach, analogous to the former study, can be seen as an extrinsic process of ensuring the correctness of concept identification by capturing words associated with financial risk. [37] implements knowledge graphs to provide a visual linkage between event entities extracted from stock news articles. The approach offers users a visual understanding of the relationship between the features and the corresponding prediction. [58] introduces GINN, an interpretable neural network designed such that each layer represents different entities, such as words and concepts, at the node level. The approach identifies words contributing to the predicted sentiment labels, as well as the concepts they belong to.
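To make the idea concrete, the following toy sketch computes scaled dot-product attention weights over a short, made-up news title and renders them as a one-row heatmap; the sentence, random embeddings, and query vector are all assumptions for illustration and do not correspond to any of the cited systems.

```python
# Minimal sketch: word-level attention weights rendered as a heatmap (toy example).
import numpy as np
import matplotlib.pyplot as plt

words = ["firm", "misses", "earnings", "guidance", "again"]  # hypothetical news title
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(words), 16))   # stand-in word embeddings
query = rng.normal(size=16)                      # stand-in task/query vector

# Scaled dot-product attention: a higher weight means more model focus on that word.
scores = embeddings @ query / np.sqrt(embeddings.shape[1])
weights = np.exp(scores) / np.exp(scores).sum()

fig, ax = plt.subplots(figsize=(6, 1))
ax.imshow(weights[np.newaxis, :], cmap="Reds", aspect="auto")
ax.set_xticks(range(len(words)))
ax.set_xticklabels(words)
ax.set_yticks([])
plt.show()
```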
### Summary of textual information
TE techniques aim to augment existing input text or generate new text based on given inputs. Such explanations are commonly preferred since natural language is easily understood by humans if the explanation is concise and accurate. However, textual explanations may fail to capture the nuanced relationships between input features and model decisions. This can be especially detrimental and counterproductive to domain experts whose goal is to discover further improvements based on the provided explanations. Textual explanations might also require additional processing work to ensure fluency, coherence, and unambiguity. VE techniques in these works leverage the utility of attention to provide a glimpse into how the model is representing the input text, and the use of hierarchical attention allows for a more refined analysis. However, since attention captures the relationship between each word or sentence, such explanations might be overwhelming if the set of explainable features is too large. Audiences who are not well-versed in heatmaps or attention scores may have difficulty understanding the provided visuals.
## 5. XAI for Hybrid Information
The remaining studies implementing post-hoc explanation techniques utilize a combination of textual and numerical/technical features. With respect to instance-level explanations, [11] combines the sentiment analysis of text with technical analysis of historical stock prices to train a random forest stock forecasting model, explained through LIME. The resulting explanation computes a set of relevant feature values and news wording
Figure 10. FISHQA: hierarchical attention model; different colors relate to different financial concepts: grey - company, light brown - executives, red - financing, blue - litigation, teal - personnel. [75].
corresponding to the respective outcome. Similarly, (Kumar et al., 2017) implements LIME with an LSTM-CNN and accurately identifies attentive words in consonance with the target sentiment. (Kumar et al., 2017) predicts the possibility of litigation against financial firms by examining 10-K financial reports and numerical indicators concerning the firm's accounting knowledge. The authors additionally carry out an ablation study on the utility of hybrid information as opposed to individual sources, validating the initial approach. Correspondingly, the explanation served to regulators is framed as the identification of text leading to the suspicion of insider trading, with the help of an attention mechanism. (Kumar et al., 2017) adopts Shapley values and further integrates external knowledge regarding truth factors, namely Truth Default Theory (TDT) (Kumar et al., 2017), to detect information fraud. The explanation module incorporates both Shapley values and TDT to generate a report highlighting the numerical contributions of features as well as a _text explanation_. A union of _explanation by simplification_ and _feature relevance_ was proposed by (Kumar et al., 2017; Li et al., 2017). (Li et al., 2017) implements both LIME and SHAP, offering global and local explanations of market fear prediction in the Indian financial market. (Kumar et al., 2017) uses SHAP, identifies textual information as more important for classifying financial transactions, and further performs clustering to identify the top contributing keywords. (Kumar et al., 2017) interprets an RL-trained agent's behavior in algorithmic trading. The resulting explanation enables experts to focus on time-dependent variables alongside consideration of non-linearity effects, which are reduced to a small subset of the initial variables. The learned policy is simplified via policy distillation onto the space of linear regressions, such that an interpretable Lasso regression model can be used as the approximation. Subsequently, \(k\)-degree polynomial analysis is conducted to select salient features, with \(k\) acting as an additional degree of flexibility for the developer to decide. (Kumar et al., 2017) utilizes aspect-based sentiment analysis to study the relationship between stock price movements and the top relevant aspects detected in tweets. The polarity of each aspect is derived from a SenticNet-based graph convolutional network (GCN) (Kumar et al., 2017). The proposed method can be seen as analogous to a feature relevance technique, aimed at deriving the top contributing aspects with polarity values. The proposed work focuses on the relationship between financial variables instead of making financial predictions. Such information can allow for further analysis, leveraging the relationship between the price movement of individual stocks and the sentiment of popular terms detected in tweets.
Hybrid information combines the utility of both numerical and textual information, which can lead to better performance and an increase in the number of compatible explanation techniques. For example, _text generation_ techniques can be used to generate natural language explanations for non-technical audiences, facilitating ease of understanding, while feature relevance approaches can be utilized to identify top contributing factors in the feature domain for technical experts. Models working with both numerical and textual information can also benefit from a performance point of view if such information can be processed without the risk of overfitting. However, it may be difficult for models to seamlessly perform with hybrid information, as it ultimately depends on the task at hand and may require complex feature engineering. For instance, the utility of text information largely depends on the source and often requires a significant amount of preprocessing before the data can be useful. The combination of both text and numerical features may increase the complexity of the explanation and end up being counterproductive. Such issues limit the inclusion of textual information in use cases such as stock trading or market index predictions. Nonetheless, we note that leveraging hybrid information to provide explanations can be a promising approach if the aforementioned issues are addressed.
## 6. Explainability in Transparent Models
The remaining studies look at achieving explainability through inherently transparent models. These models are typically restricted to ML models performing linear operations or rule extraction. The medium of explanation in transparent models is by nature model-specific, in the sense that the same mode of explaining how a model functions to an audience likely cannot be reused by a different model. The usability of transparent models in
complicated tasks is largely restricted due to their poor predictive strength. Nevertheless, transparent models still remain an attractive option if sufficient performance can be guaranteed.
_Linear/Logistic Regression_: The linear regression model is among the earliest ML models used for quantitative analysis. The prediction outcome can be easily derived as a weighted aggregate of input features. As such, the outcome can naturally be interpreted by inferring from the coefficients \(W_{i}\), which serve as a quantitative measure of feature importance for the outcome. Owing to the linearity assumption, the output \(y\) can be derived as:
\[y=W_{0}+W_{1}x_{1}+W_{2}x_{2}+...W_{n}x_{n}+\epsilon \tag{3}\]
One can easily interpret the outcome as _"By increasing feature \(x_{i}\) by one unit, the output increases by \(W_{i}\)"_. The logistic regression model, on the other hand, is interpreted in a slightly different manner: since the output is bounded between [0,1], a logistic function is used. Logistic regression looks at the odds ratio between both outputs: _"Increasing \(x_{i}\) by one unit is equivalent to multiplying \(\frac{P(y=1)}{P(y=0)}\) by \(\exp(W_{i})\)"_ [88]. [39] addresses the trade-off between accuracy and interpretability by incorporating decision trees with logistic regression acting as the main operational backbone. The technique is coined Penalised Logistic Tree Regression (PLTR). PLTR extracts binary variables from short-depth DTs while establishing sparsity through a penalized lasso operation. The proposed model is able to account for non-linear effects in a credit-scoring dataset while retaining interpretability by observing the top-selected rules.
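A minimal sketch of this coefficient reading with scikit-learn follows; the synthetic data and loan-style feature names are assumptions used only to illustrate how coefficients and odds ratios are inspected.

```python
# Minimal sketch: reading logistic regression coefficients as explanations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=3, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history"]  # hypothetical

clf = LogisticRegression().fit(X, y)

for name, w in zip(feature_names, clf.coef_[0]):
    # A one-unit increase in the feature multiplies the odds P(y=1)/P(y=0) by exp(w).
    print(f"{name}: coefficient={w:.3f}, odds ratio={np.exp(w):.3f}")
```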
_Decision Trees_: Decision trees are one of the most commonly used techniques in machine learning problems due to their simplicity and easily understandable structure. Unlike linear/logistic regression, DTs can approximate nonlinear relationships and yet remain interpretable via simple _if-else_ logic. However, the transparency of tree models diminishes with increasing depth, and popular ensemble tree models such as XGBoost or gradient-boosted trees largely eliminate any form of interpretability. The user can interpret a decision tree by traversing from the root node down to a leaf node, so that the outcome can simply be read as _"if \(x_{1}\) satisfies one condition and \(x_{2}\) satisfies another, then the predicted outcome is \(y\)"_.
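The sketch below shows how such if-else rules can be extracted from a shallow tree with scikit-learn; the synthetic data and feature names are illustrative assumptions.

```python
# Minimal sketch: extracting the if-else decision rules of a shallow tree.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=3, random_state=0)
feature_names = ["income", "debt_ratio", "prior_default"]  # hypothetical

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Each root-to-leaf path reads as "if ... and ... then class = ...".
print(export_text(tree, feature_names=feature_names))
```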
Transparent models have the advantage of being interpretable without requiring additional approaches to interpret the model or outcome. However, there exists a clear trade-off between desired performance and sufficient interpretability. Certain works have introduced approaches that combine different transparent models to achieve better performance while still retaining as much model transparency as possible. Transparent models remain a popular choice in the financial domain, as companies must undergo routine audits that require audited firms to provide accountability for their algorithmic services offered to end-users. Nonetheless, given the monotonic success of deep learning models, companies seeking to maintain their competitive edge must either improve on the existing transparent models or balance the performance-interpretability trade-off. A recommended approach would be to stick with transparent models if their performance proves sufficient and proceed with less interpretable models otherwise. One could also break down the task in a hierarchical manner, using interpretable models for lower-level tasks and better-performing models for more complicated tasks.
## 7. FinXAI and Ethical Goals
In Sections 3-6, we have reviewed different FinXAI methods and summarized their technical strengths and weaknesses, based on representative papers over the past years. In this section, we analyze the contributions of these FinXAI techniques to the ethical goals that were set out in Section 2.2. We also discuss some of the goals lacking sufficient study in current FinXAI techniques. In this section, the goal of accessibility also encompasses the fact that developers and domain experts can easily access the decision-making mechanisms of complicated black-box models due to improved interpretability. This is slightly different from the narrative that we introduced in Section 2.2, which mainly focuses on accessibility for non-expert users. Such an extension can better explain the technical contributions of the reviewed works to XAI.
As seen in Table 4, different explainable methods contribute to the ethical goals from different aspects. XAI for numerical features has proposed several methods to approach ethical goals regarding trustworthiness, fairness, informativeness, accessibility, confidence, and causality. For VE methods, [136] is one example that advocates for trustworthiness by generating counterfactual explanations, while being informative by detecting important features that can alter the prediction. Counterfactual explanations reveal the slightest modifications to the input data that are necessary to achieve an alternative outcome. This helps to earn trustworthiness from target audiences because counterfactual explanations provide possible remedial measures for them to achieve their targets with minimum effort, e.g., proposing possible improvements to help borrowers pass the qualification review of credit agencies. Counterfactual explanations can also help to justify predictions beyond factual explanations. Both merits of generating counterfactual explanations improve the trustworthiness perceived by audiences. Many VE-based approaches improve informativeness by gaining insights into the decision-making mechanisms of models and revealing feature correlations [2, 13, 24, 30, 40, 64, 110, 134, 139], because visualization takes advantage of demonstrating patterns and trends in data, e.g., model parameters and numerical features. VE can also be used to discover valuable features [2, 30, 134]. It is easy to communicate with both experts and non-experts by using graphical representations or visual images whenever available. Thus, VE also improves accessibility for broader audiences. For ES methods, [28, 81, 127] can explain why a certain prediction was made from the outputs, which helps to improve the trustworthiness of AI predictions. [85, 107] use LIME to detect socially discriminative features to prevent social bias. ES is the only approach that was used for improving fairness in finance. For FR methods, [8] uses SHAP to improve the trustworthiness of algorithmic traders in crypto markets. [26, 48] generate contrastive explanations to explain the required changes for certain predictions. [17] visualizes similar outcomes that describe the risk of a company default with SHAP, while most SHAP-based methods [16, 12, 14, 15, 16, 20, 36, 38, 41, 42, 46, 57, 65, 66, 91, 97, 101, 116, 121, 122, 124, 131] improve accessibility for technical audiences
by discovering important features. [41] improves usability by constructing interactive graphical tools upon SHAP, which likewise promotes accessibility. [121] is one of the rare works that study causal inference based on generated counterfactuals. For EE methods, [33] generates counterfactuals to explain the required changes, based on representative instances. The representatives selected by EE methods [33, 36] can be used to characterize a particular cluster in the output space. Similar instances aligned to such representatives can reassure stakeholders and improve their confidence.
XAI for textual information aims to improve trustworthiness, informativeness, accessibility, and causality. For TE methods, [112, 128] similarly improve trustworthiness with counterfactual texts, with the latter providing alignment according to the user's prompt. The alignment from educational to actionable information enhances information flow, especially for individuals not familiar with the service interface. For VE techniques operating on text, attention weights are widely used for interpretation purposes. Such techniques enhance informativeness and accessibility by using the attention weights to understand the regions of focus of the underlying model [69, 75, 129]. On the other hand, [37, 58] improve the interpretability of graph neural networks in the financial domain.
XAI for hybrid information leverages both textual and numerical features to improve informativeness and accessibility. These works interpret the black-box model's behavior [11, 29, 43, 44, 72, 79, 138] and provide textual evidence regarding predictions [11, 29, 43, 44, 79, 96]. [3, 21, 22, 39, 47, 92, 113] implement transparent models,
Table 4. Reviewed FinXAI techniques (e.g., VE, ES, FR, EE, TE) mapped to the ethical goals they address, with representative citations: trustworthiness, fairness, informativeness, accessibility, privacy, confidence, causality, and transferability.
mitigating the need for post-hoc analysis, and the simplicity of such models improves accessibility for non-expert users. Transparent models may be a suitable choice if the performance is satisfactory and the outcome has to be readily interpretable by non-technical stakeholders. Decision trees are one model that can be easily communicated to audiences without a technical background, given their easily understandable format.
From the above works, we find that most of the XAI research lies in either studying the underlying model's behavior or identifying important features. Notably, although the informativeness and accessibility goals are richly represented, as seen in Table 4, no XAI technique can achieve all of the desired goals. It is therefore imperative that the XAI design process is tailored towards the desired goals of the target audience (Figure 2). Likewise, the format of presenting explanations is equally important: non-technical audiences would very much prefer user-friendly visuals over technical plots.
On the other hand, the ethical goal of preserving data privacy has not been well studied in the works reviewed. Privacy-preserving techniques are a popular research direction, e.g., federated learning [130], a decentralized learning method that allows parties to collaboratively train a model within a local environment without sharing their data with each other. Generating synthetic data in place of actual data for training models can be one such approach. One example is LIME, which generates a local dataset given a target instance, without requiring access to other data instances. This can help to minimize the amount of information being accessed outside of the accountable circle. The understanding of how various features lead to a certain output creates the opportunity to generate more synthetic data. This can be seen as a form of self-supervised learning with the purpose of preserving privacy. However, XAI techniques can also become a double-edged sword, contributing to privacy leakage instead. Such concerns are especially prevalent in techniques manipulating decision boundaries, including SVM, K-nearest neighbors, and counterfactual explanations [111]. For example, a counterfactual explanation on reversing a loan application decision might reveal a suite of sensitive information (location, 10-year income, marital status) to be modified, even though such information is meant to be anonymized. The leaked information can be accessed by third-party providers who may be part of the product design, or by malicious hackers. A key challenge is managing the balance between the fidelity of the delivered explanation and the sensitive features altered. Data leakage goes against the privacy-awareness goal of XAI, and such events are not rare in the financial sector, where there exists a constant supply of computerized bots looking to capitalize on these openings. The consequences often affect a large group of public stakeholders [35], and the affected firm has to pay large fines and incurs a loss of trust from its clients. In addition, overly expressive explanations may allow external competitors to reverse-engineer the models and potentially replicate and improve upon them, thereby compromising the competitive edge a company holds.
XAI techniques that improve transferability are another less frequently studied area. In the field of general AI, transferable knowledge is usually acquired through transfer learning [93], multi-task learning [77], meta-learning [53], and domain adaptation [126]. However, the main carriers of these learning paradigms at present are usually deep neural networks, and it is difficult to acquire explainability for a deep neural network by using these learning methods. In addition, knowledge forgetting also brings challenges to traditional neural network-based learning methods [52]: the old knowledge stored in the neural network is likely to be overwritten by newly learned knowledge unless it is explicitly retained. In light of this, how can we leverage explainability and transferability simultaneously? One possible direction is to utilize neural symbolic techniques. Neural symbolic AI has achieved significant impact in natural language processing (NLP), e.g., sentiment analysis [18] and metaphor processing [78]. It combines the merits of neural networks and symbolic representations. For example, neural networks have strong generalization ability in learning feature representations, while symbolic reasoning enables human-understandable explanations of the system's decision-making process through transparency and interpretability. Since symbolic knowledge can be readily stored in a knowledge base permanently, it avoids the problem of knowledge forgetting in neural networks. A comprehensive and accurate knowledge base can also reduce the fitting burden placed on the neural network. As a result, a
more lightweight and transparent neural network can be used in a neural symbolic system. However, developing domain-specific knowledge for finance is costly. Besides, developing symbolic representations for numerical data is also challenging.
Finally, improving fairness, confidence, and causality is also important for ethical concerns, yet FinXAI research in these areas is very limited. As noted in Table 4, there are not many explanation methods that approach these goals, e.g., ES for fairness with numerical features; EE for confidence with numerical features; and FR and TE for causality with numerical and textual features, respectively. However, a one-size-fits-all explanation is difficult to achieve. Hence, we highlight the importance of audience-centric XAI techniques as a more realistic expectation.
## 8. Challenges and Future Directions
We draw on the knowledge and insights gained from the body of FinXAI research conducted thus far and put forward a list of challenges and directions we consider important for readers. A few of these limitations have been similarly considered in previous works (Han et al., 2018), which presented seven major challenges encountered in the context of presenting explanations to stakeholders. Some of these limitations are evident from the reviewed XAI methodologies, and we further elaborate on them and suggest avenues for improvement.
### Over-reliance
A means of interpreting the model can be helpful and can transform how users interact with data. However, it can also cause users to over-rely on possibly inaccurate explanations. A survey studying how data scientists perceive explanations provided by different XAI tools found that a large proportion tend to over-trust the explanations provided (Kumar et al., 2019), especially the ones which have received widespread usage. The visual explanations delivered by feature relevance techniques such as SHAP tend to be absorbed at face value, which can cause researchers not to question their legitimacy. Concurrently, a data scientist who has spent an enormous amount of time designing the AI model may already hold prior beliefs about the outcome or model and is more inclined to accept the explanation if it aligns with those initial beliefs (Kumar et al., 2019). Such an occurrence is commonly known as confirmation bias. Over-trusting these explanations can be especially damaging if conveyed to the layperson and can result in the spread of misinformation to a wider audience.
### Social aspects
As mentioned by (Kumar et al., 2019), explanations are selective: receiving users tend to take in only a minor subset of the entire set of explanations, predominantly those that agree with their prior beliefs. This can sometimes cause the affected receiver to lose sight of the bigger picture and arrive at a misinterpreted conclusion. (Kumar et al., 2019) notes this as a mismatch between the solution's conceptual purpose and the receiver's mental model. XAI tools that produce a feature ranking figure may overload users with excessive information, thereby increasing their cognitive load and rendering the tool counterproductive. It is also observed that the amount of trust is correlated with the level of appreciation the receiver has for the explanation (Kumar et al., 2019). Take, for example, the case of a rejected loan application: an under-appreciated explanation would simply result in the applicant resubmitting the application to a different bank, without addressing the underlying root cause. Humans also tend to prefer contrastive explanations as opposed to visualizing a large number of probable causes; thus, designing the explanation to be counterfactual can reduce under-appreciation and rejection of XAI tools. Future research on human-centric explanations can look to draw inspiration from the social sciences and the study of human psychology to bridge the gap between the two ends of the explanation chain.
### Explanation evaluation
It is evident from Tables 1, 2, and 3 that only a small subset of reviewed works attempt to provide some form of quantitative measurement of the proposed XAI technique. An even smaller number perform a comparison between multiple XAI techniques, possibly due to model incompatibility and differences between the explanatory structures of individual XAI techniques. [50] uses a variety of evaluation approaches, grounded in both statistical and human knowledge and involving experts and non-technical users, while admitting the limitation of ambiguity and inconsistency in human judgment. In the case of surrogate model explanations, the fidelity of the surrogate model can be used to measure the accuracy of the approximation. However, such an approach cannot be used for feature relevance tools like SHAP [5]. A common basis for instilling interpretability in financial solutions goes beyond the social responsibilities of the provider firms and also concerns the need to comply with the rules and regulations laid out. Even so, the mismatch between each party's perception of explanation sufficiency extends beyond the model itself and includes commonly neglected variables such as accountable personnel, the feedback process, and personnel training procedures [63]. [55] highlights that explanations can be seen as a dynamic interaction between the conveyed message and the receiver's thought process. The effectiveness can be measured via goodness and satisfaction in the form of feedback upon receiving the message. This further exemplifies the fact that what makes an explanation good is largely subjective, and coming to a consensus on a suitable set of metrics is no trivial task. Among the set of works reviewed in this paper, there exist two forms of evaluation: either through statistical approaches (F1-score, accuracy, and t-test) or through the opinions of a human expert. The latter is defined as plausibility and should be made distinct from faithfulness, which reflects how the AI model reasons about its behavior. [59] states that the assessment of faithfulness should be independent of human judgment and that a common ground can be established by evaluating XAI techniques with respect to a pre-defined set of goals, rather than on the basis of achieving universal satisfaction. One criticized flaw in existing works on evaluating interpretation methods pertains to the distributional shift between training and test sets, as well as the infeasible requirement of retraining models. [117] creates a set of synthetic datasets with known discriminative features and additionally develops two new metrics which account for identifying the top relevant time steps in terms of ranking and score. The proposed method takes into consideration the temporal elements in time-series analysis. We further note that it is imperative for future works on XAI evaluation criteria to precisely define the objective and target audience of the explanation.
### Trade-off between performance and interpretability
It is common to ponder _"shouldn't we deploy more transparent models if no interpretability enhancement work is required?"_; however, such an initiative is often plagued by the limited representativeness of transparent models. The trade-off between performance and interpretability is quite commonly a major cause of the dilemma in selecting between black-box models and inherently transparent models. Although studies have shown that black-box models performing more complex operations do not necessarily lead to better performance [103; 104], they often do for unstructured information and noisy environments such as the financial markets. XAI tools that explain through a surrogate model have to face the burden of ensuring both the fidelity of the surrogate model and the effectiveness of the underlying AI model, all the while matching the required goals toward the receiving audience. Such a challenge highlights the necessity for a consensus metric to serve as a quantitative assessment. More often than not, the selected model at hand is more complex than required, resulting in additional explainability engineering. Moving forward, an efficient way of handling the trade-off is to prioritize the usage of transparent models if the obtained performance is satisfactory and progress to a more complex model when necessary.
### Better transparent models
We note that transparent models refer to models which exhibit inherent transparency without the need to apply post-hoc explainability techniques. However, caution should be taken with this assumption (Steintein et al., 2019). Inherent transparency depends on the achievable explanation goals and the explanation receiver, and there is much doubt surrounding truly inherently transparent models (Krause et al., 2019). Nevertheless, numerous studies advocate a greater need for adopting transparent models. A study by (Krause et al., 2019) argues that transparent models are essential for promoting fairness in machine learning, as they allow for easier identification and mitigation of biases in the decision-making process. A team of researchers (Krause et al., 2019) participated in an explainable machine learning challenge and concluded that transparent models not only sidestep the common issues of trust and misinterpretation but also exhibit the potential to match complex models in terms of performance on specific tasks. It would therefore benefit the well-being of society for researchers to prioritize the development of more sophisticated and robust transparent machine learning models that are able to balance the trade-off between model accuracy and interpretability.
### Human-centric XAI
We note that, in order to effectively support human decision-making and ensure the interpretability of AI models, there is a growing need for human-centric XAI tools that prioritize user understandability and usability. Explanations are interactive and should be viewed as a bidirectional form of communication (Krause et al., 2019), with the XAI tool explaining to the user and the user reciprocating for clarity. Incorporating Human-Computer Interaction (HCI) principles into the design of XAI systems is beneficial for conveying interpretability, as HCI principles embrace user-friendly interfaces that enhance human comprehension and engagement. One important aspect of this is the development of interactive systems that enable users to actively engage with and explore XAI models, allowing them to gain a deeper understanding of how the models work and how their outputs can be interpreted. Several studies have highlighted the importance of incorporating HCI principles into the development of interactive XAI systems (Krause et al., 2019). The results show that users had more trust when presented with virtual interactive explanations (Krause et al., 2019). Some popular examples of interactive XAI toolkits include the Microsoft AI widgets (Bogogley et al., 2019) and the What-If Tool by Google (Krause et al., 2019). Besides being easily approachable and interpretable, interactive systems improve system usability and entice users to return to the financial services provided, adding to the benefits for financial firms of prioritizing the development of human-centric, interactive XAI tools.
### Multimodal XAI
A less discussed avenue for improvement is the incorporation of multimodal information, particularly natural language. Among the works reviewed, a large subset processes input data involving only numerical features, while the inclusion of textual information remains a minority. An underlying reason might be the redundancy of using textual data, or the lack of a substantial increase in performance relative to the additional preprocessing work that the inclusion of such information requires. Nevertheless, there exist benefits from a transparency point of view in incorporating textual information. (Krause et al., 2019) highlighted that the separation of the underlying AI model from the explainability tool is less distinct since NLP models, particularly through the use of attention, can produce both the prediction and the explanation. Likewise, NLP-type explanations can be attractive for the layperson since they exhibit a natural feel which makes the whole process interactive and efficient (Krause et al., 2019). Ultimately, the incorporation of multimodal information entails more flexibility in crafting a good explanation and is supported by the abundance of textual information available. We believe the inclusion of NLP in XAI presents an exciting opportunity to enhance our understanding of financial models and to further promote transparency and trustworthiness in today's AI models.
## 9. Conclusion
Overall, explainability will continue to be a critical area of focus in FinTech as companies seek to build trust and confidence with consumers and regulators alike. To conclude our work, we have provided a comprehensive review of XAI tools in the financial domain (FinXAI), highlighting the significant progress made in recent years toward developing explainable AI models for financial applications. This includes both inherently transparent models and post-hoc explainability techniques, the former of which we advocate improving further. We provided a framework that frames the selection of appropriate FinXAI tools as a sequential decision-making process, placing great emphasis on the audience and on the iterative assessment of the produced explanations. The reviewed works are categorized according to their respective characteristics for ease of access by interested readers. We also examine the contributions of current FinXAI to several ethical goals, e.g., trustworthiness, fairness, informativeness, accessibility, privacy, confidence, causality, and transparency.
Though much great work has been done thus far, the review also reveals some limitations and challenges associated with FinXAI. These include the lack of appropriate metrics to measure both the faithfulness and plausibility of explanations, as well as issues concerning the over-reliance on potentially misleading explanations. Future research should focus on addressing these challenges, as well as exploring new directions for FinXAI, including integrating NLP into explanation-generating techniques and a greater focus on inherently transparent models. Nevertheless, there is great potential for XAI techniques to enhance transparency, trust, and accountability in the financial domain. This underscores the importance of active research and development in this field.
|
2306.00030 | Topos Many-Node Theory: Roots, Foundations and Predictions | Assuming that the first creatures of creation create a network, it is shown
how the network can be mapped to a topos discrete quantum manifold which
is equipped with both the discrete calculus and Alexandrov's algebra. We
assign a locale to each node of the space-time network and show that, in
general, invariance under Lorentz transformations is no longer true. It is
shown that the cosmological constant is non-zero and is proportional to the
second power of the Hubble radius. By considering a population (set), it is
shown that the entropy of the space-time network is quantized and increases as
the network grows. In consequence, the inflation of the world is an expected
phenomenon. Also, we show how world inflation can be described based on
the concepts of truth object and truth value belonging to topos theory. It is
shown that the quanta of vibrations, called netons, can be attributed to the
vibration of space-time network, and it is expected that they will be observed
in the future experiments related to cosmic background radiation. Finally, it
is shown that the root of noncommutative geometry is in attributing the locale
to the nodes of the space-time network instead of a point. This theory, which
includes the roots, foundations and predictions, is called the many-node
theory. | Hamidreza Simchi | 2023-05-31T06:29:25Z | http://arxiv.org/abs/2306.00030v1 | # Topos Many-Node Theory: Roots, Foundations and Predictions
###### Abstract
Among the various existing theories, we show how the concept of the space-time network has entered the physics of quantum gravity by reviewing the theories of loop quantum gravity and causal sets. Assuming that the first creatures of creation create a network, it is shown how the network can be mapped to a topos discrete quantum manifold which is equipped with both the discrete calculus and Alexandrov's algebra. We assign a locale to each node of the space-time network and show that, in general, invariance under Lorentz transformations is no longer true. It is shown that the cosmological constant is non-zero and is proportional to the second power of the Hubble radius. By considering a population (set), including newly born timid children and non-timid children who survive until the birth of the new network, it is shown that the entropy of the space-time network is quantized and increases as the network grows. In consequence, the inflation of the world is an expected phenomenon. Also, we show how world inflation can be described based on the concepts of truth object and truth value belonging to topos theory. Although the temperature and pressure are both high in the early moments of the creation of the world, it is shown that the quanta of vibrations, called netons, can be attributed to the vibration of the space-time network, and it is expected that they will be observed in future experiments related to the cosmic background radiation. Finally, it is shown that the root of noncommutative geometry
2303.18089 | Heralded and high-efficient entanglement concentrations based on linear
optics assisted by time-delay degree of freedom | Entanglement concentration is a critical technique to prevent degraded
fidelity and security in long-distance quantum communication. We propose novel
practical entanglement concentration protocols (ECPs) for less-entangled Bell
and Greenberger-Horne-Zeilinger states with unknown parameters by solely using
simple linear optics. We avoid the need for the post-selection principles or
photon-number-resolving detector to identify the parity-check measurement
completely by orchestrating auxiliary time degree of freedom, and the success
of ECPs is exactly heralded by the detection signatures without destroying the
incident qubits. Additionally, the outgoing incident photons kept are in the
maximally entangled or the less-entangled state, and the success probability
can be increased by recycling the latter. The heralded and the basic linear
optical elements make our practical ECPs accessible to experimental
investigation with current technology. | Gui-Long Jiang, Wen-Qiang Liu, Hai-Rui Wei | 2023-03-31T14:27:20Z | http://arxiv.org/abs/2303.18089v1 | Heralded and high-efficient entanglement concentrations based on linear optics assisted by time-delay degree of freedom
###### Abstract
Entanglement concentration is a critical technique to prevent degraded fidelity and security in long-distance quantum communication. We propose novel practical entanglement concentration protocols (ECPs) for less-entangled Bell and Greenberger-Horne-Zeilinger states with unknown parameters by solely using simple linear optics. We avoid the need for the post-selection principles or photon-number-resolving detectors to identify the parity-check measurement completely by orchestrating an auxiliary time degree of freedom, and the success of the ECPs is exactly heralded by the detection signatures without destroying the incident qubits. Additionally, the outgoing incident photons kept are in the maximally entangled or the less-entangled state, and the success probability can be increased by recycling the latter. The heralded operation and the basic linear optical elements make our practical ECPs accessible to experimental investigation with current technology.
## I Introduction
Entanglement, as a unique quantum mechanical phenomenon, plays an essential role in quantum information processing (QIP) theory and has attracted widespread attention [1; 2]. Entangled photons are often regarded as the most promising resource in a wide range of long-distance quantum communication tasks: quantum key distribution [3; 4], quantum teleportation [5; 6; 7], quantum dense coding [8; 9], quantum secret sharing [10; 11], quantum secure direct communication [12; 13; 14], etc. Maximally entangled optical states are indispensable for most QIP applications, owing to their high-speed transmission and outstanding low-noise properties [1]. However, maximally entangled states may be degraded to less-entangled states as the optical systems inevitably interact with channel noise and their environment in actual long-distance quantum communication, and these influences may degrade the fidelity, security, and success of the protocols. Fortunately, such degraded entangled state problems can be remedied well by employing entanglement purification [15] and entanglement concentration techniques [16].
Entanglement purification protocol (EPP) was first proposed by Bennett _et al._[15] in 1996 with controlled-NOT gate to extract a two-photon pure singlet state in a Werner state. Consequently, increasing efforts are being devoted to improving EPP [17; 18; 19; 20; 21; 22; 23]. Entanglement concentration protocol (ECP) [16] is another way to distill a maximally entangled state from a pure less-entangled state. The first ECP, based on Schmidt decomposition, was proposed by Bennett _et al._[16] in 1996. Later in 1999, Bose _et al._[24] proposed an ECP via entanglement swapping, and the improved work was proposed by Shi _et al._[25] in 2000. In 2001, Yamamoto _et al._[26] and Zhao _et al._[27] proposed an ECP for two partially entangled photon pairs with linear optics, and the schemes were experimentally demonstrated later in 2003 [28; 29]. In 2002, Paunkovic _et al._[30] proposed an ECP based on quantum statistics, and less knowledge of the initial states is required than most linear-optics-based ECPs. In 2008 and 2012, Sheng _et al._[31; 32] proposed ECPs by exploiting cross-Kerr medium. The efficiency of such ECPs is higher than the linear-optics-based ones.
The existing ECPs are mainly focused on two-photon Bell states, and they are also suitable for multi-photon Greenberger-Horne-Zeilinger (GHZ) states [31; 32], but not compatible with the \(W\) state. In 2010, Wang _et al._[33] proposed a practical scheme for concentrating the \(W\) state \(\alpha|HHV\rangle+\beta(|HVH\rangle+|VHH\rangle)\) with linear optics. Here \(|H\rangle\) and \(|V\rangle\) denote photons in the horizontal and vertical linear polarization states, respectively. Yildiz [34] proposed a scheme for distilling the asymmetric \(W\) states \(\frac{1}{\sqrt{2}}|001\rangle+\frac{1}{2}|010\rangle+\frac{1}{2}|100\rangle\) and \(\frac{1}{2}|001\rangle+\frac{1}{2}|010\rangle+\frac{1}{\sqrt{2}}|100\rangle\). In 2012, Sheng _et al._[35] presented linear-optics-based and cross-Kerr-based ECPs for \(W\) states with known parameters. Yan _et al._[36] designed an ECP for four-photon cluster states, utilizing cross-Kerr nonlinearity and a CNOT gate. In 2015, Sheng _et al._[37] used different parity check gates to concentrate \(N\)-particle \(W\) states. In 2017, Zhang _et al._[38]
proposed an ECP resorting to circuit quantum electrodynamics. In recent years, much attention has been paid to hyperentanglement concentration protocols (hyper-ECPs) due to their excellent properties such as high capacity, low loss rate, and fewer experimental requirements [39; 40; 41; 42; 43; 44; 45].
Parameter-splitting is the current optimal strategy to implement ECPs for less-entangled states with known parameters [39]. Post-selection principles are necessary for the existing linear optical ECPs for unknown less-entangled states [27; 28; 29; 32], as polarizing beam splitters (PBSs) are employed to pick up the desired instances in which each of the spatial modes contains exactly one photon. Destructive photon-number-resolving detectors can be used to discard the cases in which the photon pair coincides at one spatial mode. However, such sophisticated detectors are not likely to be available with current technology, which means the linear optical ECPs cannot be accomplished simply. In addition, the recycling strategies have only been introduced to increase the success probability of the cross-Kerr-based ECPs [31; 32]. Hence, it is significant to investigate heralded and recyclable ECPs for partially entangled states without post-selection or photon-number-resolving detectors.
In this paper, we first present a heralded ECP for unknown less-entangled Bell states resorting to linear optical elements. Compared to the previous schemes, we avoid the need for photon-number-resolving detectors or post-selection principles by introducing the time-delay degree of freedom (DOF). Our scheme is heralded unambiguously by the detection signatures, which makes our ECP much more practical. The incident photons for which distillation fails are kept in a less-entangled Bell state, and employing the recycling strategies can improve the success probability of concentration from 0.5 to 0.75 in principle. Only in an open quantum system can the probability of approaching the target state reach unity by iteration, due to the quantum anti-Zeno effect [46; 47]. Moreover, the scheme is also applicable to ECPs for multi-photon less-entangled unknown GHZ states, and these schemes are designed in detail later. The presented architectures for ECPs with linear optics can be realized exactly with current experimental technology.
## II heralded ECP for unknown Bell state with linear optics
In this section, we present a heralded ECP for two-photon polarization less-entangled Bell states with unknown parameters using linear optics. By introducing the time-delay DOF to the detected photons, our ECP can be exactly heralded by the detection signatures. The entanglement concentration process does not rely on the post-selection principle.
Suppose two maximally entangled Bell states \(|\phi\rangle_{AB}\) and \(|\phi\rangle_{A^{\prime}B^{\prime}}\) are generated initially from \(S_{1}\) and \(S_{2}\), respectively. Here
\[|\phi\rangle_{AB}=\frac{1}{\sqrt{2}}(|HH\rangle+|VV\rangle)_{AB},\quad|\phi \rangle_{A^{\prime}B^{\prime}}=\frac{1}{\sqrt{2}}(|HH\rangle+|VV\rangle)_{A^{ \prime}B^{\prime}}. \tag{1}\]
Figure 1: Schematic diagram of the ECP for a partially entangled Bell state with unknown parameters. \(S_{1}\) and \(S_{2}\) are two pairs of identical entanglement sources for \(|\phi\rangle_{AB}\) and \(|\phi\rangle_{A^{\prime}B^{\prime}}\), respectively. BS denotes the 50:50 beam splitter. PBS\({}_{i}\) (\(i=1,2,\cdots,6\)) is a polarizing beam splitter which transmits the \(H\)-polarization component and reflects the \(V\)-polarization component. HWP\({}^{45^{\circ}}\) and HWP represent half-wave plates oriented at \(45^{\circ}\) and \(22.5^{\circ}\), respectively. \(D_{i}\) (\(i=1,2,3,4\)) is a single-photon detector. The optical circle on the spatial mode denotes time delay \(t_{0}\) or \(t_{1}\).
The state of the four-photon system composed of photons \(A\), \(B\), \(A^{\prime}\), and \(B^{\prime}\) can be described as
\[\begin{split}|\Phi_{0}\rangle&=|\phi\rangle_{AB} \otimes|\phi\rangle_{A^{\prime}B^{\prime}}\\ &=\frac{1}{2}(|HH\rangle+|VV\rangle)_{AB}\otimes(|HH\rangle+|VV \rangle)_{A^{\prime}B^{\prime}}.\end{split} \tag{2}\]
Then as shown in Fig. 1, photons \(A\) and \(B^{\prime}\) immediately pass through a 50:50 beam splitter (BS), resulting in the following transformations
\[\begin{split}|\Gamma\rangle_{A}\stackrel{{\text{BS}}} {{\longrightarrow}}\tfrac{1}{\sqrt{2}}(|\Gamma\rangle_{A}+|\Gamma\rangle_{B^{ \prime}}),\quad|\Gamma\rangle_{B^{\prime}}\stackrel{{\text{BS}}} {{\longrightarrow}}\tfrac{1}{\sqrt{2}}(-|\Gamma\rangle_{A}+|\Gamma\rangle_{B^ {\prime}}),\end{split} \tag{3}\]
where \(|\Gamma\rangle\) represents the polarization state \(|H\rangle\) or \(|V\rangle\). Considering Eq. (3), after passing the BS, \(|\Phi_{0}\rangle\) is transformed into
\[\begin{split}|\Phi_{1}\rangle=&\frac{1}{4}[(|H \rangle_{A}+|H\rangle_{B^{\prime}})|H\rangle_{B}+(|V\rangle_{A}+|V\rangle_{B^ {\prime}})|V\rangle_{B}]\\ &\otimes[|H\rangle_{A^{\prime}}(|H\rangle_{B^{\prime}}-|H\rangle_ {A})+|V\rangle_{A^{\prime}}(|V\rangle_{B^{\prime}}-|V\rangle_{A})].\end{split} \tag{4}\]
Subsequently, photons \(A\) and \(A^{\prime}\) (\(B\) and \(B^{\prime}\)) of the state \(|\Phi_{1}\rangle\) are sent to Alice (Bob), and owing to the noisy channels, \(|\Phi_{1}\rangle\) may decay to a partially less-entangled state
\[\begin{split}|\Phi_{2}\rangle=&\frac{1}{2}[\alpha(| H\rangle_{A}+|H\rangle_{B^{\prime}})|H\rangle_{B}+\beta(|V\rangle_{A}+|V\rangle_{B^ {\prime}})|V\rangle_{B}]\\ &\otimes[\alpha|H\rangle_{A^{\prime}}(|H\rangle_{B^{\prime}}-|H \rangle_{A})+\beta|V\rangle_{A^{\prime}}(|V\rangle_{B^{\prime}}-|V\rangle_{A} )],\end{split} \tag{5}\]
where the unknown parameters \(\alpha\) and \(\beta\) satisfy the normalization relation \(|\alpha|^{2}+|\beta|^{2}=1\).
In order to distill the maximally entangled Bell state \((|HH\rangle+|VV\rangle)/\sqrt{2}\) from \(|\Phi_{2}\rangle\), the two distant parties, Alice and Bob, need to complete the operations shown in Fig. 1. To describe this process more clearly, and taking the Hong-Ou-Mandel effect into account, we rewrite Eq. (5) in the following normalized form
\[\begin{split}|\Phi_{2}\rangle=&\frac{1}{\sqrt{2}}[ \alpha^{2}|HH\rangle_{A^{\prime}B}(|HH\rangle_{B^{\prime}B^{\prime}}-|HH \rangle_{AA})+\beta^{2}|VV\rangle_{A^{\prime}B}(|VV\rangle_{B^{\prime}B^{ \prime}}-|VV\rangle_{AA})]\\ &+\frac{1}{2}[\alpha\beta(|VH\rangle+|HV\rangle)_{A^{\prime}B}| HV\rangle_{B^{\prime}B^{\prime}}-\alpha\beta(|VH\rangle+|HV\rangle)_{A^{\prime}B}| HV\rangle_{AA}\\ &+\alpha\beta(|VH\rangle-|HV\rangle)_{A^{\prime}B}|HV\rangle_{ AB^{\prime}}-\alpha\beta(|VH\rangle-|HV\rangle)_{A^{\prime}B}|VH\rangle_{AB^{ \prime}}].\end{split} \tag{6}\]
Specifically, Alice flips the state of photon \(A^{\prime}\) by using a half-wave plate oriented at \(45^{\circ}\). That is, \(\text{HWP}^{45^{\circ}}\) completes the transformations \(|H\rangle\xrightarrow{\text{HWP}^{45^{\circ}}}|V\rangle\) and \(|V\rangle\xrightarrow{\text{HWP}^{45^{\circ}}}|H\rangle\). Hence, \(\text{HWP}^{45^{\circ}}\) transforms \(|\Phi_{2}\rangle\) into
\[\begin{split}|\Phi_{3}\rangle=&\frac{1}{\sqrt{2}}[ \alpha^{2}|VH\rangle_{A^{\prime}B}(|HH\rangle_{B^{\prime}B^{\prime}}-|HH \rangle_{AA})+\beta^{2}|HV\rangle_{A^{\prime}B}(|VV\rangle_{B^{\prime}B^{ \prime}}-|VV\rangle_{AA})]\\ &+\frac{1}{2}[\alpha\beta(|HH\rangle+|VV\rangle)_{A^{\prime}B}| HV\rangle_{B^{\prime}B^{\prime}}-\alpha\beta(|HH\rangle+|VV\rangle)_{A^{\prime}B}| HV\rangle_{AA}\\ &+\alpha\beta(|HH\rangle-|VV\rangle)_{A^{\prime}B}|HV\rangle_{ AB^{\prime}}-\alpha\beta(|HH\rangle-|VV\rangle)_{A^{\prime}B}|VH\rangle_{AB^{ \prime}}].\end{split} \tag{7}\]
Next, by using an unbalanced interferometer consisting of two PBSs, Alice (Bob) introduces time-delays \(t_{0}\) and \(t_{1}\) to the \(H\)- and \(V\)-polarization components of photon \(A\) (\(B^{\prime}\)), respectively. Here \(t_{0}\) and \(t_{1}\) satisfy \(\omega(t_{0}-t_{1})=2n\pi\), where \(n\) is a nonzero integer. The state \(|\Phi_{3}\rangle\) then becomes
\[\begin{split}|\Phi_{4}\rangle=&\frac{1}{\sqrt{2}}[ \alpha^{2}|VH\rangle_{A^{\prime}B}(|H_{t_{0}}H_{t_{0}}\rangle_{B^{\prime}B^{ \prime}}-|H_{t_{0}}H_{t_{0}}\rangle_{AA})\\ &+\beta^{2}|HV\rangle_{A^{\prime}B}(|V_{t_{1}}V_{t_{1}}\rangle_{B ^{\prime}B^{\prime}}-|V_{t_{1}}V_{t_{1}}\rangle_{AA})]\\ &+\frac{1}{2}[\alpha\beta(|HH\rangle+|VV\rangle)_{A^{\prime}B}|H_ {t_{0}}V_{t_{1}}\rangle_{BA^{\prime}B^{\prime}}\\ &-\alpha\beta(|HH\rangle+|VV\rangle)_{A^{\prime}B}|H_{t_{0}}V_{t _{1}}\rangle_{AA}\\ &+\alpha\beta(|HH\rangle-|VV\rangle)_{A^{\prime}B}|H_{t_{0}}V_{t _{1}}\rangle_{AB^{\prime}}\\ &-\alpha\beta(|HH\rangle-|VV\rangle)_{A^{\prime}B}|V_{t_{1}}H_ {t_{0}}\rangle_{AB^{\prime}}].\end{split} \tag{8}\]
After that, Alice (Bob) performs a Hadamard operation on photon \(A\) (\(B^{\prime}\)) with a half wave plate (HWP). That is, HWP completes the transformations
\[|H\rangle\xrightarrow{\text{HWP}}\tfrac{1}{\sqrt{2}}(|H\rangle+|V\rangle),\quad|V \rangle\xrightarrow{\text{HWP}}\tfrac{1}{\sqrt{2}}(|H\rangle-|V\rangle). \tag{9}\]
Those rotations convert \(|\Phi_{4}\rangle\) into
\[\begin{split}|\Phi_{5}\rangle=&\frac{\sqrt{| \alpha|^{4}+|\beta|^{4}}}{2\sqrt{2}}|\phi_{1}^{+}\rangle_{A^{\prime}B}[D_{B^{ \prime}B^{\prime}}(0)(|HH\rangle+|VV\rangle)_{B^{\prime}B^{\prime}}-D_{AA}(0)(| HH\rangle+|VV\rangle)_{AA}]\\ &+\frac{\sqrt{|\alpha|^{4}+|\beta|^{4}}}{2\sqrt{2}}|\phi_{1}^{-} \rangle_{A^{\prime}B}[D_{B^{\prime}B^{\prime}}(0)(|HV\rangle+|VH\rangle)_{B^{ \prime}B^{\prime}}-D_{AA}(0)(|HV\rangle+|VH\rangle)_{AA}]\\ &+\frac{\alpha\beta}{2\sqrt{2}}|\phi^{+}\rangle_{A^{\prime}B}[(|H_ {t_{0}}H_{t_{1}}\rangle-|H_{t_{0}}V_{t_{1}}\rangle+|V_{t_{0}}H_{t_{1}}\rangle- |V_{t_{0}}V_{t_{1}}\rangle)_{B^{\prime}B^{\prime}}-(|H_{t_{0}}H_{t_{1}}\rangle \\ &-|H_{t_{0}}V_{t_{1}}\rangle+|V_{t_{0}}H_{t_{1}}\rangle-|V_{t_{0} }V_{t_{1}}\rangle)_{AA}]+\frac{\alpha\beta}{2\sqrt{2}}|\phi^{-}\rangle_{A^{ \prime}B}[(|H_{t_{0}}H_{t_{1}}\rangle-|H_{t_{0}}V_{t_{1}}\rangle\\ &+|V_{t_{0}}H_{t_{1}}\rangle-|V_{t_{0}}V_{t_{1}}\rangle-|H_{t_{1} }H_{t_{0}}\rangle-|H_{t_{1}}V_{t_{0}}\rangle+|V_{t_{1}}H_{t_{0}}\rangle+|V_{t_ {1}}V_{t_{0}}\rangle)_{AB^{\prime}}].\end{split} \tag{10}\]
where
\[|\phi^{\pm}\rangle_{A^{\prime}B}=\frac{1}{\sqrt{2}}(|HH\rangle\pm|VV\rangle)_{ A^{\prime}B}, \tag{11}\]
\[|\phi^{\pm}_{1}\rangle_{A^{\prime}B}=(\alpha^{\prime}|VH\rangle\pm\beta^{ \prime}|HV\rangle)_{A^{\prime}B} \tag{12}\]
with \(\alpha^{\prime}=\frac{\alpha^{2}}{\sqrt{|\alpha|^{4}+|\beta|^{4}}}\), \(\beta^{\prime}=\frac{\beta^{2}}{\sqrt{|\alpha|^{4}+|\beta|^{4}}}\). Here \(D_{AA(B^{\prime}B^{\prime})}(0)\) denotes that there is no relative time-delay between the two photons \(A\) (\(B^{\prime}\)); that is, the single-photon detectors held by Alice (Bob) click with no time interval between them.
Finally, Alice (Bob) uses PBS\({}_{5}\) (PBS\({}_{6}\)) and two single-photon detectors \(\{D_{1},D_{2}\}\) (\(\{D_{3},D_{4}\}\)) to complete the measurement on the outgoing photon \(A\) (\(B^{\prime}\)) in the basis \(\{|H\rangle,|V\rangle\}\). The relationship between the detection signatures, the corresponding output states, and the feed-forward operations on photon \(B\) is given in Tab. 1. If detector pair \((D_{i},D_{j})\) (\(i,j=1,2,3,4\)) triggers with a time interval of \(|t_{0}-t_{1}|\), they will get the desired maximally entangled state \(|\phi^{+}\rangle_{A^{\prime}B}\) with a success probability of \(P=2|\alpha\beta|^{2}\) after applying the corresponding feed-forward operation shown in Tab. 1. Otherwise, it means that detector pair \((D_{i},D_{j})\) fires without a time interval. In such a case, performing the feed-forward operation, they can get the normalized state \(|\phi^{+}_{1}\rangle_{A^{\prime}B}\) with a probability of \(|\alpha|^{4}+|\beta|^{4}=1-2|\alpha\beta|^{2}\).
The solid line in Fig. 2 shows the success probability of the presented ECP, where \(|\alpha|=\sqrt{1-|\beta|^{2}}\in(0,1)\) is taken. Besides, it is obvious that the kept photons \(A^{\prime}\) and \(B\) are in the state \(|\phi^{+}\rangle_{A^{\prime}B}\) or \(|\phi^{+}_{1}\rangle_{A^{\prime}B}\). Alice and Bob can further distill \(|\phi^{+}\rangle_{A^{\prime}B}\) from \(|\phi^{+}_{1}\rangle_{A^{\prime}B}\), because \(|\phi^{+}_{1}\rangle_{A^{\prime}B}\), after photon \(A^{\prime}\) passes through an HWP\({}^{45^{\circ}}\), has a similar form to \(|\phi\rangle_{AB}\) subjected to channel noise [27; 31]. Therefore, by recycling the state \(|\phi^{+}_{1}\rangle_{A^{\prime}B}\) and applying the ECP of Ref. [27] or [32], the success probability can be efficiently improved by \((|\alpha|^{4}+|\beta|^{4})\cdot 2|\alpha^{\prime}\beta^{\prime}|^{2}=\frac{2| \alpha\beta|^{4}}{|\alpha|^{4}+|\beta|^{4}}\). As the dotted line in Fig. 2 shows, the total success probability is increased from 0.5 to 0.75 in principle.
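As a quick numerical cross-check of these values (a sketch we add here under the assumption of real \(\alpha\) and \(\beta\) with \(\alpha^{2}+\beta^{2}=1\); it is not part of the original derivation), the first-round and recycled success probabilities can be evaluated directly:

```python
# Sketch: success probability of the ECP for Bell states with and without one
# round of recycling, assuming real alpha, beta with alpha^2 + beta^2 = 1.
import numpy as np

alpha = np.linspace(1e-3, 1 - 1e-3, 999)
beta = np.sqrt(1 - alpha**2)

p_first = 2 * (alpha * beta) ** 2                            # first-round success
p_recycle = 2 * (alpha * beta) ** 4 / (alpha**4 + beta**4)   # gain from recycling |phi_1^+>
p_total = p_first + p_recycle

print(f"max first-round probability: {p_first.max():.3f}")   # ~0.5 at alpha = 1/sqrt(2)
print(f"max total probability:       {p_total.max():.3f}")   # ~0.75 at alpha = 1/sqrt(2)
```

Both maxima occur at \(\alpha=\beta=1/\sqrt{2}\), matching the solid and dotted curves in Fig. 2.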
\begin{table}
\begin{tabular}{c c c c} \hline \hline Single-photon & Outcomes & Feed-forward & Success \\ detectors & of \(A^{\prime}\) and \(B\) & on \(B\) & probability \\ \hline \(D_{1},D_{2},D_{3},D_{4}\) & \(|\phi^{+}_{1}\rangle_{A^{\prime}B}\) & none & \(|\alpha|^{4}+|\beta|^{4}\) \\ \((D_{1},D_{2}),(D_{3},D_{4})\) & \(|\phi^{+}_{1}\rangle_{A^{\prime}B}\) & \(\sigma_{z}\) & \\ \hline \((D_{1}^{t_{0}},D_{1}^{t_{1}}),(D_{2}^{t_{0}},D_{2}^{t_{1}}),(D_{3}^{t_{0}},D_{3 }^{t_{1}}),(D_{4}^{t_{0}},D_{4}^{t_{1}})\) & \(|\phi^{+}\rangle_{A^{\prime}B}\) & none & \\ \((D_{1}^{t_{0}},D_{2}^{t_{1}}),(D_{1}^{t_{1}},D_{2}^{t_{2}}),(D_{3}^{t_{0}},D_{4 }^{t_{1}}),(D_{3}^{t_{1}},D_{4}^{t_{0}})\) & & \(2|\alpha\beta|^{2}\) \\ \((D_{1}^{t_{0}},D_{3}^{t_{1}}),(D_{1}^{t_{1}},D_{3}^{t_{3}}),(D_{2}^{t_{0}},D_{4 }^{t_{1}}),(D_{2}^{t_{1}},D_{4}^{t_{0}})\) & \(|\phi^{-}\rangle_{A^{\prime}B}\) & \(\sigma_{z}\) & \\ \((D_{2}^{t_{0}},D_{3}^{t_{1}}),(D_{2}^{t_{2}},D_{3}^{t_{2}}),(D_{1}^{t_{0}},D_{4 }^{t_{1}}),(D_{1}^{t_{1}},D_{4}^{t_{0}})\) & \(|\phi^{-}\rangle_{A^{\prime}B}\) & \(\sigma_{z}\) & \\ \hline \hline \end{tabular}
\end{table}
Table 1: The relations between the detection signatures and the classical feed-forward operations to complete the ECP for Bell state. The operation \(\sigma_{z}=|H\rangle\langle H|-|V\rangle\langle V|\) can be accomplished with a half-wave plate oriented at \(0^{\circ}\). \((D_{i}^{t_{0}},D_{j}^{t_{1}})\) or \((D_{i}^{t_{1}},D_{j}^{t_{0}})\) (\(i,j=1,2,3,4\)) indicates that \((D_{i},D_{j})\) triggers with a time interval of \(|t_{0}-t_{1}|\).
## III Heralded ECP for Unknown GHZ state with linear optics
Our heralded ECP for polarization unknown Bell states with linear optics can be generalized to the case of multi-photon GHZ states. Suppose two maximally entangled GHZ states \(|\psi\rangle_{ABC}\) and \(|\psi\rangle_{A^{\prime}B^{\prime}C^{\prime}}\) are generated initially from \(S_{1}\) and \(S_{2}\), respectively. Here
\[|\psi\rangle_{ABC}=\frac{1}{\sqrt{2}}(|HHH\rangle+|VVV\rangle)_{ABC},\ \ |\psi \rangle_{A^{\prime}B^{\prime}C^{\prime}}=\frac{1}{\sqrt{2}}(|HHH\rangle+|VVV \rangle)_{A^{\prime}B^{\prime}C^{\prime}}. \tag{13}\]
The state of the six-photon system composed of photons \(A\), \(B\), \(C\), \(A^{\prime}\), \(B^{\prime}\), and \(C^{\prime}\) is given by
\[\begin{split}|\Psi_{0}\rangle=&|\psi\rangle_{ABC}\otimes|\psi\rangle_{A^{\prime}B^{\prime}C^{\prime}}\\ =&\frac{1}{2}(|HHH\rangle+|VVV\rangle)_{ABC}\otimes(|HHH\rangle+|VVV\rangle)_{A^{\prime}B^{\prime}C^{\prime}}.\end{split} \tag{14}\]
As shown in Fig. 3, photons \(A\) and \(B^{\prime}\) pass through a BS, and then the photon pairs \(AA^{\prime}\), \(BB^{\prime}\), and \(CC^{\prime}\) of the state \(|\Psi_{0}\rangle\) are sent to the three distant parties Alice, Bob, and Charlie, respectively. Owing to the noisy channels, \(|\Psi_{0}\rangle\) may
Figure 3: Schematic diagram of the ECP for a three-photon GHZ states with unknown parameters. \(S_{1}\) and \(S_{2}\) are entanglement sources for \(|\psi\rangle_{ABC}\) and \(|\psi\rangle_{A^{\prime}B^{\prime}C^{\prime}}\), respectively. The setups in dashed boxes held by Alice and Bob are shown in Fig. 1.
Figure 2: The success probability of the ECP for Bell states as a function of parameter \(\alpha\). Here \(|\alpha|=\sqrt{1-|\beta|^{2}}\in(0,1)\). The dotted line depicts the total success rate after recycling \(|\phi_{1}^{+}\rangle_{A^{\prime}B}\).
decay to a partially less-entangled state
\[\begin{split}|\Psi_{1}\rangle=&\frac{1}{2}[\alpha(|H \rangle_{A}+|H\rangle_{B^{\prime}})|H\rangle_{B}|H\rangle_{C}+\beta(|V\rangle_{A }+|V\rangle_{B^{\prime}})|V\rangle_{B}|V\rangle_{C}]\\ &\otimes[\alpha|H\rangle_{A^{\prime}}(|H\rangle_{B^{\prime}}-|H \rangle_{A})|H\rangle_{C^{\prime}}+\beta|V\rangle_{A^{\prime}}(|V\rangle_{B^{ \prime}}-|V\rangle_{A})|V\rangle_{C^{\prime}}]\end{split} \tag{15}\]
where the unknown parameters \(\alpha\) and \(\beta\) satisfy the normalization relation \(|\alpha|^{2}+|\beta|^{2}=1\).
The dashed boxes in Fig. 3 held by Alice and Bob are the same setups as shown in Fig. 1. To be specific, Alice performs the \(\sigma_{x}\) operation on photon \(A^{\prime}\). After executing this operation, time delays \(t_{0}\) and \(t_{1}\) are introduced by Alice (Bob) to photon \(A\) (\(B^{\prime}\)) by using the unbalanced interferometers, i.e., \(|H\rangle_{A}\rightarrow|H_{t_{0}}\rangle_{A}\), \(|V\rangle_{A}\rightarrow|V_{t_{1}}\rangle_{A}\) (\(|H\rangle_{B^{\prime}}\rightarrow|H_{t_{0}}\rangle_{B^{\prime}}\), \(|V\rangle_{B^{\prime}}\rightarrow|V_{t_{1}}\rangle_{B^{\prime}}\)). Then, \(|\Psi_{1}\rangle\) is converted to
\[\begin{split}|\Psi_{2}\rangle=&\frac{1}{\sqrt{2}}[ \alpha^{2}|VHHH\rangle_{A^{\prime}BCC^{\prime}}(|H_{t_{0}}H_{t_{0}}\rangle_{B^{ \prime}B^{\prime}}-|H_{t_{0}}H_{t_{0}}\rangle_{AA})\\ &+\beta^{2}|HVVV\rangle_{A^{\prime}BCC^{\prime}}(|V_{t_{1}}V_{t_{ 1}}\rangle_{B^{\prime}B^{\prime}}-|V_{t_{1}}V_{t_{1}}\rangle_{AA})]\\ &+\frac{\alpha\beta}{2}[(|HHHV\rangle+|VVVH\rangle)_{A^{\prime} BCC^{\prime}}|H_{t_{0}}V_{t_{1}}\rangle_{B^{\prime}B^{\prime}}\\ &-(|HHHV\rangle+|VVVH\rangle)_{A^{\prime}BCC^{\prime}}|H_{t_{0}}V _{t_{1}}\rangle_{AA}\\ &+(|HHHV\rangle-|VVVH\rangle)_{A^{\prime}BCC^{\prime}}|H_{t_{0}}V _{t_{1}}\rangle_{AB^{\prime}}\\ &-(|HHHV\rangle-|VVVH\rangle)_{A^{\prime}BCC^{\prime}}|V_{t_{1}}H _{t_{0}}\rangle_{AB^{\prime}}].\end{split} \tag{16}\]
Then, as shown in Fig. 3, Alice, Bob, and Charlie lead photons \(A\), \(B^{\prime}\), and \(C^{\prime}\) to pass through HWPs, respectively. These half-wave plates transform \(|\Psi_{2}\rangle\) into
\[\begin{split}|\Psi_{3}\rangle=&\frac{\sqrt{|\alpha |^{4}+|\beta|^{4}}}{2\sqrt{2}}|\psi_{1}^{+}\rangle_{A^{\prime}BC}[D_{B^{\prime} B^{\prime}}(0)(|HHH\rangle+|HVV\rangle+|VHV\rangle+|VVH\rangle)_{B^{\prime}B^{ \prime}C^{\prime}}\\ &-D_{AA}(0)(|HHH\rangle+|HVV\rangle+|VHV\rangle+|VVH\rangle)_{ AAC^{\prime}}]\\ &+\frac{\sqrt{|\alpha|^{4}+|\beta|^{4}}}{2\sqrt{2}}|\psi_{1}^{-} \rangle_{A^{\prime}BC}[D_{B^{\prime}B^{\prime}}(0)(|HHV\rangle+|HVH\rangle+|VHH \rangle+|VVV\rangle)_{B^{\prime}B^{\prime}C^{\prime}}\\ &-D_{AA}(0)(|HHV\rangle+|HVH\rangle+|VHH\rangle+|VVV\rangle)_{ AAC^{\prime}}]\\ &+\frac{\alpha\beta}{2}|\psi^{+}\rangle_{A^{\prime}BC}[(|H_{t_{0}}H _{t_{1}}H\rangle-|H_{t_{0}}V_{t_{1}}H\rangle+|V_{t_{0}}H_{t_{1}}H\rangle-|V_{t_{ 0}}V_{t_{1}}H\rangle)_{B^{\prime}B^{\prime}C^{\prime}}\\ &-(|H_{t_{0}}H_{t_{1}}H\rangle-|H_{t_{0}}V_{t_{1}}H\rangle+|V_{t_ {0}}H_{t_{1}}H\rangle-|V_{t_{0}}V_{t_{1}}H\rangle)_{AAC^{\prime}}\\ &-(|H_{t_{0}}H_{t_{1}}V\rangle-|H_{t_{0}}V_{t_{1}}V\rangle+|V_{t_ {0}}H_{t_{1}}V\rangle-|V_{t_{0}}V_{t_{1}}V\rangle)_{AB^{\prime}C^{\prime}}\\ &+(|H_{t_{1}}H_{t_{0}}V\rangle+|H_{t_{1}}V_{t_{0}}V\rangle-|V_{t_ {1}}H_{t_{0}}V\rangle-|V_{t_{1}}V_{t_{0}}V\rangle)_{AB^{\prime}C^{\prime}}]\\ &+\frac{\alpha\beta}{2}|\psi^{-}\rangle_{A^{\prime}BC}[(|H_{t_{0}} H_{t_{1}}V\rangle-|H_{t_{0}}V_{t_{1}}V\rangle+|V_{t_{0}}H_{t_{1}}V\rangle-|V_{t_ {0}}V_{t_{1}}V\rangle)_{AAC^{\prime}}\\ &-(|H_{t_{0}}H_{t_{1}}V\rangle-|H_{t_{0}}V_{t_{1}}V\rangle+|V_{t_ {0}}H_{t_{1}}V\rangle-|V_{t_{0}}V_{t_{1}}V\rangle)_{B^{\prime}B^{\prime}C^{ \prime}}\\ &+(|H_{t_{0}}H_{t_{1}}H\rangle-|H_{t_{0}}V_{t_{1}}H\rangle+|V_{t_ {0}}H_{t_{1}}H\rangle-|V_{t_{0}}V_{t_{1}}H\rangle)_{AB^{\prime}C^{\prime}}\\ &-(|H_{t_{1}}H_{t_{0}}H\rangle+|H_{t_{1}}V_{t_{0}}H\rangle-|V_{t_ {1}}H_{t_{0}}H\rangle-|V_{t_{1}}V_{t_{0}}H\rangle)_{AB^{\prime}C^{\prime}}]. \end{split} \tag{17}\]
where
\[|\psi^{\pm}\rangle_{A^{\prime}BC}=\frac{1}{\sqrt{2}}(|HHH\rangle\pm|VVV\rangle)_ {A^{\prime}BC}, \tag{18}\]
\[|\psi^{\pm}_{1}\rangle_{A^{\prime}BC}=\frac{\alpha^{2}}{\sqrt{|\alpha|^{4}+| \beta|^{4}}}|VHH\rangle_{A^{\prime}BC}\pm\frac{\beta^{2}}{\sqrt{|\alpha|^{4}+| \beta|^{4}}}|HVV\rangle_{A^{\prime}BC}. \tag{19}\]
Finally, the outcomes of photons \(A\), \(B^{\prime}\), and \(C^{\prime}\) are measured by PBSs and single-photon detectors. Tab. 2 depicts the detection signatures and the output states. When the detectors held by Alice and Bob are triggered with a time interval of \(|t_{0}-t_{1}|\), Alice, Bob, and Charlie will get the desired maximally entangled state \(|\psi^{+}\rangle_{A^{\prime}BC}\) with a success probability of \(2|\alpha\beta|^{2}\), after performing the feed-forward operations on photon \(B\). If the detectors held by Alice and Bob are triggered simultaneously, they will get the recyclable normalization state \(|\psi^{+}_{1}\rangle_{A^{\prime}BC}\) with a probability of
\(|\alpha|^{4}+|\beta|^{4}\). By the same argument as that made in Section II, the success probability can be increased by recycling \(|\psi_{1}^{+}\rangle_{A^{\prime}BC}\), matching the dotted line in Fig. 2.
The schematic setup to implement the heralded and non-post-selection ECP for multi-photon GHZ states with unknown parameters is shown in Fig. 4. This ECP has the same success probability as the presented ECP for Bell states, and it can likewise be increased to 0.75.
## IV Discussion and Summary
Entanglement concentration is a powerful way to deal with the negative effects of environmental noise in long-distance quantum communication. An ECP with unknown parameters is more practical than one with known parameters, as environmental decoherence leaves the parties blind to the information about the state. However, the existing ECPs for an unknown less-entangled state cannot be accomplished simply with linear optics, as PBSs cannot exactly complete the parity-check measurement without photon-number-resolving detectors. Moreover, the destructive measurements, involving the photon pair coincidence at a detector, make the recycling strategy impossible. Cross-Kerr media and matter platforms such as atoms, quantum dots, and nitrogen-vacancy color centres in diamond have been employed to bypass the destructive measurements and photon-number-resolving detectors, and to increase the success probability. However, the giant Kerr nonlinearities are difficult to achieve in experiment with current technology, and \(\theta\approx 10^{-18}\) rad for natural Kerr media is extremely small. Implementations and manipulations of matter qubits are challenging in experiment due to inefficiency and impracticality.
The key element of the presented schemes is the time delay. By introducing the time-delay DOF to the photons \(A\) and \(B^{\prime}\), the detection signatures can exactly distinguish the even-parity states \(\{|H_{t_{0}}H_{t_{0}}\rangle_{AA},|V_{t_{1}}V_{t_{1}}\rangle_{AA},|H_{t_{0}}H_{t_{0}}\rangle_{B^{\prime}B^{\prime}},|V_{t_{1}}V_{t_{1}}\rangle_{B^{\prime}B^{\prime}}\}\) from the odd-parity states \(\{|H_{t_{0}}V_{t_{1}}\rangle_{AB^{\prime}},|V_{t_{1}}H_{t_{0}}\rangle_{AB^{\prime}},|H_{t_{0}}V_{t_{1}}\rangle_{AA},|H_{t_{0}}V_{t_{1}}\rangle_{B^{\prime}B^{\prime}}\}\), which allows the schemes to be accomplished without post-selection principles. Moreover, since the time-delay trick leaves the undesired terms indistinguishable from one another, the incident photons held by the parties remain in the maximally entangled state or in the recyclable less-entangled state. Though the total success probability of \((2|\alpha\beta|^{2}+\frac{2|\alpha\beta|^{4}}{|\alpha|^{4}+|\beta|^{4}})\) obtained by recycling the less-entangled states is still lower than that of the nonlinear ECP of Ref. [32], linear optical implementations of our schemes are highly efficient in practice. However, the inevitable imperfect
\begin{table}
\begin{tabular}{c c c c} \hline \hline Single- photon & Outcomes & Feed-forward & Success \\ detectors & of \(A^{\prime}\), \(B\), and \(C\) & on \(B\) & probability \\ \hline \((D_{1},D_{2},D_{5}),(D_{3},D_{4},D_{5})\) & \(|\psi_{1}^{+}\rangle_{A^{\prime}BC}\) & none & \\ \((D_{1},D_{6}),(D_{2},D_{6}),(D_{3},D_{6}),(D_{4},D_{6})\) & & & \(|\alpha|^{4}+|\beta|^{4}\) \\ \((D_{1},D_{2},D_{6}),(D_{3},D_{4},D_{6})\) & \(|\psi_{1}^{-}\rangle_{A^{\prime}BC}\) & \(\sigma_{z}\) & \\ \((D_{1},D_{5}),(D_{2},D_{5}),(D_{3},D_{5}),(D_{4},D_{5})\) & & & \\ \hline \((D_{1}^{t_{0}},D_{1}^{t_{1}},D_{6}),(D_{2}^{t_{0}},D_{1}^{t_{1}},D_{6})\) & & & \\ \((D_{3}^{t_{0}},D_{3}^{t_{1}},D_{6}),(D_{4}^{t_{1}},D_{6})\) & & & \\ \((D_{1}^{t_{0}},D_{2}^{t_{1}},D_{6}),(D_{1}^{t_{1}},D_{2}^{t_{0}},D_{6})\) & \(|\psi^{+}\rangle_{A^{\prime}BC}\) & none & \\ \((D_{2}^{t_{0}},D_{4}^{t_{1}},D_{6}),(D_{3}^{t_{1}},D_{4}^{t_{0}},D_{6})\) & \(|\psi^{+}\rangle_{A^{\prime}BC}\) & none & \\ \((D_{1}^{t_{0}},D_{3}^{t_{1}},D_{5}),(D_{1}^{t_{1}},D_{3}^{t_{0}},D_{5})\) & & & \\ \((D_{2}^{t_{0}},D_{4}^{t_{1}},D_{5}),(D_{2}^{t_{1}},D_{4}^{t_{0}},D_{5})\) & & & \(2|\alpha\beta|^{2}\) \\ \((D_{2}^{t_{0}},D_{3}^{t_{1}},D_{5}),(D_{1}^{t_{1}},D_{4}^{t_{0}},D_{5})\) & & & \\ \hline \((D_{1}^{t_{0}},D_{1}^{t_{1}},D_{5}),(D_{2}^{t_{0}},D_{2}^{t_{1}},D_{5})\) & & & \\ \((D_{2}^{t_{0}},D_{3}^{t_{1}},D_{5}),(D_{3}^{t_{0}},D_{4}^{t_{1}},D_{5})\) & & & \\ \((D_{1}^{t_{0}},D_{2}^{t_{1}},D_{5}),(D_{1}^{t_{1}},D_{2}^{t_{0}},D_{5})\) & & & \\ \((D_{2}^{t_{0}},D_{4}^{t_{1}},D_{5}),(D_{1}^{t_{1}},D_{2}^{t_{0}},D_{5})\) & \(|\psi^{-}\rangle_{A^{\prime}BC}\) & \(\sigma_{z}\) \\ \((D_{1}^{t_{0}},D_{4}^{t_{1}},D_{6}),(D_{1}^{t_{1}},D_{3}^{t_{0}},D_{6})\) & & & \\ \((D_{2}^{t_{0}},D_{4}^{t_{1}},D_{6}),(D_{2}^{t_{1}},D_{4}^{t_{0}},D_{6})\) & & & \\ \((D_{2}^{t_{0}},D_{3}^{t_{1}},D_{6}),(D_{2}^{t_{1}},D_{3}^{t_{0}},D_{6})\) & & & \\ \((D_{1}^{t_{0}},D_{4}^{t_{1}},D_{6}),(D_{t}^{t_{1}},D_{4}^{t_{0}},D_{6})\) & & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: The relations between the detection signatures and the classical feed-forward operations to complete the ECP for GHZ states.
linear optical elements or dark counts will degrade the fidelity of the schemes. Recently, the effect of noise on the measurement has been studied experimentally, showing a reduction in the measurement precision. One effective way to solve this problem is to use the quantum Zeno effect, which can improve the measurement accuracy of entangled probes [48].
In summary, we have presented ECPs for Bell states and GHZ states with unknown parameters. The schemes are constructed by solely using linear optics, including HWPs, PBSs, and single-photon detectors. Our protocols have several characteristics: First, the protocols can be exactly heralded by the detection signatures, and photon-number-resolving detection or post-selection principles are not required. Second, exact values of the parameters \(\alpha\) and \(\beta\) are not required. Third, the failed state has a good form for increasing the success probability of the protocols without resorting to cross-Kerr media. Fourth, linear optical implementations of the heralded protocols are feasible in experiment with current technology. These characteristics make our protocols more useful in long-distance quantum communication.
## Acknowledgements
This work is supported by the Fundamental Research Funds for the Central Universities under Grants FRF-TP-19-011A3.
|
2306.17564 | A Cost-aware Study of Depression Language on Social Media using Topic
and Affect Contextualization | Depression is a growing issue in society's mental health that affects all
areas of life and can even lead to suicide. Fortunately, prevention programs
can be effective in its treatment. In this context, this work proposes an
automatic system for detecting depression on social media based on machine
learning and natural language processing methods. This paper presents the
following contributions: (i) an ensemble learning system that combines several
types of text representations for depression detection, including recent
advances in the field; (ii) a contextualization schema through topic and
affective information; (iii) an analysis of models' energy consumption,
establishing a trade-off between classification performance and overall
computational costs. To assess the proposed models' effectiveness, a thorough
evaluation is performed in two datasets that model depressive text. Experiments
indicate that the proposed contextualization strategies can improve the
classification and that approaches that use Transformers can improve the
overall F-score by 2% while augmenting the energy cost a hundred times.
Finally, this work paves the way for future energy-wise systems by considering
both the performance classification and the energy consumption. | Andrea Laguna, Oscar Araque | 2023-06-30T11:34:48Z | http://arxiv.org/abs/2306.17564v1 | [
###### Abstract
Depression is a growing issue in society's mental health that affects all areas of life and can even lead to suicide. Fortunately, prevention programs can be effective in its treatment. In this context, this work proposes an automatic system for detecting depression on social media based on machine learning and natural language processing methods. This paper presents the following contributions: (i) an ensemble learning system that combines several types of text representations for depression detection, including recent advances in the field; (ii) a contextualization schema through topic and affective information; (iii) an analysis of models' energy consumption, establishing a trade-off between classification performance and overall computational costs. To assess the proposed models' effectiveness, a thorough evaluation is performed in two datasets that model depressive text. Experiments indicate that the proposed contextualization strategies can improve the classification and that approaches that use Transformers can improve the overall F-score by 2% while augmenting the energy cost a hundred times. Finally, this work paves the way for future energy-wise systems by considering both the performance classification and the energy consumption.
A Cost-aware Study of Depression Language on Social Media using Topic and Affect Contextualization

Andrea Laguna, Oscar Araque
## 1 Introduction
Depression is a common mental disorder that affects 3.8% of the global population and has a varying prevalence in different demographic groups. For example, it is more prevalent in adults older than 60 years and in women than men. This disease can affect all areas of life and, in severe cases, lead to suicide. However, there are effective treatments for depression, and prevention programs against depression have shown effectiveness [50]. Depression is said to be underdiagnosed and under-treated, but early detection and treatment can improve its outcome [17].
Where mental health used to be taboo, now lies a much greater acceptance of the public discussion of mental disorders, in part due to the rise of social media. This has motivated a boom in discourse on online platforms regarding the topic, and a great abundance of new data that is generated daily. In this context, we strive to utilize this data to generate knowledge that could help improve the treatment given to people that suffer from depression.
The objective of this work is to use Natural Language Processing (NLP) and Machine Learning (ML) techniques to analyse language associated with depression and generate automatic systems capable of classifying the depression shown in texts. To do so, this work addresses text captured from popular social media platforms. The aim is to include different contextual information and study what knowledge can be extracted through it. We study systems in both Spanish and English. In this way, this study intends to gain insights into the challenge of automatically assessing depression through natural language. This would allow future work on using such techniques to perform automatic screening on social media, weighting the prevalence of depression in the population.
The objectives of this paper are summarized in the following research questions:
**RQ1: Can contextualization through emotion and topic information be used for depression detection?**
Depression is a mood disorder that affects a person's emotions [33]. For this reason, it is expected that contextual information addressing emotion analysis [48] will provide useful insight when applied to depression detection. Also, due to the nature of social media, where a specific type of discourse takes place, the topics about which users with
and without depression post about could be indicative of the presence of the disease. Thus, we consider the contextual information provided both by emotion and topic analysis as potentially being relevant for the detection of depression.
**RQ2: Can we assess model alternatives by means of a trade-off between detection performance and energy efficiency?**
In recent years, great advances have been made in the realms of both NLP and Machine Learning. These recent, more complex algorithms, often based on neural networks, typically provide better classification performance [28]. However, they tend to have a greater computational cost compared to simpler algorithms. This work studies whether some applications are capable of achieving a trade-off between detection performance and energy efficiency, while still being relevant to the task at hand. In this way, we aim to explore whether it is possible to reduce energy costs for NLP and ML applications, and thus reduce the carbon footprint generated by these technologies.
The paper is organized as follows: After this introduction (Section 1), related work covering the topics of word embeddings, emotion lexicons and Transformer models will be presented in Section 2. Next, the models used in this application will be described in Section 3. In Section 4, the datasets and resources used are described, as well as the experiment design; and the results are presented. In Section 5, conclusions are presented, as well as limitations and future work. Finally, in Section A, the reader will find appended material.
## 2 Related work
In this section, several related works are presented. However, to the best of our knowledge, there are no works that jointly address depressive language, contextualization, and computational costs.
### Detection of depression on social media using NLP methods
A study by Park et al. [35] found that the language used in social media could provide valuable information regarding the mental life of the person who wrote it, finding that their method could be used to complement traditional methods. This is based on the notion that language contains relevant information about the psychological status and individual personality. Such a method consisted of building a prediction model for personality traits based on texts extracted from the social media platform Facebook. They found the predictions from this model to be coherent with self-reported personality traits and informant reports. In addition, Choudhury et al. [8] found that social media contains useful information for the detection of depression by studying the publications on the social media platform Twitter of users who reported having a depression diagnosis. Also, De Choudhury et al. [12] showed that it was possible to discern mental health discourse on social media from discourse pertaining to mental illness, specifically, containing suicide risk and ideation. Finally, Coppersmith et al. [11] found that simple natural language processing methods were capable of providing information about mental health and mental disorders. In particular depression, bipolar disorder, post-traumatic stress disorder, and seasonal affective disorder were analysed based on texts extracted from the social media platform Twitter.
In light of these works, it can be seen that many efforts to use social media platforms as tools to detect depression and other mental illnesses have been made. In a parallel effort, we strive to automatically assess depression in natural language through the exploitation and combination of different representations and knowledge sources.
### Language Representation
When processing natural language one of the main challenges is the representation of texts fed to learning models. Given the nature of such models, it is necessary to transform texts into numerical representations. To do so, there are several different approaches to perform this mapping [23]. From the simple to the more complex, these vectorizers tackle the task in different ways. In this paper, we explore the use of two approaches: word embeddings and Transformers.
#### 2.2.1 Word embeddings
As mentioned, in order to be able to analyse texts with ML algorithms, these must be converted into vectors of numbers. Word embeddings, unlike previous techniques, allow the text to be represented as a continuous vector, as opposed to a sparse vector [19].
Among other characteristics, these types of word vector representations are capable of capturing both semantic and syntactic regularities [26]. These regularities are captured by the offsets between vectors: similar words tend to have similar vectors. So much so that a particular relationship between words will have an associated offset. For example,
the offset between the vector representations of the plural and singular forms of a noun (apples/apple) is close to that of an unrelated noun (cars/car). Thus, algebraic operations performed on the word vectors reveal meaning. For example, subtracting the vector representation for "Man" from "King", and adding the vector for "Woman", will yield a vector most closely related to the vector for "Queen" [27]. There are many implementations of word embedding models [25].
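This analogy arithmetic can be reproduced with off-the-shelf tooling; the sketch below uses gensim with a publicly available GloVe model (the specific model name is our illustrative choice and not necessarily the embeddings used by the works discussed in this section):

```python
# Sketch: analogy arithmetic on pre-trained word embeddings with gensim.
# "glove-wiki-gigaword-100" is an illustrative choice; any static word-embedding
# model loaded as KeyedVectors exposes the same interface.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# king - man + woman ~ queen
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# e.g. [('queen', 0.77...)]

# Related word pairs lie close together in the embedding space.
print(vectors.similarity("apple", "apples"))  # high similarity
print(vectors.similarity("apple", "car"))     # lower similarity
```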
Word embedding models have been previously applied to the detection of mental illness, specifically anorexia and depression, by Trotzek et al. [46]. Similarly to our own approach, pre-trained word embedding models were used in several of their implementations that take word embeddings as input. In particular, GloVe embeddings were used, as well as a custom-trained fastText embedding model. Similarly, Perez et al. [39] also used word embeddings to tackle the problem of depression in texts extracted from the social media platform Reddit. In their paper, however, rather than performing a classification, the severity of depression symptoms was estimated.
#### 2.2.2 Transformers and topic modelling
The Transformer architecture [47] uses an encoder-decoder network based only on layers with multi-headed self-attention mechanisms. This architecture was originally shown to be superior to others for translation tasks, and it was also proven that it can be successfully generalized to other tasks. Transformer models are trained on large datasets (generating the so-called pre-trained models) and can later be fine-tuned for specific tasks, resulting in improved performance in many tasks, including language modelling and sentiment analysis. The attention mechanism allows for a greater contextual understanding, thus allowing for better predictions [44]. The aim of attention mechanisms is to find long-term dependencies in phrases [15].
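As an illustration of how such pre-trained models are consumed in practice, the sketch below loads a default sentiment-analysis pipeline from the Hugging Face transformers library; the default checkpoint is a generic sentiment model used here only as an example, whereas a depression-detection system would normally fine-tune a task-specific model:

```python
# Sketch: applying a pre-trained Transformer through the Hugging Face pipeline API.
# The default sentiment checkpoint is a library default used purely for illustration.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("I feel like nothing I do matters anymore."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]
```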
Suri et al. [45] aimed to detect depressive tendencies on the social media platform Twitter. To do so, they used a Transformer model to process the textual features of their texts. This information was then combined with information regarding the online behaviour and interaction of users. This architecture improved the results of their baseline by 12%.
For the extraction of topics, we use the **BERTopic** model [16]. This model was proposed to uncover themes and narratives in texts in an unsupervised manner. It uses a pre-trained transformer-based language model to generate embeddings, given their capability to represent texts in the vector space in a way that makes it possible to obtain their semantic similarity. After the documents have been transformed, the dimensionality of the embeddings is reduced to optimise the clustering that is done afterwards. Finally, using a custom class-based variation of TF-IDF, topic representations are extracted from the clusters of documents. In our paper, this model is used to extract contextual topic information from the texts. This model has previously been applied in the area of mental health by Baird et al. [5], where BERTopic was applied to texts extracted from Twitter about telehealth for either mental health or substance abuse.
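A minimal usage sketch of BERTopic is shown below; the corpus and parameters are our own illustrative choices and do not reproduce the configuration of the cited studies:

```python
# Sketch: unsupervised topic extraction with BERTopic on a stand-in corpus.
from sklearn.datasets import fetch_20newsgroups
from bertopic import BERTopic

# Any collection of raw documents works; 20 newsgroups is used only as a placeholder.
docs = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes")).data[:1000]

topic_model = BERTopic(language="english", min_topic_size=10)
topics, probs = topic_model.fit_transform(docs)

print(topic_model.get_topic_info().head())  # one row per discovered topic
print(topic_model.get_topic(0))             # top words of topic 0 with their c-TF-IDF scores
```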
In addition, BERTopic has also been applied by Alhaj et al. [1] to detect cognitive distortions in the Arabic content of Twitter. This implementation used the previously mentioned word embeddings (implemented by word2vec [25]) together with the topic distribution generated by BERTopic to generate a contextual topic embedding (CTE). This CTE aimed to keep both the semantic information and the contextual topic representation by concatenating the vectors produced by the two models. The classification performance of the CTE was benchmarked against a plain word2vec embedding and was found to improve the results.
Another work, by Sarkar et al. [42], aimed to predict depression and anxiety on the social media platform Reddit with a multi-task learning approach. A combination of word embedding features, using a pre-trained BERT model, and topic modelling features (LDA and BERTopic) was used. This work concluded that such a combination of features can be leveraged for domain-specific tasks.
### Emotion detection and emotion lexicons
Emotion detection and sentiment analysis form a field of study that covers the detection and recognition of emotions from text [51]. In this field, we stress two types of analyses: if positive, negative, or neutral feelings are studied, we speak of sentiment analysis, which aims to determine polarity; emotion analysis, on the other hand, detects specific types of feelings such as happiness, sadness, etc. Emotion models may be dimensional (that is, representing emotion based on its valence, arousal or power) or categorical (where emotions are discrete, such as anger or happiness) [18, 32].
Word-affect association lexicons, also known as emotion lexicons, are a compilation of words associated with the affect that they convey, which includes emotions, sentiments and similar affect concepts. These emotion lexicons may be generated through manual annotation or automatically [2]. Emotion lexicons have many applications, among which the study of health disorders is most relevant for the case at hand. Lexicon-based emotion analyses have important
advantages, such as being interpretable and having a low carbon footprint, which makes it a popular technique for real-world applications [30]. Previously, Li et al. [22] have generated and used a domain-specific emotion lexicon for the detection of depression, obtaining better results than with general-purpose emotion lexicons.
Another study by Choudhury et al. [7] used texts extracted from the social media platform Facebook in order to implement prediction systems capable of detecting postpartum depression based on the users' emotional and linguistic expression, as well as their activity and interactions. Contrary to the implementation of our paper, though, the only emotions considered were "positive affect" and "negative affect", which corresponds to sentiment analysis. This work found that mothers from the cohort suffering from postpartum depression experienced higher levels of negative affect, and lower levels of positive affect, compared with their non-depressed counterparts, although with weaker statistical significance than previous work on the social media platform Twitter. The authors proposed that the nature of the chosen social media platform could have affected the emotional expression of users. One of these works was done by Choudhury et al. [8], where the detection of depression on the social media platform Twitter was studied; again, positive and negative affect were considered, as well as activation and dominance as determined by the ANEW lexicon [34].
## 3 Generating Contextualized Representations for Depression Detection
In the following paragraphs, an overview of the models used in this implementation will be given. The first model that will be described is a feature extractor named **SIMilarity-based sentiment projectiON (SIMON)** [4], which uses embedding-based textual representations to generate a fixed-length vector, improving on regular word-embedding applications. The improvement consists of the following: using a domain-specific lexicon, the model represents the words of the input text as a projection onto said lexicon. This projection is computed through the semantic similarity of words as given by the word embedding model [4].
In the original application of this model, an opinion lexicon was used to generate the domain-specific lexicon. However, in this application, the lexicon was generated using the frequency with which words appeared in the texts used for training the model, which corresponded to the training split of the datasets, as was done in the method _FreqSelect_[3]. In this way, the information encoded in the generated representations contains implicit signals with regard to the objective classes. As such, the SIMON method extracts distributed representations exploiting both a word embedding model and a domain lexicon. In this application, two embedding models were explored, as will be further described in the following sections.
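The projection idea can be sketched as follows; this is only an outline of the mechanism, not the reference implementation of SIMON [4], and the helper `project_text` and the names `lexicon` and `embeddings` are illustrative stand-ins for the frequency-selected lexicon and the embedding model described above.

```python
# Outline of a similarity-based projection of a text onto a domain lexicon,
# in the spirit of SIMON; not the reference implementation of [4].
import numpy as np

def project_text(tokens, lexicon, embeddings):
    """Return one value per lexicon word: the maximum cosine similarity
    between that word and any token of the input text."""
    def unit(word):
        vec = np.asarray(embeddings[word], dtype=float)
        return vec / np.linalg.norm(vec)

    token_vecs = [unit(t) for t in tokens if t in embeddings]
    projection = np.zeros(len(lexicon))
    for j, lex_word in enumerate(lexicon):
        if lex_word in embeddings and token_vecs:
            lex_vec = unit(lex_word)
            projection[j] = max(float(lex_vec @ v) for v in token_vecs)
    return projection  # fixed-length vector of size len(lexicon)
```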
**Transformer Models** [49] were also used in this work to obtain the vector representations of these texts. For the DepSign dataset, the multilingual xlm-roberta-base [10] was used, whilst a Spanish RoBERTa model [14] was used for the Spanish dataset 1. These Transformer models were used without fine-tuning, which reduces the overall complexity, in order to generate a vector that could be used as features for a classifier.
Footnote 1: [https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne)
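A possible sketch of this frozen feature extraction is given below; the mean pooling over the last hidden state is an assumption on our side, and the exact pooling strategy is not essential for the discussion.

```python
# Sketch of frozen-Transformer feature extraction (no fine-tuning). The mean
# pooling over the last hidden state is an assumption, not necessarily the
# exact pooling used in the pipeline.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "xlm-roberta-base"  # "PlanTL-GOB-ES/roberta-large-bne" for Spanish
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)           # (batch, tokens, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()  # (batch, dim)
```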
Information pertaining to emotions was extracted using **EmoFeat** [3]. EmoFeat uses an emotion lexicon for each language to perform a statistical summary of the annotated values for the words contained in a text, thus reducing the generated matrix into a vector. In this particular application, both the maximum and the mean were calculated for each emotion in every text. As such, the vectors generated for the texts in English had a length of 16 (as there were 8 annotated emotions), while for the Spanish texts the final length was 12. The emotion lexicons used are further described in the next section.
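This statistical summary can be sketched as follows, assuming the lexicon is available as a mapping from words to per-emotion scores (the exact lexicon format is an assumption).

```python
# Sketch of an EmoFeat-style statistical summary: the mean and the maximum of
# the lexicon scores found in a text, for every annotated emotion.
import numpy as np

def emotion_features(tokens, lexicon, emotions):
    """lexicon: dict word -> dict emotion -> score (assumed format)."""
    feats = []
    for emotion in emotions:
        scores = [lexicon[t][emotion] for t in tokens if t in lexicon]
        feats.append(float(np.mean(scores)) if scores else 0.0)  # <emotion>_mean
        feats.append(float(np.max(scores)) if scores else 0.0)   # <emotion>_max
    return np.array(feats)  # length 16 for 8 emotions, 12 for 6 emotions
```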
In order to extract topic information to include as additional characteristics for classification, **BERTopic** was used. This is a topic modelling and visualization library based on Hugging Face Transformers and c-TF-IDF [16]. It was used to extract topics from both datasets. In the case of the DepSign dataset, 291 topics were originally extracted by BERTopic. The number of topics was then manually reduced to 128 and to 64 from the original topic model, and all three of these cases were considered in the evaluation to see how the size of the topic information vector could affect the classification. In the case of the Spanish dataset, only 21 topics were detected, so no reduction was carried out. Fine-tuning of the model was carried out on the training split of the datasets, and the refined model was used to generate the additional topic characteristic information for all texts in the datasets. For the vectorization, unigrams, bigrams and trigrams were considered. Figure 1 shows a simplified structure of the implementation, featuring the models just described.
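The topic-feature step just described can be sketched as follows; the variable names (`train_texts`, `all_texts`) and hyperparameters are illustrative, not the exact configuration used.

```python
# Sketch of the topic-feature step with BERTopic; names and settings illustrative.
from bertopic import BERTopic
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1, 3))            # uni-, bi- and trigrams
topic_model = BERTopic(vectorizer_model=vectorizer, calculate_probabilities=True)

topics, probs = topic_model.fit_transform(train_texts)      # fit on the training split
topic_model.reduce_topics(train_texts, nr_topics=128)       # e.g. reduce to 128 topics

# Per-document topic probabilities, used as additional characteristics.
_, topic_features = topic_model.transform(all_texts)
```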
## 4 Evaluation
The proposed models for depression detection have been evaluated using the datasets described in Section 4.1, following the evaluation design that we detail in Section 4.2. Then, Section 4.3 thoroughly describes the obtained results, which aim at gaining insight into the models' reasoning, their performance, and their energy consumption.
### 4.1 Datasets and resources
Two datasets consisting of texts from social media platforms have been selected for this paper. The first dataset is an English dataset made available for the competition "Detecting Signs of Depression from Social Media Text-LT-EDI@ACL 2022" [13], which was created by Kayalvizhi and Thenmozhi [20], S et al. [41]. In this dataset, the texts were extracted from thematically relevant subreddits of the social media platform Reddit and manually annotated by two domain experts with the following labels: "not depression", "moderate depression" and "severe depression". These classes are unbalanced (see Figure 2), and the total number of texts used was 13,387. From now on, we refer to this dataset as "DepSign".
The second dataset is a Spanish dataset comprised in its entirety of manually selected depressive texts from the social media platform Twitter, provided by Leis et al. [21]. In order to have examples of both depressive and non-depressive texts, the Spanish dataset was mixed with random tweets in Spanish retrieved from Perez et al. [38]. This dataset was made to be balanced (see Figure 2), and the total number of instances is 2,000. We denote this dataset as the "Spanish dataset". A summary of the dataset characteristics can be seen in Table 1.
For the application of SIMON to the DepSign dataset, the original word2vec [25] embedding model was used. In contrast, we used the GloVe embeddings from SBWC [6, 37] for the Spanish dataset, thus allowing us to represent the Spanish text using embedding similarities. To represent text in other languages, an embedding model trained for those languages would be needed.
In addition to datasets, emotion lexicons were used, both in Spanish and in English. The Spanish lexicon was retrieved from Rangel et al. [40] and Sidorov et al. [43]. This lexicon has a length of 1,909 words and covers 6 different emotions: anger, disgust, fear, joy, sadness, and surprise. For English, we use the lexicon from Mohammad [29] and Mohammad and Kiritchenko [31], which has a length of 16,861 tokens and covers 8 emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust.
Figure 1: General architecture used in this work.
### 4.2 Design
For both datasets, a train-test split approach was followed, with 33% of the dataset being used for testing. The metric on which the evaluation is assessed is the macro-averaged F1-score, although other metrics, such as the classification report and the F1-score for each class, were also computed and will be used for a finer analysis of the results. Different feature combinations were considered, and their results were compared for both datasets. The feature combinations evaluated are indicated in Table 2. The objective of assessing these combinations was to consider the effect these features could have on the classification metrics, as well as on other factors, such as the computational costs, which will be described in the results subsection (Section 4.3).
Such experimental design raises the question as to why these feature combinations were considered. Firstly, both SIMON and Transformers, however differently, extract semantic information from the texts. In addition to this, due to the nature of depression, it made sense to use information on the emotions shown in these texts. Finally, the topics expressed in these texts were considered useful in detecting certain topics of conversation which correlate with depressive symptoms and, as such, could serve as an indication of the presence of the disease, even for a human observer. The reader will find further information on these models in the previous section (Sect. 3).
After the generation of the representations summarised in Table 2, these were fed to different classifiers, and their results were compared. Each of these scenarios was evaluated with the following **classifiers**: Forest of randomised trees, KNeighbors Classifier, Linear SVM and Polynomial SVM, as implemented in scikit-learn [36]. The choice to include several classifiers is based on two main reasons: firstly, to reduce any involuntary bias that may be introduced by a single classifier; secondly, to produce a more robust evaluation with more evidence to support it.
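The evaluation loop can be sketched as follows, where `X` stands for any of the feature combinations of Table 2 and `y` for the labels; classifier hyperparameters are left at their scikit-learn defaults for illustration.

```python
# Sketch of the evaluation: four scikit-learn classifiers scored with the
# macro-averaged F1 on a held-out 33% test split (X, y assumed given).
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

classifiers = {
    "Forest of randomized trees": RandomForestClassifier(),
    "KNeighbors Classifier": KNeighborsClassifier(),
    "Linear SVM": SVC(kernel="linear"),
    "Polynomial SVM": SVC(kernel="poly"),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, f1_score(y_test, clf.predict(X_test), average="macro"))
```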
In addition to all the elements mentioned before, we define a **baseline** as a means for comparison. As such, the baseline consists of a straightforward unigram representation. This benchmark was also evaluated using the four classifiers previously mentioned.
As mentioned before, the computational costs of the generation of each of these features were likewise calculated. This was done with the use of **CodeCarbon**[9]. This tool tracks several different metrics for computational costs. In this work, we use the computation duration and the total energy consumed. The cost to fine-tune the BERTopic model
\begin{table}
\begin{tabular}{l c c} \hline \hline & **DepSign dataset** & **Spanish dataset** \\ \hline
**Number of classes** & 3 & 2 \\
**Number of texts** & 13387 & 2000 \\
**Class distribution** & Unbalanced & Balanced \\ \hline \hline \end{tabular}
\end{table}
Table 1: Characteristics of the datasets
Figure 2: Class distribution of the datasets
is considered, but not the costs to train the embedding models or Transformers, which are unknown. Similarly, we do not consider the associated cost to generate the emotion lexicons used, as we consider it to be outside the scope of this work.
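A minimal sketch of how such measurements can be taken with CodeCarbon is shown below; the project name and the helper `extract_features` are illustrative, and the detailed energy figures are read from the tool's output file.

```python
# Sketch of measuring time and energy for one feature-extraction step with CodeCarbon.
import time
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="depression-features")  # name illustrative
tracker.start()
start = time.time()
features = extract_features(texts)   # e.g. SIMON, EmoFeat or the Transformer extractor
duration = time.time() - start
tracker.stop()                       # energy/emissions details are written to emissions.csv

print("normalized time (s):", duration / len(texts))
```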
Finally, in order to obtain a visualisation of the output of the classifiers, the **SHAP** method [24] was used. This allows us to assess the importance given to different features for a particular classifier, which effectively provides insight into the models' reasoning.
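A sketch of the SHAP computation for the forest classifier follows; `clf`, `X_test` and `feature_names` refer to a previously fitted classifier, its test features and the SIMON / emotion / topic labels, and, depending on the SHAP version, the values of a multi-class model are returned either as a list indexed by class or as a three-dimensional array.

```python
# Sketch of the SHAP analysis for a fitted random forest on the test features.
import shap

explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)   # list per class (or 3-D array)

# Beeswarm summary for one class of interest, e.g. "severe depression".
shap.summary_plot(shap_values[2], X_test, feature_names=feature_names)
```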
### 4.3 Results
Several things are considered to analyse the results obtained from this set of experiments. Firstly, the value of the macro-averaged F-score is compared for both datasets, as well as all feature combinations. Then, computational costs are considered and weighed in conjunction with F-score values. Finally, the relevance of individual features is analysed.
#### 4.3.1 Results of the DepSign dataset
As can be seen in Table 3, the number of topics used for the classification affects its outcome as measured by the **macro-averaged F-score**. In order to simplify the interpretation of these results, Figure 3 shows the effect of the number of topics on this metric for different combinations of features, for all the classifiers used. The vertical axis shows
\begin{table}
\begin{tabular}{l l l} \hline \hline Feature extractor & Emotions & Topics \\ \hline SIMON & No & No \\ SIMON & No & Yes \\ SIMON & Yes & No \\ SIMON & Yes & Yes \\ Transformers & No & No \\ Transformers & No & Yes \\ Transformers & Yes & No \\ Transformers & Yes & Yes \\ \hline \hline \end{tabular}
\end{table}
Table 2: Combination of features used on the datasets
the macro-averaged F-score, while the horizontal axis shows the number of topics used for that specific classification. Each of the subgraphs shows a different scenario in regard to the features used, as specified in their respective titles. It can be observed that adding topic information can either improve or worsen the classification metric, depending on the classifier used or the number of topics considered.
Regarding the best outcomes according to the macro-averaged F-score, Table 3 shows that the best results are obtained by the use of Transformers, together with information regarding emotions but without topics, and the forest of randomized trees classifier, with a value of **73.35%**. However, it is worth mentioning that using this same classifier with SIMON and emotion representations achieves a comparable performance, with an F-score of **71.11%**.
It must be noted that in the work that originally presented the DepSign dataset, an initial analysis was performed. In that analysis, the highest metric was a weighted-average F1-score of 64.7% [20, 41]. In order to compare with such a number, we have also computed the weighted-average F1-score which, for our best model, is 79.0%.
As previously described (see Sect. 4.2), we thoroughly measure the energy costs of the proposed models. The normalized results, per processed instance, can be found in Table 4. As seen, the usage of Transformers instead of SIMON incurs a cost roughly two orders of magnitude higher in both time and energy. Because the potential applications of the detection of depression on social media could imply the analysis of large amounts of texts, it is worth considering whether this large increase in energy costs is justifiable for a 2% improvement in the F-score.
In order to analyze the **individual feature importance**, Figure 4 was generated using SHAP, as described. It represents the classification criteria for the class "severe depression", using the classifier forest of randomized trees.
Figure 3: Effect of the number of topics used on the macro-averaged F1-score of the DepSign dataset.
\begin{table}
\begin{tabular}{l l l l l} \hline
**Model** & **Normalized training time (s)** & **Normalized prediction time (s)** & **Normalized training energy (MW)** & **Normalized prediction energy (MW)** \\ \hline Transformers & 5.75E-02 & 5.83E-02 & 2.28E-06 & 2.32E-06 \\ SIMON & 3.98E-04 & 3.98E-04 & 1.49E-08 & 1.48E-08 \\ Emotions & **1.27E-04** & **1.37E-04** & **3.91E-09** & **4.21E-09** \\ All topics & 1.50E-02 & 1.59E-02 & 3.35E-07 & 3.40E-07 \\
128 topics & 1.89E-03 & 1.09E-02 & 4.05E-08 & 2.35E-07 \\
64 topics & 1.61E-03 & 1.13E-02 & 3.46E-08 & 2.40E-07 \\ \hline \end{tabular}
\end{table}
Table 4: Computational costs in terms of energy and time consumption for the DepSign dataset. These costs have been normalized, meaning that the values displayed on the table are the time consumed or energy used per text, during the process of either training or prediction, for the different models.
Figure 4: SHAP representation of the Random Forest algorithm application on the DepSign dataset, for the class “severe depression”, considering the features SIMON, emotions and topics. The black tags represent semantic information, while the blue tags represent topic information, and the green ones, emotions.
The class "severe depression" was chosen, as the diagram produced was deemed to be the most informative. Still, diagrams for the other two classes were likewise generated (see Figure 7 and Figure 8), which can be found appended in Section A. It must be noted that this feature combination is not the best in terms of classification metrics. However, the combination of features it employs is the most informative, which is why it will be used for the following analysis. The classifier forest of random trees was chosen for this analysis, as has shown to be the option with higher performance scores. For analyzing the feature importances of the text representation module we select SIMON since computing such analyses using larger Transformers models incurs in much higher computational costs. The interpretation of said analyses is shown in Figure 4.
In this figure, the horizontal axis is the SHAP value, which encodes the impact on the model output. Thus, positive values on the X-axis indicate that a particular feature pushes the classification of the text towards "severe depression", while negative values indicate that the text does not belong to that class. The colours of the graph, as can be seen on the right border of the image, describe the feature value, with red being high and blue being low, which indicates the intensity of each feature. Finally, the tags that appear on the left column of the image represent a selection of the most relevant features. In this way, the tags in blue indicate that the feature belongs to topic information, and the ones in green that it represents emotion information. The semantic tokens are represented in black. The features are listed from most relevant to least relevant for the model.
Taking this type of representation into account, we consider some of the observations made. For instance, the blue tag _[number mg, side effects, sertraline, adderall]_, which represents a topic regarding medications (sertraline is an antidepressant, while Adderall is used for the treatment of ADHD and narcolepsy), has a very strong effect in favour of classifying a text as "severe depression". It is interesting that topic features are so relevant for the classification task since, as seen in Figure 3, the addition of topics may hinder the overall performance of the model. Thus, even though they may reduce the macro-averaged F-score, the classifier does use the information carried by these characteristics extensively. This indicates that it is not only necessary to add new information sources, but also to model them in a way that the subsequent classifier can efficiently exploit them.
By observing Figure 4 it is possible to see which topics indicate the presence of severe depression. These topics include themes such as medications, as well as suicide, fear, anxiety, feeling one's friends are not genuine, and psychiatrists, among others. These themes are not surprising to find, since they correlate well with signs of depression [33], but having concrete themes that people talk about could provide warning signals useful for the online monitoring of depression in social media.
The importance of emotions must also be noted, as they are among the most relevant features. The presence of positive emotions, such as anticipation, could indicate ironic language or noisy instances in the annotation. However, it could also be due to other phenomena, such as mood disorders that can be caused by depression [33].
To offer some insight into the evaluation, a manual analysis of the texts in the DepSign dataset [20, 41] was performed. The presence of sadness is apparent throughout the whole dataset, but the presence of joy is of more interest for this analysis. Different examples were found: an instance of a user describing having a good day despite their illness, or a person describing their improvement after beginning therapy. Other people also described improving with their medication, although sometimes it stopped working after some time. An example of a text whose words could have been understood by EmoFeat as conveying "joy" but which, in reality, expresses the opposite, is the following:
_"Life is overrated imo. : My life is not precious and important. When people say life is precious and important, I just did not understand that and become baffled."._
Another case similar to the previous one that also seems to convey some sense of anticipation is:
_"Fantasizing about suicide is the ONLY nice thought I have now. Happy memories, people I've loved, achievements I've had, books I've read and loved, I know they "must have happened", but it's like I can't access that stuff."_
In addition, many cases of drug abuse were observed, which would be an example of the aforementioned high-risk activities.
After this keyword-based manual analysis of the texts, a more thorough computational analysis was made. To do so, the texts were ranked based on their value as given by the different emotion characteristics. Then, the words contained in these texts were compared to the words present in the emotion lexicon, in order to see which words had triggered the
emotion. As such, a few examples will be presented, with the trigger words in bold. For instance, for the sadness_max characteristic, the following text was recovered. It must be noted that four of the trigger words can be seen as semantic information on the SHAP graph.
_"Anyone **feel** their reason/rationality **slipping**?: This is hard to describe, but I feel like the **depression** is overall getting worse because I'm less **able** to distinguish between "my voice" and "the **depression** voice." It wasn't long ago that I would get negative thoughts (my **friends** tolerate me, I'll never reach this goal, etc) and be able to take a second and go ok, this isn't me talking, it's the **depression**. Like I could point out an irrational belief to myself. Now I'm not sure if my thoughts are rational or not, like the "**depressed** me" and "me" have merged. I genuinely don't know if it's irrational to think that I'll never be in a relationship, thinking that some **svicides** can be considered rational vs the result of a sick brain, or thinking that my **depression** will only get worse. I thought I would always have reason (barring something like dementia), but even that is questionable at this point and it's scary as hell."_
The following is also an example where a seemingly positive emotion is detected. In this case, such emotion is joy_mean, with the trigger words being _happy, goofy, symptoms, life, developing_. Some of these words correspond to what one would expect of positive words, albeit used in a negative context. In contrast, others (_developing, symptoms_) are possibly examples of the previously mentioned noisy instances in the annotation.
_"Desperately need help but can never seem to get results : I'm at a complete loss and have no idea where to look. My whole **life** people have perceived me as the funny, **goofy** extrovert. To be fair it's a role I've given to myself, growing up I would always try to dismiss anything I found negative with a joke. Being the funny one allowed me to hide from reality and kept me **happy**. This worked fine throughout elementary school and made me a pretty **happy** kid, yet in middle **school** things started taking a turn for the worse. For as I can remember I've struggled with ADHD and anger issues. After middle school and throughout high school I started **developing symptoms** of anxiety, depression, insomnia, and borderline bipolarism."_
Similarly, the semantic information has high importance, with the words _depression_, _suicide_ and _depressed_ being the three most relevant features. This could indicate that depressed people tend to be quite direct when speaking about their condition online, and thus can indicate what type of online communities should be monitored if one wanted to apply these systems to detect depressed individuals.
Although the contextualization information is identified as relevant by the classifier, it has been observed that adding it does not always lead to performance improvements. This indicates that the proposed contextualization mechanism can be useful for depression detection, but it needs to be implemented carefully to avoid undesired effects such as overfitting.
#### 4.3.2 Results of the Spanish dataset
In the case of the Spanish dataset, the **macro-averaged F-score** results can be found in Table 5. The classifier that obtains the best score is Linear SVM using Transformers without emotion or topic information, with a macro-averaged F-score of **93.32%**. In contrast to the case of the DepSign dataset, the feature extraction done by the Transformer models provides a much higher macro-averaged F-score than the model using SIMON, having a difference of roughly 10 points. While the best score is not achieved using additional context, it can be seen that there are other cases where adding emotion and topic information can lead to performance improvements. This, again, suggests that these contextualizations can improve overall performance. It can be observed that for certain combinations of features and classifiers, the unigram baseline can offer a fairly strong performance.
The **computational costs** for this dataset can be found in Table 6. In this case, the normalized training time, training energy and prediction energy are three orders of magnitude larger for the Transformer models, while the normalized prediction time is roughly two orders of magnitude greater. It is possible that the amount of time and energy required does not scale linearly during the execution of these models; because this dataset has a much smaller size, the costs are less distributed among the samples. Again, this increase in costs must be weighed against the improvement of the classification capabilities of the method.
In order to summarize the information about costs and performance, Figure 5 displays the best F-score obtained for any given combination of characteristics against their cost (in mW). This graph summarizes this information for both datasets, displaying the qualities that have been previously discussed. Firstly, it is possible to see that the DepSign
dataset achieves a lower F-score than the Spanish dataset. In addition, it can be seen that, in both cases, the usage of Transformer models incurs a higher cost. In the case of the Spanish dataset, these costs are greater, possibly for the reasons already mentioned.
Another relevant aspect regarding energy consumption is that, similarly to what we observed in the DepSign dataset, the energy required to extract emotion features is lower than that of all other methods used. These results reflect the simplicity of the EmoFeat method, and indicate that adding it, at little extra energy cost, consistently improves the results obtained in the DepSign dataset. However, this is not always the case for the Spanish dataset, since adding emotion information can reduce the performance score. Since the Spanish emotion lexicon is much smaller than the English one, we hypothesize that this difference may be caused by its reduced coverage.
Figure 5: Comparison of the normalized cost in terms of energy consumed per text processed in milliwatts during the prediction phase, plotted against the best F-score obtained by that feature combination.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**Feature extraction** & **Emotions** & **Topics** & **Number of topics** & **Forest of randomized trees** & **KNeighbors Classifier** & **Linear SVM** & **Polynomial SVM** \\ \hline
**Unigram baseline** & NO & NO & 0 & 83.18 & 79.59 & 82.87 & 80.42 \\ \hline
SIMON & NO & NO & 0 & 82.27 & 77.11 & 80.75 & 80.57 \\
SIMON & NO & YES & All (21) & 83.30 & 79.08 & 82.42 & 82.57 \\
SIMON & YES & NO & 0 & 82.42 & 75.28 & 79.84 & 79.24 \\
SIMON & YES & YES & All (21) & 83.31 & 77.40 & 82.73 & 80.61 \\ \hline
Transformers & NO & NO & 0 & 90.14 & **89.67** & **93.32** & **83.57** \\
Transformers & NO & YES & All (21) & **90.60** & 88.29 & 90.76 & 83.33 \\
Transformers & YES & NO & 0 & 89.99 & 89.37 & 93.02 & **83.57** \\
Transformers & YES & YES & All (21) & **90.60** & 88.45 & 90.30 & 83.33 \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Macro-averaged F-score of the proposed models in the Spanish dataset.
Next, Figure 6 illustrates the SHAP analysis for **individual feature importance**. As before, topics are coloured in blue, emotions are coloured in green, and semantic information is in black. The forest of randomized
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Model** & **Normalized training time (s)** & **Normalized prediction time (s)** & **Normalized training energy (MW)** & **Normalized prediction energy (MW)** \\ \hline
**Transformers** & 1.02E-01 & 9.40E-02 & 3.99E-06 & 3.69E-06 \\
**SIMON** & 1.45E-04 & 1.40E-04 & 4.62E-09 & 4.79E-09 \\
**Emotions** & **4.69E-05** & **6.13E-05** & **1.05E-09** & **1.56E-09** \\
**All topics** & 2.17E-02 & 1.79E-02 & 4.57E-07 & 3.77E-07 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Computational costs for the Spanish dataset.
Figure 6: Translated SHAP representation of the Random Forest algorithm application on the **Spanish dataset**, considering the features SIMON, emotions, and topics. The black tags represent semantic information, the blue tags represent topic information, and the green ones, emotions.
trees algorithm was also used for this diagram. Because this dataset contains a binary classification task, the positive SHAP values can be interpreted as indicating depression, while the negative ones indicate its absence. When analysing this dataset, it can be seen that the emotions display a lower importance for classification, with only _"sadness_max"_ appearing in the tags displayed and indicating the presence of depression. Again, the reduced importance of emotions in the Spanish dataset compared to the DepSign dataset could be due to a decreased coverage of the emotion lexicon (see Sect. 4.1).
Regarding the topic information, the first two items on the graph (_{retweet, url}_, _{there is no, url hashtag, of, over}_) are topics that convey information about behaviours and language specific to this social media platform. The presence of these topics indicates that the tweet is likely not depressive, but their absence does not give information about the classification of the text. Still, it provides information on the usage of links to outside content, hashtags, and retweets that could be useful for initial filtering.
Both topic and semantic information relate to signs of depression, such as problems with sleep, tiredness, and hopelessness [33]. This suggests that said information is indicative of depression. While emotion contextual information is used to some degree to classify this dataset, the contextual information provided by topic information is much more relevant.
## 5 Conclusions
This work presents a machine learning method for automatically assessing depression in natural language. As a pivotal component of this method, we consider the contextualization through two relevant information sources: affect and topic analysis. It has been seen that adding such additional representations can enhance performance, although this depends on the specific learning model used. Furthermore, we consider the energy consumption of the proposed models, with a focus on finding a trade-off between classification performance and computational complexity, aiming at usability. We have thoroughly evaluated the proposed models using two datasets.
In light of the analysis of the used data, we draw several conclusions. Answering **RQ1**, as mentioned during the study of the results for both datasets, contextualization through emotion and topic information has proven to be informative in the detection of depression, being capable of generating valuable information beyond the current task. This information has been shown to relate well to signs that indicate depression but is further capable of providing concrete examples of behaviours that may be indicative of its presence. Specific procedures on how to include such contextual information need to be developed since the improvements are not always consistent across learners.
Addressing research question **RQ2**, we have identified a clear candidate application in our experiments in which the trade-off between computational costs and classification performance is appropriate. Thus, while it is undeniable that state-of-the-art techniques provide a unique opportunity for applications in many fields, it must be noted that, for others, such advanced techniques may not be necessary as other, pre-existing techniques, offer a better balance between computational cost and results for suitable scenarios.
Finally, it is worth mentioning that the interpretability of classifiers is of great value as the analyses performed in this paper depend on the interpretability of the different learners. This allows us to gain insights into the classification process and extract aggregated knowledge. As such, not only can this information be applied for further uses, but the functioning of the algorithm can be better assessed and understood. This is of great importance, specifically in any areas related to health, as the implementation of these methods will require the understanding and acceptance of clinicians.
There are some limitations to this work that must be noted. Firstly, the datasets used model "depression" as a single discrete category, but the condition is far more complex, including major depression and persistent depressive disorder, among others [33]. This simplification of the original problem may affect the generalization capabilities of the developed learning models. Finally, simple learning algorithms were applied to all models in order to preserve their interpretability and thoroughly study the effect of the computed features, avoiding the side effects of more complex methods. However, it is likely that more complex learning algorithms would have outperformed the ones chosen. Instead, this work addresses the study of depressive language, and thus we have not aimed to maximize the classification performance.
Despite these limitations, we consider that the models described in this paper offer useful insights. For future work, in order to surpass these limitations, the usage of more nuanced datasets that make a distinction between types of depression is proposed. This would require clinical diagnosis by trained professionals. This interdisciplinary approach of cooperation with mental health professionals could improve the outcomes of these experiments, as well as the quality of the knowledge extracted. In addition, in this paper, we have considered depression as a static label. However, this
disease develops over a period of time. For this reason, further analysis of users' posting history, that is, taking time variations into account, could be used to detect early signs of the disorder. Furthermore, additional testing combining the text representations provided by SIMON and Transformer models could be used to analyze whether the combination of both methods could provide additional information. |
2309.11790 | Randers metrics on two-spheres of revolution with simple cut locus | In the present paper, we study the Randers metric on two-spheres of
revolution in order to obtain new families of Finsler of Randers type metrics
with simple cut locus. We determine the geodesics behavior, conjugate and cut
loci of some families of Finsler metrics of Randers type whose navigation data
is not a Killing field and without sectional or flag curvature restrictions.
Several examples of Randers metrics whose cut locus is simple are shown. | Rattanasak Hama, Sorin V. Sabau | 2023-09-21T05:35:37Z | http://arxiv.org/abs/2309.11790v1 | # Randers metrics on two-spheres of revolution with simple cut locus 1
###### Abstract
In the present paper, we study the Randers metric on two-spheres of revolution in order to obtain new families of Finsler of Randers type metrics with simple cut locus. We determine the geodesics behavior, conjugate and cut loci of some families of Finsler metrics of Randers type whose navigation data is not a Killing field and without sectional or flag curvature restrictions. Several examples of Randers metrics whose cut locus is simple are shown.
## 1 Introduction
A two-sphere of revolution is a compact Riemannian surface \((M,h)\), which is homeomorphic to the sphere \(\mathbb{S}^{2}\subset\mathbb{R}^{3}\). If this manifold is endowed with a Randers metric \(F=\alpha+\beta\), or more generally, with an arbitrary positive defined Finsler metric \(F\), then \((M,F)\) is called a Randers or Finsler two-sphere of revolution, respectively.
One of the major problems in Differential Geometry (see [14], [15]) and Optimal Control (see [5]) is the study of geodesics, conjugate points and cut points of Riemannian or Finsler manifolds. We recall that a vector field \(J\) along a unit speed geodesic \(\gamma:[0,a]\to M\) is said to be a Jacobi field if it satisfies the well-known Jacobi equation (see for instance [3], Chapter 7 for details). A point \(p\) is said to be conjugate to \(q:=\gamma(0)\) along \(\gamma\) if there exists a non-zero Jacobi field \(J\) along \(\gamma\) which vanishes at \(p\) and \(q\). The set of conjugate points of \(q\) along all curves \(\gamma\) starting at \(q\) is called the conjugate locus of \(q\).
If \(\gamma:[0,l]\to M\) is a minimal geodesic on a such manifold, then its end point \(\gamma(l)\in M\) is called the cut point of the initial point \(q=\gamma(0)\in M\), in the sense that any extension of \(\gamma\) beyond \(\gamma(l)\) is not a minimizing geodesic from \(q\) anymore. The cut locus \(Cut(q)\) is defined as the set of cut points of \(q\), and on Riemannian or Finslerian surfaces, it has the structure of a local tree. Moreover, the cut points \(p\in Cut(q)\) of \(q\) are characterized by the property that the distance \(d(q,\cdot)\) from \(q\) is not smooth any more at \(p\) (see [12] for
details). The cut points \(p\) along a geodesic \(\gamma\) emanating from the point \(q=\gamma(0)\) can appear either before or at the first conjugate point of \(q\) along \(\gamma\), but not after that (see [3]).
To determine the precise structure of the cut locus on a Riemannian or Finsler manifold is not an easy task. The majority of known results concern Riemannian or Randers surfaces of revolution (see [15], [16] for the Riemannian, and [7], [8] for the Randers case).
A Randers metric \(F=\alpha+\beta\) is a special Finsler metric obtained by the deformation of a Riemannian metric \(\alpha\) by a one-form \(\beta\) whose Riemannian \(\alpha\)-length is less than one in order to assure that \(F\) is positively defined ([10]). These Finsler metrics are intuitive generalizations of the Riemannian ones having most of the geometrical objects relatively easy to compute (see [3]).
An equivalent characterization of Randers metrics is through the Zermelo's navigation problem. We recall that a Finsler metric \(F\) is characterized by its indicatrix \(\{(x,y)\in TM:F(x,y)=1\}\) (see [3]). In particular, a Randers metric indicatrix is obtained as the rigid translation of the unit sphere \(\{y\in T_{x}M:h(x,y)=1\}\) of a Riemannian metric \((M,h)\) by a vector field \(W\in\mathcal{X}(M)\) whose Riemannian length is less than one. The pair \((h,W)\) will be called the navigation data of the Randers metric \(F=\alpha+\beta\). Conversely, the Randers metric \(F=\alpha+\beta\) will be called the solution of Zermelo's navigation problem \((h,W)\). In the case when \(W\) is an \(h\)-Killing field, provided \(h\) is not flat, the geodesics, conjugate points and cut points of the Randers metric \(F=\alpha+\beta\) can be obtained by the translation of the corresponding geodesics, conjugate points and cut points, of the Riemannian metric \(h\) by the flow of \(W\), respectively (see [8], [11]). More generally, new Finsler metrics \(F\) can be obtained by the rigid translation of the indicatrix of a given Finsler metric \(F_{0}\) by a vector field \(W\), such that \(F_{0}(-W)<1\) (see [6], [13]). In this case, the pair \((F_{0},W)\) will be called the general navigation data of \(F\).
Another case when the Randers geodesics can be easily related to the Riemannian ones is when the deformation one-form \(\beta\) is closed. Indeed, the Randers metric \(F=\alpha+\beta\) is projectively equivalent to the underlying Riemannian metric \(\alpha\) if and only if \(d\beta=0\). In this case, the \(\alpha\)-geodesics, conjugate points and cut points coincide with the \(F\)-geodesics, cut points and conjugate points, respectively (see [3]).
We combine these two cases of Randers metrics in order to obtain new families of Finsler of Randers type with simple cut locus (see Section 2 for the definition). The originality of our paper lies in the followings:
1. We determine the geodesics behavior, conjugate and cut loci of some families of Finsler metrics of Randers type whose navigation data do not necessarily include a Killing field.
2. We show that the structure of the cut locus of these families can be determined without any sectional or flag curvature restrictions. These are generalizations of the results in [16] to the Randers case.
3. We construct a sequence of Randers metrics whose cut locus structure is simple.
4. We extend some classical results from the case of Randers metrics to \(\beta\)-changes of Finsler metrics and give new proofs to some known results.
We start with a Riemannian two-sphere of revolution \((M\simeq\mathbb{S}^{2},h)\) and vector fields \(V_{0},V,W\in\mathcal{X}(M)\). Then the following construction gives the Randers metrics \(F_{0},\ F_{1}\) and \(F_{2}\) as solutions of the Zermelo's navigation problems with data \((h,V_{0}),(F_{0},V)\) and \((F_{1},W)\), respectively,
which are positively defined provided \(\|V_{0}\|_{h}<1\), \(F_{0}(-V)<1\), and \(F_{1}(-W)<1\), respectively. If we impose the conditions that \(V_{0}\) and \(V\) are \(h\)- and \(F_{0}\)-Killing fields, respectively, and that \(d\beta_{2}=0\), then the geodesics, conjugate and cut loci of \(F_{2}\) can be determined.
Remarkably, a shortcut of this construction would be to simply impose \(\|V_{0}+V+W\|_{h}<1\), which guarantees that \(F_{2}\) is positively defined, and to require \(V_{0}+V+W\) to be \(h\)-Killing. In this case, \(F_{2}\) also has a simple cut locus, with the same structure as the cut locus of \(h\). Obviously, the cut loci of these metrics are slightly different as sets of points on \(M\).
This construction can be extended to a sequence of Randers metrics \(\{F_{i}=\alpha_{i}+\beta_{i}\}_{i=1\dots,n}\) whose cut loci are simple (see Remark 3.10).
Here is the structure of our paper.
In Section 2, we review the geometry of Riemannian two-spheres of revolution from [15] and [16].
In Section 3, we describe the geometry of some families of Randers metrics obtained as generalizations to the Finslerian case of the Riemannian metrics in [16]. We use the Hamiltonian formalism for giving and proving some basic results to be used later in the section. Lemma 3.1 is an important result that generalizes a well-known result ([9]) for Randers metrics to more general Finsler metrics obtained by \(\beta\)-changes. The relation with \(F\)-Killing fields are given in Lemma 3.2 and the basic properties of our family of Randers metrics are in Lemma 3.3. Some of these results are indirectly suggested in [6], but here we clarify all the basic aspects and prove them in our specific formalism. Lemma 3.4 gives the concrete expressions of \(\widetilde{\alpha}\) and \(\widetilde{\beta}\) in the families of our Randers metric, formulas that provide a better understanding of the positive definiteness of these metrics.
Lemma 3.6 gives the behavior of geodesics, conjugate and cut points of the \(\beta\)-change of a Randers metric generalizing the results in [11]. Lemma 3.7 gives the conditions for the one-form \(\beta\) to be closed in terms of the navigation data. Finally, we sum up the results in all these lemmas in Theorem 3.8, which is the main result of the present paper. In Remark 3.10, we show how an infinite sequence of such Randers metrics can be obtained.
In Section 4, we construct one example of the Randers metric on the two-sphere of revolution that satisfies the conditions in Theorem 3.8.
## 2 Two-spheres of revolution
Classically, surfaces of revolution are obtained by rotating a curve \((c)\) in the \(xz\) plane around the \(z\) axis. More precisely, if the profile curve \((c)\) is given parametrically
\[(c):\begin{cases}x=\varphi(u)\\ z=\psi(u)\end{cases}\ \,\ \varphi>0,\ u\in I\subset\mathbb{R}, \tag{2.1}\]
then, in the case \((\varphi^{\prime}(u))^{2}+(\psi^{\prime}(u))^{2}\neq 0\), for all \(u\in I\), the curve can be written explicitly \(x=f(z)\) or implicitly by \(\Phi(x,z)=0\), where \({}^{\prime}\) is the derivative with respect to \(u\).
In the case of the parametric representation (2.1) one obtains an \(\mathbb{R}^{3}\)-immersed surface of revolution \(\psi:\Omega\subset\mathbb{R}^{2}\to\mathbb{R}^{3}\), given by
\[\psi(u,v)=\left(\varphi(u)\cos v,\varphi(u)\sin v,\psi(u)\right),\ u\in I,\ v \in[0,2\pi). \tag{2.2}\]
**Remark 2.1**: The immersed surface of revolution (2.2) is called of _elliptic type_, while
\[\psi(u,v)=\left(\varphi(u)\cosh v,\varphi(u)\sinh v,\psi(u)\right)\]
is called a _hyperbolic type_. Since we are interested in compact surfaces of revolution, only the elliptic case will be considered hereafter, leaving the hyperbolic type for a future research.
Even though the representation (2.2) is quite intuitive, it has two major disadvantages:
* it leads to quite complicated formulas for the induced Riemannian metric, geodesic equations, Gauss curvature, etc.,
* it excludes the case of abstract surfaces of revolution which cannot be embedded in \(\mathbb{R}^{3}\).
The first disadvantage can be easily fixed by taking the curve \((c)\) to be unit speed parameterized in the Euclidean plane \(xz\), i.e.,
\[[\varphi^{\prime}(u)]^{2}+[\psi^{\prime}(u)]^{2}=1,\]
which leads to the warped Riemannian metric
\[ds^{2}=du^{2}+\varphi^{2}(u)dv^{2}.\]
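Indeed, a direct computation of the first fundamental form of the immersion (2.2) gives
\[\psi_{u}=(\varphi^{\prime}\cos v,\varphi^{\prime}\sin v,\psi^{\prime}),\qquad\psi_{v}=(-\varphi\sin v,\varphi\cos v,0),\]
hence
\[ds^{2}=\langle\psi_{u},\psi_{u}\rangle du^{2}+2\langle\psi_{u},\psi_{v}\rangle du\,dv+\langle\psi_{v},\psi_{v}\rangle dv^{2}=\left[(\varphi^{\prime})^{2}+(\psi^{\prime})^{2}\right]du^{2}+\varphi^{2}(u)dv^{2},\]
which reduces to the warped form above under the unit speed condition.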
This simplification suggests the following definition (which also fixes the second disadvantage).
**Definition 2.2**: ([15]) Let \((M,h)\) be a compact Riemannian surface homeomorphic to \(\mathbb{S}^{2}\). If \(M\) admits a _pole_\(p\in M\), and for any two points \(q_{1},q_{2}\in M\), such that \(d_{h}(p,q_{1})=d_{h}(p,q_{2})\), there exists an Riemannian isometry \(i:M\to M\) for which
\[i(q_{1})=q_{2},\ i(p)=p,\]
then \((M,h)\) is called a _two-sphere of revolution_. Here \(d_{h}\) is the distance function associated to the Riemannian metric \(h\).
**Remark 2.3**: One example of compact surface of revolution that cannot be embedded in \(\mathbb{R}^{3}\) is the real projective space \(\mathbb{R}P^{2}\). It is compact being homeomorphic to \(\mathbb{S}^{2}/_{\sim}\), where \(\mathbb{S}^{2}\) is the unit sphere in \(\mathbb{R}^{3}\) and \(\sim\) is the equivalence relation \(x\sim-x\), for all \(x\in\mathbb{S}^{2}\). It is a surface of revolution because it can be obtained by rotating the Mobius strip along its center line.
Finally, it cannot be embedded in \(\mathbb{R}^{3}\) because it is non-orientable. More generally, it is known that any embedding of a non-orientable surface in \(\mathbb{R}^{3}\) must create self-intersections and this is not allowed. Nevertheless, \(\mathbb{R}P^{2}\) can be immersed in \(\mathbb{R}^{3}\), and therefore can be locally embedded in \(\mathbb{R}^{3}\), but not globally (see [4] for properties of projective spaces).
Another example is the so-called Lorentz surface, obtained by rotating the hyperbola \(x^{2}-y^{2}=1\) around the \(x\)-axis. This surface is orientable but cannot be embedded in \(\mathbb{R}^{3}\) because it has a self-intersection at the origin.
This definition allows to introduce the geodesic polar coordinates \((r,\theta)\in(0,2a)\times[0,2\pi)\) around \(p\), such that the Riemannian metric is given as
\[h=dr^{2}+m^{2}(r)d\theta^{2}\]
on \(M\setminus\{p,q\}\), where \(q\) is the unique cut point of \(p\) and
\[m(r)=\sqrt{h\left(\frac{\partial}{\partial\theta},\frac{\partial}{\partial \theta}\right)}\]
(see [14] or [15] for details).
Moreover, the functions \(m(r)\) and \(m(2a-r)\) can be extended to smooth odd function around \(r=0\), where \(d_{h}(p,q)=2a\), \(m^{\prime}(0)=1=-m^{\prime}(2a)\).
It is well-known that any pole \(p\in M\) must have a unique cut point \(q\in M\), and that any geodesic starting from \(p\) contains \(q\).
For the sake of simplicity, we consider \(a=\frac{\pi}{2}\), that is \(m:[0,\pi]\to[0,\infty)\) will satisfy \(m(0)=0\), \(m^{\prime}(0)=1\), \(m(\pi-r)=m(r)>0\), for all \(r\in(0,\pi)\), see Figure 1.
Recall (see [14]) that the equations of an \(h\)-unit speed geodesic \(\gamma(s):=(r(s),\theta(s))\) of
Figure 1: A two-sphere of revolution.
\((M,h)\) are
\[\begin{cases}\frac{d^{2}r}{ds^{2}}-mm^{\prime}\left(\frac{d\theta}{ds}\right)^{2}= 0,\\ \frac{d^{2}\theta}{ds^{2}}+2\frac{m^{\prime}}{m}\left(\frac{dr}{ds}\right) \left(\frac{d\theta}{ds}\right)=0,\end{cases}\]
where \(s\) is the arclength parameter of \(\gamma\) with the \(h\)-unit speed parameterization condition
\[\left(\frac{dr}{ds}\right)^{2}+m^{2}\left(\frac{d\theta}{ds}\right)^{2}=1.\]
It follows that every profile curve, or meridian, \(\{\theta=\theta_{0}\}\) with \(\theta_{0}\) constant is an \(h\)-geodesic, and that a parallel \(\{r=r_{0}\}\), with \(r_{0}\in(0,2a)\) constant, is a geodesic if and only if \(m^{\prime}(r_{0})=0\). We observe that the geodesic equations imply
\[\frac{d\theta(s)}{ds}m^{2}(r(s))=\nu,\mbox{ where }\nu\mbox{ is constant},\]
that is, the quantity \(\frac{d\theta}{ds}m^{2}\) is conserved along the \(h\)-geodesics.
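Indeed, multiplying the second geodesic equation by \(m^{2}\) shows that
\[\frac{d}{ds}\left(m^{2}(r(s))\frac{d\theta}{ds}\right)=m^{2}\frac{d^{2}\theta}{ds^{2}}+2mm^{\prime}\frac{dr}{ds}\frac{d\theta}{ds}=m^{2}\left[\frac{d^{2}\theta}{ds^{2}}+2\frac{m^{\prime}}{m}\frac{dr}{ds}\frac{d\theta}{ds}\right]=0.\]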
**Lemma 2.4**: _(The Clairaut relation) Let \(\gamma(s)=(r(s),\theta(s))\) be an \(h\)-unit speed geodesic on \((M,h)\). There exists a constant \(\nu\) such that_
\[m(r(s))\cos\Phi(s)=\nu\]
_holds for any \(s\), where \(\Phi(s)\) denotes the angle between the tangent vector of \(\gamma(s)\) and profile curve._
The constant \(\nu\) is called the Clairaut constant of \(\gamma\).
Several characterization of the cut locus of a Riemannian two-sphere of revolution are known (see [5], [15], [16]).
We recall the following important result from [16].
**Proposition 2.5**: _Let \(h:[0,\pi]\rightarrow\mathbb{R}\) be a smooth function that can be extended to an odd smooth function on \(\mathbb{R}\). If_
1. \(h(\pi-r)=\pi-h(r)\)_, for any_ \(r\in[0,\pi]\)_;_
2. \(h^{\prime}(r)>0\)_, for any_ \(r\in\left[0,\frac{\pi}{2}\right)\)_;_
3. \(h^{\prime\prime}(r)>0\)_, for any_ \(r\in\left(0,\frac{\pi}{2}\right)\)_,_
_then_
1. _the function_ \(m:[0,\pi]\rightarrow\mathbb{R}\) _given by_ \(m(r):=a\sin h(r)\)_, where_ \(a=\frac{1}{h^{\prime}(0)}\)_, is the warp function of a two-sphere of revolution_ \(M\)_._
2. _Moreover, if_ \(h^{\prime\prime}(r)>0\) _on_ \(\left(0,\frac{\pi}{2}\right)\)_, then the cut locus of a point_ \(q=(r_{0},0)\in M\) _coincides with a subarc of the antipodal parallel_ \(r=\pi-r_{0}\)_._
_Proof._ We give only the proof outline here, for details please consult [16]. It can be seen that conditions (c1), (c2) imply that the function \(m:[0,\pi]\rightarrow\mathbb{R}\) is positive, and \(m(0)=0\), \(m^{\prime}(0)=1\), \(m(\pi-r)=m(r)>0\) for \(r\in(0,\pi)\), hence the two surface of revolution is well-defined.
Moreover, if (c3) holds good, then it can be proved that the half period function
\[\varphi_{m}(\nu):=2\int_{m^{-1}(\nu)}^{\frac{\pi}{2}}\frac{\nu}{m(r)\sqrt{m^{2 }(r)-\nu^{2}}}dr\]
is decreasing, where \(\nu\) is the Clairaut constant, hence the conclusion follows (see Lemma 1 and Proposition 1 in [16]).
\(\Box\)
**Remark 2.6**: Observe that \(h(0)=0\), \(h\left(\frac{\pi}{2}\right)=\frac{\pi}{2}\), \(h(\pi)=\pi\), and the graph of \(h\) looks like in Figure 2.
**Definition 2.7**: A Riemannian (or Finsler) two-sphere of revolution whose cut locus is a subarc of a parallel will be called with simple cut locus.
**Remark 2.8**: This naming is related to graph theory in the sense that simple cut locus means that the cut locus is a simple graph with 2 vertices and one edge.
We recall some examples given in [16].
**Example 2.9**:
1. If \(h(r)=r-\alpha\sin(2r)\), for any \(\alpha\in\left(0,\frac{1}{2}\right)\), one can see that \[m(r)=a\sin(r-\alpha\sin(2r)).\]
It follows that \[\begin{array}{l}m^{\prime}(r)=a\cos(r-\alpha\sin(2r))[1-2\alpha\cos(2r)],\\ m^{\prime\prime}(r)=a\cos(r-\alpha\sin(2r))[4\alpha\sin(2r)]-a\sin(r-\alpha\sin(2r))[1-2\alpha\cos(2r)]^{2}.\end{array}\]
Figure 2: The outline of the graph of \(h\).
Observe that the Gaussian curvature is \[G(r) =-\frac{m^{\prime\prime}(r)}{m(r)}\] \[=\frac{1}{a\sin(r-\alpha\sin(2r))}\left\{-a\cos(r-\alpha\sin(2r))[4\alpha\sin(2r)]+a\sin(r-\alpha\sin(2r))[1-2\alpha\cos(2r)]^{2}\right\}\] \[=-4\alpha\sin(2r)\cot(r-\alpha\sin(2r))+[1-2\alpha\cos(2r)]^{2},\] which clearly is not monotone on \([0,\pi]\), see Figure 3. On the other hand, it is easy to check that this \(h\) satisfies conditions (c1), (c2), (c3) in Proposition 2.5, hence it results that the Riemannian surface of revolution with the warp function \(m\) has simple cut locus.
2. If \(h(r)=\arcsin\frac{\sin r}{\sqrt{1+\lambda\cos^{2}r}}\), for any \(\lambda\geq 0\), it follows that \[m(r)=a\sin\left(\arcsin\frac{\sin r}{\sqrt{1+\lambda\cos^{2}r}}\right)=\frac{a\sin r}{\sqrt{1+\lambda\cos^{2}r}},\] therefore \[m^{\prime}(r)=\frac{a}{1+\lambda\cos^{2}r}\left[\cos r\sqrt{1+\lambda\cos^{2}r}+\frac{\lambda\cos r\sin^{2}r}{\sqrt{1+\lambda\cos^{2}r}}\right]=\frac{a(1+\lambda)\cos r}{(1+\lambda\cos^{2}r)^{3/2}},\] \[m^{\prime\prime}(r)=\frac{a(1+\lambda)}{(1+\lambda\cos^{2}r)^{3}}\left[-(1+\lambda\cos^{2}r)^{3/2}\sin r+3\lambda\cos^{2}r\sin r(1+\lambda\cos^{2}r)^{1/2}\right]\] \[=\frac{a(1+\lambda)}{(1+\lambda\cos^{2}r)^{5/2}}\left[-\sin r(1+\lambda\cos^{2}r)+3\lambda\cos^{2}r\sin r\right]\] \[=\frac{a(1+\lambda)\sin r}{(1+\lambda\cos^{2}r)^{5/2}}\left[-1+2\lambda\cos^{2}r\right].\]
We obtain the Gaussian curvature as follows
\[G(r)=-\frac{m^{\prime\prime}(r)}{m(r)}=\frac{(1+\lambda)(1-2\lambda\cos^{2}r)}{(1+\lambda\cos^{2}r)^{2}}\]
which again is not monotone on \([0,\pi]\), see Figure 4.
This second example also satisfies the conditions (c1), (c2), (c3) in Proposition 2.5. Hence it provides a two-sphere of revolution with simple cut locus.
**Remark 2.10**: A more complicated sequence of functions \(h_{n}(r)\) with simple cut locus is constructed in [16], Theorem 1.
## 3 Randers two-spheres of revolution
We will show the existence of Randers two-spheres of revolution with simple cut locus using the following basic construction:
\[(M,h)\ \xrightarrow[\ V_{0}:\ h\text{-Killing}\ ]{\ V_{0},\ \|V_{0}\|_{h}<1\ }\ (M,F_{0}=\alpha_{0}+\beta_{0})\ \xrightarrow[\ V:\ F_{0}\text{-Killing}\ ]{\ V,\ F_{0}(-V)<1\ }\ (M,F_{1}=\alpha_{1}+\beta_{1})\ \xrightarrow{\ W,\ F_{1}(-W)<1\ }\ (M,F_{2}=\alpha_{2}+\beta_{2}),\]
with Zermelo navigation data \((h,V_{0})\), \((h,V_{0}+V)\) and \((h,V_{0}+V+W)\), respectively,
where \((M,h)\) is a Riemannian manifold, and \(V_{0},V,W\in\mathcal{X}(M)\) are vector fields on \(M\).
It is known that, in general, the navigation data \((h,V)\), where \(h\) is a Riemannian metric on \(M\) and \(V\) a vector field on \(M\) such that \(\|V\|_{h}<1\), induces the Randers metric
\[F=\alpha(x,y)+\beta(x,y)=\frac{\sqrt{\lambda\|y\|_{h}^{2}+h(y,V)^{2}}}{\lambda}-\frac{h(y,V)}{\lambda}.\]
Here \(\lambda:=1-\|V\|_{h}^{2}\) and \(h(y,V)=h_{ij}V^{i}y^{j}\) is the \(h\)-inner product of the vectors \(V\) and \(y\).
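For orientation, here is a standard special case (our own illustration, not taken from the text): let \(h\) be the Euclidean metric on \(\mathbb{R}^{2}\) and \(V=(v,0)\) a constant wind with \(0<v<1\), so that \(\lambda=1-v^{2}\) and \(h(y,V)=v\,y^{1}\). The formula above gives
\[F(y)=\frac{\sqrt{(1-v^{2})\left((y^{1})^{2}+(y^{2})^{2}\right)+v^{2}(y^{1})^{2}}-v\,y^{1}}{1-v^{2}}=\frac{\sqrt{(y^{1})^{2}+(1-v^{2})(y^{2})^{2}}-v\,y^{1}}{1-v^{2}},\]
so that \(F(1,0)=\frac{1}{1+v}\) and \(F(-1,0)=\frac{1}{1-v}\): travelling with the wind is Finsler-shorter than travelling against it, as expected in Zermelo navigation.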
Conversely, the Randers metric \(F=\alpha+\beta\), where \(\alpha=\sqrt{a_{ij}(x)y^{i}y^{j}}\) is a Riemannian metric and \(\beta=b_{i}(x)y^{i}\) a linear one-form on \(TM\), induces the navigation data \((h,V)\) given by
\[h^{2}=\varepsilon(\alpha^{2}-\beta^{2}),\ V=-\frac{1}{\varepsilon}\beta^{\#}.\]
Here \(h^{2}=h_{ij}(x)y^{i}y^{j}\), \(\varepsilon:=1-\|b\|_{\alpha}^{2}\), and \(\beta^{\#}\) is the Legendre transform of \(\beta\) with respect to \(\alpha\), i.e., the vector field
\[\beta^{\#}=b^{i}\frac{\partial}{\partial x^{i}},\qquad b^{i}=a^{ij}b_{j}\]
(see [1], [2], [11] for details).
We recall some definitions for later use.
A vector field \(X\) on \(T^{*}M\) is called Hamiltonian vector field if there exists a smooth function \(f:T^{*}M\to\mathbb{R}\), \((x,p)\mapsto f(x,p)\) such that
\[X_{f}=\frac{\partial f}{\partial p_{i}}\frac{\partial}{\partial x^{i}}-\frac{ \partial f}{\partial x^{i}}\frac{\partial}{\partial p_{i}}.\]
For instance, we can consider the Hamiltonian vector fields of the lift \(W^{*}:=W^{i}(x)p_{i}\) of \(W=W^{i}\frac{\partial}{\partial x^{i}}\) to \(T^{*}M\), or of the Hamiltonian \(\mathcal{K}(x,p)\), the Legendre dual of any Finsler metric \(F(x,y)\) on \(M\) (see [9]).
Indeed, on a Finsler manifold \((M,F)\), for any \(y\in T_{x}M\setminus\{0\}\) one can define
\[p(y):=\frac{1}{2}\frac{d}{dt}\left[F^{2}(x,y+tv)\right]\big{|}_{t=0},\ v\in T _{x}M,\]
and obtain in this way the map
\[\mathcal{L}:TM \to T^{*}M,\] \[(x,y) \mapsto(x,p),\]
called the Legendre transformation of \(F\).
The curve \(\hat{\gamma}(t)=(x(t),p(t)):[a,b]\to T^{*}M\) is called the integral curve (or sometimes the flow) of a Hamiltonian vector field \(X_{f}\in\mathcal{X}(T^{*}M)\) if
\[\frac{d\hat{\gamma}(t)}{dt}=X_{f}|_{\hat{\gamma}(t)}.\]
More precisely, the mapping \(\phi:\mathbb{R}\times T^{*}M\to T^{*}M\), \((t,(x,p))\mapsto\phi(t,(x,p))\), denoted also by \(\phi_{t}(x,p)\) or \(\phi_{(x,p)}t\), satisfying the properties
* \(\phi(0,(x,p))=(x,p)\), for any \((x,p)\in T^{*}M\);
* \(\phi_{s}\circ\phi_{t}=\phi_{s+t}\), for all \(s,t\in\mathbb{R}\);
* \(\frac{d\phi_{(x,p)}t}{dt}|_{t=0}=X|_{(x,p)},\)
is called the one-parametric group, or simply the flow, of the vector field \(X\in{\cal X}(T^{*}M)\). A given one-parametric group always induces a vector field \(X\in{\cal X}(T^{*}M)\). Conversely, a given vector field \(X\in{\cal X}(T^{*}M)\) induces only locally a one-parametric group, sometimes called the local flow of \(X\).
A smooth vector field \(X\in{\cal X}(M)\) on a Finsler manifold \((M,F)\) is called F-Killing field if every local one-parameter transformation group \(\{\varphi_{t}\}\) of \(M\) generated by \(X\) consists of local isometries of \(F\). The vector field \(X\) is F-Killing if and only if \(L_{\widehat{X}}F=0\), where \(L\) is the Lie derivative, and \(\widehat{X}:=X^{i}\frac{\partial}{\partial x^{i}}+y^{j}\frac{\partial X^{i}}{ \partial x^{j}}\frac{\partial}{\partial y^{i}}\) is the canonical lift of \(X\) to \(TM\), or, locally \(X_{i|j}+X_{j|i}+2C^{p}_{ij}X_{p|q}y^{q}=0\), where " \(\mid\) " is the \(h\)-covariant derivative with respect to the Chern connection.
Moreover, in the Hamiltonian formalism, the vector field \(X\) on \(M\) is Killing field with respect to \(F\) if and only if
\[\{{\cal K},W^{*}\}=0,\]
where \({\cal K}\) is the Legendre dual of \(F\) (see [9]), \(W^{*}=W^{i}(x)p_{i}\) and \(\{\cdot,\cdot\}\) is the Poisson bracket.
**Lemma 3.1** (Generalization of Hrimiuc-Shimada's result, see [9]): _Let \((M,\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta})\) be a Randers metric with general navigation data \((F=\alpha+\beta,W)\), \(F(-W)<1\). Then the Legendre dual of \(\widetilde{F}\) is \(\widetilde{\cal K}:T^{*}M\to\mathbb{R}\), \(\widetilde{\cal K}={\cal K}+W^{*}\), where \({\cal K}\) is the Legendre dual of \(F\) and \(W^{*}=W^{i}(x)p_{i}\)._
_Proof._ Indeed, let \(F=\alpha+\beta\) be a positive defined Randers metric on a differentiable manifold \(M\) with indicatrix \(\sum_{F}(x)=\{y\in T_{x}M:\ F(x,y)=1\}\subset T_{x}M\), and let \(W\in{\cal X}(M)\) be a vector field such that \(F(-W)<1\).
Let us denote by \(\widetilde{\sum}(x):=\sum_{F}(x)+W(x)\) the rigid translation of \(\Sigma_{F}(x)\) by \(W(x)\), i.e.,
\[\widetilde{\sum}(x):=\{y=u+W\in T_{x}M:\ F(u)=1\}.\]
Firstly, observe that by rigid translation, the tangent vectors to \(\sum_{F}\) and \(\widetilde{\sum}\) remain parallel, i.e., there exists a smooth function \(c(u)\neq 0\) such that
\[\widetilde{Y}_{u+W_{x}}=c(u)(F_{x})_{*,u}, \tag{3.1}\]
where \(\widetilde{Y}_{u+W_{x}}\) is the tangent vector to \(\widetilde{\sum}\) at \(u+W_{x}\), and \((F_{x})_{*,y}:T_{y}(T_{x}M)\to T\mathbb{R}\equiv\mathbb{R}\) is the tangent map of \(F_{x}:T_{x}M\to[0,\infty)\), see Figure 5.
The solution of the Zermelo's navigation problem with data \((F,W)\) is a Finsler metric \(\widetilde{F}\) such that
\[\widetilde{F}_{x}(u+W_{x})=1,\]
where \(u\in T_{x}M\), \(F(x,u)=1\), and \(\widetilde{F}_{x}\) is the restriction of \(\widetilde{F}\) to \(T_{x}M\). Since \(\widetilde{\sum}\) is the rigid translation of \(\sum\) such a Finsler metric must exist.
Second, with these notations, observe that in \(T_{x}^{*}M\) we have
\[\mathcal{L}_{\widetilde{F}}(u+W_{x})=c(u)\mathcal{L}_{F}(u), \tag{3.2}\]
where \(\mathcal{L}_{\widetilde{F}}\) and \(\mathcal{L}_{F}\) are the Legendre transformations of \(\tilde{F}\) and \(F\), respectively. This formula follows directly from (3.1) and the definition of the Legendre transformation.
Since relation (3.2) is between one-forms, actually this is a relation between linear transformations of the tangent space \(T_{x}M\). If we pair (3.2) with \(W_{x}\) and \(u\), we get
\[\langle\mathcal{L}_{\tilde{F}(u+W_{x})},W_{x}\rangle=c(u)\langle\mathcal{L}_{ F}(u),W\rangle \tag{3.3}\]
and
\[\langle\mathcal{L}_{\tilde{F}(u+W_{x})},u\rangle=c(u)\langle\mathcal{L}_{F}(u ),u\rangle=c(u), \tag{3.4}\]
respectively, where we have used the fact that \(F(u)=1\) is equivalent to \(\langle\mathcal{L}_{F}(u),u\rangle=1\). Here, \(\langle\cdot,\cdot\rangle\) denotes the usual pairing of a one-form with a vector field.
Therefore, by the same reason, since \(\tilde{F}(u+W)=1\) we have
\[1=\langle\mathcal{L}_{\tilde{F}}(u+W_{x}),u+W_{x}\rangle=\langle\mathcal{L}_{ \tilde{F}}(u+W_{x}),u\rangle+\langle\mathcal{L}_{\tilde{F}}(u+W_{x}),W_{x} \rangle=c(u)+c(u)\langle\mathcal{L}_{F}(u),W\rangle,\]
where we use (3.3), (3.4). By the way, observe that
\[c(u)=\frac{1}{1+\langle\mathcal{L}_{F}(u),W\rangle}=\frac{1}{1+\langle u,W_{x }\rangle_{g_{x}(u)}},\]
where \(\langle\cdot,\cdot\rangle_{g_{x}(u)}\) is the inner product in \(T_{x}M\) by \(g_{x}(u)\), i.e. \(\langle X,Y\rangle_{g_{x}(u)}=g_{ij}(x,u)X^{i}Y^{j}\).
Next, let us denote by \(\widetilde{\mathcal{K}}\) and \(\mathcal{K}\) the Legendre dual metrics of \(\widetilde{F}\) and \(F\), respectively. It follows that
\[1=\widetilde{\mathcal{K}}[\mathcal{L}_{\widetilde{F}}(u+W_{x})]=c(u) \widetilde{\mathcal{K}}(\mathcal{L}_{F}(u)),\]
Figure 5: The rigid translation of the indicatrix.
and thus
\[\widetilde{\cal K}({\cal L}_{F}(u))=\frac{1}{c(u)}=1+\langle{\cal L}_{F}(u),W \rangle={\cal K}({\cal L}_{F}(u))+\langle{\cal L}_{F}(u),W\rangle.\]
If we denote \({\cal L}_{F}(u)=\omega_{x}=(x,p)\in T^{*}M\), then
\[\widetilde{\cal K}_{x}(p)={\cal K}_{x}(p)+\omega_{x}(W), \tag{3.5}\]
where \({\cal K}_{x}\) is the \({\cal L}\)-dual of \(F=\alpha+\beta\).
Therefore, if \(\widetilde{F}\) is the solution of the Zermelo's navigation (i.e. it is the rigid translation of the indicatrix \(\sum_{F}\) by \(W\)) with navigation data \((F,W)\), then
\[\widetilde{\cal K}_{x}(p)={\cal K}_{x}(p)+W_{x}^{*}(p), \tag{3.6}\]
where \(\widetilde{\cal K}\) and \({\cal K}\) are the Hamiltonians of \(\widetilde{F}\) and \(F\), respectively, and \(W^{*}=W^{i}(x)p_{i}\).
\(\Box\)
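As a simple illustration of Lemma 3.1 (a standard special case, not taken from the text): if \(F=\alpha\) is the Euclidean norm on \(\mathbb{R}^{n}\), so that \({\cal K}(x,p)=\|p\|\), and \(W\) is a vector field with \(\|W\|<1\), then the solution \(\widetilde{F}\) of the navigation problem \((\alpha,W)\) has Legendre dual
\[\widetilde{\cal K}(x,p)=\|p\|+W^{i}(x)p_{i}.\]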
**Lemma 3.2**: _Let \((M,F=\alpha+\beta)\) be a Randers metric, the vector field \(W\in{\cal X}(M)\) with flow \(\psi_{t}\). Then the Hamiltonian vector field \(X_{\cal K}\) on \(T^{*}M\) is invariant under the flow \(\psi_{t,*}\) of \(X_{W^{*}}\) if and only if \(W\) is an \(F\)-Killing field, where \({\cal K}\) is the Legendre dual of \(F\)._
_Proof._ Indeed, the invariance condition \(\psi_{t,*}(X_{\cal K})=X_{\cal K}\) is equivalent to \({\cal L}_{X_{W^{*}}}X_{\cal K}=0\) by definition, hence \([X_{W^{*}},X_{\cal K}]=0\), i.e. \(X_{\{W^{*},{\cal K}\}}=0\), that is \(\{W^{*},{\cal K}\}=0\). By the characterization above, this shows that \(W\) is indeed an \(F\)-Killing field.
\(\Box\)
**Lemma 3.3**: _Let \((M,F)\) be a Randers metric and \(W\in\mathcal{X}(M)\) a vector field on \(M\). Then_
* _The navigation data of_ \(\widetilde{F}\) _is_ \((h,V+W)\)_, where_ \((h,V)\) _is the navigation data of_ \(F=\alpha+\beta\)_, and_ \(\widetilde{F}\) _is the solution of Zermelo's navigation problem for_ \((F,W)\)_._
* _The Randers metric_ \(\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}\) _is positive defined if and only if_ \(F(-W)<1\)_._
_Proof._ (i) Recall that (see [11], [2]) the indicatrix of \(F\) is obtained by a rigid translation of the \(h\)-unit sphere \(\sum_{h}(x)\) by \(V\), i.e. for any \(x\in M\)
\[\sum_{F}(x)=\sum_{h}(x)+V(x),\]
where \(\sum_{F}(x)=\{y\in T_{x}M:\ F(x,y)=1\}\), \(\sum_{h}(x)=\{y\in T_{x}M,\ \|y\|_{h}=1\}\), and \(\|V\|_{h}<1\). Then, if \(\widetilde{F}\) is the solution of the Zermelo's navigation problem for \((F,W)\), we have
\[\sum_{\widetilde{F}}(x)=\sum_{F}(x)+W(x)=\sum_{h}(x)+V(x)+W(x),\]
i.e., navigation data of \(\widetilde{F}\) is \((h,V+W)\).
2. If we use (i), then \(\widetilde{F}\) is positive defined Randers metric if and only if \(\|V+W\|_{h}<1\). Observe that \[\alpha^{2}(-W)=\alpha^{2}(W)=a_{ij}W^{i}W^{j}=\frac{1}{\lambda}h_{ij}W^{i}W^{j}+ \left(\frac{V_{i}}{\lambda}W^{i}\right)^{2}=\frac{1}{\lambda}\|W\|_{h}^{2}+ \frac{1}{\lambda^{2}}\langle V,W\rangle_{h}^{2},\] where \(\lambda=1-\|V\|_{h}^{2}>0\), and \[\beta(-W)=-\beta(W)=-b_{i}W^{i}=\frac{V_{i}}{\lambda}W^{i}=\frac{1}{\lambda} \langle V,W\rangle_{h}.\] It follows that \[F(-W)=\sqrt{\frac{1}{\lambda}\|W\|_{h}^{2}+\frac{1}{\lambda^{2}}\langle V,W \rangle_{h}^{2}}+\frac{1}{\lambda}\langle V,W\rangle_{h},\] hence \(F(-W)<1\) is equivalent to \[\sqrt{\lambda\|W\|_{h}^{2}+\langle V,W\rangle_{h}^{2}}+\langle V,W\rangle_{h} <\lambda,\] where we use \(\lambda>0\) due to the fact that \(F\) is positive defined Randers metric. Therefore, we successively obtain \[\lambda\|W\|_{h}^{2}+\langle V,W\rangle_{h}^{2} <\{\lambda-\langle V,W\rangle_{h}\}^{2},\] \[\lambda\|W\|_{h}^{2}+\langle V,W\rangle_{h}^{2} <\lambda^{2}-2\lambda\langle V,W\rangle_{h}+\langle V,W\rangle_{h }^{2},\] \[\lambda\|W\|_{h}^{2} <\lambda^{2}-2\lambda\langle V,W\rangle_{h},\] \[\|W\|_{h}^{2} <\lambda-2\langle V,W\rangle_{h},\] which is equivalent to \(\|V+W\|<1\), hence \(\widetilde{F}\) is positive defined. The converse implication is trivial.
\(\Box\)
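The equivalence in (ii) can also be spot-checked numerically. The small Python sketch below is our own illustration: it builds \(F\) from Euclidean navigation data \((\delta_{ij},V)\) and compares the two conditions for random choices of \(V\) and \(W\).

```python
import numpy as np

rng = np.random.default_rng(0)

def randers_from_navigation(V):
    """F(y) for the Randers metric with Euclidean navigation data (delta_ij, V), ||V|| < 1."""
    lam = 1.0 - V @ V
    return lambda y: (np.sqrt(lam * (y @ y) + (V @ y) ** 2) - V @ y) / lam

for _ in range(1000):
    V = rng.uniform(-1, 1, 2)
    V *= rng.uniform(0, 0.99) / np.linalg.norm(V)   # enforce ||V|| < 1
    W = rng.uniform(-2, 2, 2)
    F = randers_from_navigation(V)
    assert (F(-W) < 1) == (np.linalg.norm(V + W) < 1)  # Lemma 3.3 (ii)
```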
**Lemma 3.4**: _If \(\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}\) is the Randers metric obtained in Lemma 3.3, then we have_
\[\widetilde{\alpha}^{2} =\frac{1}{\eta}(\alpha^{2}-\beta^{2})+\left\langle\frac{\widetilde{W}}{\eta},y\right\rangle^{2},\qquad \widetilde{\beta} =-\left\langle\frac{\widetilde{W}}{\eta},y\right\rangle,\]
_where_
\[\eta :=[1+F(W)][1-F(-W)],\] \[\widetilde{W}_{i} :=W_{i}-b_{i}[1+\beta(W)],\text{ and }W_{i}=a_{ij}W^{j}.\]
_Proof._ Since the Zermelo's navigation data for \(\widetilde{F}\) is \((h,U:=V+W)\), as shown in Lemma 3.3, it follows (see [11], [1])
\[\widetilde{a}_{ij}=\frac{1}{\sigma}h_{ij}+\frac{U_{i}}{\sigma}\frac{U_{j}}{ \sigma},\quad\widetilde{b}_{i}=-\frac{U_{i}}{\sigma}, \tag{3.7}\]
where
\[U_{i}=h_{ij}U^{j}=h_{ij}(V^{i}+W^{i}),\quad\sigma:=1-\|V+W\|_{h}^{2}.\]
Recall that the navigation data \((h,V)\) of a Randers metric \(F=\alpha+\beta\) can be computed by
\[h_{ij}=\varepsilon(a_{ij}-b_{i}b_{j}),\quad V^{i}=-\frac{b^{i}}{\varepsilon},\]
where \(\varepsilon:=1-\|b\|_{\alpha}^{2}\), \(b^{i}=a^{ij}b_{j}\) (see [1], p. 233). Observe that \(\varepsilon=1-\|b\|_{\alpha}^{2}=1-\|V\|_{h}^{2}=\lambda\).
We have
\[\langle V,W\rangle_{h} =h_{ij}V^{i}W^{j}=\varepsilon(a_{ij}-b_{i}b_{j})\left(-\frac{b^{i}}{\varepsilon}\right)W^{j} =-(a_{ij}b^{i}W^{j}-b_{i}b^{i}b_{j}W^{j}) =-(b_{j}W^{j}-\|b\|_{\alpha}^{2}b_{j}W^{j}) =-(\beta(W)-\|b\|_{\alpha}^{2}\beta(W))=-\varepsilon\beta(W),\]
i.e.,
\[\langle V,W\rangle_{h}=-\varepsilon\beta(W)\]
and
\[\|W\|_{h}^{2}=\varepsilon(a_{ij}-b_{i}b_{j})W^{i}W^{j}=\varepsilon\{\alpha^{2} (W)-\beta^{2}(W)\}.\]
It results
\[\sigma =1-\|U\|_{h}^{2}=1-\|V\|_{h}^{2}-2\langle V,W\rangle_{h}-\|W\|_{ h}^{2}\] \[=\varepsilon+2\varepsilon\beta(W)-\varepsilon\{\alpha^{2}(W)- \beta^{2}(W)\}\] \[=\varepsilon\{1+2\beta(W)+\beta^{2}(W)-\alpha^{2}(W)\}\] \[=\varepsilon\{[1+\beta(W)]^{2}-\alpha^{2}(W)\}\] \[=\varepsilon[1+\beta(W)+\alpha(W)][1+\beta(W)-\alpha(W)]\] \[=\varepsilon[1+F(W)][1-F(-W)],\]
i.e.,
\[\sigma=\varepsilon\eta, \tag{3.8}\]
where \(\eta=[1+F(W)][1-F(-W)]\).
Moreover, we have
\[U_{i} =h_{ij}U^{j}=h_{ij}(V^{j}+W^{j})=\varepsilon(a_{ij}-b_{i}b_{j})V^{j }+\varepsilon(a_{ij}-b_{i}b_{j})W^{j}\] \[=\varepsilon(a_{ij}-b_{i}b_{j})\left(-\frac{b^{j}}{\varepsilon} \right)+\varepsilon(a_{ij}-b_{i}b_{j})W^{j}\] \[=-[b_{i}-b_{i}\|b\|_{\alpha}^{2}]+\varepsilon[W_{i}-b_{i}\beta(W)]\] \[=-\varepsilon b_{i}+\varepsilon[W_{i}-b_{i}\beta(W)]\] \[=\varepsilon\{W_{i}-b_{i}[1+\beta(W)]\}=\varepsilon\widetilde{W} _{i},\]
i.e., \(U=\varepsilon\widetilde{W}\).
With these results, we compute
\[\widetilde{a}_{ij} =\frac{1}{\sigma}h_{ij}+\frac{U_{i}}{\sigma}\frac{U_{j}}{\sigma}\] \[=\frac{1}{\varepsilon\eta}\varepsilon(a_{ij}-b_{i}b_{j})+\frac{ \varepsilon\widetilde{W}_{i}}{\varepsilon\eta}\frac{\varepsilon\widetilde{W} _{j}}{\varepsilon\eta}\] \[=\frac{1}{\eta}(a_{ij}-b_{i}b_{j})+\frac{\widetilde{W}_{i}}{ \eta}\frac{\widetilde{W}_{j}}{\eta}\]
and
\[\widetilde{b}_{i}=-\frac{U_{i}}{\sigma}=-\frac{\varepsilon\widetilde{W}_{i}}{ \varepsilon\eta}=-\frac{\widetilde{W}_{i}}{\eta},\]
hence the conclusion follows.
\(\Box\)
**Remark 3.5**: We observe that \(\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}\) is positive defined if and only if \(\|\widetilde{b}\|_{\widetilde{\alpha}}<1\), i.e., \(\sigma=1-\|U\|_{h}^{2}=1-\|\widetilde{b}\|_{\widetilde{\alpha}}^{2}>0\).
On the other hand, (3.8) implies that
\[\sigma>0\ \Leftrightarrow\ \varepsilon[1+F(W)][1-F(-W)]>0\ \Leftrightarrow\ 1-F(-W)>0,\]
since \(\varepsilon>0\) due to the fact that \(F\) is assumed positive defined and \(F(W)>0\).
In other words, we have shown that
\[F(-W)<1\ \Leftrightarrow\ \|\widetilde{b}\|_{\widetilde{\alpha}}<1,\]
which gives another proof and a more intuitive explanation of the positive definiteness condition \(F(-W)<1\) (compare to [6]).
We will show a generic result on geodesics, conjugate and cut loci of a Randers metric.
**Lemma 3.6**: _Let \((M,F=\alpha+\beta)\) be a non-flat Randers metric, let \(W\in\mathcal{X}(M)\) be a vector field on \(M\) such that \(F(-W)<1\) and let \(\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}\) be the solution of the navigation problem for \((F,W)\). If \(W\) is an \(F\)-Killing field, then_
1. _the_ \(\widetilde{F}\)_-unit speed geodesics_ \(\widetilde{\mathcal{P}}\) _are given by_ \[\widetilde{\mathcal{P}}(t)=\psi_{t}(\mathcal{P}(t)),\] _where_ \(\mathcal{P}\) _is an_ \(F\)_-unit speed geodesic and_ \(\psi_{t}\) _is the flow of_ \(W\)_;_
2. _the point_ \(\widetilde{\mathcal{P}}(l)\) _is conjugate to_ \(q=\widetilde{\mathcal{P}}(0)\) _along the_ \(\widetilde{F}\)_-geodesic_ \(\widetilde{\mathcal{P}}:[0,l]\to M\) _if and only if the point_ \(\mathcal{P}(l)\) _is conjugate to_ \(q=\mathcal{P}(0)\) _along the corresponding_ \(F\)_-geodesic_ \(\mathcal{P}(t)=\psi_{-t}(\widetilde{\mathcal{P}}(t))\)_, for_ \(t\in[0,l]\)_;_
3. _the point_ \(\hat{p}\) _is an_ \(\widetilde{F}\)_-cut point of_ \(q\) _if and only if_ \(p=\psi_{-l}(\hat{p})\) _is an_ \(F\)_-cut point of_ \(q\)_,_
_where \(l=d_{\widetilde{F}}(q,\hat{p})\)._
_Proof._ We will prove (i).
For simplicity, if we also denote by \(\psi_{t}:T^{*}M\to T^{*}M\) the flow of \(X_{W^{*}}\), then for a curve \(\mathcal{P}(t)\) on \(T^{*}M\) we denote
\[\hat{\mathcal{P}}(t)=\psi_{t}(\mathcal{P}(t)),\]
i.e., we map \(\mathcal{P}(t)\mapsto\hat{\mathcal{P}}(t)\) by the flow \(\psi_{t}\).
By taking the tangent map
\[(\psi_{t,*})_{\mathcal{P}(t)}:T_{\mathcal{P}(t)}(T^{*}M)\to T_{\hat{ \mathcal{P}}(t)}(T^{*}M),\]
we have
\[X\big{|}_{\mathcal{P}(t)}\mapsto(\psi_{t,*})_{\mathcal{P}(t)}(X\big{|}_{ \mathcal{P}(t)})=(\psi_{t,*}X)_{\hat{\mathcal{P}}(t)},\]
for any vector field \(X\) on \(T^{*}M\).
If \(\mathcal{P}(t)\) is an integral curve of the Hamiltonian vector field \(X_{\mathcal{K}}\), i.e. \(\frac{d\mathcal{P}(t)}{dt}=X_{\mathcal{K}}\big{|}_{\mathcal{P}(t)}\), where \(\mathcal{K}\) is the Legendre dual of \(F\), then the derivative formula for a function of two variables gives
\[\frac{d}{dt}(\hat{\mathcal{P}}(t)) =\frac{d}{dt}\psi(t,\mathcal{P}(t))=X_{W^{*}}\big{|}_{\mathcal{P} (t)}+\psi_{t,*}\left(\frac{d\mathcal{P}(t)}{dt}\right)\] \[=X_{W^{*}}\big{|}_{\hat{\mathcal{P}}(t)}+\psi_{t,*}\left(X_{ \mathcal{K}}\big{|}_{\mathcal{P}(t)}\right)\] \[=X_{W^{*}}\big{|}_{\hat{\mathcal{P}}(t)}+(\psi_{t,*}X_{\mathcal{K }})_{\hat{\mathcal{P}}(t)}\] \[=X_{W^{*}}\big{|}_{\hat{\mathcal{P}}(t)}+(X_{\mathcal{K}})_{ \hat{\mathcal{P}}(t)}\] \[=(X_{W^{*}+\mathcal{K}})_{\hat{\mathcal{P}}(t)}=\left(X_{\tilde{ \mathcal{K}}}\right)_{\hat{\mathcal{P}}(t)},\]
where we have used that the Legendre dual of \(\widetilde{F}\) is \(\widetilde{\mathcal{K}}=\mathcal{K}+W^{*}\), and \(\psi_{t,*}X_{\mathcal{K}}=X_{\mathcal{K}}\) (see Lemmas 3.1 and 3.2), hence (i) is proved.
Next, we will prove (ii).
If we denote by \(\mathcal{P}_{s}:[0,l]\to M\), \(-\varepsilon<s<\varepsilon\) a geodesic variation of the \(F\)-geodesic \(\mathcal{P}\), such that all curves in the variation are \(F\)-geodesics, then we obtain the variation vector field
\[J:=\frac{\partial\mathcal{P}_{s}}{\partial s}\big{|}_{s=0},\]
which clearly is an \(F\)-Jacobi field.
Taking now into account (i), it follows that
\[\widetilde{J}=\psi_{*}(J)\]
is a Jacobi vector field along \(\widetilde{\cal P}\); hence the conjugate points along \({\cal P}\) and \(\widetilde{\cal P}\) correspond to each other under the flow \(\psi_{t}\) of \(W\), and (ii) is proved.
Finally, we will prove (iii). From (ii) it is easy to see that since \(W\) is \(F\)-Killing field, the arclength parameter of the \(F\)-geodesic \({\cal P}\) and of the \(\widetilde{F}\)-geodesic \(\widetilde{\cal P}\) coincide.
It can be seen, as in the Riemannian case, that the points where the distance function \(d_{F}(p,\cdot)\) loses its differentiability are mapped by the flow \(\psi_{t}\) to the points where the distance function \(d_{\widetilde{F}}(p,\cdot)\) loses its differentiability (see [12], Theorem A for the characterization of cut points in terms of differentiability of the distance function). Hence, (iii) follows.
\(\Box\)
**Lemma 3.7**: _Let \((M,F=\alpha+\beta)\) be a Randers metric with navigation data \((h,W)\). The following are equivalent_
* \(d\beta=0\)_,_
* \(dW^{\#}=d\log\lambda\wedge W^{\#}\)_,_
_where the one-form \(W^{\#}\) is the \(h\)-Legendre transformation of \(W\) and \(\lambda=1-\|W\|_{h}^{2}\)._
_Proof._ Indeed, observe that from the Zermelo navigation formulas (see for instance (3.7), or [11], [1], [2]) we get
\[\beta=-\frac{W_{i}}{\lambda}dx^{i}=-\frac{1}{\lambda}W^{\#},\]
where \(W^{\#}={\cal L}_{h}W\). Here, \({\cal L}_{h}\) is the Legendre transform with respect to \(h\).
By differentiation, we get
\[d\beta=-d\left(\frac{1}{\lambda}W^{\#}\right)=-\left[-\frac{1}{\lambda^{2}}d \lambda\wedge W^{\#}+\frac{1}{\lambda}dW^{\#}\right]=-\frac{1}{\lambda}\left[ -d\log\lambda\wedge W^{\#}+dW^{\#}\right],\]
hence the desired equivalence follows.
\(\Box\)
Summing up, here is our main result.
**Theorem 3.8**: _Let \((M,h)\) be a Riemannian manifold and let \(V_{0},V,W\in{\cal X}(M)\) be vector fields on \(M\). If \(\|V_{0}\|_{h}<1\), we denote by \(F_{0}=\alpha_{0}+\beta_{0}\) the positive defined Randers metric obtained as solution of the Zermelo's navigation problem \((h,V_{0})\)._
* _(i.1) If \(F_{0}(-V)<1\), then \(F_{1}=\alpha_{1}+\beta_{1}\) is a positive defined Randers metric, where \(F_{1}\) is the solution of Zermelo's navigation problem \((F_{0},V)\)._
* _(i.2) If \(F_{1}(-W)<1\), then \(F_{2}=\alpha_{2}+\beta_{2}\) is a positive defined Randers metric, where \(F_{2}\) is the solution of Zermelo's navigation problem \((F_{1},W)\)._
* _(ii.1) The Randers metric \(F_{1}=\alpha_{1}+\beta_{1}\) is the solution of Zermelo's navigation problem \((h,V_{0}+V)\)._
* _(ii.2) The Randers metric \(F_{2}=\alpha_{2}+\beta_{2}\) is the solution of Zermelo's navigation problem \((h,V_{0}+V+W)\)._
* _(iii) If the following conditions are satisfied_
  * _(C0) \(V_{0}\) is \(h\)-Killing,_
  * _(C1) \(V\) is \(F_{0}\)-Killing,_
  * _(C2) \(d(V_{0}+V+W)^{\#}=d\log\widetilde{\lambda}\wedge(V_{0}+V+W)^{\#}\),_
_where \((V_{0}+V+W)^{\#}=\mathcal{L}_{h}(V_{0}+V+W)\) is the Legendre transformation of \(V_{0}+V+W\) with respect to \(h\), and \(\widetilde{\lambda}:=1-\|V_{0}+V+W\|_{h}^{2}\), then:_
  * _The \(F_{0}\)-unit speed geodesics \(\mathcal{P}_{0}\) and the \(F_{1}\)-unit speed geodesics \(\mathcal{P}_{1}\) are given by_ \[\begin{split}\mathcal{P}_{0}(t)&=\varphi_{t}(\rho(t)),\\ \mathcal{P}_{1}(t)&=\psi_{t}(\mathcal{P}_{0}(t))=\psi_{t}\circ\varphi_{t}(\rho(t)),\end{split}\] (3.9) _where \(\rho(t)\) is an \(h\)-unit speed geodesic and \(\varphi_{t}\) and \(\psi_{t}\) are the flows of \(V_{0}\) and \(V\), respectively. The \(F_{2}\)-unit speed geodesic \(\mathcal{P}_{2}(t)\) coincides as a point set with \(\mathcal{P}_{1}(t)\)._
  * _The conjugate points of \(q=\mathcal{P}_{2}(0)\) along the \(F_{2}\)-geodesic \(\mathcal{P}_{2}\) coincide with the conjugate points of \(q=\mathcal{P}_{1}(0)\) along the \(F_{1}\)-geodesic \(\mathcal{P}_{1}\), up to parameterization. The point \(\mathcal{P}_{1}(l)\) is conjugate to \(q=\mathcal{P}_{1}(0)\) along the \(F_{1}\)-geodesic \(\mathcal{P}_{1}:[0,l]\to M\) if and only if the point \(\mathcal{P}_{0}(l)\) is conjugate to \(q=\mathcal{P}_{0}(0)\) along the corresponding \(F_{0}\)-geodesic \(\mathcal{P}_{0}(t)=\psi_{-t}(\mathcal{P}_{1}(t))\), for \(t\in[0,l]\). The point \(\mathcal{P}_{0}(l)\) is conjugate to \(q=\mathcal{P}_{0}(0)\) along the \(F_{0}\)-geodesic \(\mathcal{P}_{0}:[0,l]\to M\) if and only if the point \(\rho(l)\) is conjugate to \(q=\rho(0)\) along the corresponding \(h\)-geodesic \(\rho(t)=\varphi_{-t}(\mathcal{P}_{0}(t))\), for \(t\in[0,l]\), where \(\varphi_{t}\) and \(\psi_{t}\) are the flows of \(V_{0}\) and \(V\), respectively._
  * _The \(F_{2}\)-cut locus of \(q\) coincides as a point set with the \(F_{1}\)-cut locus of \(q\), up to parameterization._
  * _The point \(\hat{p}_{1}\) is an \(F_{1}\)-cut point of \(q\) if and only if \(\hat{p}_{0}=\psi_{-l}(\hat{p}_{1})\) is an \(F_{0}\)-cut point of \(q\), where \(l=d_{F_{1}}(q,\hat{p}_{1})\). The point \(\hat{p}_{0}\) is an \(F_{0}\)-cut point of \(q\) if and only if \(p_{0}=\varphi_{-l}(\hat{p}_{0})\) is an \(h\)-cut point of \(q\), where \(l=d_{F_{0}}(q,\hat{p}_{0})\)._
_Proof._ **(Proof of (i), (ii))** The proof of (i.1), (ii.1) follows immediately from Lemma 3.3 for \((F_{0},V)\). Likewise, (i.2) and (ii.2) follows from Lemma 3.3 for \((F_{1},W)\).
_Proof of (iii)._ The proof will be given in two steps.
**Step 1.** (Properties of \(F_{0}\), \(F_{1}\)) With the notations in hypothesis, conditions (C0), (C1) imply that the geodesics, conjugate points and cut points of the Randers metrics \(F_{0}\), \(F_{1}\) have the properties in (iii) due to Lemma 3.6.
**Step 2.** (Properties of \(F_{2}\)) By taking into account Lemma 3.7 one can see that condition (C2) is actually equivalent to \(d\widetilde{\beta}=0\), that is the Randers metrics \(F_{1}=\alpha+\beta\)
and \(F_{2}=\widetilde{\alpha}+\widetilde{\beta}\) are projectively related ([3]), therefore having the same geodesics as non-parameterized curves, same conjugate points and same cut points. Hence, the desired properties of \(F_{2}\) follows (see Figure 6).
**Remark 3.9**: Under the hypotheses in the Theorem 3.8 we have
1. conditions (C0), (C1) are equivalent to (C1), (C2)\({}^{\prime}\), where (C2)\({}^{\prime}\): \(V\) is \(h\)-Killing;
2. if we replace conditions (C0), (C1), (C2) with (C3): \(V_{0}+V+W\) is \(h\)-Killing,
then the \(F_{2}\)-geodesics, conjugate locus and cut locus are obtained from the \(h\)-geodesics, conjugate locus and cut locus deformed by the flow of \(V_{0}+V+W\), respectively. Observe that in this case, the \(F_{2}\)-geodesics, conjugate locus and cut locus are different from those in Theorem 3.8.
**Remark 3.10**: The construction presented here can now be extended to a sequence of Finsler metrics.
Our sequence construction has two steps. Let \((M,h)\) be a Riemannian two-sphere of revolution, and \(V_{0},V_{1},\ldots,V_{k-1},W_{k},\ldots,W_{n}\in\mathcal{X}(M)\), \(n=k-l\), a sequence of vector fields on \(M\).
**Step 1.** A sequence of vector fields: \(V_{0},V_{1},\ldots,V_{k-1}\), such that all \(V_{i}\) are \(F_{i}\)-Killing fields, \(i\in\{0,1,\ldots,k-1\}\),
\[(M,h)\ \xrightarrow[\ h\text{-Killing}\ ]{\ V_{0},\ \|V_{0}\|_{h}<1\ }\ (M,F_{1}=\alpha_{1}+\beta_{1})\ \xrightarrow[\ F_{1}\text{-Killing}\ ]{\ V_{1},\ F_{1}(-V_{1})<1\ }\ (M,F_{2}=\alpha_{2}+\beta_{2})\ \xrightarrow[\ F_{2}\text{-Killing}\ ]{\ V_{2},\ F_{2}(-V_{2})<1\ }\ \cdots\ \xrightarrow[\ F_{k-1}\text{-Killing}\ ]{\ V_{k-1},\ F_{k-1}(-V_{k-1})<1\ }\ (M,F_{k}=\alpha_{k}+\beta_{k});\]
**Step 2.** A sequence of vector fields: \(W_{k}\ldots,W_{l}\), such that each \(\beta_{j}\) is closed one-form, for \(j\in\{k,k+1,\ldots,l\}\).
\[(M,F_{k}=\alpha_{k}+\beta_{k})\ \xrightarrow[\ d\beta_{k+1}=0\ ]{\ W_{k},\ F_{k}(-W_{k})<1\ }\ (M,F_{k+1}=\alpha_{k+1}+\beta_{k+1})\ \xrightarrow[\ d\beta_{k+2}=0\ ]{\ W_{k+1},\ F_{k+1}(-W_{k+1})<1\ }\ (M,F_{k+2}=\alpha_{k+2}+\beta_{k+2})\ \cdots\ \xrightarrow[\ d\beta_{n}=0\ ]{\ W_{n-1},\ F_{n-1}(-W_{n-1})<1\ }\ (M,F_{n}=\alpha_{n}+\beta_{n}).\]
Theorem 3.8 can be naturally extended to the two-step construction above. Indeed, if we start with a Riemannian structure \((M,h)\) and a sequence of vector fields \(V_{0},V_{1},\ldots,V_{k-1}\in\mathcal{X}(M)\), the Zermelo's navigation problems for
\[(h,V_{0})\text{ with solution }F_{1}=\alpha_{1}+\beta_{1},\] \[(F_{1},V_{1})\text{ with solution }F_{2}=\alpha_{2}+\beta_{2},\] \[\vdots\] \[(F_{k-1},V_{k-1})\text{ with solution }F_{k}=\alpha_{k}+\beta_{k},\]
will generate a sequence of positive defined Randers metrics provided \(\|V_{0}\|_{h}<1\), \(F_{i}(-V_{i})<1\), \(i\in\{1,\ldots,k-1\}\). The Zermelo's navigation data for \(F_{i}\) is also \((h,V_{0}+\ldots+V_{i-1})\), for all \(i\in\{1,\ldots,k\}\), hence \(F_{k}\) is positive defined if and only if \(\|V_{0}+\ldots+V_{k-1}\|_{h}<1\).
Next, if we start with \((M,F_{k})\) and the sequence of vector fields \(W_{k},\ldots,W_{n-1}\in\mathcal{X}(M)\) the Zermelo's navigation problems for
\[(F_{k}=\alpha_{k}+\beta_{k},W_{k})\text{ with solution }F_{k+1}= \alpha_{k+1}+\beta_{k+1},\] \[(F_{k+1}=\alpha_{k+1}+\beta_{k+1},W_{k+1})\text{ with solution }F_{k+2}= \alpha_{k+2}+\beta_{k+2},\] \[\vdots\] \[(F_{n-1},W_{n-1})\text{ with solution }F_{n}=\alpha_{n}+\beta_{n},\]
will generate another sequence of positive defined Randers metrics provided \(F_{k+j}(-W_{k+j})<1\), \(j\in\{0,1,2,\ldots,n-k-1\}\).
Observe again that by combining these with the sequence of Randers metrics constructed at first step, we can easily see that the Zermelo's navigation data of \(F_{k+j}\), \(j\in\{0,1,\ldots,n-k\}\) is \((h,V_{0}+\ldots+V_{k-1}+W_{k}+\ldots+W_{k+j})\), hence the final Randers metric \(F_{n}=\alpha_{n}+\beta_{n}\) is positive defined if and only if
\[\left\|\sum_{i=0}^{k-1}V_{i}+\sum_{j=0}^{n-k-1}W_{j+k}\right\|_{h}<1.\]
Moreover, if we impose conditions
* \(V_{0}\) is \(h\)-Killing;
* \(V_{i}\) is \(F_{i}\)-Killing, \(i\in\{1,\ldots,k-1\}\);
* \(W_{k+j}\) is chosen such that \(d\beta_{k+j}=0\), \(j\in\{0,\ldots,n-k\}\),
then, clearly, the geodesics, conjugate and cut loci of \(F_{n}\) can be obtained from the geodesics, conjugate locus and cut locus of \(h\) through the flow of \(V:=\sum_{i=0}^{k-1}V_{i}\), respectively.
Observe that condition \((C_{2j})\) are similar to (C2) in Theorem 3.8, but we prefer not to write them here explicitly, for simplicity.
This is the generalization of Theorem 3.8 to the sequence of Finsler metrics \(\{F_{1},\ldots,F_{n}\}\).
Nevertheless, there is a shortcut in this construction in the spirit of Remark 3.9. Indeed, if \(V+W\) is \(h\)-Killing, where \(V=\sum_{i=0}^{k-1}V_{i}\), \(W=\sum_{j=0}^{n-k-1}W_{k+j}\), then the geodesics, conjugate and cut loci of \(F_{n}\) are obtained from the geodesics, conjugate and cut loci of \(h\) through the flow of \(V+W\), respectively.
## 4 Conclusions
We will consider a simple example of the construction described in Theorem 3.8.
Let us start with the Riemannian two-sphere of revolution \((M\simeq\mathbb{S}^{2},h=dr^{2}+m^{2}(r)d\theta^{2})\) given in Section 2, Proposition 2.5. The vector field \(V_{0}\in\mathcal{X}(M)\) is \(h\)-Killing if and only if it is a rotation, i.e. \(V_{0}=\mu_{0}\frac{\partial}{\partial\theta}\), \(\mu_{0}\) constant, where \((r,\theta)\) are the \(h\)-geodesic coordinates. In order that \(F_{0}\) is positive defined we need the condition \(\|V_{0}\|_{h}<1\), i.e. \(\mu_{0}^{2}m^{2}(r)<1\).
Next, we consider the vector field \(V\in\mathcal{X}(M)\) which is also \(h\)-Killing if and only if \(V=\mu_{1}\frac{\partial}{\partial\theta}\) with \(\mu_{1}\) constant (see Remark 3.9). The Randers metric \(F_{1}=\alpha_{1}+\beta_{1}\) is positive defined if and only if \((\mu_{0}+\mu_{1})^{2}m^{2}(r)<1\), i.e. we choose \(\mu_{0},\mu_{1}\) such that \(m(r)<\frac{1}{\mu_{0}+\mu_{1}}\).
Finally, we construct a vector field \(W\in\mathcal{X}(M)\) such that \(d\beta=0\). For instance
\[W=A(r)\frac{\partial}{\partial r}-\mu\frac{\partial}{\partial\theta},\]
where \(\mu:=\mu_{0}+\mu_{1}\) is an obvious choice. Observe that \(V_{0}+V+W=A(r)\frac{\partial}{\partial r}\), hence \(\beta_{2}=-\frac{A(r)}{1-A^{2}(r)}dr\). If we impose condition \(A^{2}(r)<1\), then \(F_{2}\) is a positive defined Randers metric.
We obtain
**Proposition 4.1**: _Let \((M\simeq\mathbb{S}^{2},h=dr^{2}+m^{2}(r)d\theta^{2})\) be the Riemannian two-sphere of revolution described in Proposition 2.5. Let_
\[V_{0}=\mu_{0}\frac{\partial}{\partial\theta},\ V=\mu_{1}\frac{\partial}{ \partial\theta},\ W=A(r)\frac{\partial}{\partial r}-\mu\frac{\partial}{ \partial\theta},\]
_be three vector fields on \(M\), where \(\mu=\mu_{0}+\mu_{1}\)._
* _If_ \(m(r)<\frac{1}{\mu}\) _for all_ \(r\in[0,\pi]\) _and_ \(A:[0,\pi]\to[0,\infty)\) _is smooth function such that_ \(A^{2}(r)<1\)_, then the Finsler metrics_ \(F_{0}=\alpha_{0}+\beta_{0}\)_,_ \(F_{1}=\alpha_{1}+\beta_{1}\)_,_ \(F_{2}=\alpha_{2}+\beta_{2}\)_, obtained as solutions of Zermelo's navigation problem with data_ \((h,V_{0})\)_,_ \((F_{0},V)\) _and_ \((F_{1},W)\)_, respectively, are positive defined Randers metrics._
* _The Randers metrics_ \(F_{1}=\alpha_{1}+\beta_{1}\) _and_ \(F_{2}=\alpha_{2}+\beta_{2}\) _can be obtained as solutions of Zermelo's navigation problem with data_ \(\left(h,\mu\frac{\partial}{\partial\theta}\right)\) _and_ \((h,A(r)\frac{\partial}{\partial r})\)_, respectively._
* _iii.1 The unit speed_ \(F_{2}\)_-geodesics are given by_ \[\mathcal{P}(t)=\psi_{t}(\rho(t)),\] _where_ \(\rho\) _are unit speed_ \(h\)_-geodesics and_ \(\psi_{t}\) _is the flow of_ \(\widetilde{V}=V_{0}+V+W=A(r)\frac{\partial}{\partial r}\)_._
* _The point_ \(\widehat{p}=\mathcal{P}(l)\) _is conjugate to_ \(\widehat{q}:=\mathcal{P}(0)\) _along the_ \(F_{2}\)_-geodesic_ \(\mathcal{P}:[0,l]\to M\) _if and only if_ \(q=\mathcal{P}(0)=\rho(0)\) _is conjugate to_ \(p:=\rho(l)\) _along the corresponding_ \(h\)_-geodesic_ \(\rho(t)=\psi_{-t}(\mathcal{P}(t))\)_,_ \(t\in[0,l]\)_._
* _The cut locus of a point_ \(q\in(M,F_{2})\) _is a subarc of the antipodal parallel displaced by the flow_ \(\varphi_{t}\)_._
One can describe the Finsler metric \(F_{2}\) in coordinates as follows. If \(h\) is given by \(ds^{2}=dr^{2}+m^{2}(r)d\theta^{2}\) in the geodesic coordinates \((r,\theta)\in(0,\pi]\times[0,2\pi)\), then
\[\alpha_{2}^{2} =\frac{1}{\widetilde{\lambda}^{2}(r)}dr^{2}+\frac{m^{2}(r)}{\widetilde{\lambda}(r)}d\theta^{2},\qquad \beta_{2} =-\frac{A(r)}{\widetilde{\lambda}(r)}dr,\qquad \widetilde{\lambda}(r):=1-A^{2}(r).\]
More precisely, if \(m(r)=\frac{1}{1-2\alpha}\sin(r-\alpha\sin 2r)\) (see Example 2.9), for any \(\alpha\in\left(0,\frac{1}{2}\right)\) and \(A(r):=\frac{r}{\sqrt{r^{2}+1}}\), then \(\widetilde{\lambda}=\frac{1}{r^{2}+1}\) hence the Finsler metric \(F_{2}\) is given by
\[\alpha_{2}^{2} =(r^{2}+1)^{2}dr^{2}+\frac{r^{2}+1}{(1-2\alpha)^{2}}\sin^{2}(r-\alpha\sin 2r)\,d\theta^{2},\qquad \beta_{2} =-r\sqrt{r^{2}+1}\ dr,\qquad r\in(0,\pi],\ \theta\in[0,2\pi).\]
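These coordinate expressions can be double-checked symbolically from the navigation formulas (3.7) with data \((h,\,U=A(r)\frac{\partial}{\partial r})\). The SymPy sketch below is our own illustration (the variable names are ours).

```python
import sympy as sp

r, alpha = sp.symbols('r alpha', positive=True)

m = sp.sin(r - alpha * sp.sin(2 * r)) / (1 - 2 * alpha)  # warp function from Example 2.9
A = r / sp.sqrt(r**2 + 1)                                 # the chosen A(r)
lam = sp.simplify(1 - A**2)                               # tilde-lambda = 1/(r^2 + 1)

# Navigation formulas (3.7): a_ij = h_ij/lam + U_i U_j/lam^2, b_i = -U_i/lam,
# with U_r = A(r), U_theta = 0 (indices lowered with h = dr^2 + m(r)^2 dtheta^2)
a_rr = sp.simplify(1 / lam + A**2 / lam**2)   # -> (r^2 + 1)^2
a_tt = sp.simplify(m**2 / lam)                # -> (r^2 + 1) * m(r)^2
b_r  = sp.simplify(-A / lam)                  # -> -r*sqrt(r^2 + 1)

print(lam, a_rr, a_tt, b_r, sep='\n')
```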
Other examples can be similarly constructed from the Riemannian examples in [16].
**Remark 4.2**: Observe that in order to construct Randers metric having same cut locus structure as the Riemannian metric \(h\), another condition is also possible. Indeed, choosing
\[V_{0}: =v_{0}(r,\theta)\frac{\partial}{\partial r}+w_{0}(r,\theta)\frac{ \partial}{\partial\theta},\] \[V: =-v_{0}(r,\theta)\frac{\partial}{\partial r}+[\mu-w_{0}(r,\theta )]\frac{\partial}{\partial\theta},\]
will lead to \(V_{0}+V=\mu\frac{\partial}{\partial\theta}\) which is \(h\)-Killing and combined with \(W=A(r)\frac{\partial}{\partial r}-\mu\frac{\partial}{\partial\theta}\) the derived conclusion follows, for any smooth functions \(v_{0},w_{0}\) and constant \(\mu\) such that \(m(r)<\frac{1}{\mu}\).
|
2309.16799 | Induced supersolidity in a Dy-Er mixture | Recent experimental realization of the heteronuclear dipolar mixture of Dy
and Er atoms opens fascinating prospects for creating intriguing novel phases
in dipolar quantum gases. The experimentally measured value of intra-species
$s$-wave scattering length of $^{166}$Er condensate in a $^{164}$Dy-$^{166}$Er
mixture is larger than its intra-species dipolar length, implies that the
$^{166}$Er condensate itself will not be in a regime of dominated dipole-dipole
interaction (DDI). However, we find that the presence of $^{164}$Dy atoms with
high magnetic moment induces droplet nucleation and supersolidity in $^{166}$Er
condensate via the long-range and anisotropic inter-species DDI. Remarkably, we
find that the imbalance in the magnetic dipole moment combined with its strong
anisotropic coupling led to the emergence of unique ground state phases. The
emerging phases include doubly superfluid states, a mixture of insulating
droplets and supersolid states, binary supersolids with uniform and alternating
domains and a combination of supersolid-superfluid mixed states. We delineate
the properties of all these ground state phases and construct a phase diagram.
We also explore the dynamical evolution across these phase boundaries via a
linear quench of inter-species scattering length. Although we have demonstrated
the result for the $^{164}$Dy-$^{166}$Er mixture, our results are generally
valid for other dipolar bosonic mixtures of different Dy-Er isotope
combinations and may become an important benchmark for future experimental
scenarios. | Soumyadeep Halder, Subrata Das, Sonjoy Majumder | 2023-09-28T19:02:52Z | http://arxiv.org/abs/2309.16799v1 |
# Induced supersolidity in a Dy-Er mixture
###### Abstract
Recent experimental realization of the heteronuclear dipolar mixture of Dy and Er atoms opens fascinating prospects for creating intriguing novel phases in dipolar quantum gases. The experimentally measured value of intra-species \(s\)-wave scattering length of \({}^{166}\)Er condensate in a \({}^{164}\)Dy-\({}^{166}\)Er mixture is larger than its intra-species dipolar length, implies that the \({}^{166}\)Er condensate itself will not be in a regime of dominated dipole-dipole interaction (DDI). However, we find that the presence of \({}^{164}\)Dy atoms with high magnetic moment induces droplet nucleation and supersolidity in \({}^{166}\)Er condensate via the long-range and anisotropic inter-species DDI. Remarkably, we find that the imbalance in the magnetic dipole moment combined with its strong anisotropic coupling led to the emergence of unique ground state phases. The emerging phases include doubly superfluid states, a mixture of insulating droplets and supersolid states, binary supersolids with uniform and alternating domains and a combination of supersolid-superfluid mixed states. We delineate the properties of all these ground state phases and construct a phase diagram. We also explore the dynamical evolution across these phase boundaries via a linear quench of inter-species scattering length. Although we have demonstrated the result for the \({}^{164}\)Dy-\({}^{166}\)Er mixture, our results are generally valid for other dipolar bosonic mixtures of different Dy-Er isotope combinations and may become an important benchmark for future experimental scenarios.
## I Introduction
Supersolid (SS) is an intriguing state of matter that combines superfluidity and a spatial crystalline structure. This unique state was predicted nearly six decades ago and initially searched for in \({}^{4}\)He [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Despite being elusive in solid helium, recent advancements in ultracold quantum gases have made it possible to realize SS features. Bose-Einstein condensates (BEC) of Rydberg atoms [11; 12; 13], with spin-orbit coupling [14; 15; 16; 17; 18; 19; 20; 21], and in optical cavities [22; 23] have already demonstrated the formation of SS-like phases. The realization of the SS phase in ultracold dipolar bosonic gases composed of highly magnetic rare-earth Dy [24; 25; 26; 27; 28; 29] and Er [30; 31; 32] atoms has attracted significant attention in recent years. In dipolar bosonic gases, the mean-field-driven collapse is arrested by quantum fluctuations [33; 34; 30; 35]. The interplay between the short-range isotropic contact interaction (CI) and long-range anisotropic dipole-dipole interaction (DDI) in a dipolar BEC gives rise to a rich plethora of physical phenomena, including anisotropic superfluidity [36; 37], the appearance of roton excitations [38; 39; 40; 41; 42], the formation of self-bound droplets [43; 44; 45; 46; 47] and SS phases both in quasi-one [48; 49; 50; 51; 52; 53] and two-dimensional [54; 55] trapping confinement. Theoretical and experimental studies over the past few years reveal exotic pattern formation in dipolar BECs [56; 57; 58; 59; 60; 61; 62], nucleation of quantized vortices in the SS state [63; 64; 65; 66; 67], formation of self-bound thin disk-shaped macrodroplets [68] and the SS state in the anti-dipolar regime [69; 70].
Binary mixture offers more exciting facets resulting from the interplay between intra- and inter-species interactions. Isotropic miscible quantum droplets have been realized in non-dipolar binary homonuclear [71; 72; 73; 74] and heteronuclear [75; 76] bosonic mixtures with attractive inter-species CI. Non-dipolar binary mixtures of alkali atoms can form polar molecules possessing a large electric dipole moment featuring various exotic supersolid states [77].
Recent experimental realization of a quantum degenerate dipolar mixture of Dy and Er atoms [78], and the ability to control their inter-species interaction strength through the Feshbach resonance [79; 80] unveil new exciting aspects for the study of binary dipolar mixtures. Along with the intra- and inter-species CI, both components also exhibit long-range and anisotropic DDI, giving rise to the emergence of unique miscible and axially immiscible self-bound droplet states [81; 82; 83; 84]. Moreover, recent theoretical studies indicate that homonuclear binary dipolar mixtures have the potential to yield miscible and immiscible doubly SS and various mixed states both in quasi-one [85] and two-dimensional [86] trap confinement. However, in such scenarios, the substantial DDI necessitates the quantum stabilization of both condensates through the Lee-Huang-Yang (LHY) correction mechanism. Furthermore, the relatively high peak densities in these droplet and SS phases shorten their lifetime. On the contrary, a dipolar-nondipolar mixture can sustain a long-lifetime supersolid state [87; 88]. In the dipolar-nondipolar mixture, the dipolar component exhibits an insulating droplet (ID) or SS state, while the non-dipolar component serves as a background superfluid. However, the absence of inter-species DDI in such a system limits the physical richness and the emergence of the doubly SS phase is only feasible under the immiscible condition.
In contrast, both the condensates in a Dy-Er mixture possess substantial yet distinct magnetic dipole moments, resulting in a robust long-range anisotropic DDI coupling between the condensates. The imbalance in the DDI strength together with the strong intra- and inter-species interactions (CI and DDI), promises a more profound array of complex many-body phenomena. Driven by these facts, we uncover some of the diverse physical phenomena inherent in the Dy-Er mixture.
In this article, we theoretically investigate the possibility of forming various intriguing ground state phases of the \({}^{164}\)Dy-\({}^{166}\)Er mixture confined in a quasi-two-dimensional trapping geometry. As per recent experimental findings, the intra-species \(s\)-wave scattering length of \({}^{166}\)Er condensate in a \({}^{164}\)Dy-\({}^{166}\)Er mixture indicates that the \({}^{166}\)Er condensate will not be in a dipole-dominated interaction regime. However, in this work, we have demonstrated that even within this experimentally constrained parameter regime, the \({}^{164}\)Dy condensate can induce supersolidity in \({}^{166}\)Er condensate. This intriguing phenomenon arises as a consequence of the anisotropic long-range DDI coupling between the two condensates. In our study, we observe fascinating ground state phases in the \({}^{164}\)Dy-\({}^{166}\)Er mixture, including a doubly superfluid (SF-SF), a mixed state featuring ID and SS, a uniform domain supersolid (UDS), an alternating domain supersolid (ADS) and a SS-SF mixed states. We characterize these distinct miscible and immiscible ground state phases by quantifying the overlap between the two species and determining the superfluid fraction of each species. We delineate all these phase boundaries and construct the captivating groundstate phase diagram within the experimentally feasible parameter range. Employing the time-dependent coupled extended Gross-Pitaevskii equation (eGPE), we also monitor the dynamical evolution of these phases during quench-induced dynamics.
This paper is structured as follows. Section II describes the theory and formalism, including the coupled eGPE. Section III is devoted to investigating all possible ground state phases of a \({}^{164}\)Dy-\({}^{166}\)Er mixture. In subsection III.1, we describe characterization processes to differentiate all possible ground state phases. In subsection III.2, we remark all these phase boundaries and construct the ground state phase diagram of a dipolar heteronuclear mixture of \({}^{164}\)Dy and \({}^{166}\)Er atoms. Subsection III.3 is devoted to comprehending how the interplay between intra- and inter-species CI and DDI results in the formation of these intriguing ground state phases. In Sec. IV, we explore real-time dynamics and the formation of various two-dimensional miscible-immiscible droplet and supersolid states by using the time-dependent eGPE. A summary of our findings, together with future aspects, is provided in Sec. V. Appendix A describes the ingredients of our numerical simulations.
## II Formalism
We consider a heteronuclear binary dipolar mixture consisting of \({}^{164}\)Dy and \({}^{166}\)Er atoms. Here we assume both species have the same atomic mass \(m=165\)u and neglect the effect of gravitational sag. This is a valid approximation since \({}^{164}\)Dy and \({}^{166}\)Er have a relative mass difference of less than \(1.5\%\). The atoms of the mixture are polarized along the \(z\) direction by an external magnetic field and confined in a circular symmetric harmonic trapping potential. At zero temperature, the temporal evolution of the macroscopic wave function \(\psi_{i}\) (\(i=1,2\)) is described by the coupled extended GP equations \(i\hbar\frac{\partial\psi_{i}}{\partial t}=\mathcal{H}_{i}^{\rm GP}\psi_{i}\), with,
\[\mathcal{H}_{i}^{\rm GP} =\Big{[}-\frac{\hbar^{2}}{2m}\nabla^{2}+V_{t}(\mathbf{r})+\sum_{j =1}^{2}\Big{(}g_{ij}|\psi_{j}(\mathbf{r},\mathbf{t})|^{2}+\] \[\int d\mathbf{r}^{\prime}V_{ij}^{\rm dd}(\mathbf{r}-\mathbf{r}^{ \prime})|\psi_{j}(\mathbf{r}^{\prime},\mathbf{t})|^{2}\Big{)}+\Delta\mu_{i} \Big{]}. \tag{1}\]
Here, \(V_{t}(\mathbf{r})=\frac{1}{2}m_{i}\omega^{2}(x^{2}+y^{2}+\lambda^{2}z^{2})\) is the harmonic trapping potential with angular frequencies \(\omega_{x}=\omega_{y}=\omega,\omega_{z}\) and \(\lambda=\omega_{z}/\omega\) is the trap aspect ratio. The contact interaction strength between the atoms of species \(i\) and \(j\) is characterized by the coupling constant \(g_{ij}=4\pi\hbar^{2}a_{ij}/m\) with \(a_{ij}\) being the \(s\)-wave scattering length of the atoms. In addition to the contact interactions, there exists a long-range DDI between the atoms and it takes the following form
\[V_{ij}^{\rm dd}(\mathbf{r})=\frac{3g_{ij}^{\rm dd}}{4\pi}\left(\frac{1-3\cos^{2}\theta}{r^{3}}\right), \tag{2}\]
Here \(g_{ij}^{\rm dd}=4\pi\hbar^{2}a_{ij}^{\rm dd}/m\) is the dipolar coupling constant with the dipolar length \(a_{ij}^{\rm dd}=\mu_{0}\mu_{i}^{m}\mu_{j}^{m}m/12\pi\hbar^{2}\), where \(\mu_{0}\) is the vacuum permeability and \(\theta\) is the angle between the axis linking two dipolar atoms and their dipole polarization direction (\(z\)-axis). The last term appearing in the Hamiltonian [Eq.(1)] represents the correction to the chemical potential resulting from the effect of quantum fluctuation (known as LHY correction) given by [81; 82; 83]
\[\Delta\mu_{i}=\frac{m_{i}^{3/2}}{3\sqrt{2}\pi^{2}\hbar^{3}}\sum_{\pm}\int_{0}^ {1}{\rm d}u\ \operatorname{Re}\{I_{i\pm}\}, \tag{3}\]
where
\[I_{1\pm}=\bigg{(}\tilde{U}_{11} \pm\frac{\delta_{1}\tilde{U}_{11}+2\tilde{U}_{12}^{2}n_{2}}{\sqrt {\delta_{1}^{2}+4\tilde{U}_{12}^{2}n_{1}n_{2}}}\bigg{)}\bigg{(}n_{1}\tilde{U} _{11}+\] \[n_{2}\tilde{U}_{22}\pm\sqrt{\delta_{1}^{2}+4\tilde{U}_{12}^{2}n _{1}n_{2}}\bigg{)}^{3/2}, \tag{4}\]
with \(\delta_{1}=n_{1}\tilde{U}_{11}-n_{2}\tilde{U}_{22}\), and \(\tilde{U}_{ij}(u)=g_{ij}[1+a_{ij}^{\rm dd}/a_{ij}(3u^{2}-1)]\), being the Fourier transform of the DDI potential. A similar expression for \(\Delta\mu_{2}\) can be easily obtained with \(\delta_{2}=-\delta_{1}\).
However the Dy and Er have magnetic dipole moments \(\mu_{11}^{m}=9.93\mu_{B}\) and \(\mu_{22}^{m}=7\mu_{B}\), which corresponds to the dipolar lengths \(a_{11}^{\rm dd}=131a_{B}\) and \(a_{22}^{\rm dd}=65.5a_{B}\), respectively. Whereas the experimentally measured1 value of intra-species scattering lengths of \({}^{164}\)Dy and \({}^{166}\)Er condensates are \(a_{11}=90a_{B}\) and \(a_{22}=80a_{B}\), respectively [28, 30, 78, 79, 80]. Since \(a_{22}>a_{22}^{\rm dd}\), it implies that in a \({}^{164}\)Dy-\({}^{166}\)Er mixture, \({}^{166}\)Er condensate itself is not in a dipole-dominated interaction regime and stable against the mean field collapse. Therefore, the LHY correction is mainly relevant for the \({}^{164}\)Dy condensate2. Under this condition, the correction to the chemical potential energy is well approximated by the known form of LHY correction for a single species dipolar BEC \(\Delta\mu_{1}=\gamma_{1}(\epsilon_{11}^{\rm dd})n_{1}^{3/2}\)[33, 34], where the quantum fluctuation strength \(\gamma_{1}(\epsilon_{11}^{\rm dd})\) can be written as
Footnote 1: To produce a \({}^{164}\)Dy-\({}^{166}\)Er mixture experimentally, a magnetic field of \(B=2.075\) G [78] is required for the evaporative cooling process which corresponds to the value of \(a_{11}=90a_{B}\) and \(a_{22}=80a_{B}\) for \({}^{164}\)Dy and \({}^{166}\)Er condensate, respectively [28, 30, 80].
\[\gamma_{1}(\epsilon_{11}^{\rm dd})=\frac{32}{3}g_{11}\sqrt{\frac{a_{11}^{3}}{ \pi}}F(\epsilon_{11}^{\rm dd}), \tag{5}\]
with
\[F(\epsilon_{11}^{\rm dd})=\text{Re}\bigg{\{}\int_{0}^{1}\text{d}u(1+\epsilon_ {11}^{\rm dd}(3u^{2}-1))^{5/2}\bigg{\}} \tag{6}\]
and the dimensionless parameter \(\epsilon_{11}^{\rm dd}=a_{11}^{\rm dd}/a_{11}\), quantifies the relative strength of DDI to the contact interaction between the atoms of \({}^{164}\)Dy. The order parameters of each condensate are normalized to the total number of atoms in that species, \(N_{i}=\int\text{d}\textbf{r}{|\psi_{i}(\textbf{r})|}^{2}\).
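For concreteness, the interaction parameters used above can be reproduced numerically. The short Python sketch below is our own illustration: it evaluates the dipolar lengths \(a_{ij}^{\rm dd}=\mu_{0}\mu_{i}^{m}\mu_{j}^{m}m/12\pi\hbar^{2}\) and the auxiliary LHY function \(F(\epsilon_{\rm dd})\) of Eq. (6) by simple quadrature.

```python
import numpy as np
from scipy import constants as cst

m = 165 * cst.atomic_mass                    # common mass approximation used in the text
muB = cst.value('Bohr magneton')
a0 = cst.value('Bohr radius')
mu = {'Dy': 9.93 * muB, 'Er': 7.0 * muB}     # magnetic moments of 164Dy and 166Er

def a_dd(mu_i, mu_j):
    """Dipolar length mu0 * mu_i * mu_j * m / (12 pi hbar^2), in units of a0."""
    return cst.mu_0 * mu_i * mu_j * m / (12 * np.pi * cst.hbar**2) / a0

def F_lhy(eps_dd, n=4001):
    """Auxiliary LHY function of Eq. (6): Re{ int_0^1 (1 + eps(3u^2-1))^{5/2} du }."""
    u = np.linspace(0.0, 1.0, n)
    f = ((1.0 + eps_dd * (3.0 * u**2 - 1.0)) + 0j) ** 2.5
    return float(np.sum(0.5 * (f[1:] + f[:-1]).real) * (u[1] - u[0]))

print(a_dd(mu['Dy'], mu['Dy']))   # ~131 Bohr radii (intra-species Dy)
print(a_dd(mu['Er'], mu['Er']))   # ~65.5 Bohr radii (intra-species Er)
print(a_dd(mu['Dy'], mu['Er']))   # inter-species dipolar length
print(F_lhy(131.0 / 90.0))        # F(eps_dd) for eps_dd = a11^dd / a11
```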
## III Groundstate phases of Dy-Er mixture
In this section, we investigate the groundstate phases of a \({}^{164}\)Dy-\({}^{166}\)Er mixture with an equal number of atoms in each condensate (\(N_{1}=N_{2}=N\)) and the intra-species \(s\)-wave scattering lengths \(a_{11}=90a_{B}(<a_{11}^{\rm dd}=131a_{B})\) and \(a_{22}=80a_{B}(>a_{22}^{\rm dd}=65.5a_{B})\)[28, 30]. We consider a pancake-shaped trap geometry to confine the mixture with \((\omega_{x},\omega_{y},\omega_{z})=2\pi\times(45,45,133)\) Hz. With this choice of parameters and without any inter-species coupling (single-component case) the Dy atoms form a periodic density modulated SS state, whereas the Er atoms form a SF state as shown in Fig. 1. However, the presence of inter-species coupling in a Dy-Er mixture makes it a more intriguing candidate to investigate new possibilities in dipolar quantum gases. In the following, we evaluate the groundstate phases of the mixture as a function of the number of particles \(N\) and inter-species scattering length \(a_{12}\). We observe that the Dy condensate can induce droplet nucleation and supersolidity in the Er condensate.
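As a rough point of reference (a back-of-the-envelope estimate, not part of the original analysis), the harmonic-oscillator lengths of this trap set the natural in-plane and axial length scales:

```python
import numpy as np
from scipy import constants as cst

m = 165 * cst.atomic_mass                      # common mass approximation used above
for label, f in (("radial", 45.0), ("axial", 133.0)):   # trap frequencies in Hz
    l_ho = np.sqrt(cst.hbar / (m * 2 * np.pi * f))
    print(label, round(l_ho * 1e6, 2), "micron")         # ~1.2 and ~0.7 micrometers
```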
### Characterization of distinct ground state phases
Dy-Er mixture can exhibit either a miscible or an immiscible phase. We differentiate these two phases by evaluating the overlap integral [89, 90]
\[\Lambda=\frac{\left[\int d\textbf{r}n_{1}(\textbf{r})n_{2}(\textbf{r})\right] ^{2}}{\left[\int d\textbf{r}n_{1}^{2}(\textbf{r})\right]\left[\int d\textbf{r }n_{2}^{2}(\textbf{r})\right]}, \tag{7}\]
where \(n_{i}(\textbf{r})=|\psi_{i}(\textbf{r})|^{2}\) is the density of species-\(i\). \(\Lambda=1\) implies maximal spatial overlap between the condensates, i.e., the system is in a completely miscible state, whereas a complete phase separation (immiscible phase) corresponds to \(\Lambda=0\). The variation of the overlap between the Dy and Er condensates with the number of particles \(N\) and inter-species scattering length \(a_{12}\) is depicted in Fig. 2.
The periodic density modulation due to the anisotropic DDI can be characterized by the density contrast [52]
\[\mathcal{C}=\frac{(n_{i}^{\rm max}-n_{i}^{\rm min})}{(n_{i}^{\rm max}+n_{i}^{ \rm min})}. \tag{8}\]
Here \(n_{i}^{\rm max}\) and \(n_{i}^{\rm min}\) denote the neighboring maxima and minima of the \(i\)-th component observed on the x-y plane. A SF state is associated with a smooth density distribution where \(n_{i}^{\rm max}=n_{i}^{\rm min}\), leading to \(\mathcal{C}=0\). In case of an ID state where there is no overlap between the droplets
Figure 1: Ground state density profile of single component dipolar BEC. (a) Density distributions of Dy condensate with \(\mu_{1}^{m}=9.93\mu_{B}\) and \(a_{11}=90a_{B}\) and (b) Er condensate with \(\mu_{2}^{m}=7\mu_{B}\) and \(a_{22}=80a_{B}\) in the \(x-y\) plane, where \(a_{B}\) is the Bohr radius and \(\mu_{B}\) is the Bohr Magneton. Each of these single component dipolar BEC consists of \(N=6\times 10^{4}\) number of atoms and is confined in a circular symmetric harmonic trap with \((\omega_{x},\omega_{y},\omega_{z})=2\pi\times(45,45,133)\) Hz. The different color bar for each panel denotes the density of the corresponding condensate in units of \(\mu\)m\({}^{-3}\).
(\(n_{\text{min}}\approx 0\)), Eq. 8 yields \(\mathcal{C}\approx 1\). Conversely, a SS state is characterized by overlapping droplets (\(n_{i}^{\text{min}}\neq 0\)), resulting in a density contrast \(\mathcal{C}\) that attains an intermediate value between 0 and 1 [68; 86].
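The two diagnostics just defined are straightforward to evaluate on gridded densities. The snippet below is a minimal Python sketch (our own illustration; the Gaussian test profiles are toy data, not simulation output):

```python
import numpy as np

def overlap_Lambda(n1, n2, dV):
    """Overlap integral of Eq. (7) for two densities sampled on a common grid."""
    num = (np.sum(n1 * n2) * dV) ** 2
    den = (np.sum(n1**2) * dV) * (np.sum(n2**2) * dV)
    return num / den

def density_contrast(n_max, n_min):
    """Density contrast of Eq. (8) from neighbouring density maxima and minima."""
    return (n_max - n_min) / (n_max + n_min)

# toy illustration: two Gaussian clouds on a 2D grid (Eq. (7) is 3D; 2D suffices here)
x = np.linspace(-8.0, 8.0, 256)
X, Y = np.meshgrid(x, x, indexing='ij')
dV = (x[1] - x[0]) ** 2
cloud = lambda x0: np.exp(-((X - x0) ** 2 + Y**2) / 4.0)

print(overlap_Lambda(cloud(0.0), cloud(0.0), dV))    # -> 1.0  (fully miscible)
print(overlap_Lambda(cloud(-2.0), cloud(2.0), dV))   # -> < 1  (partially separated)
print(density_contrast(n_max=1.0, n_min=0.2))        # -> 2/3  (supersolid-like contrast)
```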
Nevertheless, the mere presence of a crystal-like structure alone does not conclusively indicate a SS phase. To better understand the degree of supersolidity, the superfluid fraction serves as a valuable measure. In the \(x-y\) plane, the superfluid fraction is a rank-2 tensor with \(x\) and \(y\) as the eigen axes. The superfluid fraction of species-\(i\) along the direction \(\sigma\) is measured by applying a perturbation \(-v_{i}^{\sigma}\hat{P}_{i}^{\sigma}\) to the species-\(i\), where \(v_{i}^{\sigma}\) and \(\hat{P}_{i}^{\sigma}\) are the velocity and momentum operator, respectively3. The superfluid fraction of the species-\(i\) along the direction \(\sigma\) is given by
Footnote 3: This is equivalent to solving the stationary state in a comoving reference frame moving with a velocity \(v_{i}^{\sigma}\) along the axis \(\sigma\), i.e., \(i\hbar\psi_{i}=\left(\mathcal{H}_{i}^{GP}-v_{i}^{\sigma}\hat{P}_{i}^{\sigma} \right)\psi_{i}\).
\[f_{i}^{s,\sigma}=1-\lim_{v_{i}^{\sigma}\to 0}\frac{\left\langle\hat{P}_{i}^{ \sigma}\right\rangle}{N_{i}mv_{i}^{\sigma}}, \tag{9}\]
where \(\left\langle\hat{P}_{i}^{\sigma}\right\rangle=-i\hbar\int\psi_{i}^{*}\partial \psi_{i}/\partial\sigma\) is the expectation value of momentum and \(N_{i}mv_{i}^{\sigma}\) is the total momemtum of species-\(i\). In our analysis, we employ Leggett's upper bound within the central region of each condensate of size \(2L\times 2L\) to estimate the superfluid fraction along the axis \(\sigma\)[91; 92; 2]
\[f_{i}^{s,\sigma} =\Bigg{[}\left\langle n_{i}(\sigma)\right\rangle\left\langle \frac{1}{n_{i}(\sigma)}\right\rangle\Bigg{]}^{-1}\] \[=(2L)^{2}\Bigg{[}\int_{-L}^{L}\text{d}\sigma\ n_{i}(\sigma)\int_ {-L}^{L}\text{d}\sigma\frac{1}{n_{i}(\sigma)}\Bigg{]}^{-1}, \tag{10}\]
where \(\sigma=\{x,y\}\), \(x,y\in[-L,L]\) and \(n_{i}(x)=\int\int\text{d}y\text{d}z|\psi_{i}(x,y,z)|^{2}\). In our case, we consider \(L=4.5\mu\)m. The overall superfluid fraction of each component is estimated by taking the average of the superfluid fractions along the \(x\) and \(y\) directions, \(f_{i}^{s}=(f_{i}^{s,x}+f_{i}^{s,y})/2\).
Figure 2: Value of the overlap integral \(\Lambda\) of Dy-Er mixture as a function of inter-species scattering length \(a_{12}\) and the number of particles in each condensate \(N\). The black dashed contour has been drawn at \(\Lambda=0.5\), roughly separating the miscible and immiscible phase domain. To demark the completely miscible and immiscible phase domain, two other dotted and dash-dotted white contours are drawn at \(\Lambda=0.9\) and \(\Lambda=0.1\), respectively. The color bar corresponds to the value of the overlap integral \(\Lambda\). The result is for the case of binary dipolar BEC consisting of \({}^{164}\)Dy and \({}^{166}\)Er atoms with \(a_{11}=90a_{B}\), \(a_{11}^{\text{dd}}=131a_{B}\) and \(a_{22}=80a_{B}\), \(a_{22}^{\text{dd}}=65.5a_{B}\), respectively. The binary mixture is confined in a harmonic trap with \((\omega_{x},\omega_{y},\omega_{z})=2\pi\times(45,45,133)\) Hz.
Figure 3: Superfluid fraction \(f_{i}^{s}\) of (a) \({}^{164}\)Dy (\(f_{i}^{s}\)) and (b) \({}^{166}\)Er condensates (\(f_{i}^{s}\)) as a function of inter-species scattering length \(a_{12}\) and number of particles in each condensate \(N\). The black dash-dotted contour is drawn at \(f_{i}^{s}=0.1\). The region inside the contour represents ID phase domain. No such region is present for Er condensate. Other parameters are same as Fig. 2.
its average superfluid fraction \(f_{i}^{s}\) is less than \(0.1\). Conversely, a component with \(f_{i}^{s}>0.1\), accompanied by a periodic density modulation, is considered to be in a SS state. Additionally, a component with \(f_{i}^{s}>0.1\) and an unmodulated density profile is classified as a SF state. The superfluid fraction of each component is shown in Fig. 3.
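For concreteness, the following minimal NumPy sketch evaluates Leggett's upper bound of Eq. (10) on a gridded column density and applies the \(f_{i}^{s}\) thresholds used above. The analysis window \(L=4.5\,\mu\)m and the 0.1 cutoff follow the text, while the array names, the simple trapezoidal integration and the small regularizer are illustrative choices, not the implementation used for the results.

```python
import numpy as np

def leggett_fraction(n2d, x, y, L=4.5):
    """Leggett upper bound (Eq. 10), averaged over the x and y directions.

    n2d : column density n_i(x, y) = int dz |psi_i|^2 on a regular grid
    x, y: 1D coordinate arrays (same length units as L, here micrometres)
    L   : half-width of the central 2L x 2L analysis window
    """
    wx, wy = np.abs(x) <= L, np.abs(y) <= L
    n_box = n2d[np.ix_(wx, wy)]                  # restrict to the central window
    xs, ys = x[wx], y[wy]

    def f_along(n1d, s):
        num = np.trapz(n1d, s)                                   # int n_i(sigma) d sigma
        den = np.trapz(1.0 / np.maximum(n1d, 1e-12), s)          # int 1/n_i(sigma) d sigma
        return (2 * L) ** 2 / (num * den)

    n_x = np.trapz(n_box, ys, axis=1)            # n_i(x): integrate over y (z already integrated out)
    n_y = np.trapz(n_box, xs, axis=0)            # n_i(y): integrate over x
    return 0.5 * (f_along(n_x, xs) + f_along(n_y, ys))

def classify(fs, modulated):
    """Phase label from the thresholds used in the text."""
    if fs < 0.1:
        return "ID"                              # isolated droplets
    return "SS" if modulated else "SF"
```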
### Phase diagram
In Fig. 4, we depict the ground state phase diagram of the \({}^{164}\)Dy-\({}^{166}\)Er mixture. For a small number of particles, the binary mixture forms a SF-SF mixture [Fig. 4 ] characterized by a smooth, unmodulated density distribution in the \(x-y\) plane [see Figs. 5(a i) and 5(b i)]. As we increase the number of particles, the anisotropic DDI comes strongly into play. At low \(a_{12}\) (in the miscible phase domain \(\Lambda>0.8\)), owing to the large intra-species anisotropic DDI, the atoms of the \({}^{164}\)Dy condensate experience an attractive potential at certain positions, which leads to an increase in the density at those locations and consequently to the formation of an ID state [see Fig. 5(a ii)] with a superfluid fraction \(f_{1}^{s}<0.1\). As a consequence of these isolated density humps in the \({}^{164}\)Dy condensate, the atoms of the \({}^{166}\)Er condensate occupying those positions also experience a significant attractive inter-species DDI. This results in the formation of periodic density humps (droplets) within the \({}^{166}\)Er condensate. These induced droplets are coupled with each other via a background superfluid characterized by a lower peak density and a higher superfluid fraction \(f_{2}^{s}>0.1\) [see Fig. 5(b ii)]. The binary mixture thus collectively forms a miscible ID-SS mixed state within this phase domain [Fig. 4 ].
Increasing \(a_{12}\) up to the miscible-to-immiscible phase transition boundary induces a transition to the uniform domain supersolid (UDS) state [Fig. 4 ]. Within this phase regime, the first species, composed of \({}^{164}\)Dy atoms, forms a SS state with \(f_{1}^{s}>0.1\) [Fig. 5(a iii)]. Similar to the previous scenario, the \({}^{164}\)Dy condensate induces supersolidity in the \({}^{166}\)Er condensate as well. Notably, both condensates exhibit droplets located precisely at the same positions, and these droplets are interconnected by a background superfluid [see Figs. 5(a iii) and 5(b iii)]. Nevertheless, the peak density of both condensates is lower than the peak density in the SS state of a single-component dipolar BEC (\(a_{12}=0\)). Specifically, the peak density of the \({}^{166}\)Er condensate is approximately an order of magnitude smaller than that of the \({}^{164}\)Dy condensate, and the \({}^{166}\)Er condensate has an enhanced superfluid fraction (\(f_{2}^{s}>0.7\)). Although we have not considered three-body recombination losses, the reduced peak density of this UDS state implies a lower three-body recombination loss rate. As a consequence, the UDS state is expected to persist for a longer time.
As the value of \(a_{12}\) is further increased, the overlap between the binary components diminishes. We observe a phase domain in close proximity to the miscible-to-immiscible phase boundary, where the relatively large inter-species repulsive interaction leads to the emergence of contrasting density patterns between the binary components. Specifically, while one component exhibits a density maximum, the other component forms a density minimum. This results in the formation of an intriguing alternating domain supersolid (ADS) state [Fig. 4 ]. In this phase regime, the \({}^{164}\)Dy condensate adopts a hexagonal-shaped supersolid state [see Fig. 5(a iv)] with a superfluid fraction \(f_{1}^{s}>0.2\), whereas the \({}^{166}\)Er condensate forms a honeycomb supersolid phase [see Fig. 5(b iv)]. Remarkably, in this phase domain, the \({}^{166}\)Er condensate exhibits a lower peak density resembling that of a SF state and also displays an enhanced superfluid fraction (\(f_{2}^{s}>0.8\)).
Further increasing \(a_{12}\) beyond the ADS phase domain, the inter-species DDI becomes repulsive. As a consequence, the \({}^{166}\)Er condensate adopts a SF state, while the \({}^{164}\)Dy condensate exhibits a SS state due to the attractive intra-species DDI. The \({}^{166}\)Er condensate, owing to its relatively small intra-species \(s\)-wave scattering length, occupies the central position within the trap, while the \({}^{164}\)Dy condensate forms a ring-shaped supersolid configuration. The ring supersolid state exhibits a lower superfluid fraction (\(f_{1}^{s}>0.1\)) compared
Figure 4: Ground state phase diagram of Dy-Er mixture in an oblate trap as a function of inter-species scattering length \(a_{12}\) and number of particles in each condensate \(N\). Different color domains highlighted with different markers represent SF-SF (), ID-SS (), uniform domain supersolid (UDS ), alternating domain supersolid (ADS ), and SS-SF () phase domains, respectively. The result is for the case of binary dipolar BEC consisting of \({}^{164}\)Dy and \({}^{166}\)Er atoms with \(a_{11}=90a_{B}\), \(a_{11}^{dd}=131a_{B}\) (\(\mu_{1}^{m}=9.93\mu_{B}\)) and \(a_{22}=80a_{B}\), \(a_{22}^{dd}=65.5a_{B}\) (\(\mu_{2}^{m}=7\mu_{B}\)), respectively. The binary mixture is confined in a harmonic trap with \((\omega_{x},\omega_{y},\omega_{z})=2\pi\times(45,45,133)\) Hz.
to the SS states observed in the UDS and ADS phases. The density distributions of the \({}^{164}\)Dy and \({}^{166}\)Er condensate in this SS-SF phase domain [Fig. 4 \(\blacksquare\)] are shown in Figures 5(a v) and 5(b v), respectively. It is worth noting that the occurrence of SS-SF and ADS states has also been recently predicted for a dipolar-nondipolar mixture and an anti-aligned dipolar mixture [87; 88].
To further validate the crystalline structure, we compute the power spectrum of each species, \(S_{i}(k_{x},k_{y})=|\mathcal{F}[n_{i}(x,y,z=0)](k_{x},k_{y})|^{2}\), where \(n_{i}(x,y,z=0)\) is the density of species-\(i\) in the \(z=0\) plane and \(\mathcal{F}[n_{i}(x,y,0)](k_{x},k_{y})\) represents the Fourier transform of the corresponding density distribution from position space to momentum space. There is no additional density modulation along the axial direction, so examining the power spectrum in the \(k_{x}-k_{y}\) plane is sufficient to characterize the condensate's crystallinity. The insets of Fig. 5 represent the power spectrum in the \(k_{x}-k_{y}\) plane. In the SF-SF mixture, a solitary peak is evident at \(k_{x}=k_{y}=0\). However, in the other states, besides the central peak, we observe growth of the spectral weight at finite momentum values (\(k_{x},k_{y}\neq 0\)). These localized maxima in the power spectrum are organized in a triangular pattern, except for the \({}^{166}\)Er condensate in the ADS and SS-SF mixed states. The triangular lattice pattern in the power spectrum signifies the emergence of multiple Brillouin zones formed by the droplets observed in position space [56]. The power spectrum of the \({}^{166}\)Er condensate in the ADS state exhibits a hexagonal pattern, visible at lower momenta, accompanied by a triangular lattice pattern at higher momentum values. In the SS-SF state, apart from the central maximum, the \({}^{166}\)Er condensate showcases a hexagonal-shaped power spectrum at lower momentum values, attributed to the arrangement of density voids. The absence of a triangular lattice pattern indicates a lack of crystalline order, signifying that the \({}^{166}\)Er condensate is in the SF state.
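The power-spectrum diagnostic itself is a straightforward two-dimensional FFT of the planar density; a short NumPy sketch is given below. The array names and grid spacings are illustrative, not tied to the simulation code used for the results.

```python
import numpy as np

def power_spectrum(psi_z0, dx, dy):
    """S_i(kx, ky) = |FFT[n_i(x, y, z=0)]|^2 for one component.

    psi_z0 : complex wave function psi_i(x, y, z=0) on a regular grid
    dx, dy : grid spacings in the x-y plane
    """
    n = np.abs(psi_z0) ** 2                            # density in the z = 0 plane
    S = np.abs(np.fft.fftshift(np.fft.fft2(n))) ** 2   # power spectrum
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n.shape[0], d=dx))
    ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n.shape[1], d=dy))
    return kx, ky, S   # localized maxima at (kx, ky) != 0 signal a droplet crystal
```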
### Interaction energies
The interplay between intra- and inter-species CI and DDI results in the emergence of the above distinct ground state phases. The total energy of a Dy-Er mixture can be written as follows:
\[E= \int\mathrm{d}\mathbf{r}\left[\,\sum_{i=1}^{2}\left\{\frac{\hbar ^{2}}{2m_{i}}|\mathbf{\nabla}\psi_{i}(\mathbf{r})|^{2}+V_{t}|\psi_{i}(\mathbf{r} )|^{2}\right\}\right] \tag{11}\] \[+\,\sum_{i,j=1}^{2}\left(E_{ij}^{\mathrm{CI}}+E_{ij}^{\mathrm{ DDI}}\right)+\frac{2}{5}\gamma_{1}(\epsilon_{11}^{\mathrm{dd}})\int\mathrm{d} \mathbf{r}\left|\psi_{1}(\mathbf{r})\right|^{5}.\]
Here the first two terms represent the kinetic energy and external potential energy due to the harmonic confinement, respectively. The last term represents the beyond mean-field LHY interaction energy for the Dy condensate5. The energy terms \(E_{ij}^{\mathrm{CI}}\) and \(E_{ij}^{\mathrm{DDI}}\) refer to CI and
Figure 5: Ground state density profile of the Dy-Er mixture. The upper and lower panels show the density distributions of the \({}^{164}\)Dy and \({}^{166}\)Er condensates in the \(x-y\) plane, respectively. Each column represents a density distribution corresponding to the (a i, b i) SF-SF (\(\blacklozenge\)), (a ii, b ii) ID-SS (\(\blacklozenge\)), (a iii, b iii) UDS (\(\blacklozenge\)), (a iv, b iv) ADS (\(\blacktriangle\)) and (a v, b v) SS-SF (\(\blacksquare\)) ground state phases, respectively. Locations of all these phases are highlighted in the ground state phase diagram by the corresponding marker (see Fig. 4). The color bar denotes the density in units of \(\mu\)m\({}^{-3}\). The inset in the lower right corner of each density distribution represents the power spectrum \(S_{i}(k_{x},k_{y})\) in the \(k_{x}-k_{y}\) plane.
DDI energy, respectively and can be written as
\[E_{ij}^{\rm CI}=\frac{1}{2}\int{\rm d}{\bf r}\,g_{ij}|\psi_{i}({\bf r})|^{2}| \psi_{j}({\bf r})|^{2}, \tag{12}\]
and
\[E_{ij}^{\rm DDI}=\frac{1}{2}\int\int{\rm d}{\bf r}\,{\rm d}{\bf r}^{\prime}\,V_{ ij}^{\rm dd}({\bf r}-{\bf r}^{\prime})|\psi_{j}({\bf r}^{\prime})|^{2}|\psi_{i}({ \bf r})|^{2}. \tag{13}\]
Figure 6(a) illustrates the variation of intra-species and inter-species CI energies as well as DDI energies with \(a_{12}\) for \(N=6\times 10^{4}\) number of particles in each condensate. To quantify the relative strength of DDI energy over the CI energy for the species-\(i\), we introduce a dimensionless parameter \(\Delta{\cal E}_{i}=\sum_{j}E_{ij}^{\rm DDI}/\sum_{j}E_{ij}^{\rm CI}\). In Fig. 6(b), we have shown the variation of \(\Delta{\cal E}_{i}\) with the inter-species scattering length \(a_{12}\). In general, the value of \(\Delta{\cal E}_{i}<-1\) implies a dominating attractive DDI energy characterizing a self-bound ID phase. Conversely, \(\Delta{\cal E}_{i}>0\) signifies a SF phase with dominating repulsive interactions. In the SS state, the dimensionless parameter \(\Delta{\cal E}_{i}\) attains an intermediate value between \(-1\) to \(0\). Additionally, we observe that a honeycomb supersolid state appears when \(\Delta{\cal E}_{i}\) slightly trends toward the positive value (in the vicinity of \(\Delta{\cal E}_{i}\to 0\)).
For very small values of \(a_{12}\), the DDI energy dominates over the CI energy, leading to \(\Delta{\cal E}_{1}<-1\) for the \({}^{164}\)Dy condensate, while \(\Delta{\cal E}_{2}\geq-1\) for the \({}^{166}\)Er condensate due to its smaller intra-species DDI energy. Consequently, the \({}^{164}\)Dy condensate adopts an ID state and the \({}^{166}\)Er condensate forms a SS state. The value of \(\Delta{\cal E}_{i}\) for both condensates increases with increasing \(a_{12}\). Within the UDS phase domain, both condensates exhibit \(-1.1\leq\Delta{\cal E}_{i}\leq 0\) and form a binary SS state. In the ADS phase domain, although \(\Delta{\cal E}_{1}<0\), \(\Delta{\cal E}_{2}\) becomes slightly positive. As a result, the superfluidity of the \({}^{166}\)Er condensate increases and it adopts a honeycomb supersolid state. There exists a narrow window where both components have positive \(\Delta{\cal E}_{i}\) and form a SF-SF mixture. Further increasing \(a_{12}\), the atoms of the \({}^{164}\)Dy condensate experience an attractive intra-species DDI energy (\(\Delta{\cal E}_{1}\leq-1\)), whereas the \({}^{166}\)Er condensate experiences a dominating repulsive interaction energy (\(\Delta{\cal E}_{2}>1\)). This results in the formation of a SS-SF mixed state.
## IV Quench induced dynamics of Dy-Er mixture
So far, our discussion has centered around the potential phases that the \({}^{164}\)Dy-\({}^{166}\)Er mixture could adopt in its ground state. Our focus will now shift toward investigating the dynamical phase transition across these phase boundaries. This transition is triggered by the quenching of the inter-species scattering length \(a_{12}\).
To initiate this study, we first prepare the \({}^{164}\)Dy-\({}^{166}\)Er mixture in a miscible ID-SS configuration, with interaction parameters set as follows: \(a_{11}=90a_{B}\), \(a_{22}=80a_{B}\), and \(a_{12}=50a_{B}\). Each condensate contains a total of \(N=6\times 10^{4}\) particles. Small-amplitude noise is introduced into the initial state to incorporate the influence of thermal fluctuations. Subsequently, we gradually increase the inter-species scattering length from its initial value of \(a_{12}=50a_{B}\) to a higher value of \(a_{12}=100a_{B}\). This modulation of \(a_{12}\) is executed using two distinct linear ramps with ramp times \(\tau=50\) ms and \(\tau^{\prime}=100\) ms, and
Figure 6: (a) Variation of different intra- and inter-species energy components \(E_{ij}^{\sigma}\) in units of \(\hbar\omega_{x}\) with the inter-species scattering length \(a_{12}\). \(\sigma\) denotes CI and DDI (see the legend). Panel (b) shows the variation of \(\Delta{\cal E}_{i}\) with \(a_{12}\). \(\Delta{\cal E}_{i}\) quantifies the relative strength of the DDI energy over the CI energy. The dotted vertical lines separate different phase domains and are marked by the corresponding markers. Results are for \(N=6\times 10^{4}\) particles in each condensate. Other parameters are the same as in Fig. 4.
after which \(a_{12}\) is kept constant to check the stability of the evolved state [see Fig. 7(a)].
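The ramp protocol itself is simple; as an illustration (values taken from the text, function and variable names ours):

```python
def a12_of_t(t_ms, a_initial=50.0, a_final=100.0, ramp_time=100.0):
    """Linear ramp of a12 (in units of a_B): a12 grows from a_initial to a_final
    during ramp_time (in ms; 50 ms for tau, 100 ms for tau') and is then held
    constant to check the stability of the evolved state."""
    return a_initial + (a_final - a_initial) * min(t_ms / ramp_time, 1.0)
```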
In Figs. 7(b) and 7(c,d), we present the temporal evolution of the overlap integral \(\Lambda\) and the superfluid fraction \(f_{i}^{s}\) of each species, corresponding to the ramps with ramp times \(\tau\) and \(\tau^{\prime}\), respectively. Initially, owing to the relatively small inter-species interactions, the binary mixture adopts a miscible configuration where \(\Lambda\) attains unity. During this phase, the \({}^{164}\)Dy condensate maintains a superfluid fraction \(f_{1}^{s}<0.1\), while the \({}^{166}\)Er condensate exhibits a superfluid fraction around \(f_{2}^{s}\approx 0.5\). Throughout both quenching processes, the overlap integral experiences a gradual reduction while the superfluid fraction of both condensates increases [see Fig. 7(b)]. Remarkably, the \({}^{166}\)Er condensate consistently maintains a higher superfluid fraction than the \({}^{164}\)Dy condensate as the dynamics unfold. It is also interesting to note that the superfluid fraction of the \({}^{166}\)Er condensate exhibits a smooth transition across all phase boundaries [see Fig. 7(c, d)]. In contrast, the superfluid fraction of the \({}^{164}\)Dy condensate follows a distinct pattern: it initially increases up to the phase boundary separating the UDS and ADS states and subsequently decreases up to the boundary between the ADS and SS-SF phases [see Fig. 7(c, d)]. At the transitions from UDS to ADS and from ADS to SS-SF, we find an abrupt alteration in the value of the superfluid fraction of the \({}^{164}\)Dy condensate [Fig. 7(d)]. However, these abrupt shifts are absent when the quenching is done with the quicker ramp time (\(\tau=50\) ms); instead, the superfluid fraction of the \({}^{164}\)Dy condensate oscillates beyond the ADS to SS-SF phase transition boundary [Fig. 7(c)].
The SS-SF state produced through these quench-induced dynamics [see Figs. 8(a iv), (b iv)] is slightly different from the SS-SF state observed in the ground state. In the ground state, the \({}^{164}\)Dy condensate adopts a ring-like supersolid configuration. In contrast, following the quench-induced dynamics, the \({}^{164}\)Dy condensate forms a metastable state due to the transition through discontinuous phase boundaries. It exhibits six droplets interconnected by a background superfluid, forming a ring-like arrangement with an isolated droplet positioned at the central point of the ring [see Fig. 8(a iv)]. Meanwhile, due to dominating repulsive interactions, the \({}^{166}\)Er condensate adopts a superfluid state with certain density voids following both quenching processes [see Fig. 8(b iv)]. The representative instantaneous density profiles of the different states produced during the interaction quench (\(\tau^{\prime}=100\) ms) are shown in Figs. 8(a i)-(b iv).
## V Conclusions and Outlook
In conclusion, we demonstrate that although the \({}^{166}\)Er condensate does not primarily exhibit a dipole-dominated interaction within a \({}^{164}\)Dy-\({}^{166}\)Er mixture, the presence of the \({}^{164}\)Dy condensate can trigger the formation of quantum droplets and induce supersolidity in the \({}^{166}\)Er condensate. The imbalance in the dipole-dipole interaction strength, the interplay between the various intra-species and inter-species interactions and
Figure 7: Quenching of the inter-species scattering length \(a_{12}\) from an initial value \(50a_{B}\) to a final value \(100a_{B}\) for a \({}^{164}\)Dy-\({}^{166}\)Er mixture consisting of \(N_{1}=N_{2}=6\times 10^{4}\) particles in each condensate. Panel (a) shows two different linear ramps with ramp times \(\tau=50\) ms and \(\tau^{\prime}=100\) ms. Panel (b) shows the corresponding variation of the value of the overlap integral \(\Lambda\). Panels (c) and (d) show the time evolution of the superfluid fraction \(f_{i}^{s}\) of each condensate due to the quenching of \(a_{12}\) in \(\tau=50\) ms and \(\tau^{\prime}=100\) ms, respectively. The dotted vertical lines in panels (c) and (d) separate the different phases that emerge during the quench-induced dynamics and are marked by the corresponding markers. Other parameters are the same as in Fig. 4.
Figure 8: Snapshots of the density distributions of the Dy-Er mixture in the \(x-y\) plane following the quench-induced dynamics with ramp time \(\tau^{\prime}=100\) ms. The upper and lower panels represent the density \(n(x,y,z=0,t)\) of the \({}^{164}\)Dy and \({}^{166}\)Er condensates, respectively. Each column represents a different phase and is marked by the corresponding marker to denote the (a i, b i) ID-SS (), (a ii, b ii) UDS (), (a iii, b iii) ADS () and (a iv, b iv) SS-SF states. The color bar denotes the density in units of \(\mu\)m\({}^{-3}\).
the anisotropic coupling between the components result in a wide range of unique ground-state phases. We have reported the emergence of fascinating binary SS configurations, encompassing uniform and alternating domain supersolid (UDS and ADS) states and mixed states that include combinations of ID-SS and SS-SF states. We characterize all these phases and construct a ground state phase diagram for a wide range of parameters within the current experimental reach. In contrast to single-component dipolar and dipolar-nondipolar mixtures, the heteronuclear mixture of \({}^{164}\)Dy and \({}^{166}\)Er condensates displays a broader parameter regime where various SS states emerge. In these SS phases, both condensates maintain comparatively low peak densities, potentially leading to prolonged lifetimes of these phases. Notably, owing to a relatively weak intra-species DDI strength and a dominant repulsive intra-species contact interaction, the \({}^{166}\)Er condensate attains a peak density nearly an order of magnitude lower than that of the \({}^{164}\)Dy condensate. In fact, the induced SS states in the \({}^{166}\)Er condensate exhibit a peak density resembling that of a superfluid state. Consequently, to form these unique SS states, the \({}^{166}\)Er condensate does not require quantum stabilization in the form of the LHY correction. Owing to the diverse interactions, including intra- and inter-species interactions (CI and DDI), the heteronuclear binary mixture is also capable of exotic pattern formation. In addition to the trivial hexagonal supersolid structure with a triangular droplet lattice, the \({}^{166}\)Er condensate possesses a unique honeycomb supersolid pattern within the alternating domain supersolid phase. Furthermore, the \({}^{164}\)Dy condensate adopts a ring-like supersolid structure in a SS-SF mixed phase. We also monitor the dynamical nucleation of these phases, starting from an initial ID-SS mixed state and ending in a SS-SF mixed state. This transition takes place through the UDS and ADS intermediate states and is achieved via linear quenches of the inter-species scattering length. Overall, our findings demonstrate that the heteronuclear binary dipolar mixture offers an intriguing platform for observing a variety of unique long-lived supersolid states and diverse physical phenomena within the current experimental reach.
Our observations pave the way for numerous promising research directions for future endeavors. One straightforward option is to investigate the lifetime of these unique doubly SS (UDS and ADS) and mixed phases (ID-SS and SS-SF) by incorporating the three-body interaction loss [43]. Another important aspect will be to study the elementary excitations [44; 93] of these phases and across the phase transition boundaries. In our work, we have restricted our investigation to a \({}^{164}\)Dy-\({}^{166}\)Er mixture. However, Dy and Er feature many stable bosonic and fermionic isotopes [78] and it would be intriguing to explore possible physical phenomena in other Bose-Bose, Bose-Fermi and Fermi-Fermi mixtures [94; 95; 96]. Another vital prospect would be to investigate the exotic pattern formation [57; 56; 97] and various topological excitations such as the formation of solitons and vortex clusters [98; 67; 99] in a heteronuclear binary dipolar mixture. Moreover, the direct dynamical nucleation of these phases via the evaporation cooling process and investigation of the impact of finite temperature on these phases would be highly interesting [53; 55].
###### Acknowledgements.
We acknowledge the National Supercomputing Mission (NSM) for providing computing resources of 'PARAM Shakti' at IIT Kharagpur, which is implemented by C-DAC and supported by the Ministry of Electronics and Information Technology (MeitY) and Department of Science and Technology (DST), Government of India. S. Halder acknowledges the MHRD Govt. of India for the research fellowship. S. Das acknowledges support from AFOSR FA9550-23-1-0034.
## Appendix A Numerical Methods
Results in this work are based on three-dimensional numerical simulations in the coupled eGPE framework. For the convenience of the numerical simulations and for better computational precision, we cast the coupled eGPE into a dimensionless form. This is achieved by rescaling lengths and times in terms of the oscillator length \(l_{osc}=\sqrt{\hbar/m\omega_{x}}\) and the trapping frequency \(\omega_{x}\) along the \(x\) direction. Under this transformation, the wave function of species-\(i\) is rescaled as \(\tilde{\psi}_{i}(\mathbf{r})=\sqrt{l_{osc}^{3}/N_{i}}\,\psi_{i}(\mathbf{r})\), where \(N_{i}\) is the number of particles in species-\(i\). After the transformation of variables into dimensionless quantities, the coupled eGPE is solved by the split-step Crank-Nicolson scheme [100]. Since the dipolar potential has a singularity at \(r=0\) (see Eq. (2)), it is numerically evaluated in Fourier space, and we obtain the real-space contribution through the application of the convolution theorem. The ground states of the binary dipolar condensate are obtained by propagating the relevant equations in imaginary time until the relative deviations of the wave functions (calculated at every grid point) and of the energy of each condensate between successive time steps are less than \(10^{-6}\) and \(10^{-7}\), respectively. Furthermore, we fix the normalization of each species at every time instant of the imaginary time propagation. Using this ground-state solution as the initial state at \(t=0\), and by changing the interaction strengths, we monitor the evolution in real time. Our simulations are performed within a 3D box grid containing \((256\times 256\times 128)\) grid points, with spatial grid spacings \(\Delta_{x}=\Delta_{y}=0.1l_{osc}\) and \(\Delta_{z}=0.2l_{osc}\), while the time step is \(\Delta_{t}=2\times 10^{-4}/\omega_{x}\). |
2304.01219 | DoE2Vec: Deep-learning Based Features for Exploratory Landscape Analysis | We propose DoE2Vec, a variational autoencoder (VAE)-based methodology to
learn optimization landscape characteristics for downstream meta-learning
tasks, e.g., automated selection of optimization algorithms. Principally, using
large training data sets generated with a random function generator, DoE2Vec
self-learns an informative latent representation for any design of experiments
(DoE). Unlike the classical exploratory landscape analysis (ELA) method, our
approach does not require any feature engineering and is easily applicable for
high dimensional search spaces. For validation, we inspect the quality of
latent reconstructions and analyze the latent representations using different
experiments. The latent representations not only show promising potentials in
identifying similar (cheap-to-evaluate) surrogate functions, but also can
significantly boost performances when being used complementary to the classical
ELA features in classification tasks. | Bas van Stein, Fu Xing Long, Moritz Frenzel, Peter Krause, Markus Gitterle, Thomas Bäck | 2023-03-31T09:38:44Z | http://arxiv.org/abs/2304.01219v1 | # DoE2Vec: Deep-learning Based Features for Exploratory Landscape Analysis
###### Abstract
We propose DoE2Vec, a variational autoencoder (VAE)-based methodology to learn optimization landscape characteristics for downstream meta-learning tasks, e.g., automated selection of optimization algorithms. Principally, using large training data sets generated with a random function generator, DoE2Vec self-learns an informative latent representation for any design of experiments (DoE). Unlike the classical exploratory landscape analysis (ELA) method, our approach does not require any feature engineering and is easily applicable for high dimensional search spaces. For validation, we inspect the quality of latent reconstructions and analyze the latent representations using different experiments. The latent representations not only show promising potentials in identifying similar (cheap-to-evaluate) surrogate functions, but also can significantly boost performances when being used complementary to the classical ELA features in classification tasks.
exploratory landscape analysis, landscape features, auto-encoders, optimization
## 1 Introduction
Solving real-world black-box optimization problems can be extremely complicated, particularly if they are strongly nonlinear and require expensive function evaluations. As suggested by the no free lunch theorem in [1], there is no such thing as a single best optimization algorithm that is capable of optimally solving all kinds of problems. The task of identifying the most time- and resource-efficient optimization algorithms for each specific problem, also known as the algorithm selection problem (ASP) (see [2]), is tedious and challenging, even with domain knowledge and experience. In recent years, landscape-aware algorithm selection has gained increasing attention from the research community, where the fitness landscape characteristics are exploited to explain the effectiveness of an algorithm across different problem instances (see [3, 4]). Beyond that, it has been shown that landscape characteristics are sufficiently informative in reliably predicting the performance of optimization algorithms, e.g., using Machine Learning approaches (see [5, 6, 7, 8, 9, 10]). In other words, the expected performance of an optimization algorithm on an unseen problem can be estimated, once the corresponding landscape characteristics have been identified. Interested readers are referred to [7, 11, 12, 13, 14].
Exploratory landscape analysis (ELA), for instance, considers six classes of expertly designed features, including \(y\)-distribution, level set, meta-model, local search, curvature and convexity, to numerically quantify the landscape
complexity of an optimization problem, such as multimodality, global structure, separability, plateaus, etc. (see [15, 16]). Each feature class consists of a set of features, which can be relatively cheaply computed. Other than typical ASP tasks, ELA has shown great potential in a wide variety of applications, such as understanding the underlying landscape of neural architecture search problems in [17] and classifying the Black-Box Optimization Benchmarking (BBOB) problems in [18]. Recently, ELA has been applied not only to analyze the landscape characteristics of crash-worthiness optimization problems from automotive industry, but also to identify appropriate cheap-to-evaluate functions as representative of the expensive real-world problems (see [19]). While ELA is well established in capturing the optimization landscape characteristics, we would like to raise our concerns regarding the following aspects.
1. Many of the ELA features are highly correlated and redundant, particularly those within the same feature class (see [20]).
2. Some of the ELA features are insufficiently expressive in distinguishing problem instances (see [21]).
3. Since ELA features are manually engineered by experts, their feature computation might be biased in capturing certain landscape characteristics (see [22]).
4. ELA features are less discriminative for high-dimensional problems (see [23]).
Instead of improving the ELA method directly, e.g., searching for more discriminative landscape features, we approach the problems from a different perspective. In this paper, we introduce an automated self-supervised representation learning approach to characterize optimization landscapes by exploiting information in the latent space. Essentially, a deep variational autoencoder (VAE) model is trained to extract an informative feature vector from a design of experiments (DoE), which is essentially a generic low-dimensional representation of the optimization landscape. Thus, the name of our approach: _DoE2Vec_. While the functionality of our approach is fully independent of ELA, experimental results reveal that its performance can be further improved when combined with ELA (and vice versa). To the best of our knowledge, a similar application approach with VAE in learning optimization landscape characteristics is still lacking. Section 2 briefly introduces the state-of-the-art representation learning of optimization landscapes as well as the concepts of (variational) autoencoder. This is followed by the description of our methodology in Section 3. Next, we explain and discuss our experimental results in Section 4. Lastly, conclusions and outlooks are included in Section 5.
## 2 Representation of Optimization Landscape
In the conventional ELA approach, landscape features are computed primarily using a DoE of sample points \(\mathcal{W}=\{w_{1},...,w_{n}\}\) evaluated on an objective function \(f\), i.e., \(f\colon\mathbb{R}^{d}\rightarrow\mathbb{R}\), with \(w_{i}\in\mathbb{R}^{d}\), where \(n\) denotes the sample size and \(d\) the function dimension. The objective function values \(f(w_{i})\), \(i\in\{1,\ldots,n\}\), are the inputs of the VAE models in DoE2Vec. In this work, we consider ELA features similar to those in [19], which do not require additional sampling, and compute them with the package flacco by [24, 25]. These features include commonly used dimensionality reduction approaches such as Principal Component Analysis (PCA) [26], a number of simple surrogate models and many others.
To overcome the drawbacks of the ELA approach, attention has been focused on developing algorithm selection approaches without requiring landscape features. For example, [27] proposed two feature-free approaches using a deep learning method, where optimization landscapes can be represented through either 1) image-based fitness maps or 2) graph-like fitness clouds. In the first approach, convolutional neural networks were employed to project data sets into two-dimensional fitness maps, using different dimensionality reduction techniques. In the second approach, data sets were embedded into point clouds using modified point cloud transformers, which can accurately capture the global landscape characteristics. Nonetheless, the fitness map approach suffered from the curse of dimensionality, while the fitness cloud approach was limited to a fixed training sample size. Additional relevant works can be found in [28, 29, 30, 22]. Unlike these approaches, which were directly used as classifiers, the latent feature sets generated by our proposed approach can be easily combined with other features, such as ELA features, for classification tasks. In our work, we do not propose to replace conventional ELA features, but to actually extend them with autoencoder (AE) based latent-space features. Since the implementation of both approaches mentioned above is not available, a comparison to our work in terms of classifying high-level properties is only feasible by directly comparing their results on an identical experimental setup. Following this, results from the downstream tasks in this work can partially be compared to the mentioned results in [29], including the standard Principal Component Analysis (PCA), reduced Multiple Channel (rMC) and a transformer based approach (Transf.), taking into account that additional hyperparameter tuning was involved in their classification experiments with ELA features.
Our approach is capable of learning the representations of optimization landscapes in an automated, generic and unsupervised manner, with the advantage that the learned features are not biased towards any particular landscape characteristic. Unlike previously mentioned approaches, our proposed method is independent of the sampling method. By using only fully connected (dense) layers that learn from one-dimensional (flattened) landscapes, an AE or a VAE is,
in theory, capable of learning any number of input-dimensions without scaling difficulties. Furthermore, the fast-to-train (V)AE models can be easily shared in practice.
### Autoencoder
A standard AE usually has a symmetrical architecture, consisting of three components: an encoder, a hidden layer, also known as bottleneck, and a decoder (Figure 1). In short, an encoder projects the input space \(\mathcal{X}\) to a representative feature space \(\mathcal{H}\), i.e., \(e\colon\mathcal{X}\rightarrow\mathcal{H}\), while a decoder transforms the feature space back to the input space \(d\colon\mathcal{H}\rightarrow\hat{\mathcal{X}}\) (see [31, 32]). In other words, AE attempts to optimally reconstruct the original input space \(\mathcal{X}\), by minimizing the reconstruction error \(\mathcal{L}(\mathcal{X},\hat{\mathcal{X}})\), e.g. mean squared error, during the (unsupervised) training process. Commonly, AE is constructed with an input layer and an output layer of the same dimension \(dim(X)\) and a bottleneck layer of lower dimension \(dim(H)\), i.e., \(dim(H)<dim(X)\), to improve its ability in capturing crucial representations of the input space.
Following this, AE has rapidly gained popularity in the field of dimensionality reduction as well as representation learning [33, 34]. In comparison with the simple principal component analysis (PCA) technique, AE has the advantage that nonlinear representation features can be learned with activation functions, such as a sigmoid function.
### Variational autoencoder
Originating from the same family as traditional AE, a VAE typically has a similar architecture with some modifications, as shown in Figure 1.
Unlike AE, the latent space of a VAE is encoded as a distribution by using a mean and variance layer, together with a sampling method. Following this, the latent space can be properly regularized to provide more meaningful features. During the training process, a common loss function \(\mathcal{L}_{\text{VAE}}\) (Equation 1) of VAE is to be minimized, consisting of a regularization term (Equation 2), which can be expressed as the Kullback-Leibler (KL) divergence \(\mathcal{L}_{\text{KL}}\), and a reconstruction error, e.g., mean squared error \(\mathcal{L}_{\text{MSE}}\) (Equation 3).
\[\mathcal{L}_{\text{VAE}}(\mathcal{X},\hat{\mathcal{X}})=\beta\cdot\mathcal{L}_ {\text{KL}}(\mathcal{X},\hat{\mathcal{X}})+\mathcal{L}_{\text{MSE}}(\mathcal{ X},\hat{\mathcal{X}})\,, \tag{1}\]
\[\mathcal{L}_{\text{KL}}(\mathcal{X},\hat{\mathcal{X}})=\frac{1}{2}\sum_{i=1}^ {|\mathcal{X}|}(\exp(\sigma_{i})-(1+\sigma_{i})+\mu_{i}^{2})\,, \tag{2}\]
\[\mathcal{L}_{\text{MSE}}(\mathcal{X},\hat{\mathcal{X}})=\sum_{x\in\mathcal{X},x\in\hat{\mathcal{X}}}(x-\hat{x})^{2}\,, \tag{3}\]
where a weighting factor \(\beta\) is included to ensure a good trade-off between \(\mathcal{L}_{\text{KL}}\) and \(\mathcal{L}_{\text{MSE}}\), \(\sigma\) and \(\mu\) are the log-variance and mean latent layers of the VAE and \(\hat{\mathcal{X}}\) is the reconstruction of the input space. Detailed explanations regarding VAEs can be found in [35, 36].
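As an illustration, a minimal PyTorch sketch of Eqs. (1)-(3) is given below; here `log_var` corresponds to \(\sigma\) in Eq. (2) (the encoder outputs the log-variance) and `beta` is the KL weighting factor. This is a generic sketch, not the authors' implementation.

```python
import torch

def vae_loss(x, x_hat, mu, log_var, beta=0.001):
    """L_VAE = beta * L_KL + L_MSE, following Eqs. (1)-(3)."""
    mse = torch.sum((x - x_hat) ** 2)                                     # Eq. (3)
    kl = 0.5 * torch.sum(torch.exp(log_var) - (1.0 + log_var) + mu ** 2)  # Eq. (2)
    return beta * kl + mse                                                # Eq. (1)
```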
### Black-Box Optimization Benchmarking
The development of DoE2Vec is based on the well-known academic BBOB suite by [37], consisting of altogether 24 noise-free real-parameter single objective optimization problems of different landscape complexity. For each BBOB problem, the global optimum (within \([-5,5]^{d}\)) can be randomly shifted, e.g., through translation or rotation, to generate a new problem instance.
Figure 1: Architecture of the (standard) AE (left) and the VAE (right) used in DoE2Vec. The latent space \(z\) is defined by \(z=\mu+\sigma\cdot\epsilon\), where \(\epsilon\) is drawn from a standard normal distribution and \(\mu\) and \(\sigma\) are the mean and standard deviation produced by the encoder.
## 3 DoE2Vec
Generally, our proposed method uses a VAE with similar design as described in Section 2.2. Precisely, our VAE model has an architecture of altogether seven fully connected layers, where rectified linear unit (ReLU) activation functions are assigned to the hidden layers, while a sigmoid activation function is used for the final output layer of the decoder. The encoder is composed of four fully connected layers with \(dim(X)\) depending on the DoE sample size \(n\), starting with the input layer size \(n\), two hidden layers with sizes \(n/2\) and \(n/4\) and the latent size \(ls\) (\(ls<n/4\)) for the mean and log variance of the latent space. The decoder is composed of three fully connected layers with sizes \(n/4\), \(n/2\) and \(n\) for the final output layer. The focus of our approach lies on VAE, rather than AE, because it has the additional benefits of regularizing the latent space without loss of generalisation. For comparison, we consider a standard AE model as well (with the same number of hidden layers, except that the latent space is now a single dense layer, instead of a mean and log variance layer with a sampling scheme). Full specifications of the different models are available in our Github repository ([38]), while pre-trained models are also available on Huggingface.
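The following is a minimal PyTorch sketch of this encoder-decoder layout (layer widths \(n\), \(n/2\), \(n/4\) and latent size \(ls\) as described above). It is intended only as an illustration; the reference implementation in the Github repository [38] may differ in framework and details.

```python
import torch
import torch.nn as nn

class DoE2VecVAE(nn.Module):
    """Fully connected VAE: n -> n/2 -> n/4 -> (mu, log_var) -> n/4 -> n/2 -> n."""

    def __init__(self, n=256, latent=24):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(n, n // 2), nn.ReLU(),
            nn.Linear(n // 2, n // 4), nn.ReLU(),
        )
        self.mu = nn.Linear(n // 4, latent)        # mean of the latent distribution
        self.log_var = nn.Linear(n // 4, latent)   # log-variance of the latent distribution
        self.dec = nn.Sequential(
            nn.Linear(latent, n // 4), nn.ReLU(),
            nn.Linear(n // 4, n // 2), nn.ReLU(),
            nn.Linear(n // 2, n), nn.Sigmoid(),    # outputs in [0, 1], matching the rescaled targets
        )

    def forward(self, x):
        h = self.enc(x)
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, log_var
```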
The general workflow of DoE2Vec can be summarized as follows:
1. First, a DoE of size \(2^{m}\) is created, where \(m\) is a user defined parameter. By default, a Sobol sequence by [39] is used as sampling scheme, but any sampling scheme or even a custom DoE can be utilized in practice.
2. The DoE samples, initially within the domain \([0,1]^{d}\), can be re-scaled to any desired function boundaries, as the DoE2Vec method is scale-invariant by design.
3. Next, the DoE samples are evaluated for a set of functions randomly generated using the random function generator from [19], which was originally proposed by [40]. The main advantage of using this function generator is that a large training set can be easily created, covering a wide range of function landscape of different complexity.
4. Following this, all objective function values are first re-scaled to \([0,1]\) and then used as input vectors to train (V)AE models.
5. Lastly, the latent space of the trained (V)AE models can be used as feature vectors for further downstream classification tasks, such as optimization algorithm selection. (A minimal sketch of steps 1-4 is given below.)
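A minimal sketch of steps 1-4 is shown below. The simple quadratic stand-in objective only replaces the random function generator of [19, 40] for illustration; all other names are likewise illustrative.

```python
import numpy as np
from scipy.stats import qmc

m, d = 8, 5                                    # step 1: DoE of 2**m Sobol samples in d dimensions
doe = qmc.Sobol(d, scramble=False).random_base2(m)

lower, upper = -5.0, 5.0                       # step 2: re-scale to the desired boundaries
X = lower + (upper - lower) * doe

def objective(x):                              # stand-in only; the paper uses the random
    return float(np.sum(x ** 2))               # function generator of [19, 40]

y = np.array([objective(x) for x in X])        # step 3: evaluate the DoE
y = (y - y.min()) / (y.max() - y.min())        # step 4: re-scale objective values to [0, 1]
# `y` (length 2**m) is one training vector for the (V)AE; in step 5 the trained
# encoder's latent output for such a vector serves as the feature representation.
```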
In the next section, we will show that the learned feature representations have attractive characteristics and they are indeed useful for downstream classification and meta-learning tasks.
## 4 Experimental Results and Discussions
In this work, we have conducted altogether four experiments for different research purposes. Basically, the first two experiments are to investigate the quality of VAE models trained and to validate the latent space representation learned, while the last two experiments are to evaluate the potential of the DoE2Vec approach for downstream meta-learning tasks, as follows.
1. Analyzing the reconstruction of function landscapes and the impact of the latent size and KL loss weight (weighting factor \(\beta\)).
2. Investigating the differences in latent space representation between a standard AE and a VAE model.
3. Identifying functions with similar landscapes based on the latent representations.
4. Three downstream multi-class classification experiments to show the potential of the proposed approach in practice, using the latent feature vectors as inputs.
In our experiments, we fix the sampling scheme to a Sobol sequence and \(m\) to eight, ending up with a DoE of \(256\) samples. Unless otherwise specified, all AE and VAE models are trained on a set of \(250,000\) five-dimensional (\(5d\)) random functions.
### Reconstruction of Function Landscapes
In our first experiment, we investigate the impact of two model parameters, namely the KL loss weight and the latent size, on the model loss functions. For this purpose, a total of \(30\) VAE models are trained using combinations of six different latent sizes (\(4,8,16,24,32\)) and five different KL loss weights (\(0.0001\), \(0.0002\), \(0.001\), \(0.005\), \(0.01\)). Results in the left subplot of Figure 2 clearly show that latent sizes have a positive effect on the \(\mathcal{L}_{\text{VAE}}\), with the improvement
diminishing beyond \(16\). On the other hand, the \(\mathcal{L}_{\text{MSE}}\) can be improved with smaller KL loss weights, as shown in the right subplot.
In Figure 3, the combined effects of these parameters on the \(\mathcal{L}_{\text{KL}}\) and \(\mathcal{L}_{\text{MSE}}\) are fully visualized.
In the remaining experiments, we use a latent size of either \(24\) or \(32\) (expressed as (V)AE-24 or (V)AE-32) and a KL loss weight of \(0.001\) as a good compromise between the two loss terms. It is to be noted that, while the (V)AE architecture can be further improved by applying neural architecture search, we leave it for future work, since the landscape reconstruction is not the ultimate purpose of this work. In fact, the reconstruction here is meant to be a way in evaluating the quality of learned representations. Subsequently, we verify the capability of a VAE-24 model by reconstructing a large variety of functions, using the \(24\) BBOB functions (first problem instance) (Figure 4).
### Latent Space Representation
Next, we project the latent spaces of an AE and a VAE from our DoE2Vec approach onto a \(2d\) visualization using the multidimensional scaling (MDS) method, as shown in Figure 5. For a comprehensive comparison, a similar MDS visualization of the feature vectors with all the (considered) ELA features is included as well. Interestingly, the ELA feature space shows a better-looking cluster structure, while the latent spaces of the AE and VAE still show a clear structure. As expected, the latent space of the AE contains a few outliers and is in general more widespread than the latent space of the VAE.
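The projection used for Figure 5 can be reproduced along the following lines; the random matrix is only a stand-in for the actual feature vectors.

```python
import numpy as np
from sklearn.manifold import MDS

# stand-in for the real feature matrix: one row per BBOB problem instance
# (24 functions x 100 instances); columns are ELA features or (V)AE latents
features = np.random.rand(2400, 32)
embedding = MDS(n_components=2, random_state=0).fit_transform(features)
# scatter-plot `embedding`, colored by BBOB function id, to obtain plots like Fig. 5
```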
To further analyze the differences in latent space representation between the AE and VAE models, we use the generator part of the models to iteratively generate new function landscapes. To be precise, we select one latent representation as starting point and individually vary each latent variable with small steps between \(-1\) and \(+1\) of its original value. In Table 1, the generated landscapes are shown only for the extreme cases of an AE-\(4\) and a VAE-\(4\) model. Based on this result, two important conclusions can be drawn.
Figure 3: Parallel coordinates plot of different latent sizes and KL loss weights in relation with the validation \(\mathcal{L}_{\text{KL}}\) and \(\mathcal{L}_{\text{MSE}}\). (Columns left to right: Latent size, KL loss weight, \(\mathcal{L}_{\text{MSE}}\) and \(\mathcal{L}_{\text{KL}}\).) Each color or line represents a combination of latent size and KL loss weight. Conflict between both loss terms can be observed, since parameter combinations with low \(\mathcal{L}_{\text{MSE}}\) have a higher \(\mathcal{L}_{\text{KL}}\) and vice versa.
Figure 2: Impact of different latent sizes and KL loss weights on the loss functions. Left: \(\mathcal{L}_{\text{VAE}}\) against latent sizes. Right: \(\mathcal{L}_{\text{MSE}}\) against KL loss weight.
1. The interpolation for both models is very smooth, showing that the models can learn the landscape representations well and are able to learn the structure, even though the dimensionality of the function landscapes \(d\) is not known.
2. The VAE model utilizes a more compact feature representation, as its output variance is greater than that of the AE model for the same variation of the latent features.
Figure 4: Reconstructions of the 24 \(2d\) BBOB functions (labelled from \(f_{1}\) to \(f_{24}\)) using a VAE-24 model. The surface plots are generated based on a DOE of 256 samples using triangulation. Generally, the reconstructed landscapes look similar to the actual landscapes based on visual inspection.
Figure 5: From left to right: \(2d\) MDS visualization of the (normalized) ELA feature spaces, the latent space of an AE-32 model and the latent space of a VAE-32 model. Altogether 100 problem instances for all 24 BBOB functions, resulting in a total of 2,400 dots (in each subplot), where each dot represents a feature vector of a BBOB instance and the color denotes the corresponding BBOB function.
### Similar Representative Functions
The first main application of DoE2Vec is to identify cheap-to-evaluate functions with similar latent space representation for a given DoE, which can be very useful when dealing with real-world expensive black box optimization problems, as also described in [19]. In this way, large experiments can now be conducted on a "similar" function group at a much lower computational cost. Since a direct analytical verification of the results is very challenging, we instead use a visual inspection based on \(2d\) functions to showcase the potential of the proposed method. In Table 2, the \(24\) BBOB functions are paired with their respective "nearest" random function, where _nearest_ is defined as the random function that has the closest feature representation in terms of Euclidean distance. For most BBOB functions, a very well fitting (almost identical) random function can be identified by the DoE2Vec model, where random functions of equal complexity are suggested for the more complex BBOB functions.
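In code, this retrieval amounts to a nearest-neighbor query in the latent space; a minimal sketch is given below (array names are illustrative, and the latent vectors are assumed to come from the trained encoder).

```python
import numpy as np

def nearest_random_function(doe_latent, library_latents):
    """Return the index of (and distance to) the random function whose latent
    representation is closest, in Euclidean distance, to that of the given DoE.

    doe_latent      : (ls,) latent vector of the DoE of interest
    library_latents : (n_functions, ls) latent vectors of the candidate functions
    """
    distances = np.linalg.norm(library_latents - doe_latent, axis=1)
    best = int(np.argmin(distances))
    return best, float(distances[best])
```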
Table 1: Comparison of function landscapes generated by an AE-4 and a VAE-4 model for different latent variables, using the BBOB \(f_{22}\) encoding as starting point \((v_{1},v_{2},v_{3},v_{4})\). In each column, the latent features are separately varied by \(-1\) or \(+1\). Complete variation of the latent features can be found in videos available in our Github repository [38].
### Classification Tasks
Secondly, the DoE2Vec approach is designed to learn characteristic representations of an optimization landscape, which can be verified through a classical classification task of high level function properties. These high level properties, such as multimodality, global structure and funnel structure, are important for the ASP, as they often determine the difficulty of an optimization problem. Table 4 illustrates the BBOB functions and their associated high level properties.
In this experiment, a standard random forest (RF) model (using sklearn from [41]) is implemented for the multiclass classification tasks based on the latent representations learned by four DoE2Vec models, consisting of AE-\(24\), AE-\(32\)
\begin{table}
\begin{tabular}{l l c c} \hline \hline
**BBOB function** & **Multimodal** & **Global Structure** & **Funnel** \\ \hline
1: Sphere & none & none & yes \\
2: Ellipsoidal separable & none & none & yes \\
3: Rastrigin separable & high & strong & yes \\
4: Buche-Rastrigin & high & strong & yes \\
5: Linear Slope & none & none & yes \\ \hline
6: Attractive Sector & none & none & yes \\
7: Step Ellipsoidal & none & none & yes \\
8: Rosenbrock & low & none & yes \\
9: Rosenbrock rotated & low & none & yes \\ \hline
10: Ellipsoidal high cond. & none & none & yes \\
11: Discus & none & none & yes \\
12: Bent Cigar & none & none & yes \\
13: Sharp Ridge & none & none & yes \\
14: Different Powers & none & none & yes \\ \hline
15: Rastrigin multimodal & high & strong & yes \\
16: Weierstrass & high & medium & none \\
17: Schaffer F7 & high & medium & yes \\
18: Schaffer F7 mod. ill-cond. & high & medium & yes \\
19: Griewank-Rosenbrock & high & strong & yes \\ \hline
20: Schwefel & medium & deceptive & yes \\
21: Gallagher 101 Peaks & medium & none & none \\
22: Gallagher 21 Peaks & low & none & none \\
23: Katsuura & high & none & none \\
24: Lunacek bi-Rastrigin & high & weak & yes \\ \hline \hline \end{tabular}
\end{table}
Table 4: High level properties of the 24 BBOB functions. This table was first introduced in [29]
\begin{table}
\begin{tabular}{c c|c c c c|c|c c c|c} \hline \hline
\(d\) & **Task** & **AE-24** & **AE-32** & **VAE-24** & **VAE-32** & **ELA** & PCA* & rMC* & Transformer* & **ELA-VAE-32** \\ \hline
\multirow{3}{*}{2} & multimodal & 0.875 & 0.849 & 0.877 & 0.856 & 0.984 & 0.994 & 0.971 & 0.991 & **0.991** \\
 & global struct. & 0.903 & 0.904 & 0.902 & 0.889 & 0.983 & 0.992 & 0.965 & 0.991 & **0.998** \\
 & funnel & 0.985 & 0.974 & 0.956 & 0.978 & **1.000** & 0.999 & 0.995 & 1.000 & **1.000** \\ \hline
\multirow{3}{*}{5} & multimodal & 0.908 & 0.903 & 0.880 & 0.889 & 0.963 & 0.897 & 0.947 & 0.991 & **0.998** \\
 & global struct. & 0.838 & 0.828 & 0.810 & 0.793 & **1.000** & 0.807 & 0.859 & 0.978 & **1.000** \\
 & funnel & **1.000** & **1.000** & 0.996 & 0.991 & **1.000** & 0.990 & 0.989 & 1.000 & **1.000** \\ \hline
\multirow{3}{*}{10} & multimodal & 0.877 & 0.813 & 0.844 & 0.838 & **1.000** & 0.839 & 0.952 & 0.974 & **1.000** \\
 & global struct. & 0.794 & 0.737 & 0.783 & 0.745 & 0.902 & 0.774 & 0.911 & 0.963 & **0.991** \\
 & funnel & 0.998 & 0.993 & 0.997 & 0.993 & 0.972 & 0.977 & 0.991 & **1.000** & 0.997 \\ \hline
\multirow{3}{*}{20} & multimodal & 0.726 & 0.722 & 0.700 & 0.694 & 0.970 & - & - & - & **0.991** \\
 & global struct. & 0.689 & 0.621 & 0.606 & 0.626 & 0.972 & - & - & - & **0.997** \\
 & funnel & 0.993 & 0.982 & 0.985 & 0.982 & **1.000** & - & - & - & **1.000** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Classification results (averaged macro F1 scores over 10 runs with different random seeds) using a standard RF model with \(100\) trees, trained on the feature representations (from AE, VAE, classical ELA or ELA combined with VAE-32) of the first \(100\) instances for each BBOB function and validated on instance \(101\) to \(120\). * PCA, rMC and Transformer results are directly taken from the work of [29], which uses an identical experimental setup but without repetitions.
VAE-\(24\) and VAE-\(32\). In other words, the high-level properties of a BBOB function are to be predicted using the latent representations. Again, we compare the DoE2Vec approach against the classical ELA method, which is specifically constructed to excel in exactly this kind of function-property classification task. Beyond that, a combination of the classical ELA features with a VAE-\(32\) model is included to evaluate the complementary effect of the different feature sets.
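The classification setup can be sketched as follows, with random stand-in data in place of the actual feature vectors and Table 4 labels; the 100-tree random forest matches the description in the Table 3 caption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

# stand-ins: rows are BBOB instances, columns are latent features, ELA features,
# or their concatenation; labels are the high-level properties from Table 4
X_train, y_train = np.random.rand(2400, 32), np.random.randint(0, 3, 2400)
X_val, y_val = np.random.rand(480, 32), np.random.randint(0, 3, 480)

clf = RandomForestClassifier(n_estimators=100, random_state=0)  # 100 trees, as in Table 3
clf.fit(X_train, y_train)
macro_f1 = f1_score(y_val, clf.predict(X_val), average="macro")
```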
The classification results (macro F1 scores) of the different feature sets are summarized in Table 3. It is not surprising that the ELA features generally perform very well and outperform the latent features most of the time, especially in classifying the global structure and multimodal landscapes. Fascinatingly, the classification performance can be significantly improved when DoE2Vec is combined with the classical ELA method, indicating that both feature sets are complementary to each other.
## 5 Conclusions and Future Work
In this work we propose _DoE2Vec_, a VAE-based approach to learn latent representations of an optimization landscape, similar to the classical landscape features. Using a wide set of experiments, we have shown that DoE2Vec is able to reconstruct a large set of functions accurately and that the approach can be used for downstream meta-learning tasks, such as algorithm selection. The proposed methodology can be effectively used next to existing techniques, such as classical ELA features, to further increase the classification accuracy of certain downstream tasks. In fact, DoE2Vec can learn good feature representations for optimization landscapes and has several advantages over the ELA approach: no feature engineering or feature selection knowledge is required, no ELA domain knowledge is needed, and it is applicable to optimization tasks in a very straightforward manner. Nonetheless, there are a few known limitations to the proposed method: 1) our approach is scale-invariant, but not rotation- or translation-invariant, and using a different loss function to train the autoencoders might improve this; 2) if a custom DoE sample is used, the model needs to be trained from scratch (no pre-trained model is available), which typically takes a few minutes up to an hour, depending on the sample size \(n\) and the number of random functions to train on; 3) the learned feature representations are a black box that is hard to interpret directly.
In future work, we plan to improve our approach by tackling some of the challenges mentioned. Apart from that, we would like to verify the usefulness of the random functions with similar latent features w.r.t. the performances of an optimization algorithm.
## 6 Reproducibility statement
We provide an open-source documented implementation of our package at [38], with visualizations and video explanations. Pre-trained models (weights) are available on Huggingface. All models are trained using a single T4 GPU and the computations are carbon neutral (\(CO_{2}\)-free) by using solar power. Average training time of a model on \(250,000\) random functions was five minutes.
|
2309.15290 | Bayesian Inference on the Isotopic Building Blocks of Mars and Earth | Isotopic anomalies provide a means of probing the materials responsible for
the formation of terrestrial planets. By analyzing new iron isotopic anomaly
data from Martian meteorites and drawing insights from published data for O,
Ca, Ti, Cr, Fe, Ni, Sr, Zr, Mo, Ru, and Si, we scrutinize potential changes in
the isotopic composition of the material accreted by Mars and Earth during
their formation.
A Principal Component Analysis of isotopic anomalies in meteorites identifies
three main clusters (forming the three parts of the isotopic trichotomy): CI,
CC=CM+CO+CV+CR, and NC=EH+EL+H+L+LL.
Our results suggest that Earth is primarily an isotopic mixture of ~92% E, 6
% CI, and <2% COCV and O. Mars, on the other hand, appears to be a mixture of
~65% E, 33% O, and <2% CI and COCV. We establish that Earth's CI contribution
substantially increased during the latter half of its accretion. Mars began
accreting a mix of O and E but predominantly accreted E later.
Mars' changing isotopic makeup during accretion can be explained if it
underwent gas-driven type I migration from its origin near the O - E boundary
to a location well within the E region during the first few million years of
solar system history. Earth's later increased CI contribution may be attributed
to the stochastic impact of an interloper carbonaceous embryo that moved inside
the inner solar system region while nebular gas was still present, and
subsequently participated in the stage of chaotic growth.
The recent findings of Si isotopic anomalies in enstatite chondrites when
compared to terrestrial rocks likely stems from insufficient correction for
high-temperature equilibrium isotopic fractionation. With appropriate
adjustments for this influence, both the silicate Earth and enstatite
chondrites exhibit comparable Si isotopic anomalies, reaffirming a genetic link
between them. | Nicolas Dauphas, Timo Hopp, David Nesvorny | 2023-09-26T22:02:25Z | http://arxiv.org/abs/2309.15290v1 | # Bayesian Inference on the Isotopic Building Blocks of Mars and Earth
###### Abstract
Elements with differing siderophile affinities provide insights into various stages of planetary accretion. The isotopic anomalies they exhibit offer a deeper understanding of the materials responsible for the formation of terrestrial planets. By analyzing new iron isotopic anomaly data from Martian meteorites and drawing insights from published data for O, Ca, Ti, Cr, Fe, Ni, Sr, Zr, Mo, Ru, and Si, we scrutinize potential changes in the isotopic composition of the material accreted by Mars and Earth during their formation. Our methodology employs a Bayesian inference method, executed using a Markov Chain Monte Carlo (MCMC) algorithm.
The Bayesian method helps us balance the task of emulating the compositions of the terrestrial planets (reflected in the likelihood) with the flexibility to explore the isotopic compositions of mixing components within permissible intervals (indicated by the priors of the nuisance parameters). A Principal Component Analysis of isotopic anomalies in meteorites identifies three main clusters (forming the three parts of the isotopic trichotomy): CI, CC=CM+CO+CV+CR, and NC=EH+EL+H+L+LL. We adopt CI, COCV, O, and E as endmember compositions in the mixtures. We are concerned here with explaining isotopic anomalies in Mars and Earth, not their chemical compositions. Previous studies have shown that Earth's chemical composition could not be well explained by solely considering undifferentiated meteorites, as it requires incorporation of a refractory component enriched in forsterite. The endmember chondrite components considered here are assumed to be representative of isotopic reservoirs that were present in the solar nebula, but the actual building blocks could have had different chemical compositions, and we use cursive letters to denote those putative building blocks (\(\mathcal{CI}\), \(\mathcal{COCV}\), \(\mathcal{O}\), and \(\mathcal{E}\)).
Because Earth's mantle composition is an endmember for some isotopic systems, it cannot be reproduced exactly by considering known chondrite groups only and requires involvement of a component that is missing from meteorite collections but is likely close to enstatite meteorites. With this caveat in mind, our results suggest that Earth is primarily an isotopic mixture of \(\sim\)92% \(\mathcal{E}\), 6% \(\mathcal{CI}\), and \(<\)2% \(\mathcal{COCV}\) and \(\mathcal{O}\). Mars, on the other hand, appears to be a mixture of \(\sim\)65% \(\mathcal{E}\), 33% \(\mathcal{O}\), and \(<\)2% \(\mathcal{CI}\) and \(\mathcal{COCV}\). We establish that Earth's \(\mathcal{CI}\) contribution substantially increased during the latter half of its accretion. Mars began accreting a mix of \(\mathcal{O}\) and \(\mathcal{E}\) but predominantly accreted \(\mathcal{E}\) later.
Mars' changing isotopic makeup during accretion can be explained if it underwent gas-driven type I migration from its origin near the \(\mathcal{O}\) - \(\mathcal{E}\) boundary to a location well within the \(\mathcal{E}\) region during the first few million years of solar system history. Earth's later increased \(\mathcal{C}\mathcal{I}\) contribution may be attributed to the stochastic impact of an interloper carbonaceous embryo that moved inside the inner solar system region while nebular gas was still present, and subsequently participated in the stage of chaotic growth.
The discovery that a significant portion of Earth's building blocks closely resembles enstatite chondrites contrasts with recent findings of Si isotopic anomalies in enstatite chondrites when compared to terrestrial rocks. We suggest this discrepancy likely stems from insufficient correction for high-temperature equilibrium isotopic fractionation, whether of nebular or planetary origin. With appropriate adjustments for this influence, both the silicate Earth and enstatite chondrites exhibit comparable Si isotopic anomalies, reaffirming a genetic link between them.
## 1 Introduction
Initial studies connecting terrestrial planets with potential meteoritic building blocks focused on chemical comparisons (Allegre, Manhes and Lewin, 2001; Allegre et al., 1995; Hart and Zindler, 1986; Jagoutz et al., 1979; Wanke, 1981). While it is chemically easier to build Earth from material akin to carbonaceous chondrites (Allegre, Manhes and Lewin, 2001), Javoy (1995; 1997) recognized that enstatite chondrites were the only meteorites that matched the nitrogen and oxygen isotopic compositions of terrestrial mantle rocks and postulated a connection between the two. A difficulty with this scenario is that enstatite chondrites have very low Mg/Si atom ratios (\(\sim\)0.75 for EH and \(\sim\)0.85 for EL, Lodders, 2021) compared to Earth's mantle (\(\sim\)1.25, McDonough, 2003; \(\sim\)1.21, O'Neill, 2014). A pure enstatite chondrite model of Earth requires a mantle Mg/Si ratio of 1.09 (Javoy et al., 2010a). This is much lower than current bulk silicate Earth (BSE) estimates (McDonough, 2003; O'Neill, 2014) and would require the existence of a volumetrically significant (\(\sim\)2/3 of the mantle) Si-rich (Mg/Si\(\sim\)1.02) reservoir in the deep mantle that has so far evaded detection (Javoy et al., 2010a). Earth's mantle also has a silicon isotopic composition (\(\delta^{\rm 30}\)Si) that is clearly distinct from that of enstatite chondrites (Fitoussi and Bourdon, 2012).
Numerous elements have been shown to carry isotopic anomalies at a bulk scale (Clayton, 2003; Clayton and Mayeda, 1996; Dauphas, Marty and Reisberg, 2002a; Dauphas and Schauble, 2016; Qin and Carlson, 2016; Trinquier, Birck and Allegre, 2007; Warren, 2011a). Isotopic analyses of a variety of elements have demonstrated that Earth's isotopic composition is close to that of enstatite chondrites, ascertaining the genetic relationship between the two (Dauphas, 2017). Some differences exist between the isotopic compositions of terrestrial rocks and enstatite chondrites for some elements for which Earth is an endmember (Fe, Zr, and Mo) but these differences are small (Budde, Burkhardt and Kleine, 2019; Render et al., 2022; Schiller, Bizzarro and Siebert, 2020). One way to reconcile the isotopic similarity between Earth and enstatite chondrites with their distinct Mg/Si ratios and Si isotopic compositions is to argue that these objects formed in the same reservoir, but that physical separation between forsterite and gas in the solar nebula produced high Mg/Si-\(\delta^{\rm 30}\)Si dust that through agglomeration made planetesimals precursor to the Earth, while the low Mg/Si-\(\delta^{\rm 30}\)Si gas reservoir produced planetesimals akin to E-type asteroids (Dauphas et al., 2015; Morbidelli et al., 2020). The parent-body of angrites has high \(\delta^{\rm 30}\)Si (Dauphas et al., 2015; Pringle et al., 2014) and Mg/Si (Tissot et al., 2022) presumably resulting from physical separation between forsterite and gas, and the same mechanism could have affected Earth's building blocks (Dauphas, 2017; Dauphas et al., 2015; Morbidelli et al., 2020). Contrasting with the hypothesis of a link between Earth and enstatite chondrites, an alternative view proposes that mixtures of differentiated meteorites and chondrites such as SNC and CV (Fitoussi, Bourdon and Wang, 2016) or non-aubrite achondrites (NAA) and CI (Onyett et al., 2023) can emulate Earth's isotopic composition for many elements.
Dauphas (2017) introduced a novel approach to study the isotopic nature of material accreted by Earth during its formation by leveraging the fact that elements with varying siderophile behaviors were delivered to the mantle at different stages of Earth's accretion. The conclusion of that study is that the isotopic nature of material accreted by Earth remained relatively constant, an observation more in line with a single dominant component rather than a diverse blend of components coming from different heliocentric distances. However, that study had limitations, chiefly due to the limited availability of isotopic anomaly measurements. The wealth of data produced since then enables us to reevaluate the nature of Earth's accreted material with enhanced precision and deeper insights.
The accretion history of Mars can also be elucidated using isotopic anomalies (Brasser, Dauphas and Mojzsis, 2018; Burkhardt et al., 2021; Kleine et al., 2023; Lodders and Fegley, 1997; Paquet, Sossi and Moynier, 2023; Sanloup, Jambon and Gillet, 1999; Tang and Dauphas, 2014). Earlier attempts were hindered by insufficient data or imprecision, but the accumulation of new data over recent years provides a robust basis to revisit this question. Among the elements displaying isotopic anomalies, siderophile elements are particularly valuable for constraining the later stages of accretion. Considering this, we have measured the Fe isotopic compositions of several Martian meteorites, providing a more precise definition of the isotopic nature of Mars' accreted material.
Past studies have shown that no perfect alignment exists between the isotopic compositions of Earth or Mars and any meteorite mixture. In particular, the terrestrial mantle has an endmember isotopic composition for some elements, which cannot be perfectly reproduced by considering existing meteorites. Bayesian inference methodologies allow us to relax the strict assumption that meteorites represent an exhaustive sampling of terrestrial building blocks. Indeed, they provide an avenue to balance two crucial considerations: (_i_) making sense of observations using a model, and (_ii_) accepting that our understanding of some parameters is far from perfect and subject to change. This study uses such an approach based on the Metropolis-Hastings Markov-Chain Monte Carlo (MCMC) procedure (Gelfand and Smith, 1990; Hastings, 1970; Metropolis et al., 1953). This statistical methodology allows us to draw insightful comparisons between the accretion histories of Mars and Earth within the context of planetary dynamics.
### Martian samples, chemical purification, and iron isotopic analyses
In this study four Martian meteorites from the three major groups (shergottites, nakhlites, chassignites) were selected for Fe isotopic analyses. All four analyzed samples are meteorite falls that were previously investigated for their Ni isotopic compositions (Tang and Dauphas, 2014). Iron was purified from digestion aliquots produced in the previous study using the separation scheme described in Tang and Dauphas (2012a) that efficiently separates Fe from Cu, Ni, Co, and Cr by loading sample aliquots in 0.25 mL of 10 M HCl onto 3 mL pre-cleaned AG1-X8 (200-400 mesh) anion resin filled into 10.5 cm long PFA columns (0.62 cm ID). Iron is fixed on the resin, while Ni and other major elements are eluted in 5 mL of 10 M HCl. Other possible contaminants like Cu and Cr are rinsed from the resin using 30 mL of 4 M HCl. Finally, Fe is eluted using 9 mL of 0.4 M HCl, dried down, re-dissolved in 10 M HCl, and the chemical purification is repeated using new resin. This procedure produces purified Fe solutions that are suitable for high precision-high accuracy analyses of Fe isotopic anomalies in extraterrestrial materials (Hopp et al., 2022a; Hopp et al., 2022b).
Iron isotopic measurements were performed at the University of Chicago using a Thermo Scientific Neptune multicollector inductively coupled plasma mass spectrometer (MC-ICP-MS) that was upgraded to Neptune Plus specifications. The setup of the measurements is described in detail in Hopp _et al._ (2022a,b). Measurements were made on flat-topped peak shoulders in medium resolution to resolve molecular argide interferences. All Fe isotopes, as well as \({}^{53}\)Cr and \({}^{60}\)Ni, which were used to monitor isobaric interferences, were measured statically in multi-collection on Faraday cups attached to 10\({}^{10}\)\(\Omega\) (\({}^{56}\)Fe), 10\({}^{12}\)\(\Omega\) (\({}^{54}\)Cr and \({}^{58}\)Ni), and 10\({}^{11}\)\(\Omega\) (all other isotopes) amplifier resistors. Each measurement consisted of 50\(\times\)8.369 s cycles. Solutions of 10 \(\mu\)g/g Fe in 0.3 M HNO\({}_{3}\) were introduced into the MC-ICP-MS using an ESI PFA nebulizer through a cyclonic glass spray chamber (Stable Introduction System) at an uptake rate of \(\sim\)100 \(\mu\)L/min. Sample analyses were bracketed by measurements of the reference material IRMM-524a and concentrations were matched to within \(\leq\)2 %. On peak zero intensities from a blank solution measured at the beginning of each sequence were subtracted from all individual measurements and a washout time of 210 s was used between successive measurements. Possible natural mass-dependent Fe isotopic fractionation in \(\delta\)-notation is calculated using sample-standard bracketing,
\(\delta^{56}\)Fe (‰) = \(\left[\left({}^{56}\text{Fe}/{}^{54}\text{Fe}\right)_{\text{sample}}/\left({}^{56}\text{Fe}/{}^{54}\text{Fe}\right)_{\text{IRMM-524a}}-1\right]\times 10^{3}\). (1)
Iron isotopic anomalies were calculated by correcting for instrumental and natural mass fractionation using the exponential law (Dauphas and Schauble, 2016b; Marechal, Telouk and Albarede, 1999) and internal normalization to \({}^{57}\)Fe/\({}^{56}\)Fe = 0.023095, the certified ratio of IRMM-014. Isotopic anomalies are reported in the \(\mu\)-notation,
\(\mu^{i}\text{Fe}=\left[\left({}^{i}\text{Fe}/{}^{56}\text{Fe}\right)_{\left(57/56\right),\text{sample}}/\left({}^{i}\text{Fe}/{}^{56}\text{Fe}\right)_{\left(57/56\right),\text{IRMM-524a}}-1\right]\times 10^{6}\), (2)
where the subscript (57/56) indicates the ratio used for internal normalization. The purified solutions are measured 12 to 15 times in a sample-standard bracketing scheme and the average \(\mu\) values are calculated. Uncertainties for individual samples are based on repeat measurements of each sample solution using the Student's \(t\)-value for a two-sided 95% confidence interval and the relevant number of degrees of freedom.
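The internal-normalization step behind Eqs. (1)-(2) can be illustrated with a short numerical sketch. This is not the authors' reduction code: the isotope masses are approximate literature values and the function names are hypothetical.

```python
import numpy as np

# Approximate atomic masses of the Fe isotopes (for illustration only)
MASS = {54: 53.9396, 56: 55.9349, 57: 56.9354, 58: 57.9333}
R57_56_REF = 0.023095  # certified 57Fe/56Fe of IRMM-014 used for normalization

def internally_normalize(ratios):
    """Correct measured iFe/56Fe ratios for mass fractionation with the
    exponential law, pinning 57Fe/56Fe to the IRMM-014 reference value.
    `ratios` maps mass number -> measured iFe/56Fe."""
    beta = np.log(ratios[57] / R57_56_REF) / np.log(MASS[57] / MASS[56])
    return {i: r / (MASS[i] / MASS[56]) ** beta for i, r in ratios.items()}

def mu_values(sample_ratios, standard_ratios):
    """mu-notation anomalies (ppm) of a sample relative to the bracketing
    standard, after internal normalization of both (Eq. 2)."""
    s = internally_normalize(sample_ratios)
    std = internally_normalize(standard_ratios)
    return {i: (s[i] / std[i] - 1.0) * 1e6 for i in s if i != 57}
```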
### Isotopic mass balances of Mars and Earth in multi-stage accretion models
For non-volatile elements, evaluations of the building blocks of Earth or Mars often consider mass-balance without any regard for the siderophile character of the chemical elements (Dauphas et al., 2014a; Fitoussi, Bourdon and Wang, 2016; Lodders and Fegley, 1997; Sanloup, Jambon and Gillet, 1999; Tang and Dauphas, 2014). Simulations of terrestrial planet accretion however indicate that the nature of the material accreted by Earth and Mars could have changed through time (Brasser, Dauphas and Mojzsis, 2018; Morbidelli et al., 2000; Rubie et al., 2015b). Dauphas (2017) devised an approach that uses elements with different siderophile characters to probe the isotopic composition of the material accreted by Mars and Earth through time. While lithophile elements record the full accretion history of the planet, siderophile elements provide a record that is skewed towards later accretion stages because elements delivered early were scavenged into the core (Figs. 1, 2). For example, highly siderophile elements only record the last \(\sim\)0.5% of Earth's and \(\sim\)0.8% of Mars' accretion histories (Day, Brandon and Walker, 2016). Under the assumptions of small impactors, constant metal-silicate partitioning, and constant metal/silicate proportions, the
fraction of an element in the present mantle delivered when the planet grew from mass fractions \(x_{1}\) to \(x_{2}\) relative to the final size of the planet is (Dauphas, 2017),
\(X_{1\to 2}=x_{2}^{\kappa+1}-x_{1}^{\kappa+1}\), (3)
with \(\kappa=Dfk_{lc}/(1-f)\), where \(D\) is the core-mantle (metal-silicate) distribution coefficient, \(f\) is the core mass fraction, and \(k_{lc}\) is the degree of equilibration of the impactor core with the target mantle. While this analytical equation makes some assumptions, Dauphas (2017) showed that it captured the behavior of elements in more complex models involving changes in degree of metal-silicate equilibration, metal fractions, and core partitioning behaviors. We calculated effective distribution coefficients \(D\) based on estimated concentrations in the mantles and cores of Mars and Earth. The core mass fractions \(f\) in Mars and Earth are 0.25 (Stahler et al., 2021) and 0.32, respectively. The degrees of impactor core-target mantle equilibration \(k_{lc}\) are set to 1 and 0.4 for Mars and Earth, respectively. The rationale for setting \(k_{lc}\) to 1 for Mars is that it is likely a stranded planetary embryo formed by the accretion of small objects (Dauphas and Pourmand, 2011). The value of 0.4 adopted for Earth falls within the range of 0.3 to 0.8 inferred from previous studies (Nimmo, O'brien and Kleine, 2010; Rudge, Kleine and Bourdon, 2010). The values of \(\kappa\) calculated for all elements considered in Mars and Earth are compiled in Tables 1 and 2.
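As an illustration of how Eq. (3) behaves, the following sketch computes the delivered fraction for a lithophile and a strongly siderophile element using the \(f\) and \(k_{lc}\) values quoted above for Earth; the distribution coefficients are illustrative placeholders, not the values compiled in Tables 1 and 2.

```python
def kappa(D, f, k_core):
    """kappa = D * f * k_core / (1 - f), with D the metal-silicate
    distribution coefficient, f the core mass fraction, and k_core the
    degree of impactor core-target mantle equilibration."""
    return D * f * k_core / (1.0 - f)

def fraction_delivered(x1, x2, D, f, k_core):
    """Fraction of an element now in the mantle that was delivered while
    the planet grew from mass fraction x1 to x2 of its final size (Eq. 3)."""
    k = kappa(D, f, k_core)
    return x2 ** (k + 1) - x1 ** (k + 1)

# Lithophile elements (D ~ 0) record accretion uniformly, whereas strongly
# siderophile elements (large D) mostly record the late stages.
print(fraction_delivered(0.6, 1.0, D=0.0,  f=0.32, k_core=0.4))   # ~0.40
print(fraction_delivered(0.6, 1.0, D=30.0, f=0.32, k_core=0.4))   # ~0.97
```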
We consider 3 mixing models (_baseline, 4-stage_, and _achondrite_; Table 3). For the _baseline_ mixing model, we follow Dauphas (2017) and Brasser _et al._ (2018) and divide Earth's and Mars' accretions into three stages: I=0-60%, II=60-99.2% (Mars) or 60-99.5% (Earth), and III=late veneer (Table 1). The choice of dividing Earth's accretion history at 60% follows from the work of Rubie et al. (2015) who suggested that material accreted after that could have been more oxidized. For Mars, the reason for adopting the same cutoff is that Mo and Fe record the last \(\sim\)40-60% of the accretion history (Fig. 1) and adopting the same cutoff as Brasser _et al._ (2018) allows us to compare our results with that study. Ideally, we would want to divide the accretion of Earth and Mars into more numerous and finer stages, but there are insufficient anomaly data on siderophile elements to justify doing so. To test the sensitivity of the Bayesian inference to stage definition, we also ran a test by dividing the accretions of Earth and Mars into _4 stages_, corresponding to: I=0-50%, II=50-75%, III=75-99.2% (Mars) or 75-99.5% (Earth), and IV=late veneer (Table 2). We also evaluate the effect of using non-aubrite _achondrites_ as endmember in mixing calculations and used only 2-stage accretion models in this context: I=0-60% and II=60-100%.
The observations that we are trying to match with the mixing calculation are the isotopic anomalies (or lack thereof) of Mars' and Earth's mantles. The total number of chondrite groups considered in the mixing calculation is \(n_{i}\). We examine \(n_{j}\) elements affected by isotopic anomalies. Delivery of elements to the mantles of Mars and Earth occurs over \(n_{k}\) stages. Indices \(i,j\), and \(k\) enumerate the chondrite groups, isotopic anomalies, and stages. The isotopic anomalies of the endmembers are compiled in a \(n_{i}\times n_{j}\) matrix named \(I\) with meteorite groups in rows and anomalies in columns (Table 4; Appendix). The corresponding uncertainties are stored in a matrix of the same dimension named \(\sigma_{I}\) with meteorite groups in rows and \(1\sigma\) anomaly uncertainties in columns (Table 4). The concentrations of each element in the meteorite groups are stored in a matrix of the same dimension named \(C\) with meteorite groups in rows and concentrations in columns (Table 5). The fraction of each element delivered in each stage is compiled in a \(n_{j}\times n_{k}\) matrix named \(X\) with elements in rows and stages in columns (Tables 1 and 2). The parameters of interest are the mass fractions of each meteorite group in each stage, which are stored in a \(n_{i}\times n_{k}\) matrix named \(F\) with meteorite groups in rows and stages in columns. The calculated mantle isotopic compositions of either Mars or Earth are stored in a \(n_{j}\)-long vector named \(T\) (for target) and the associated uncertainty vector is noted \(\sigma_{T}\). If we note \(M[i,j]\) the element in the \(i^{th}\) row and \(j^{th}\) column of a matrix \(M\), the isotopic composition of the mantle \(T\) can be calculated by mass-balance and its uncertainty \(\sigma_{T}\) can be calculated by recognizing that the variance of a linear combination of independent random variables is equal to the sum of the squares of the constants multiplied by the variances of the respective variables (Dauphas, 2017),
\(T[j]=\sum_{k=1}^{n_{k}}X[j,k]\,\dfrac{\sum_{i=1}^{n_{i}}F[i,k]\,C[i,j]\,I[i,j]}{\sum_{i=1}^{n_{i}}F[i,k]\,C[i,j]}\), (4)

\(\sigma_{T}[j]^{2}=\sum_{i=1}^{n_{i}}\left(\sum_{k=1}^{n_{k}}X[j,k]\,\dfrac{F[i,k]\,C[i,j]}{\sum_{i^{\prime}=1}^{n_{i}}F[i^{\prime},k]\,C[i^{\prime},j]}\right)^{2}\sigma_{I}[i,j]^{2}\). (5)
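The mass balance of Eqs. (4)-(5) can be written compactly in code. The sketch below assumes the matrix layouts defined above (\(F\): endmembers \(\times\) stages, \(X\): elements \(\times\) stages, \(C\) and \(I\): endmembers \(\times\) elements) and is not the implementation used here, which was written in Mathematica.

```python
import numpy as np

def mantle_composition(F, X, C, I, sigma_I):
    """Eqs. (4)-(5).
    F      : (n_i, n_k) mass fraction of each endmember in each stage
    X      : (n_j, n_k) fraction of each element delivered in each stage
    C, I   : (n_i, n_j) concentrations and isotopic anomalies of the endmembers
    sigma_I: (n_i, n_j) 1-sigma uncertainties on I
    Returns the modeled mantle anomalies T (n_j,) and their uncertainties."""
    n_i, n_k = F.shape
    n_j = X.shape[0]
    T = np.zeros(n_j)
    var_T = np.zeros(n_j)
    for j in range(n_j):
        w = np.zeros(n_i)                     # weight of each endmember on element j
        for k in range(n_k):
            contrib = F[:, k] * C[:, j]       # element mass supplied by each endmember
            w += X[j, k] * contrib / contrib.sum()
        T[j] = np.dot(w, I[:, j])
        var_T[j] = np.dot(w ** 2, sigma_I[:, j] ** 2)
    return T, np.sqrt(var_T)
```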
### Mixing components
We focus on chondrites as potential building blocks for Mars and Earth. The two reasons why achondrites, iron, and stony-iron meteorites are secondary in our analysis are: (\(i\)) isotopic anomaly data are missing due to the low
abundances of some elements that were depleted by volatilization or partitioning in metal or silicate (Table 4), and (_ii_) achondrites can be affected by admixture of material with chondritic abundances of volatile and siderophile elements, so they may not be representative of the bulk body (Zhu et al., 2021). We focus our attention on meteorite groups that have been well studied, namely CI, CM, CO, CV, H, L, LL, EH, and EL. These are also the main meteorite groups considered by Wasson and Kallemeyn (1988) in their study of the chemical compositions of the major groups of chondrites. Lodders (Lodders, 2021) also included CK and CR but these are less frequently analyzed and the database is incomplete for those groups. Examination of Table 4 and previous studies (Dauphas and Schauble, 2016; Kleine et al., 2020; Qin and Carlson, 2016; Warren, 2011b) shows that omitting those minor groups in our analysis is of little consequence because their compositions are close to those measured in the major meteorite groups. This leaves 9 meteorite groups that we can potentially consider as building blocks of Mars and Earth. We consider \(n_{k}\) stages and \(n_{i}\) building blocks for the accretion of the two planets. The parameters of interest that we seek to estimate are the mass fractions of building block \(i\) during stage \(k\), which we note \(F_{i,k}\). With the additional constraint \(\forall k\in\{1,2,\ldots,n_{k}\},\ \sum_{i=1}^{n_{i}}F_{i,k}=1\), we have \(n_{k}(n_{i}-1)\) parameters of interest. For 3 stages used in Dauphas (2017) and Brasser et al. (2018), and 9 meteorite groups, we therefore have 24 potential parameters. High quality isotopic anomaly measurements are available for \(n_{j}=11\) systems for Mars (\(\Delta^{17}\)O, \(\varepsilon^{48}\)Ca, \(\varepsilon^{50}\)Ti, \(\varepsilon^{54}\)Cr, \(\varepsilon^{54}\)Fe, \(\varepsilon^{64}\)Ni, \(\mu^{66}\)Zn, \(\mu^{84}\)Sr, \(\varepsilon^{96}\)Zr, \(\varepsilon^{92}\)Mo, \(\Delta^{95}\)Mo) or \(n_{j}=12\) for Earth (like Mars but with \(\varepsilon^{100}\)Ru added; terrestrial rocks are used for normalization of isotopic anomaly measurements of extraterrestrial samples). Many of these anomalies are correlated. In our Bayesian analysis, we seek to maintain a balance between model complexity and parsimony to avoid overfitting. With 11-12 observables in our data, we find it reasonable to consider a model with a comparable number of parameters. This can be achieved by reducing the number of chondrite components. Dauphas (2017) only considered CI, COCV, O=H+L+LL, and E=EH+EL chondrites. Furthermore, he fixed the proportions of CI:COCV chondrites to a single value in the three stages, so the calculation only had 7 parameters of interest.
To rigorously evaluate potential dimensionality reduction within our problem, we applied Principal Component Analysis (PCA) to the isotopic anomaly data set. PCA is a method that transforms a dataset with possibly correlated variables into a new set of uncorrelated variables, called principal components. These are ordered by the amount of explained variance in the original dataset, enabling a reduction in dimensionality. We included meteorite groups CI, CM, CO, CV, CR, H, L, LL, EH, and EL in our analysis, using available data for \(\Delta^{17}\)O, \(\varepsilon^{48}\)Ca, \(\varepsilon^{50}\)Ti, \(\varepsilon^{54}\)Cr, \(\varepsilon^{54}\)Fe, \(\varepsilon^{64}\)Ni, \(\mu^{66}\)Zn, \(\mu^{84}\)Sr, \(\varepsilon^{96}\)Zr, \(\varepsilon^{92}\)Mo, \(\Delta^{95}\)Mo, and \(\varepsilon^{100}\)Ru anomalies. In the context of a PCA designed to reduce dimensionality among meteorite groups, the meteorite groups function as variables while the isotopic anomalies are considered observations. Given the disparate scales on which isotopic anomalies are measured, we started our analysis with a Z-score normalization for each isotopic system across the different meteorite groups. Figure 3A presents the PCA produced with the \(R\) statistical software. We focus on the loadings, which are the coefficients assigned to each original variable in the formation of the first two principal components. Principal component 1 clearly segregates the meteorite groups into non-carbonaceous (NC=H, L, LL, EH, and EL) and carbonaceous (CC=CO, CV, CM, and CR) clusters, as previously defined based on cross-correlation diagrams (Budde et al., 2016; Dauphas and Schauble, 2016; Kleine et al., 2020; Kruijer et al., 2017; Warren, 2011b). Principal component 2 isolates CI as a chondrite group that should not be lumped with CC, corroborating a conclusion already reached by Hopp _et al._ (2022a) based on cross-correlation diagrams involving Fe isotopic anomalies. Previous studies have shown that the contribution of CC to Earth and Mars was minimal (Brasser, Dauphas and Mojzsis, 2018; Budde et al., 2016; Burkhardt et al., 2021; Dauphas, 2017; Dauphas and Schauble, 2016; Javoy, 1995; Javoy et al., 2010; Kleine et al., 2020; Kleine et al., 2023; Kruijer et al., 2017; Lodders and Fegley, 1997; Martins et al., 2023; Nie et al., 2023; Paquet et al., 2023; Sossi and Moynier, 2023; Sanloup, Jambon and Gillet, 1999; Savage, Moynier and Boyet, 2022; Steller et al., 2022; Warren, 2011b), so we use a single meteorite cluster (COCV) as representative of CC. As NC dominate the building blocks of Earth and Mars, considering ordinary (O=H+L+LL) and enstatite (E=EH+EL) chondrites separately offers more granular insight. While there are chemical differences between H, L, and LL, or EH and EL chondrites, these effects are secondary. CI emerges as a distinct group that cannot be grouped with any other meteorite category.
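A sketch of this type of analysis in Python/numpy (rather than the \(R\) software actually used) is given below; the two-step standardization and the variable names reflect our reading of the procedure described above, not the authors' script.

```python
import numpy as np

def pca_on_groups(A, group_names, n_components=2):
    """A: (n_systems, n_groups) anomaly table; rows are isotopic systems
    (observations) and columns are meteorite groups (variables).
    Returns the loadings of each group on the first principal components
    and the fraction of variance they explain (cf. Fig. 3A)."""
    # Z-score normalization of each isotopic system across the groups
    Z = (A - A.mean(axis=1, keepdims=True)) / A.std(axis=1, keepdims=True)
    # Standard PCA on the groups: center the variables and diagonalize
    Zc = Z - Z.mean(axis=0, keepdims=True)
    U, S, Vt = np.linalg.svd(Zc, full_matrices=False)
    explained = S ** 2 / np.sum(S ** 2)
    loadings = Vt[:n_components].T            # (n_groups, n_components)
    return dict(zip(group_names, loadings)), explained[:n_components]
```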
PCA does not incorporate uncertainties. To assess whether these could have influenced the way we defined meteorite clusters, we calculated the weighted Euclidean distance between isotopic vectors, whereby the distance
between two meteorite groups \(i_{1}\) and \(i_{2}\) is defined as \(d_{i_{2}-i_{1}}=\sqrt{\sum_{j=1}^{n_{j}}\big(I_{i_{2},j}-I_{i_{1},j}\big)^{2}\big/\big(\sigma_{i_{2},j}^{2}+\sigma_{i_{1},j}^{2}\big)}\), where \(I_{i,j}\) is the isotopic composition of the \(j^{\rm th}\) system in the \(i^{\rm th}\) meteorite group and \(\sigma_{i,j}\) is the associated \(1\sigma\) uncertainty. Figure 3B presents a heat-map of these weighted Euclidean distances for pairs of meteorite groups. This analysis corroborates the result of the PCA. It separates meteorite groups into NC=O+E and others, and within NC a further division between O and E is evident. Among carbonaceous chondrites, CI is uniquely distinguished as a separate subgroup.
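The distance matrix behind the heat-map of Figure 3B can be computed in a few lines. The sketch below assumes anomaly and uncertainty tables with groups in rows and isotopic systems in columns; the use of nansum to skip systems without data for a given pair is our assumption about how missing values are handled.

```python
import numpy as np

def weighted_distance_matrix(anom, sig):
    """Pairwise weighted Euclidean distances between meteorite groups.
    anom, sig: (n_groups, n_systems) arrays of anomalies I and 1-sigma
    uncertainties; NaNs mark systems without data."""
    n = anom.shape[0]
    d = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            d[a, b] = np.sqrt(np.nansum((anom[a] - anom[b]) ** 2 /
                                        (sig[a] ** 2 + sig[b] ** 2)))
    return d
```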
To summarize, our PCA and weighted Euclidean distance analyses suggest that for our study of the building blocks of Mars and Earth, we should consider the following groups: CI, COCV (a proxy for CC), O, and E. These components align with those used by Dauphas (2017), but unlike that study, we do not assume constant proportions of CI and COCV at all stages. Our approach reflects the consideration that CI might originate further out in the protoplanetary disk, and their integration into the terrestrial planet-forming region may have been influenced by the growth and migration of ice giant planets (Hopp et al., 2022). For 3 or 4 accretion stages, we therefore have only 3\(\times\)(4-1)=9 or 4\(\times\)(4-1)=12 parameters of interest to consider.
In this study, our focus lies in the examination of isotopic anomalies on Mars and Earth rather than an analysis of their chemical compositions. Prior research has shown that the chemical and isotopic compositions of Earth for non-volatile elements cannot be solely attributed to documented undifferentiated meteorites (Budde, Burkhardt and Kleine, 2019; Burkhardt et al., 2011; Burkhardt et al., 2021; Dauphas et al., 2015). Instead, there is a possible need for a reservoir enriched in a refractory component (Dauphas et al., 2015; Morbidelli et al., 2020), with an isotopic signature close to but distinct from enstatite chondrites (Dauphas, 2017). To denote these hypothetical building blocks, which might have had isotopic compositions akin to known chondrite groups but different chemical makeups, we use the notations \(\mathcal{CI}\), \(\mathcal{COCV}\), \(\mathcal{O}\), and \(\mathcal{E}\) (CI, COCV, O, and E denote the actual meteorite groups). In our Bayesian inference analysis, we treat the isotopic compositions of these endmembers as hypothetical. They are informed by prior knowledge from meteorite measurements, and by the necessity for them to combine in ways that could give rise to the observed compositions of Earth and Mars.
We are interested in tying Earth and Mars to asteroid-size building blocks, which can help illuminate how the isotopic architecture of the inner solar system was mapped onto fully grown planets. Fitoussi _et al._ (2016) considered a model for Earth formation that involved mixing between Mars material and carbonaceous chondrite material akin to CV meteorites. More recently, Onyett _et al._ (2023) reported \(\mu^{30}\)Si isotopic anomalies in meteorites and found that E-chondrites had positive \(\mu^{30}\)Si values relative to terrestrial rocks, while most non-aubrite achondrites (HEDs, angrites, ureilites, pallasites, mesosiderites) had negative \(\mu^{30}\)Si values. Based on those results, they argued that Earth was made from a mixture of \(\sim\)3/4 low-\(\mu^{30}\)Si inner solar system material sampled by non-aubrite achondrites and \(\sim\)1/4 high-\(\mu^{30}\)Si outer solar system material akin to CI chondrites. We have therefore evaluated isotopic mixing models for Earth and Mars that consider achondrites (\(\mathcal{A}\)) as an endmember, in addition to the four chondrite endmembers highlighted above, and use \(\mu^{30}\)Si as an additional constraint. We take HED meteorites (most likely originating from asteroid Vesta; McCord, Adams and Johnson, 1970; McSween Jr et al., 2013) as representative of this endmember because (_i_) they have similar isotopic compositions to angrites, ureilites, mesosiderites, and main-group pallasites (Table 4) and (_ii_) important isotopic data are missing for other achondrite groups (Ca, Fe, Ni, Sr, Si for acapulcoite, Ca, Cr, Fe, Ni, Sr, Zr, Si for lodranite, Ca, Fe, Ni, Sr, Zr, Si for brachinite, Ti, Cr, Fe, Ni, Sr, Si for winonaite, Fe, Ru for angrite, Fe, Sr, Si for aubrite, Sr, Zr for ureilite, Ca, Fe, Ni, Sr for mesosiderite, and Ca, Fe, Sr for main-group pallasite). Only Mo and Ru data are missing in HEDs but these two elements could have easily been overprinted by late accretion (Zhu et al., 2021) so the bulk isotopic composition of Vesta would be difficult to constrain anyhow. The late veneers in Earth and Mars are mostly constrained from Mo and Ru, which are not used in the achondrite accretion scenario. We therefore only consider 2 stages (I: 0-60%, II: 60-100%) in the mixing calculation involving HEDs. We keep the other mixing endmembers unchanged, so we have 2\(\times\)(5-1)=8 parameters of interest to consider in the 2-stage _achondrite_ accretion model (Table 3). We take the chemical composition of H chondrites (Lodders, 2021) as proxy for the composition of Vesta (Toplis et al., 2013). Toplis _et al._ (2013) also considered a model with 75% H chondrites and 25% CM-like chondrites. Adopting such a composition would not change the result much because H and CM chondrites have relatively similar chemical compositions and the bulk mixture would remain close to H.
### Bayesian inference and Markov Chain Monte Carlo (MCMC) approach
Our aim is to estimate the mass fractions of each meteorite group at each stage, referred to as matrix \(F\). Together with the isotopic compositions of the endmembers (matrix \(I\)), these mass fractions determine the isotopic composition of the mantle, represented as vector \(T\). The relationship can be formulated as \(T=g(F,I)\). The overarching objective is to determine matrix \(F\) along with its corresponding uncertainties by contrasting our theoretical predictions of \(T\) with its observed counterpart, denoted as \(\tilde{T}\). Both the measured mantle and chondrite isotopic compositions possess inherent uncertainties (Table 4). Given the extensive number of parameters that need to be constrained, coupled with the uncertainties pervading both model parameters and observations, this problem is optimally addressed through Bayesian inference using the Metropolis-Hastings MCMC approach (Chib and Greenberg, 1995; Gelfand and Smith, 1990; Hastings, 1970; Metropolis et al., 1953). This approach has been used successfully in previous geochemical studies (Brasser, Dauphas and Mojzsis, 2018; Kipp and Tissot, 2022; Liu and Liang, 2017; Ptacek, Dauphas and Greber, 2020). We chose the Metropolis-Hastings algorithm over alternative MCMC approaches like Gibbs Sampling or Hamiltonian Monte-Carlo because it enjoys familiarity in the Bayesian community and is conceptually simple.
In the context of a probabilistic model and prior distributions over parameters (the prior is a probabilistic statement that encapsulates the knowledge or beliefs about a model parameter from prior studies or existing information), including both the parameters of interest (\(F\)) and nuisance parameters (\(I\)), MCMC strives to generate samples from the posterior distribution of these parameters given observed data (\(\tilde{T}\)). This is accomplished by constructing a Markov chain whose distribution matches the target posterior distribution. Starting at a set of parameters (denoted state), the algorithm iteratively proposes new parameters and decides on whether to accept or reject this state based on a set of rules. This is designed such that the distribution of the Markov chain aligns with the target posterior distribution. The MCMC approach leverages two key ingredients: (\(i\)) the likelihood, which gauges the probability of the observed data given a set of model parameters, and (\(ii\)) prior distributions that embody our initial beliefs about the parameters before seeing any data. The posterior is proportional to the product of the likelihood and prior, leading to a trade-off situation. The likelihood strives to adjust the parameters to fit the data optimally, while the prior attempts to steer the parameters towards regions considered plausible a priori. If the data align with our prior beliefs, there is no conflict. If they do not, it creates a tension, with the likelihood and the prior pulling the parameters in different directions. The resulting posterior distribution resolves this tension. It represents a compromise between the data (likelihood) and our prior beliefs (prior), each weighted by the strength of its assertion.
The log-likelihood is defined as \(\ell=-\chi^{2}/2\), with \(\chi^{2}(F,I)=\sum_{j=1}^{n_{j}}\big{(}T[j]-\tilde{T}[j]\big{)}^{2}/\sigma_{ \tilde{T}}[j]^{2}\) where \(F\) represents the parameters of interest, \(I\) represents the nuisance parameters, \(\tilde{T}[j]\) is the measured target composition, \(T[j]\) is the calculated target composition (Eqs. 4 and 5), and \(\sigma_{\tilde{T}}[j]\) is the uncertainty of the measured target composition. The terrestrial isotopic composition is used for normalization of isotopic anomaly measurements, so in theory for Earth \(\tilde{T}[j]=0\) and \(\sigma_{\tilde{T}}[j]=0\), meaning that the \(\chi^{2}\) is undefined. In practice, samples and terrestrial standards can be fractionated by natural processes or during purification following different laws than the one used for normalization (usually exponential; Dauphas and Schauble, 2016; Tang and Dauphas, 2012; Wombacher, Rehkamper and Mezger, 2004), which can create small artifact anomalies as has been documented for Ni (Steele et al., 2011), Mg (Davis et al., 2015), Ca, Ti, (Zhang et al., 2014), and Mo (Budde et al., 2023). Subtle isotopic heterogeneities have been found for Ru in terrestrial rocks that most likely reflect incomplete mixing of Earth's building blocks (Fischer-Godde et al., 2020). No such heterogeneity has been found yet for other elements, but it could still exist at, or below, the current state-of-the-art precision of isotopic analyses. The assumption that for silicate Earth \(\tilde{T}[j]=0\) and \(\sigma_{\tilde{T}}[j]=0\) is therefore not fully warranted and we use for the silicate Earth the most precise isotopic analyses of the most representative samples (Table 4).
Although parameters of interest and nuisance parameters serve different functions in the inferential process, they undergo analogous treatment within the MCMC algorithm. The process necessitates the generation of proposals for both these parameters. The proposal values for the nuisance parameters \(I\) (isotopic compositions of the endmembers) are generated by drawing from normal distributions with a mean of 0 and standard deviations equal to the 1\(\sigma\) error divided by a fixed value of 5 to 15, then adjusting the preceding values by these amounts. Proposal values
for the parameters of interest \(F\) (mass fraction of each endmember at each stage) are created by drawing random numbers from normal distributions with a mean of 0 and standard deviations of 0.0005 to 0.006. These values are then added to the mass fractions of \(C\!I\), \(C\!OCV\), and \(O\). If the result is less than 0 or exceeds 1, a reflective boundary is applied to ensure that the proposed values stay within the 0 to 1 range. For each stage, the mass fraction of \(\mathcal{E}\) is calculated as one minus the sum of the mass fractions of \(C\!I\), \(C\!OCV\), and \(O\). This proposal construction guarantees that the mass fractions of \(C\!I\), \(C\!OCV\), and \(O\) remain within 0 and 1, and that the mass fractions of all endmembers sum to 1, all the while keeping the proposal distribution symmetric (for every pair of states \(i\) and \(j\), the probability of proposing a move from \(i\) to \(j\) is the same as that of proposing the reverse move), which preserves detailed balance, a requirement of the Metropolis-Hastings algorithm. The sole drawback with these proposals is that the mass fraction of \(\mathcal{E}\) can be negative. This situation is managed in the prior function, where such a proposal is assigned a log-prior probability of \(-\infty\). For the nuisance parameters (the isotopic compositions of the meteorite endmembers), we apply normal distributions as priors, centered on the mean with a standard deviation equal to the 1\(\sigma\) uncertainty of the parameter (Table 4). The priors for the mass fractions of each mixing component at each stage are uniform between 0 and 1.
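A minimal sketch of this proposal step is given below (Python/NumPy); the step size, array layout, and function names are illustrative assumptions, but the reflective boundary and the treatment of \(\mathcal{E}\) follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def reflect_into_unit_interval(x):
    """Fold proposed values back into [0, 1] (reflective boundary).

    Reflection keeps the random-walk proposal symmetric, so the simple
    Metropolis acceptance rule remains valid.
    """
    x = np.abs(x)                          # reflect at the lower boundary (0)
    return np.where(x > 1.0, 2.0 - x, x)   # reflect at the upper boundary (1)

def propose_mass_fractions(current, step=0.003):
    """Random-walk proposal for the CI, COCV and O mass fractions of one stage.

    `current` holds the three independently sampled fractions [f_CI, f_COCV, f_O];
    the E fraction is whatever is left, f_E = 1 - (f_CI + f_COCV + f_O).
    A negative f_E is not prevented here; such proposals are rejected later
    through a log-prior of -infinity, as described in the text.
    """
    proposal = reflect_into_unit_interval(current + rng.normal(0.0, step, size=current.shape))
    f_E = 1.0 - proposal.sum()
    return proposal, f_E
```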
We implemented the MCMC algorithm using the Mathematica software. The main steps include:
a. Finding the values of the parameters of interest that minimize the \(\chi^{2}\). These are used as the starting state in the chain.
b. Proposing a new state for the parameters of interest and the nuisance parameters via a random walk from the current state.
c. Estimating the likelihood of the observed data for both the current and the proposed states. Multiplying these likelihoods by the prior probabilities of these states to compute the unnormalized posterior probabilities. This is a critical step in Bayesian inference as it synthesizes prior knowledge on nuisance parameters (constraints on endmember compositions from previous meteorite analyses) and parameters of interest (the mass fractions of each component at each stage must be within [0,1]), with new observed data (the isotopic compositions of Earth's and Mars' mantles as they relate to the isotopic compositions and mass fractions of the mixing endmembers).
d. Implementing the Metropolis-Hastings criterion to accept or reject the proposed state. We calculate \(\alpha\), the ratio of the posterior probabilities of the proposed and current states. If \(\alpha\geq 1\), the proposal is accepted. If \(\alpha<1\), the new state is accepted with a probability of \(\alpha\). This is achieved by drawing a random number between 0 and 1 and accepting the new state if that number is less than \(\alpha\). If the proposal is accepted, it is appended to the chain. If it is rejected, the current state is appended to the chain.
e. Repeating from step b until the chain accumulates enough iterations.
The iterations were conducted \(10^{6}\) times. The initial 200,000 iterations were disregarded as part of the burn-in period to allow the Markov chain to reach a stable state. The step sizes were fine-tuned based on preliminary trial runs to achieve a rejection rate of \(\sim\)40-70% and to ensure a comprehensive exploration of the parameter space. The chains were visualized and assessed to confirm convergence. Different forms of the proposal functions for the parameters (uniform, Gaussian) were evaluated, yielding similar results, albeit at different convergence rates.
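Putting steps a-e together, the sampler reduces to a short loop. The sketch below is a schematic Python/NumPy translation (the study itself used Mathematica); `log_prior`, `log_likelihood`, and `propose` stand for the functions described above, and the starting state is assumed to have a finite posterior probability.

```python
import numpy as np

def metropolis_hastings(log_prior, log_likelihood, propose, theta0,
                        n_iter=1_000_000, burn_in=200_000, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler (steps a-e in the text).

    theta0 : starting state, e.g. the chi^2-minimizing parameters (step a).
    Returns the post-burn-in chain as an array of shape (n_iter - burn_in, n_params).
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    log_post = log_prior(theta) + log_likelihood(theta)
    chain = []
    for _ in range(n_iter):
        proposal = propose(theta, rng)                                  # step b
        log_post_new = log_prior(proposal) + log_likelihood(proposal)   # step c
        log_alpha = log_post_new - log_post                             # step d, in log space
        if np.log(rng.uniform()) < log_alpha:
            theta, log_post = proposal, log_post_new                    # accept
        chain.append(np.array(theta))   # if rejected, the current state is re-appended
    return np.array(chain[burn_in:])    # step e: repeat, then discard the burn-in
```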
## 3 Results
Unless otherwise noted, all uncertainties are reported as 95% confidence intervals.
### Iron isotopic analyses of SNC meteorites
The Fe isotopic data of the four samples show no resolvable mass-dependent Fe isotopic fractionations relative to IRMM-524a (Table 6). The internally normalized data display resolvable Fe isotopic anomalies in \(\mu^{54}\)Fe but no clearly resolvable isotopic anomalies in \(\mu^{58}\)Fe, which agrees with previous meteorite analyses (Hopp et al., 2022a,b). The four samples that represent the three major groups of Martian meteorites (shergottites, nakhlites, chassignites) display indistinguishable \(\mu^{54}\)Fe and \(\mu^{58}\)Fe values and define weighted averages of +6\(\pm\)3 and -5\(\pm\)4, respectively (\(\pm\)95% c.i., \(n\)=4; Fig. 4, Table 6). The \(\mu^{54}\)Fe values agree with available data from Schiller _et al_. (2020) (weighted mean of 7\(\pm\)3, \(n\)=4). No \(\mu^{58}\)Fe data were previously reported. Together, the data from this study and from previous work define a weighted average \(\mu^{54}\)Fe for bulk silicate Mars of +7\(\pm\)2 (95% c.i.).
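The weighted averages quoted here are standard inverse-variance means. A minimal sketch of that calculation is shown below (illustrative only; the quoted 95% confidence intervals additionally account for the small number of samples):

```python
import numpy as np

def inverse_variance_mean(values, sigmas):
    """Inverse-variance weighted mean and its 1-sigma standard error."""
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * values) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))
```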
For each mixing model, the output of the Bayesian inference calculation is the set of 800,000 states in the Markov chain that describe the posterior distribution of the parameters of interest (the mass fractions of each meteorite group at each stage) and the nuisance parameters (the isotopic compositions of the mixing endmembers). From those, we can also calculate the predicted model compositions for Earth and Mars. These predicted compositions differ from a regular best fit model obtained by a simple \(\chi^{2}\)-minimization, as the nuisance parameters are also updated in the analysis. The posterior mass fractions of each meteorite endmember at each stage are compiled in Tables 7 and 8 for Mars and Earth, respectively. The posterior probabilities of the meteorite endmember isotopic compositions in the baseline model are compiled in Table 9. The predicted compositions of the mantles of Mars and Earth are compiled in Table 4. We have considered 3-stage (_baseline_) and _4-stage_ accretion models involving mixtures of \(\mathcal{CI}\), \(\mathcal{COCV}\), \(\mathcal{O}\), and \(\mathcal{E}\) (Table 3) to assess the influence of dividing planetary accretion in discrete accretion stages. We have also considered a 2-stage accretion model involving the same components plus achondrites (\(\mathcal{A}\); _achondrite_ model; Table 3) to evaluate the role that such a component could have played in terrestrial planet accretion. Below, we start by discussing the results on the main accretion stages and we then examine the late veneer.
Earth is dominantly made of \(\mathcal{E}\) material during the main stages of its accretion (Figs. 5, 6, 7; Table 8), with contributions of \(\mathcal{E}\) -material to bulk Earth of 92.5 (+2.3/-2.5)%, 91.0 (+2.7/-4.0)%, and 92.9 (+2.1/-2.3)% in the _baseline_, _4-stage_, and _achondrite_ models, respectively. This agrees with previous studies that had recognized the isotopic similarity between Earth's mantle and enstatite chondrites for several elements (Dauphas et al., 2014; Javoy, 1995; Javoy et al., 2010). Using fewer and less precise data, Dauphas (2017) and Brasser et al. (2018) had concluded that Earth was also predominantly made of \(\mathcal{E}\) material but found a \(\sim\)50% \(\mathcal{O}\) contribution during stage I. This is not seen in any of our inferences, and the baseline model yields an \(\mathcal{O}\) contribution of only 1.0 (+3.0/-0.6)% during stage I. As discussed by Dauphas (2017) and Brasser et al. (2018), the significant \(\mathcal{O}\) contribution was due in part to uncertainties in model end member definitions and a higher \(\mathcal{E}\)-contribution was plausible. The present analysis corroborates this inference. In the achondrite model, we see no evidence for a large achondrite contribution (Table 8).
The contribution of \(\mathcal{CI}\)+\(\mathcal{COCV}\) increases slightly between early and late stages of Earth's accretion (excluding the late veneer). While this is indicated in Fig. 6, examination of the MCMC posterior distribution allows us to evaluate this more rigorously, as we can examine how the fraction of NC changed across different stages in individual elements of the Markov chain. If we note \(f_{\rm NC}\) the fraction of NC\(=\mathcal{O}+\mathcal{E}\) in a given stage, examination of the posterior distributions yields \(f_{\rm NC,II}/f_{\rm NC,I}=0.91^{+0.10}_{-0.08}\) in the _baseline_ model and \(f_{\rm NC,II+III}/f_{\rm NC,I}=0.90\pm 0.08\) in the _4-stage_ accretion model, so the fraction of \(\mathcal{E}\) decreased by \(\sim\)10% in the later 40% of Earth's accretion history, and \(\mathcal{CI}\) made up for much of that decline.
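Ratios such as \(f_{\rm NC,II}/f_{\rm NC,I}\) are evaluated on every stored state of the chain and only then summarized, so the quoted uncertainties automatically propagate correlations between parameters. A sketch of this post-processing is shown below, assuming the posterior samples are held in arrays keyed by endmember and stage (names are illustrative):

```python
import numpy as np

def credible_interval(samples, level=0.95):
    """Median and equal-tailed credible interval of a derived quantity."""
    lo, hi = np.percentile(samples, [50 * (1 - level), 50 * (1 + level)])
    return np.median(samples), lo, hi

def nc_ratio_stage_II_over_I(chain):
    """f_NC,II / f_NC,I evaluated state-by-state (chain entries are 1-D arrays)."""
    f_nc_I = chain["f_O_I"] + chain["f_E_I"]
    f_nc_II = chain["f_O_II"] + chain["f_E_II"]
    return credible_interval(f_nc_II / f_nc_I)
```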
The nature of the late veneer on Earth is poorly constrained. It is constrained almost entirely by the isotopic composition of Ru and to a lesser extent by Mo. The late veneer is mostly \(\mathcal{CI}\) in the _baseline_ model and mostly \(\mathcal{E}\) in the _4-stage_ model (Fig. 5). Only one CI meteorite had its \(\upvarepsilon^{100}\)Ru composition measured at -0.24\(\pm\)0.13 (Fischer-Godde and Kleine, 2017), while E-chondrites define an average of -0.08\(\pm\)0.04 (Fischer-Godde et al., 2015; Fischer-Godde and Kleine, 2017). \(\mathcal{E}\) is therefore favored based on the likelihood for \(\upvarepsilon^{100}\)Ru, but allowing a \(\mathcal{CI}\) contribution helps fit the \(\Delta^{95}\)Mo value of the bulk silicate Earth, as 10 % of the mantle inventory delivered by \(\mathcal{CI}\) is sufficient to increase the \(\Delta^{95}\)Mo value of the mantle by \(\sim\)+6, which is close to the value measured in Earth (Budde, Burkhardt and Kleine, 2019).
Even when \(\upmu^{30}\)Si and an achondritic component are considered, the Bayesian inference finds a negligible fraction of achondrites and instead moves the prior \(\upmu^{30}\)Si of \(\mathcal{E}\) from +12.5\(\pm\)2.2 to +7.4\(\pm\)1.6. We also ran a \(\chi^{2}\) minimization to evaluate the mixture that reproduces the \(\upmu^{30}\)Si value and fits the other isotopic anomalies in the _achondrite_ 2-stage model (Table 3). To do this, we arbitrarily divide the errors on the \(\upmu^{30}\)Si values of the silicate Earth and the mixing components by 10. In that case, the optimum mixture is 33% \(\mathcal{CI}\) + 9% \(\mathcal{COCV}\) + 58% \(\mathcal{A}\) during stage I and 31% \(\mathcal{CI}\) + 69% \(\mathcal{A}\) during stage II (32% \(\mathcal{CI}\) + 6% \(\mathcal{COCV}\) + 62% \(\mathcal{A}\) overall). This mixture corresponds approximately to that inferred by Onyett _et al._ (2023). The calculated isotopic anomalies are \(\upDelta^{17}\)O=-0.21\(\pm\)0.06, \(\upvarepsilon^{48}\)Ca=-0.10\(\pm\)0.16, \(\upvarepsilon^{50}\)Ti=-0.10\(\pm\)0.03, \(\upvarepsilon^{54}\)Cr=-0.05\(\pm\)0.05, \(\upmu^{54}\)Fe=+9.4\(\pm\)1.9, \(\upmu^{84}\)Sr=+13\(\pm\)10, \(\upmu^{96}\)Zr=+54\(\pm\)7, \(\upmu^{30}\)Si=+2.1\(\pm\)0.2. While this mixture reproduces the \(\upmu^{30}\)Si value, it fails in many respects. The situation is particularly dire for \(\upmu^{54}\)Fe and \(\upmu^{96}\)Zr
because the silicate Earth is an endmember for these isotopic systems and both carbonaceous meteorites and non-aubrite achondrites display large isotopic anomalies (Table 4).
To summarize for Earth, most of its accretion consisted of \(\mathcal{E}\), whose contribution decreased slightly in the latter 40% of the accretion while that of \(\mathcal{CI}\) increased during that stage. The nature of the late veneer is uncertain, and it could either have been \(\mathcal{CI}\) or \(\mathcal{E}\).
In _baseline_ and _4-stage_ accretion models, Mars is a mixture between \(\mathcal{O}\) and \(\mathcal{E}\) ( 32.7 \({}^{+5.3}_{-4.9}\) % \(\mathcal{O}\) + 64.8 \({}^{+5.1}_{-5.4}\) % \(\mathcal{E}\) + 1.8 \({}^{+2.0}_{-1.2}\) % \(\mathcal{C}\mathcal{I}\) + 0.7 \({}^{+0.7}_{-0.5}\)% \(\mathcal{C}\mathcal{O}\mathcal{C}\mathcal{V}\) and 33.1 \(\pm\) 5.4 % \(\mathcal{O}\) + 63.6 \(\pm\) 5.5 % \(\mathcal{E}\) + 1.9 \({}^{+1.8}_{-1.3}\) % \(\mathcal{C}\mathcal{I}\) + 0.6 \({}^{+0.7}_{-0.4}\)% \(\mathcal{C}\mathcal{O}\mathcal{C}\mathcal{V}\), respectively; Figs. 5, 7, and 8). This agrees with previous studies that used fewer isotopic systems and less well-constrained isotopic endmembers, which gave 30-45% O + 55-70% E (Sanloup, Jambon and Gillet, 1999) and 32-67% O + 29-68% E (Brasser, Dauphas and Mojzsis, 2018). The main novelties of the present work are that we can better define the mixing proportions and can discern changes in the proportions of \(\mathcal{O}\) and \(\mathcal{E}\) accreted by Mars at different stages of its growth. The mixing proportions inferred by Lodders and Fegley (1997) of 85% H+11% CV+4% CI cannot explain the isotopic composition of Mars (Fig. 2 of Tang and Dauphas, 2014).
The contribution of \(\mathcal{CI}\)+\(\mathcal{COCV}\) does not change significantly between the first 60% and latter 40% of Mars' accretion history, although the calculation hints at a possible increase. The most striking observation is the increase in the proportion of \(\mathcal{E}\) relative to \(\mathcal{O}\) in later stages of Mars' accretion history (Fig. 8). If we note \(f_{\mathcal{O}}\) and \(f_{\mathcal{E}}\) the fractions of \(\mathcal{O}\) and \(\mathcal{E}\) in a given stage, examination of individual states in the Markov chain yields \(\left[f_{\mathcal{E},\mathrm{II}}/\left(f_{\mathcal{E},\mathrm{II}}+f_{\mathcal{O},\mathrm{II}}\right)\right]/\left[f_{\mathcal{E},\mathrm{I}}/\left(f_{\mathcal{E},\mathrm{I}}+f_{\mathcal{O},\mathrm{I}}\right)\right]=1.7^{+0.7}_{-0.6}\) in the _baseline_ model, which is statistically significant. More uncertainty is present in the _4-stage_ model and we have \(\left[f_{\mathcal{E},\mathrm{III}}/\left(f_{\mathcal{E},\mathrm{III}}+f_{\mathcal{O},\mathrm{III}}\right)\right]/\left[f_{\mathcal{E},\mathrm{I+II}}/\left(f_{\mathcal{E},\mathrm{I+II}}+f_{\mathcal{O},\mathrm{I+II}}\right)\right]=1.5^{+1.3}_{-0.8}\). Brasser et al. (2018) had examined this question but the data available at that time were insufficient to see this change. The nature of the late veneer is poorly constrained in all Mars models, although a dominantly carbonaceous component is favored (Figs. 5 and 7).
When achondrites are considered as an endmember in a 2-stage accretion model for Mars using \(\mu^{30}\)Si as constraint, the Bayesian inference calculation finds a significant contribution of this endmember (38.6\({}^{+8.2}_{-12.8}\)% of the bulk planet) but the mixture becomes more _ad hoc_ as it requires involvement of more meteorite groups to reproduce the composition of silicate Mars (Table 7), as opposed to just two components (\(\mathcal{O}\)+\(\mathcal{E}\)) when achondrites are not considered.
The _baseline_ and _4-stage_ mixing models of Earth and Mars both reproduce the compositions of these planets relatively well and neither stands out as superior to the other (Table 4). Comparison of the posterior distributions of the isotopic compositions of the endmembers can provide us with some clues on which isotopic systems are more difficult to reproduce and what a missing endmember could look like (Table 9). In the _baseline_ Mars model, the isotopic compositions that change the most between prior and posterior are: \(\varepsilon^{50}\)Ti of \(\mathcal{E}\) from -0.16\(\pm\)0.06 to -0.23\(\pm\)0.05, and \(\mu^{84}\)Sr of \(\mathcal{E}\) from -11\(\pm\)13 to -20\(\pm\)11. The changes are relatively modest. In the _baseline_ Earth model, the isotopic compositions that change the most are: \(\mu^{54}\)Fe of \(\mathcal{E}\) from +6.4\(\pm\)1.1 to +3.2\(\pm\)0.8, \(\varepsilon^{96}\)Zr of \(\mathcal{E}\) from +14\(\pm\)5 to -2\(\pm\)2, \(\varepsilon^{92}\)Mo of \(\mathcal{E}\) from +0.40\(\pm\)0.08 to +0.32\(\pm\)0.07, and \(\varepsilon^{100}\)Ru of \(\mathcal{CI}\) from -0.24\(\pm\)0.13 to +0.05\(\pm\)0.03. The changes are significant and concern primarily isotopic systems for which Earth is an endmember, with \(\mathcal{E}\) a relatively close neighbor. The only way for the posterior to improve in the inferential process is for the prior of \(\mathcal{E}\) for these isotopic systems to be updated. This suggests that meteorite collections likely lack an extraterrestrial component, for which E chondrites merely serve as a representative proxy. This could affect our conclusions. Similarly, the inferential process handles the tension between reproducing \(\Delta^{95}\)Mo and \(\varepsilon^{100}\)Ru, and this is resolved by having a late veneer dominantly made of \(\mathcal{CI}\) but with an \(\varepsilon^{100}\)Ru value that is updated in the inferential process. The \(\varepsilon^{92}\)Mo of \(\mathcal{CI}\) changes from +0.97\(\pm\)0.45 to +0.36\(\pm\)0.41, which helps explain the positive \(\Delta^{95}\)Mo value of Earth's mantle relative to \(\mathcal{E}\) by allowing a significant late contribution of \(\mathcal{CI}\). Only two specimens of the Orgueil CI chondrite have been analyzed for their Mo isotopic compositions (Burkhardt et al., 2011b; Dauphas, Marty and Reisberg, 2002b) and only one specimen of Orgueil has been analyzed for its Ru isotopic composition (Fischer-Godde and Kleine, 2017). In both cases, the \(\varepsilon^{92}\)Mo and \(\varepsilon^{100}\)Ru values are rather imprecise, calling for more precise measurements, given the possible role of CI chondrites in explaining the positive \(\Delta^{95}\)Mo value of Earth's mantle.
Recently discovered zinc (Martins et al., 2023; Paquet et al., 2023; Savage, Moynier and Boyet, 2022; Steller et al., 2022) and potassium (Nie et al., 2023) isotopic anomalies can provide complementary pieces of information on the nature of the material accreted by the Earth. Assuming chondritic compositions for the building blocks, zinc isotope anomalies indicate that \(\sim\)6 to 13 % of Earth's mass came from carbonaceous outer solar system material (accounting for \(\sim\)30% of its Zn inventory), while the rest came from the inner solar system. Kleine et al. (2023) and Paquet _et al._ (2023) concluded that Mars' Zn inventory came primarily from the inner solar system, with CC not exceeding \(\sim\)4% of Mars' mass. Potassium isotopic anomalies also indicate that in both Earth and Mars, K came predominantly from inner solar system bodies, but the data remain consistent with a carbonaceous contribution not exceeding \(\sim\)10%. One challenge in interpreting isotopic anomalies of moderately volatile elements in terms of accretion history is the potential loss of these elements through volatilization, either in the accreted material itself or during planetary accretion. For instance, if a significant portion of Earth's formation derived from materials heavily depleted in moderately volatile elements, such as angrites, K and Zn isotopic records in the mantle might have missed 86 to 94 % of Earth's accretion (Earth is depleted by factors of 7.4 in K, Dauphas et al., 2022, and \(\sim\)15 in Zn, McDonough and Sun, 1995). In that context, the agreement in the contribution of carbonaceous materials between refractory and moderately volatile elements is remarkable but could be a coincidence, as they could record very different stages of planetary accretion.
## 4 Discussion
The most significant conclusions drawn from existing and newly measured isotopic anomalies are the following:
* Mars accreted \(\sim\)65% \(\mathcal{E}\), 33% \(\mathcal{O}\), and \(\sim\)2% \(\mathcal{CI}\)+CC. In the first part of its accretion, it accreted \(\mathcal{O}\) and \(\mathcal{E}\) in approximately equal proportions but accreted predominantly \(\mathcal{E}\) after that. The contribution of carbonaceous material may have increased in later stages, including during accretion of the late veneer, but this is highly uncertain.
* Earth accreted \(\sim\)92% \(\mathcal{E}\), 6% \(\mathcal{CI}\), and less than 2% CC and \(\mathcal{O}\). The contribution of \(\mathcal{CI}\) increased from \(\sim\)2% in the first part of the accretion to \(\sim\)11% in the later part.
### Implications for planetary dynamics
The field has experienced a significant surge in ideas related to terrestrial planet accretion (Broz et al., 2021; Chambers and Wetherill, 1998; Clement et al., 2018; Johansen et al., 2021; Levison et al., 2015; Nesvorny, Roig and Deienno, 2021; Walsh et al., 2011; Woo et al., 2023). Our primary focus here is on circumscribed phenomena that will require further examination within more comprehensive accretion models. The scenario we propose and the possible timeline of events we detail are illustrated in Fig. 9.
Radiometric dating indicates that Mars accreted rapidly, reaching half of its present size in approximately 2 Myr (Dauphas and Pourmand, 2011a; Tang and Dauphas, 2014). This observation, together with the difficulty of accounting for its small size in planetary dynamics simulations (Wetherill, 1991), indicate that Mars is most likely a stranded planetary embryo that escaped the stage of chaotic growth involving protracted collisions between Moon- to Mars-size objects (Dauphas and Pourmand, 2011a; Kobayashi and Dauphas, 2013). The protosolar nebula is thought to have lasted \(\sim\)5 Myr (Dauphas and Chaussidon, 2011; Krot et al., 2005), so this would mean that Mars formed when nebular gas was still around. Processes such as gas driven type-I migration of proto-Mars, aerodynamic drag on planetesimals, and the evolution of the gas disk could have affected the nature of the material accreted by Mars at different times.
There are considerable uncertainties on how isotopic anomalies were distributed across the early solar system and the discussion below is speculative. Based on spectroscopic observations of asteroids and cosmochemical considerations, it is likely that the early solar system was characterized by different isotopic reservoirs from inside to outside of the solar system, along a sequence \(\mathcal{E}\), \(\mathcal{O}\), \(\mathcal{COCV}\) (CC), and \(\mathcal{CI}\) (Brasser, Dauphas and Mojzsis, 2018; Dauphas et al., 2014b; Desch, Kalyaan and Alexander, 2018; Hopp et al., 2022a; Morbidelli et al., 2012; Render and Brennecka, 2021). We find that during the first 60% of its growth (up to \(\sim\)2.2 Myr after solar system formation; Dauphas and Pourmand, 2011a), the material accreted by Mars was composed of an equal mixture of \(\mathcal{O}\) and \(\mathcal{E}\) (Fig. 5, Table 7). The
feeding zone of embryos is approximately symmetrical, and a reasonable assumption is that Mars was born near the limit of the \(\mathcal{E}\)- and \(\mathcal{O}\)-planetesimal dominated regions (first panel in Fig. 9).
In the last \(\sim\)40% of its accretion, the material accreted by Mars became predominantly \(\mathcal{E}\) (Fig. 5, Table 7). This could either mean that the boundary between \(\mathcal{O}\)- and \(\mathcal{E}\)-dominated regions moved relative to the radial location of Mars, or that Mars itself moved (see below). Small planetesimals could have drifted inward because of aerodynamic gas drag (Adachi, Hayashi and Nakazawa, 1976) but this would lead to an increase in the fraction of \(\mathcal{O}\) rather than a decrease as is observed. The \(\mathcal{E}\) planetesimals that formed at \(\sim\)1 au could have been gravitationally scattered outward as Mars-class and larger protoplanets formed near 1 au (Broz et al., 2021). This could have plausibly increased the \(\mathcal{E}\)/\(\mathcal{O}\) ratio in the Mars feeding zone over time. The scattered planetesimals could not have reached \(>\)2 au because their orbits would have been circularized at \(<\)2 au by gas drag.
Alternatively, Mars migrated radially inwards from near the \(\mathcal{E}\)-\(\mathcal{O}\) boundary to well within the \(\mathcal{E}\)-region (second panel in Fig. 9). Type I migration arises from corotation and Lindblad torques exerted on the planet from the perturbed gas disk (Goldreich and Tremaine, 1980; Ward, 1986). McNeil _et al._ (2005) studied type I migration for embryos in the terrestrial planet forming region and found that they would have to form rapidly for their orbits to avoid significant orbital decay. In their simulations, embryos found at 1.5 au would have originated at \(\sim\)1.5 to 2.5 au. Broz et al. (2021) invoked similar migration of Mars in their model of convergent migration of protoplanets toward 1 au, as may be needed to explain the tight radial spacing of Earth and Venus. It is thus conceivable that Mars started its formation closer to the main asteroid belt, where it received a significant contribution of \(\mathcal{O}\), but then drifted inwards on a timescale of a few million years, and thus received a larger contribution of \(\mathcal{E}\) at later times.
The contribution of carbonaceous asteroids to the building of Mars remained relatively modest but could have increased in later stages, especially during the late veneer stage. Given Mars' rapid growth, the late veneer stage may have occurred relatively early, when nebular gas was still around, or within \(\sim\)10 Myr of its dispersal. The growth of giant planets would have scattered carbonaceous planetesimals from \(\sim\)5-20 au (CC and \(\mathcal{CI}\)) toward the inner solar system. The initially eccentric orbits of scattered planetesimals would have been rapidly circularized by gas drag and implanted in the main asteroid belt (Nesvorny et al., 2023; Raymond and Izidoro, 2017). Toward the end of gas disk lifetime, when the gas density at \(<\)5 au was low (Komaki et al., 2023) and the effect of gas drag was weakened, a small fraction of carbonaceous planetesimals originally from \(\sim\)5-20 au could have been implanted in the Mars-feeding zone at \(<\)2 au. This could be a potential source of the carbonaceous component in Mars' late veneer. After gas disk dispersal, the asteroid belt was excited and depleted under the influence of outer planet migration/instability (Nesvorny and Morbidelli, 2012; Tsiganis et al., 2005), with a small fraction of carbonaceous and \(\mathcal{O}\) asteroids impacting Mars. We do not expect this to be a major source of Mars accretion, because the asteroid belt had a relatively low mass and the impact probability of asteroids on Mars is low.
Earth's composition cannot be reproduced exactly by any chondritic mixture, but it clearly carries an \(\mathcal{E}\) signature (Figs 5, 7, Table 8). There is no evidence for extensive mixing with the \(\mathcal{O}\)-reservoir, suggesting that it grew by accretion of local embryos. Chaotic growth of Earth would be expected to be associated with considerable mixing of embryos formed at different heliocentric distances (Chambers and Wetherill, 1998). The fact that \(\mathcal{O}\) material is largely missing from Earth but represents \(\sim\)50 % of Mars could indicate that the \(\mathcal{O}\)-region was relatively limited in extent. It was present near the birthplace of Mars, but few embryos were born with that signature. This could be understood if Mars formed near the \(\mathcal{E}\)/\(\mathcal{O}\) boundary (see above), and Earth accreted planetesimals and protoplanets from \(\sim\)1 au. Mixing of accretion materials from different heliocentric distances is reduced when the gas disk is still around because orbital eccentricities of planetesimals/protoplanets at \(\sim\)1 au remain low.
A significant result of the Bayesian inference is that the fraction of carbonaceous material accreted by Earth likely increased later in its history (Figs 5, 6, Table 8). There is considerable debate on the timescale of Earth's accretion. Fast accretion (\(\sim\)10 Myr timescale) can potentially be reconciled with excess \({}^{182}\)W from \({}^{182}\)Hf-decay if the Moon-forming impact happened late and involved significant interaction between Theia's core and the protoEarth mantle (Yu and Jacobsen, 2011). Such a scenario would be more relevant to Earth's accretion in the protoplanetary gas disk by accretion of planetesimals or pebbles (Johansen et al., 2021; Levison et al., 2015), possibly associated with convergent migration of protoplanets (Broz et al., 2021). If Earth grew in a more canonical manner through
collisions between planetary embryos, then Earth's accretion likely unfolded over a longer period, achieving 60% of its final mass in \(\sim\)5 to 50 Myr (Gu et al., 2023).
In the _baseline_ model, the mass of carbonaceous material accreted by Earth in the last \(\sim\)40% of its accretion (excluding the late veneer) would have represented 4.5\(\pm\)2.5% of Earth's total mass. In the _4-stage_ accretion model, the mass of carbonaceous material accreted by Earth in the last \(\sim\)25% of its accretion (again excluding the late veneer) would have represented 4.7\(\pm\)2.4% of Earth's total mass. This finding is challenging to explain, because it is difficult to imagine how carbonaceous material could be such an important contributor during the late stages of Earth's accretion.
Planet migration and dynamical instability in the outer solar system (Tsiganis et al., 2005) is an appealing scenario to explain the late delivery of carbonaceous material to Earth, but there are significant difficulties. After gas dispersal, dynamical interaction of Neptune with the trans-Neptunian disk of icy planetesimals fueled radial migration of Neptune from its presumed formation radius at \(\sim\)20 au to its current orbit at 30 au. Neptune's migration could have triggered a dynamical instability in the outer solar system, known as the "Nice" model (Tsiganis et al., 2005), when Jupiter's eccentricity was excited to its current value (\(\sim\)0.05). This change should have destabilized the orbits of carbonaceous planetesimals that were previously implanted in the asteroid belt by gas drag (Nesvorny et al., 2023; Raymond and Izidoro, 2017). A fraction of these carbonaceous planetesimals would end up accreting on Earth. The issue with this scenario is that the terrestrial collision probability of destabilized planetesimals is relatively low, on the order of 10\({}^{-3}\) (Gladman et al., 1997). We seek to explain delivery of \(\sim\)4.5 % of Earth's total mass late in Earth's accretion, which for a \(\sim\)10\({}^{-3}\) efficiency would require 4.5 Earth-mass worth of carbonaceous asteroids in the outer belt, which is unreasonable. The probability of direct collision with the Earth of outer solar system planetesimals flung inwards during Neptune's migration is also very low (\(\sim\)10\({}^{-6}\); Gomes _et al._, 2005). They should not represent a significant source for Earth's accretion either.
A more appealing scenario is that delivery of carbonaceous material to Earth took the form of one or several stochastic impacts with large embryos. While small (\(\sim\)100 km) and early formed carbonaceous planetesimals at \(\sim\)5-10 au would be preferentially implanted in the asteroid belt (Nesvorny et al., 2023; Raymond and Izidoro, 2017), larger objects such as small embryos (\(\sim\)1000 km) could have reached \(<\)2 au (the effects of gas drag are reduced for larger bodies), where they would have mingled with embryos formed in the inner solar system. The implanted carbonaceous embryos would have participated in the stage of chaotic growth, and one or a few of those interlopers could have been accreted by Earth several tens of millions of years after solar system formation.
Earth's late veneer could either have been \(\mathcal{E}\) or \(\mathcal{C}\mathcal{I}\). From a dynamic point of view, \(\mathcal{E}\) would be easier to understand if it represented just the tail end of leftover accreting materials in Earth's vicinity, but the scenario outlined above could also provide some justification for a possible \(\mathcal{C}\mathcal{I}\) component of the late veneer. This is because, if \(\mathcal{C}\mathcal{I}\) planetesimals formed in the Uranus/Neptune region at \(>\)10 au (Hopp et al., 2022a) or even \(>\)15 au (Desch, Kalyaan and Alexander, 2018), and if Uranus/Neptune reached their final masses relatively late during the gas disk lifetime, the gas density at \(<\)5 au could have been already reduced (for example by MHD winds, Komaki _et al._, 2023). The \(\mathcal{C}\mathcal{I}\) planetesimals scattered by Uranus/Neptune could have been subsequently implanted to \(<\)2 au by gas drag, where they would later contribute to Mars and Earth late veneers.
The scenario proposed above can be tested against dynamical simulations to weed out variants that do not reproduce the constraints derived here. Considerable uncertainties remain for the Earth because an endmember component is still missing.
### A significant achondritic component in Earth?
The claim was recently made based on Si isotopic anomalies that the making of the Earth requires a significant achondritic component (Onyett et al., 2023). Enstatite chondrites were excluded by Onyett _et al_. (2023) because they display positive \(\upmu^{30}\)Si anomalies of \(+\)11\(\pm\)3. Onyett _et al_. (2023) argue instead for a model that involves mixing between achondrites and CI\(+\)CC meteorites. While such a mixture can reproduce the \(\upmu^{30}\)Si value, it fails in many respects, most notably for \(\upmu^{54}\)Fe and \(\upmu^{96}\)Zr because the silicate Earth is an endmember for these isotopic systems and both carbonaceous meteorites and non-aubrite achondrites display large isotopic anomalies (Fig. 10; Table 4).
Onyett et al. (2023) argue that the difference in \(\mu^{30}\)Si between enstatite chondrites and BSE is of nucleosynthetic origin, ruling out the possibility that the two be genetically tied by formation from the same isotopic reservoir. A potential complication however is that enstatite chondrites have very fractionated \(\delta^{30}\)Si values relative to BSE (Armytage et al., 2011; Dauphas et al., 2015; Fitoussi and Bourdon, 2012; Georg et al., 2007; Onyett et al., 2023; Savage and Moynier, 2013; Zambardi et al., 2013), which can create spurious \(\mu^{30}\)Si anomalies if natural fractionation departs from the exponential mass fractionation law used for internal normalization (Dauphas and Schauble, 2016a; Davis et al., 2015; Tang and Dauphas, 2012a; Wombacher, Rehkamper and Mezger, 2004; Young, Galy and Nagahara, 2002). It is therefore not entirely clear if the apparent \(\mu^{30}\)Si anomalies measured by Onyett _et al_. (2023) are nucleosynthetic in origin or if they are artifacts arising from the use of an incorrect mass fractionation law in the data reduction procedure. Onyett et al. (2023) dismissed this concern, citing a lack of correlation between \(\mu^{30}\)Si and \(\delta^{30}\)Si across different meteorite groups. However, the effect is notably pronounced for enstatite chondrites. As discussed below, the difference of \(+11\)\(\pm\)3 in \(\mu^{30}\)Si between BSE and enstatite chondrites can be largely attributed to the fact that equilibrium between gas and solid fractionated the Si isotopic composition of these objects, which is not well accounted for by the exponential law used by Onyett _et al_. (2023) in their data reduction.
The \(\delta^{30}\)Si values of EH and EL are -0.68\(\pm\)0.03 and -0.55\(\pm\)0.04 ‰ respectively (Armytage et al., 2011; Dauphas et al., 2015; Fitoussi and Bourdon, 2012; Georg et al., 2007; Onyett et al., 2023; Savage and Moynier, 2013; Zambardi et al., 2013), while the silicate Earth is at -0.29\(\pm\)0.01 ‰ (Savage et al., 2010; Savage et al., 2014; Zambardi et al., 2013). These variations correlate with variations in Mg/Si ratios and are best explained by equilibrium isotopic fractionation between SiO gas and forsterite during condensation in the solar nebula (Dauphas et al., 2015). At that temperature, we expect a fractionation of \(\sim\)2.3 ‰ for \(\delta^{30}\)Si between forsterite and SiO\({}_{\rm g}\), with approximately half of Si condensed and the remaining Si in the gas. High-temperature equilibrium isotopic fractionation should follow the relationship (Dauphas et al., 2022; Dauphas and Schauble, 2016a; Matsuhisa, Goldsmith and Clayton, 1978; Young, Galy and Nagahara, 2002),
\(\frac{\Delta\delta^{30}\mathrm{Si}_{\mathrm{op}}}{\Delta\delta^{29}\mathrm{Si }_{\mathrm{op}}}\simeq\frac{1/m_{28}-1/m_{30}}{1/m_{28}-1/m_{29}}\simeq 1.931\), (6)
where \(m_{28}\), \(m_{29}\), and \(m_{30}\) are the masses of the three Si isotopes. Onyett _et al_. (2023) adopted the exponential law to correct their data for mass fractionation by internal normalization, which assumes the following relationship (Dauphas and Schauble, 2016a),
\(\frac{\Delta\delta^{30}\mathrm{Si}_{\mathrm{op}}}{\Delta\delta^{29}\mathrm{Si }_{\mathrm{op}}}\simeq\frac{\ln(m_{30})-\ln(m_{28})}{\ln(m_{29})-\ln(m_{28})} \simeq 1.964\). (7)
Correcting natural mass fractionation on the \({}^{30}\)Si/\({}^{28}\)Si ratio by internal normalization to a fixed \({}^{29}\)Si/\({}^{28}\)Si ratio assuming exponential mass fractionation (Eq. 7) can yield artifact anomalies when the actual fractionation follows a different law like the equilibrium law (Eq. 6). Tang and Dauphas (2012a) derived approximate analytic formulas to calculate apparent isotopic anomalies if an incorrect mass fractionation law is used for internal normalization. For \(\mu^{30}\)Si and \(\delta^{30}\)Si, it takes the form,
\(\mu^{30}\)Si \(\simeq 500(n-k)\frac{(m_{30}-m_{29})}{m_{28}}\delta^{30}\)Si, (8)
where \(n\) and \(k\) are parameters that describe the actual and inferred mass fractionation laws, respectively (\(-1\) for high-T equilibrium and 0 for exponential; Marechal, Telouk and Albarede, 1999). If the exponential law (\(k=0\)) is used to correct high-T equilibrium isotopic fractionation (Tang and Dauphas, 2012a), then we expect the following relationship between apparent isotopic anomalies and mass fractionation, \(\mu^{30}\)Si \(\simeq-17\)\(\delta^{30}\)Si. Given the \(+0.4\) ‰ difference in \(\delta^{30}\)Si, we would therefore expect silicate Earth to be shifted relative to enstatite chondrites by -7 ppm. Onyett _et al_. (2023) calculated a similar value for isotopic equilibrium involving gaseous atomic Si but reported different values for gas species coordinated by O or other atoms as they used isotopologue rather than isotope masses. Taking SiO as an example, they replaced \(m_{28}\), \(m_{29}\), and \(m_{30}\) with \(m_{28}+m_{\rm O}\), \(m_{29}+m_{\rm O}\), and \(m_{30}+m_{\rm O}\) (\(m_{\rm O}\) is the mass of oxygen) in Eq. 6, corresponding to a slope \(\Delta\delta^{30}\)Si/\(\Delta\delta^{29}\)Si of 1.954, which is closer to the exponential law (Extended Data Fig. 1 in Onyett _et al_. 2023). The high-temperature equilibrium mass fractionation law should however not depend on the atoms coordinating with Si (Dauphas and Schauble, 2016a; Matsuhisa, Goldsmith and Clayton, 1978; Young, Galy and Nagahara, 2002; see Appendix A1 of Dauphas _et al_. 2022 for a diatomic gas molecule). The
equilibrium mass fractionation law calculated for SiO\({}_{\rm g}\) by Onyett _et al._ (2023) is therefore incorrect. Correcting for this 7 ppm effect, the enstatite chondrite-silicate Earth difference shrinks from \(+11\)\(\pm\)3 to \(+4\)\(\pm\)3 (Fig. 10), which is barely resolvable. Other meteorite groups would also need to be corrected for the same effect. We conclude that much of the difference in \(\mu^{30}\)Si between the silicate Earth and enstatite chondrites is an artifact arising from correcting natural equilibrium isotopic fractionation in Si isotopes using the exponential law, which is inappropriate. This conclusion does not depend on details of the scenario outlined by Dauphas et al. (2015) being correct, only on high-temperature equilibrium controlling this fractionation, which is highly likely.
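The numbers quoted above can be checked directly from the isotope masses; the short calculation below reproduces the slopes of Eqs. 6 and 7, the \(\simeq-17\) coefficient implied by Eq. 8, and the \(\simeq-7\) ppm shift (masses from standard atomic mass tables; the script is illustrative and not part of the original data reduction).

```python
from math import log

# Atomic masses of the Si isotopes (u)
m28, m29, m30 = 27.9769265, 28.9764947, 29.9737701

slope_equilibrium = (1/m28 - 1/m30) / (1/m28 - 1/m29)               # Eq. 6 -> ~1.931
slope_exponential = (log(m30) - log(m28)) / (log(m29) - log(m28))   # Eq. 7 -> ~1.964

n, k = -1, 0   # actual law: high-T equilibrium; law assumed for normalization: exponential
coeff = 500 * (n - k) * (m30 - m29) / m28   # Eq. 8 -> ~ -17.8 ppm per permil of delta30Si
shift = coeff * 0.4                         # +0.4 permil offset -> ~ -7 ppm apparent anomaly

print(slope_equilibrium, slope_exponential, coeff, shift)
```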
The two primary models for Earth's building blocks are as follows: (_i_) Earth was formed from material that, while isotopically akin to E-chondrites, possessed a distinct chemistry and a slightly different isotopic composition (Dauphas, 2017). (_ii_) Earth arose from a specific blend of achondrites and carbonaceous material. This blend underwent thermal processing, aligning certain isotopic anomalies (see Fig. 10) with those observed in both Earth and, coincidentally, enstatite chondrites (Schiller et al., 2020; Onyett et al., 2023). From a parsimony standpoint, the first model is favored. It demands fewer isotopic components, drawing primarily from enstatite rather than a blend of achondrites and carbonaceous materials. Additionally, it avoids invoking _ad hoc_ nebular processes to alter isotopic anomalies at a large scale for refractory elements like Mo and Zr.
## 5 Conclusion
This study, along with numerous prior investigations over the past several years, has substantially enhanced the number and quality of isotopic anomaly measurements in planetary materials. We utilize these data from Mars and Earth in a Bayesian inference calculation to delineate the isotopic nature of their building blocks. The use of a Bayesian approach is motivated by its ability to resolve, in a statistically sound manner, the challenge of replicating Mars and Earth compositions via a mixing model while simultaneously permitting exploration of the isotopic compositions of the endmembers.
To simplify the problem, we apply principal component analysis and a weighted Euclidean distance metric to the database of isotopic anomaly measurements. Our results indicate that most variations documented in meteorites can be attributed to the mixing of CI, COCV, O, and E. Notably, the PCA analysis uncovers the isotopic trichotomy CI, CC, and NC, previously observed in cross-correlation diagrams.
Earth's isotopic composition cannot be reproduced exactly by considering known meteorite groups only. With this caveat in mind, our analysis reveals that Earth consists of \(\sim\)92% \(\mathcal{E}\), 6% \(\mathcal{CI}\), and \(<\)2% \(\mathcal{COCV}\) and \(\mathcal{O}\), with a notable increase in the contribution of carbonaceous material in the last 40% of its accretion. Mars' isotopic composition is best characterized as a blend of \(\sim\)65% \(\mathcal{E}\), 33% \(\mathcal{O}\), and \(<\)2% \(\mathcal{CI}\) and \(\mathcal{COCV}\); its accretion saw a shift from an approximately equal mixture of \(\mathcal{E}\)+\(\mathcal{O}\) to predominantly \(\mathcal{E}\). The nature of the late veneer remains poorly defined for both Mars and Earth, but it could have included a significant proportion of carbonaceous material.
We consider potential planetary dynamics scenarios that could have enabled the observed changes to take place. Mars may have originated at the boundary between the \(\mathcal{O}\) and \(\mathcal{E}\) regions, subsequently migrating into the \(\mathcal{E}\) - region via gas-driven type-I migration over a span of a few million years. The increased contribution of carbonaceous material to Earth could be attributed to the random impact of an outer solar system embryo that migrated within the inner solar system while nebular gas remained, subsequently playing a role in the phase of chaotic growth.
Our study demonstrates that Si isotopic anomalies in Earth and enstatite chondrites should not be used as a constraint on the nature of the material accreted by Earth. These objects have highly fractionated Si isotopic compositions that, when inadequately corrected using the exponential law, produce apparent isotopic anomalies. These anomalies are not nucleosynthetic in origin; instead, they reflect nebular and planetary processes.
**Acknowledgements.** This work was supported by NASA grants 80NSSC20K1409 (Habitable Worlds), NNX17AE86G (LARS), NNX17AE87G and 80NSSC20K0821 (Emerging Worlds), EAR-2001098 (CSEDI), and a DOE grant to ND. Discussions with James Zhang were greatly appreciated. Editorial oversight by Dr. Morbidelli and comments by two anonymous reviewers were greatly appreciated.
**Author contributions.** ND and TH conceived the study. TH measured the Fe isotopic compositions. ND implemented the Bayesian inference calculation and ran the PCA/weighted Euclidean distance analysis. DN provided input on dynamical interpretations. ND wrote the first draft of the manuscript, which was subsequently edited by all authors.
**Appendix.** The Origins Lab isotopic anomaly database (OL_anomaly_database.xlsx) is publicly available as a _Google Sheets_. It will continue to be updated in the future by the authors as new measurements are published. It can be accessed at the following link:
[https://drive.google.com/drive/folders/17IzvN4L7P8JvyG838-iq90wxgfltJ-k?usp=sharing](https://drive.google.com/drive/folders/17IzvN4L7P8JvyG838-iq90wxgfltJ-k?usp=sharing)
## Figures
Figure 3: Taxonomy of chondrite groups (CI, CM, CO, CV, CR, EH, EL, H, L, LL) based on isotopic anomalies (\(\Delta^{17}\)O, \(\varepsilon^{48}\)Ca, \(\varepsilon^{50}\)Ti, \(\varepsilon^{54}\)Cr, \(\varepsilon^{56}\)Fe, \(\varepsilon^{60}\)Ni, \(\mu^{66}\)Zn, \(\mu^{84}\)Sr, \(\varepsilon^{96}\)Zr, \(\varepsilon^{92}\)Mo, \(\Delta^{95}\)Mo, \(\varepsilon^{100}\)Ru). These meteorite groups are the only ones among undifferentiated meteorites with a complete dataset of isotopic anomalies (Table 4). **A.** PCA loadings diagram for meteorite groups, which are treated as variables, with isotopic anomalies as the samples. The aim is to reduce the dimensionality of meteorite groups rather than isotopic anomalies. Since the isotopic anomalies are measured on disparate scales and span a variety of ranges, we first normalized them using z-score normalization. The direction and magnitude of each arrow in the loadings plot indicates the contribution of each meteorite group to the respective principal component. The PCA analysis allows for a clear separation of meteorite groups into the trichotomy NC=EH+EL+H+L+LL, CC=CO+CV+CM+CR, and CI. The analysis further illuminates the divide between ordinary (O=H+L+LL) and enstatite (E=EH+EL) chondrites. **B.** Weighted Euclidean distance matrix depicting the distances between meteorite groups in isotopic anomaly space. The distances were calculated by weighting each difference by the inverse of its variance, thereby ensuring that less precise measurements contribute less to the distance metrics. For two isotopic vectors \(i1\) and \(i2\), each associated with standard deviation vectors \(\sigma_{i1}\) and \(\sigma_{i2}\), the weighted Euclidean distance is computed as \(d_{i2-i1}=\sqrt{\sum_{j=1}^{n_{j}}\left(I_{i2,j}-I_{i1,j}\right)^{2}/\left(\sigma_{i2,j}^{2}+\sigma_{i1,j}^{2}\right)}\). This calculation ensures that the effects of less certain measurements are minimized. Like the PCA, the distance matrix illuminates several readily identifiable subgroups within the data.
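A minimal sketch of the weighted distance matrix described in panel B, assuming the anomaly data are arranged as a (groups × anomalies) matrix with matching 1\(\sigma\) uncertainties (array layout and names are illustrative):

```python
import numpy as np

def weighted_distance(I1, sig1, I2, sig2):
    """Weighted Euclidean distance between two isotopic-anomaly vectors,
    down-weighting anomalies with large combined uncertainties."""
    return np.sqrt(np.sum((I2 - I1) ** 2 / (sig2 ** 2 + sig1 ** 2)))

def distance_matrix(I, sig):
    """Pairwise weighted distances for all meteorite groups (rows of I)."""
    n = I.shape[0]
    D = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            D[a, b] = weighted_distance(I[a], sig[a], I[b], sig[b])
    return D
```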
Figure 4: Fe isotopic composition of Martian meteorites. The four samples that represent the three major groups of Martian meteorites (shergottites, nakhlites, and chassignites) have indistinguishable \(\upmu^{54}\)Fe values that agree with available data of Schiller _et al_. (2020). Together these samples define a weighted average for the bulk silicate Mars (BSM) of +7\(\pm\)2 (95% c.i.).
Figure 5: Contributions of \(\mathcal{O}\), \(\mathcal{E}\), and carbonaceous asteroids to accretion stages of Mars and Earth in the _baseline_ model (Tables 7, 8; III is the late veneer; II=60-99.x%, I=0-60%).
Figure 6: Contributions of \(\mathcal{E}\) and \(\mathcal{NC}\) during accretion of Earth in the _baseline_ and _4-stage_ models.
Figure 8: Contributions of \(\mathcal{E}\) and \(\mathcal{NC}\) during accretion of Mars in the _baseline_ and _4-stage_ models.
Figure 7: Contributions of \(\mathcal{O}\), \(\mathcal{E}\), and carbonaceous asteroids to accretion stages of Mars and Earth in the _4-stage_ model (Tables 7, 8; IV is the late veneer; III=75-99.x%, II=50-75%, I=0-50%).
Figure 9: Schematic illustration of transport and mixing between different isotopic reservoirs during growth of Mars and Earth. Mars was born near the inner edge of the main asteroid belt, in between the \(\mathcal{E}\) and \(\mathcal{O}\) reservoirs. It started by accreting an equal mixture of both types of planetesimals (**A**) but progressively accreted more \(\mathcal{E}\) material as it migrated inwards into the \(\mathcal{E}\) region by type I migration (**B**). During those times, carbonaceous asteroids from the outer solar system were scattered inwards by the giant planets, but interaction with nebular gas led to their implantation into the outer main belt. Larger carbonaceous objects were less subject to gas drag and were able to penetrate into the inner solar system. Earth grew later, primarily from \(\mathcal{E}\)-type embryos (**C**). Several tens of millions of years after solar system formation, one or several interloper carbonaceous embryos implanted in the inner disk during the first phase were stochastically accreted by the Earth (**D**).
Figure 10: Silicon isotopic relationships between silicate Earth and enstatite chondrites. **Left panel**: enstatite chondrites have fractionated \(\delta^{30}\)Si values relative to Earth (Fitoussi and Bourdon, 2012; Onyett et al., 2023). This is most likely due to high-temperature equilibrium Si isotopic fractionation in the nebula (Dauphas et al., 2015). Correction of such equilibrium isotopic fractionation using the exponential law will yield spurious isotopic anomalies like those measured in enstatite chondrites (Eq. 6; Tang and Dauphas, 2012a). Much of the positive \(\mu^{30}\)Si anomaly in enstatite chondrites relative to Earth (Onyett et al., 2023) can be explained by such an artifact of the internal normalization procedure (see text for details; data points in this panel correspond to individual \(\delta^{30}\)Si-\(\mu^{30}\)Si meteorite analyses from Onyett _et al_. 2023). The semi-transparent green circles are the raw values for enstatite chondrites, and the solid green circles are corrected for equilibrium isotopic fractionation (left panel). **Right panel**: Si and Zr isotopic anomalies in the silicate Earth, enstatite chondrites, non-aubrite achondrites, and carbonaceous chondrites (Table 4). Mixing achondrites and carbonaceous chondrites can emulate the \(\mu^{30}\)Si of Earth but yields large \(\mu^{96}\)Zr anomalies that are not seen in Earth. After correction for equilibrium Si isotopic fractionation, enstatite chondrites have \(\mu^{30}\)Si close to Earth, with no need to invoke multicomponent mixing and hypothetical fractionation of presolar SiC (Onyett et al., 2023).
2310.20348 | Class Incremental Learning with Pre-trained Vision-Language Models | With the advent of large-scale pre-trained models, interest in adapting and
exploiting them for continual learning scenarios has grown.
In this paper, we propose an approach to exploiting pre-trained
vision-language models (e.g. CLIP) that enables further adaptation instead of
only using zero-shot learning of new tasks. We augment a pre-trained CLIP model
with additional layers after the Image Encoder or before the Text Encoder. We
investigate three different strategies: a Linear Adapter, a Self-attention
Adapter, each operating on the image embedding, and Prompt Tuning which instead
modifies prompts input to the CLIP text encoder. We also propose a method for
parameter retention in the adapter layers that uses a measure of parameter
importance to better maintain stability and plasticity during incremental
learning. Our experiments demonstrate that the simplest solution -- a single
Linear Adapter layer with parameter retention -- produces the best results.
Experiments on several conventional benchmarks consistently show a significant
margin of improvement over the current state-of-the-art. | Xialei Liu, Xusheng Cao, Haori Lu, Jia-wen Xiao, Andrew D. Bagdanov, Ming-Ming Cheng | 2023-10-31T10:45:03Z | http://arxiv.org/abs/2310.20348v1 | # Class Incremental Learning with Pre-trained Vision-Language Models
###### Abstract
With the advent of large-scale pre-trained models, interest in adapting and exploiting them for continual learning scenarios has grown. In this paper, we propose an approach to exploiting pre-trained vision-language models (e.g. CLIP) that enables further adaptation instead of only using zero-shot learning of new tasks. We augment a pre-trained CLIP model with additional layers after the Image Encoder or before the Text Encoder. We investigate three different strategies: a Linear Adapter, a Self-attention Adapter, each operating on the image embedding, and Prompt Tuning which instead modifies prompts input to the CLIP text encoder. We also propose a method for parameter retention in the adapter layers that uses a measure of parameter importance to better maintain stability and plasticity during incremental learning. Our experiments demonstrate that the simplest solution - a single Linear Adapter layer with parameter retention - produces the best results. Experiments on several conventional benchmarks consistently show a significant margin of improvement over the current state-of-the-art.
## Introduction
Deep neural networks have revolutionized many real-world computer vision applications. However, most neural networks are restricted to static world scenarios in which they are trained _once_ and lack the flexibility to _adapt and update_ themselves in dynamically changing environments. The objective of continual learning is to enable training of neural networks in the presence of newly-arriving data [1, 2, 10]. The key problem to overcome in continual learning is catastrophic forgetting in which the network, when integrating knowledge needed for solving new tasks, forgets knowledge key to solving previous ones [13]. Class incremental learning (CIL), in which tasks consist of disjoint classes, is one of the principal continual learning scenarios considered in the literature [20].
In CIL, classes are presented to the learner in a sequence of disjoint _tasks_ containing one or more classes. CIL approaches can be roughly categorized into three broad classes: regularization-based, architecture-based, and replay-based strategies [1]. Replay-based CIL methods, which currently dominate the state-of-the-art, store a small set of samples (called exemplars) from previous tasks. Exemplars help consolidate previous knowledge, but they bring with them the new challenge of (often extreme) imbalance between current task data and exemplars of past tasks. This imbalance leads to predictions biased toward new classes, which in turn leads to worse performance on old ones. This problem is specifically addressed in IL2M [1], UCIR [14], BiC [21], EEIL [1] and SS-IL [15].
A recent trend in CIL is the exploitation of pre-trained models to leverage the knowledge contained therein [1, 13, 14, 12]. The use of foundation models has yielded very strong performance gains in several domains. Most previous works are based on pre-trained vision models, for example, models pre-trained on ImageNet using supervised
Figure 1: Different adaptation options for class incremental learning with pre-trained vision-language models. a) Zero-shot learning using CLIP for class incremental learning proposed in Continual-CLIP [10]. b) Prompt Tuning by learning additional text prompts. c) Self-attention Adapter for incremental adaptation of encoded image features. d) Linear Adapter after the image encoder for incremental adaptation of encoded image features.
learning [22, 23, 24], or unsupervised learning [15, 25]. Continual-CLIP [16] is the first work evaluating the potential of exploiting a pre-trained vision-language (CLIP [13]) for continual learning. Continual-CLIP provides a very strong and simple baseline that exploits the zero-shot learning capabilities of CLIP, although the pre-trained model is frozen and is thus unable to further improve with new data.
In this work, we propose a simple and efficient paradigm for continual learning based on the adaptation of a pre-trained CLIP model. Specifically, during training, we extract the latent feature of each input image using the pre-trained CLIP Image Encoder. To further improve the model when new data arrives, we analyze three different ways of adding additional _adaptation_ parameters: Prompt Tuning, Self-attention Adapters, and Linear Adapters (illustrated in Figure 1). Adapted features can be further combined with the text features obtained with the CLIP Text Encoder. The inputs to the Text Encoder are hand-crafted prompts representing all categories the model has seen so far (e.g. "A photo of a dog"). We follow the default approach of CLIP and multiply the feature matrices of the two modalities to obtain the final logits for classification. We then calculate the loss and perform backpropagation to update the parameters of the adapter. Note that the _adapter_ is the only component that needs to be updated, as we freeze all other parameters in CLIP. The adapter is temporarily retained and used for the initialization of the next task.
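To make the Linear Adapter variant concrete, the sketch below shows the forward pass in PyTorch. The CLIP encoders are treated as frozen black boxes, and all names (`image_encoder`, `text_encoder`), the embedding dimensionality, and the logit scale are illustrative assumptions rather than released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAdapterCLIP(nn.Module):
    """Frozen CLIP encoders plus a single trainable Linear Adapter on the image embedding."""

    def __init__(self, image_encoder, text_encoder, embed_dim=512, logit_scale=100.0):
        super().__init__()
        self.image_encoder = image_encoder
        self.text_encoder = text_encoder
        for p in self.image_encoder.parameters():
            p.requires_grad_(False)   # CLIP stays frozen
        for p in self.text_encoder.parameters():
            p.requires_grad_(False)
        self.adapter = nn.Linear(embed_dim, embed_dim)   # the only trainable component
        self.logit_scale = logit_scale

    def forward(self, images, class_prompt_tokens):
        # Adapt the image embedding; prompts cover all classes seen so far,
        # e.g. "A photo of a dog", "A photo of a cat", ...
        img = F.normalize(self.adapter(self.image_encoder(images)), dim=-1)
        txt = F.normalize(self.text_encoder(class_prompt_tokens), dim=-1)
        return self.logit_scale * img @ txt.t()   # logits over all classes seen so far
```

During each incremental task, these logits would be trained with a classification loss (e.g. cross-entropy) over the classes seen so far, and the adapter weights carried over as the initialization for the next task.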
Additionally, to better preserve the knowledge from the previous task, we propose a parameter retention strategy that keeps important weights from the previous adapter. Only the most important parameters are updated, where importance is measured by the distance between the previous and current parameters. We compare our method with conventional knowledge distillation approaches, and our experiments demonstrate the superiority of our method for preserving the knowledge of previous tasks.
To summarize, the main contributions of this work are:
* we exploit a pre-trained CLIP model for CIL and integrate an additional _adapter_ to enable incorporation of new knowledge and adaptation to new tasks;
* we propose a parameter retention strategy to accumulate previous knowledge in a way that accounts for the importance of each parameter;
* we achieve state-of-the-art results on several datasets in different scenarios.
## Related Work
We review work from the literature related to our approach.
### Class incremental learning
Class incremental learning is one of the three most common settings in continual learning [26].
Approaches can be divided into three main categories: regularization-based, parameter-isolation-based, and replay-based methods [1].
Regularization-based methods.The main idea behind regularization-based methods is to constrain the optimization of parameters by either identifying the importance of each or regularizing network outputs. Elastic Weight Consolidation (EWC) [12] uses a diagonal approximation to the Fisher Information Matrix (FIM) as a measure of parameter importance [12]. Rotated-EWC found the assumption of diagonal FIM to be too strict in practice and reparameterizes the network to approximately diagonalize the FIM by rotating the parameter space [13]. Synaptic Intelligence (SI) [14] and Memory Aware Synapses (MAS) [1] are in this same research direction. Semantic Drift Compensation (SDC) is based on metric learning combined with regularization and semantic drift compensation [26]. There are also multiple works focusing on different variants of knowledge distillation [15, 27, 28] to alleviate forgetting.
Architecture-based methods.Architecture-based methods manage model capacity by adding neurons and branches or by adding masks on top of the base model. Progressive Neural Networks (PNN) [24] add a duplicate layer with lateral connections, and DER [25] dynamically adds network branches. Progress & Compress [26] and AANETs [13] use two different types of network blocks, one for stability and one for plasticity. Piggyback [11], Packnet [12], HAT [20], and Ternary Feature Masks [10] are mask-based methods to consolidate previous knowledge. Random Path Selection [14] builds and selects random paths for different tasks without backpropagating to previously learned modules.
Replay-based methods.Replay-based methods assume that a small buffer is available for storing knowledge from past tasks. They can store real samples (known as exemplars), synthetic samples produced by generative models, or intermediate feature representations (like DGR [23] and REMIND [12]). Most replay-based methods store real samples from previous tasks. However, due to the limited memory available for previous samples, there is an imbalance problem when learning with current data and exemplars. This was partially addressed by EEIL [10], BiC [22], IL2M [1], Rainbow Memory [1] and SS-IL [13]. PODNet applies an efficient spatial distillation loss throughout the model to distill knowledge from previous tasks [11]. GDumb [15] proposes a simple baseline by greedily storing samples and learning from scratch using samples only available in the memory. Verwimp et al. [23] revealed the limits and merits of rehearsal methods in a systematic manner.
Pre-trained models for CIL.Pre-trained models can be utilized and obtain high performance in continual learning.
Tz-Ying Wu et al. (Wu et al. 2022) propose a two-stage training strategy, cloning part of the pre-trained model and fine-tuning it on the new data in stage one, and combining the base and novel classifiers by a new fusion algorithm in stage two. L2P (Wang et al. 2022b) introduces the idea of prompting in incremental learning and improves the performance by learning a prompt pool memory to instruct the pre-trained model. DualPrompt (Wang et al. 2022a) categorizes prompts as G-prompts and E-prompts which learn task-invariant and task-specific knowledge respectively. Although these two works achieved impressive results in a non-exemplar setting, it is worth noting that they utilized an ImageNet-21K pre-trained backbone. It is important to consider that the data distribution of ImageNet, CIFAR-100, and ImageNet-R is largely overlapping. Furthermore, the findings of Janson et al. (2023) support the fact that using an ImageNet-21K pre-trained backbone with the NMC (non-parametric classification) approach yields results superior to L2P (Wang et al. 2022b) without the need for additional training.
MEAT (Xue et al. 2022) learns an attention mask for each task and dynamically assigns attention masks to generate task-specific self-attention patterns on a ViT (Dosovitskiy et al. 2020) backbone. ProgPrompt (Razdaibiedina et al. 2023) learns a new soft prompt for each task and sequentially concatenates it with previously learned prompts. ADA (Ermis et al. 2022) proposes an adaptive distillation algorithm of adapters. It effectively consolidates new adapters with old adapters, so that pre-trained transformers can be trained incrementally on a sequence of tasks. In this work, we focus on class incremental learning with pre-trained vision-language models.
### Vision-Language Pre-training
In recent years researchers have made great progress in both computer vision and natural language processing, and pre-trained vision-language models are attracting more and more attention. The pioneering work CLIP (Radford et al. 2021) proposes to learn directly from a large number of text-image pairs with a simple contrastive objective.
CoOp (Zhou et al. 2022b) finds that prompt engineering is a major challenge when deploying such models in practice; it therefore models text prompts with a learnable vector while the pre-trained parameters are kept fixed. CoCoOp (Zhou et al. 2022a) further learns a lightweight neural network to generate an input-conditional token for each image to replace the static prompts of CoOp. The authors of ALIGN (Jia et al. 2021) found that previous models have strict requirements for datasets. They leverage a noisy dataset of over one billion image-text pairs and successfully use the scale of the corpus to compensate for the noise.
The methods above are mainly suited for vision-based downstream tasks. To adapt to Vision-Language tasks, researchers propose to learn joint image-text embeddings, such as OSCAR (Li et al. 2020) and UNITER (Chen et al. 2020). These use an object detector to obtain image features, then concatenate image features and text features and input the concatenation into a Transformer (Vaswani et al. 2017) module to learn the joint image-text embedding. None of these methods can conduct image-text alignment before fusion. This problem can cause difficulty in learning the interaction between images and text. To obtain better interaction, ALBEF (Li et al. 2021) introduces a contrastive loss to align image and text features before fusing them and then generates joint representations. In this work, we are interested in how to leverage a pre-trained vision-language model to further improve class incremental learning.
## Method
In Figure 2, we present the overall framework of the proposed method, encompassing the CLIP pre-trained image encoder and text encoder, as well as a linear adapter. During the training process, we solely update this linear adapter and keep all other parts fixed. Additionally, we introduce a parameter retention method that maximizes the preservation of knowledge acquired from previous tasks, thereby mitigating catastrophic forgetting. We begin by describing the CIL scenario we consider and then describe our approach.
### Preliminaries
#### Class incremental learning.
In class incremental learning a model must be updated over \(T\) sequentially-presented classification tasks. At task \(t\) the data available for training is \(\mathcal{D}_{t}=\left(\mathbf{x}_{t}^{(i)},y_{t}^{(i)}\right)_{i=1}^{n_{t}}\), where \(\mathbf{x}_{t}^{(i)}\) is an input image, \(y_{t}^{(i)}\) is the corresponding label, and \(n_{t}\) is the number of samples in task \(t\). We write the number of new classes at task \(t\) as \(K_{t}\) and \(K_{1:t}=\sum_{t^{\prime}=1}^{t}K_{t^{\prime}}\) for the total number of classes up to the current task. After learning on \(\mathcal{D}_{t}\), the model is evaluated on all previous tasks - that is, we require that the model continue to perform well on previously seen tasks.
Normally the model can be divided into two parts: a feature extractor \(F_{\theta}\) and a fully-connected classification head \(H_{\phi}\) with parameters \(\theta\) and \(\phi\), respectively. A fully-connected layer normally consists of a weight matrix and a bias vector, which we denote as \(\phi=\{\mathbf{W},\mathbf{b}\}\). The conventional cross-entropy loss is often used for learning to classify, which for task \(t\) is:
\[\mathcal{L}_{\text{CE}}(\mathbf{x},\mathbf{y};\theta,\phi)=-\frac{1}{n}\sum_{i =1}^{n}\mathbf{y}_{i}\cdot\log\mathbf{p}_{i}(\mathbf{x}), \tag{1}\]
where \(\mathbf{y}_{i}\) is the one-hot vector encoding the correct label and \(\mathbf{p}_{i}\) are the model probabilities obtained by softmax over the outputs of the classifier head:
\[\mathbf{p}_{i}(\mathbf{x})=\frac{\exp\left(\mathbf{W}_{i}F(\mathbf{x})+b_{i} \right)}{\sum_{j=1}^{K_{1:t}}\exp\left(\mathbf{W}_{j}F(\mathbf{x})+b_{j}\right)}, \tag{2}\]
where \(K_{1:t}\) is the number of classes up to task \(t\) and \(\mathbf{W}_{i}\) is the weight vector corresponding to the \(i\)-th class.
The challenge of class incremental learning is to minimize the loss in Eq. (1) using _only_ data from task \(t\) without forgetting how to recognize the \(K_{1:t-1}\) classes from all previous tasks.
#### CLIP for continual learning.
Thengane et al. (Thengane et al. 2022) proposed exploiting the zero-shot learning capability of CLIP to mitigate catastrophic forgetting of previous tasks during continual learning. Continual-CLIP (Thengane et al. 2022) classifies images by multiplying the encoded image with text features extracted by the text encoder of a frozen CLIP model, using the labels of all categories seen up to the current task as prompts. The prompts are manually determined and combined with the class names, and the similarity score between the input image and the prompts of all classes is used for classification. This method does not require storing any exemplars or updating any parameters, and it outperforms most state-of-the-art methods with only zero-shot evaluation. However, Continual-CLIP has no ability to adapt to incoming task data; in the next section we propose how CLIP pre-trained models can be updated and further improved for class incremental learning.
### CLIP with Adaptation
Starting from the zero-shot evaluation protocol proposed in Continual-CLIP (Thengane et al. 2022), we aim to augment the original architecture with new modules that enable better adaptation to more downstream tasks. By learning on the training set of each task and updating the parameters of these modules, we can achieve a better trade-off between stability and plasticity.
**Adaptation via a Linear Adapter.** Assume we are at task \(t\), the input image is \(x_{t}^{(i)}\), and denote the CLIP image encoder as \(F_{\text{image}}\) and the text encoder as \(F_{\text{text}}\). For the image modality, we input the images into the image encoder to extract the latent embedding \(I_{i}=F_{\text{image}}(x_{t}^{i})\in\mathbb{R}^{M}\). Subsequently,
we pass it through a Linear Adapter layer \(A_{i}=g_{W}(I_{i})\) parameterized by weight matrix \(W\in\mathbb{R}^{M\times M}\) to compute an adapted image feature with more capacity. For the text modality, we amalgamate a manually specified, fixed set of prompts ("a photo of [CLS]") where [CLS] takes the value of all class names \(C_{t}=\{c_{1},c_{2},...,c_{t}\}\) that the current model has encountered. This set of prompts is given to the text encoder to obtain the text features \(B_{t}\). Finally, we perform a matrix multiplication between \(A_{i}\) and \(B_{t}\) to compute the output logits. We use the cross-entropy loss to update the parameters of the Linear Adapter layer:
\[\mathcal{L}_{\text{CE}}(\mathbf{x},y,t;W)=-\frac{1}{n}\sum_{i=1}^{n}y_{i}\cdot \log A_{i}B_{t}. \tag{3}\]
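To make the adaptation step concrete, the following PyTorch-style sketch implements the Linear Adapter and the logit computation of Eq. 3; the encoder callables, the prompt template, and the feature normalization before the product are illustrative assumptions (mirroring CLIP's usual zero-shot recipe) rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAdapter(nn.Module):
    """A single M x M linear layer applied to frozen CLIP image features (the matrix W)."""

    def __init__(self, dim: int):
        super().__init__()
        self.fc = nn.Linear(dim, dim, bias=False)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        return self.fc(image_features)


def task_logits(images, class_names, image_encoder, text_encoder, adapter):
    """Compute logits A_i @ B_t^T for all classes seen so far (encoders are placeholders)."""
    with torch.no_grad():  # the CLIP encoders stay frozen
        img = image_encoder(images)                                   # (B, M) latent features I_i
        txt = text_encoder([f"a photo of {c}" for c in class_names])  # (K, M) text features B_t
    img = F.normalize(adapter(img), dim=-1)  # adapted features A_i (normalization is assumed)
    txt = F.normalize(txt, dim=-1)
    return img @ txt.t()                     # (B, K) similarity logits


# One training step on a batch (x, y) of the current task, i.e. Eq. 3 via cross-entropy:
# loss = F.cross_entropy(task_logits(x, seen_classes, clip_image_enc, clip_text_enc, adapter), y)
# loss.backward(); optimizer.step()   # the optimizer holds only adapter.parameters()
```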
**Adaptation via a Self-attention Adapter.** We can use a self-attention module in place of the single Linear Adapter layer described above. Specifically, we apply three separate linear transformations to the output \(I_{i}\) of the CLIP image encoder, resulting in the query (\(Q\)), key (\(K\)), and value (\(V\)) matrices:
\[Q=W_{q}I_{i},K=W_{k}I_{i},V=W_{v}I_{i}, \tag{4}\]
where \(W_{q}\), \(W_{k}\), and \(W_{v}\) are learned weight matrices. Next, we compute the self-attention weights using the dot product between the query and key matrices, scaled by a factor of \(1/\sqrt{M}\):
\[\alpha=\text{softmax}(\frac{QK^{T}}{\sqrt{M}}), \tag{5}\]
Finally, we compute the weighted sum of the value matrix V, using the self-attention weights \(\alpha\):
\[A_{i}=\alpha V. \tag{6}\]
After obtaining the adapted image features \(A_{i}\), we use Eq. 3 to calculate the logits by multiplying the image features with the text features and use the cross-entropy loss to update parameters. Unlike the previous approach, in this Self-attention Adapter we need to update three linear projection matrices instead of just one. This results in three times the number of learnable parameters.
Figure 2: Illustration of our proposed method. We add a Linear Adapter layer (in green) after the CLIP Image Encoder which generates projected features. The text feature is generated by encoding class labels using the CLIP Text Encoder. During training the Image and Text encoders are frozen and we only update the Linear Adapter. To obtain better continual learning performance, we propose a parameter retention method to mitigate forgetting in the Adapter (see Parameter Retention Section).
**Adaptation via Prompt Tuning.** The two methods above operate only on the image modality. We propose adding learnable parameters to the text prompt. In the previous two methods, we use fixed prompts such as "a photo of [CLS]". For prompt tuning, we follow the CoOp method (Zhou et al. 2022b) and replace the fixed prompt with a learnable parameter \(p\). We concatenate it with the first layer of embedded input tokens for the class prompts that the model has seen so far, and then give it as input to \(F_{\text{text}}\) to obtain the text features \(B_{t}\). In this method, we use the fixed CLIP image encoder-generated \(I_{i}\) directly as \(A_{i}\) and multiply it with \(B_{t}\). We then calculate the loss in Eq. 3 and update the prompt vector to optimize the model parameters.
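The prompt-tuning variant can be sketched as follows; the tensor shapes and the assumption of a text encoder that accepts token embeddings directly (as in CoOp-style implementations) are illustrative and not the exact code used here.

```python
import torch
import torch.nn as nn


class LearnablePrompt(nn.Module):
    """CoOp-style learnable context vectors shared by all classes (illustrative sketch)."""

    def __init__(self, n_ctx: int, embed_dim: int):
        super().__init__()
        # The learnable parameter p that replaces the fixed "a photo of" context.
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, embed_dim))

    def forward(self, class_token_embeddings: torch.Tensor) -> torch.Tensor:
        # class_token_embeddings: (K, n_cls_tok, embed_dim), frozen token embeddings
        # of the class names seen so far; the learned context is prepended to each.
        k = class_token_embeddings.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(k, -1, -1)          # (K, n_ctx, embed_dim)
        return torch.cat([ctx, class_token_embeddings], dim=1)


# Hypothetical usage:
# B_t = text_encoder_from_embeddings(prompt(class_token_embeddings))  # (K, M) text features
# logits = image_features @ B_t.t()   # the frozen I_i is used directly as A_i in this variant
```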
### Parameter Retention
Although a single adapter layer may achieve satisfactory performance on the current task, it will drift in the learning process and result in catastrophic forgetting in the later stages of continual learning, leading to a decline in overall performance. Therefore, we propose a parameter retention strategy aimed at maximizing the preservation of previously learned knowledge while learning new tasks.
Using a simple Linear Adapter as an example, before the start of training we randomly initialize an \(M\times M\) parameter matrix \(W_{0}\) for the Linear Adapter. After completing training and testing the first task, we save the trained parameter matrix \(W_{1}\) and use it to initialize the adapter for the second task. After training the second task, \(W_{2}\) is selectively updated to preserve a proportion of the original parameters from \(W_{1}\). Measuring the drift of parameter \(i\) in \(W\) with \(\Delta W^{i}=|W_{1}^{i}-W_{2}^{i}|\), we select a proportion \(\gamma\in[0,1.0]\) of parameters from \(W_{1}\), while the rest are replaced with the new parameters from \(W_{2}\):
\[W_{2}^{\prime,i}=\begin{cases}W_{1}^{i},&\text{if }\Delta W^{i}<\beta(\gamma), \\ W_{2}^{i},&\text{if }\Delta W^{i}\geq\beta(\gamma),\end{cases} \tag{7}\]
where \(\beta(\gamma)\) is a dynamic threshold computed from the desired retention rate \(\gamma\) and a ranking of the parameter drifts \(\Delta W^{i}\) (i.e. we order the values in \(\Delta W^{i}\) and select the threshold \(\beta\) that preserves exactly the desired proportion \(\gamma\) of weights). We use the new parameter matrix \({W_{2}}^{\prime}\) to test the model. Before the end of each subsequent task, we use the same method to preserve the parameters at all stages of continual learning.
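The retention rule of Eq. (7) amounts to only a few lines of code; in this sketch the dynamic threshold \(\beta(\gamma)\) is taken as the \(\gamma\)-quantile of the absolute drifts, which keeps a proportion \(\gamma\) of the old entries (ties and the exact ranking convention are implementation details not fixed by the text).

```python
import torch


def retain_parameters(w_prev: torch.Tensor, w_new: torch.Tensor, gamma: float = 0.8) -> torch.Tensor:
    """Blend previous and newly trained adapter weights following Eq. (7).

    Entries whose drift |w_new - w_prev| falls below the dynamic threshold beta(gamma)
    keep their old value; the remaining entries take the newly trained value.
    """
    drift = (w_new - w_prev).abs()
    beta = torch.quantile(drift.flatten(), gamma)  # gamma-quantile of the drifts
    return torch.where(drift < beta, w_prev, w_new)


# After finishing task t (names follow the earlier Linear Adapter sketch):
# with torch.no_grad():
#     adapter.fc.weight.copy_(retain_parameters(w_prev_task, adapter.fc.weight, gamma=0.8))
```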
This parameter retention strategy is inspired by the human cognitive process. There are studies indicating that humans activate different neurons in the brain when acquiring different forms of knowledge Pulvermuller (2005). By modifying only a portion of the neural network weights while retaining most of the original parameters during the learning of new categories, our model can, to a certain extent, emulate this process and explicitly retain prior knowledge it has acquired.
As for the question of _which_ parameters to retain, we believe that keeping only the most significant parameter changes, while retaining the old values elsewhere, is sufficient for the model to make adequate adjustments to adapt to new tasks. We also conducted ablation experiments using other methods, such as randomly preserving parameters, and the results confirmed the validity of our approach (see Ablation Section).
**Knowledge distillation.** Distillation is widely employed in continual learning. The outputs of a previous model are used to impose certain constraints on the parameters of the new one. We attempted to incorporate knowledge distillation in our framework by distilling the output logits from both the current and previous models. However, the results show that using distillation leads to worse performance (see Experimental Section).
## Experimental Results
We first discuss implementation details, benchmarks, and competing methods. Then we provide a comparison to the state-of-the-art and an extensive ablation study.
### Experimental Setup
We performed experiments on CIFAR-100, ImageNet-100 (also known as ImageNet-Subset) and ImageNet-R Hendrycks et al. (2021). First, we follow the two experimental configurations used in DER Yan et al. (2021): B0, in which all classes are equally divided among different tasks, and B50 in which the first task contains 50 classes (half of the dataset) and the rest are equally divided into subsequent tasks. Then we follow the ImageNet-R setting proposed by L2P Wang et al. (2022) which splits the 200 classes into 10 tasks equally. For CIFAR-100, we train using SGD for 30 epochs at an initial learning rate of 0.1 and weight decay of 0.0002. For ImageNet-100 and ImageNet-R, we use Adam with an initial learning rate of 0.01 for 10 epochs. For both datasets we use a cosine annealing scheduler. We report _average incremental accuracy (Avg)_ as a function of increasing tasks and the accuracy at the last task (Last) when one number is preferable. All results are averaged over 3 runs with different class orders and random seeds.
We compare our approach with several state-of-the-art methods: iCaRL Rebuffi et al. (2017), WA Zhao et al. (2020) UCIR Hou et al. (2019), PODNet Douillard et al. (2020), BiC Wu et al. (2019), RPSNet Rajasegaran et al. (2019), and DER Yan et al. (2021). Unless otherwise stated we store a total of 2000 exemplars for all approaches except Continual-CLIP (which is based on zero-shot inference using the pre-trained CLIP model and does no adaptation to incremental tasks).
Many of the state-of-the-art methods mentioned above are based on ResNet backbones and do not use pre-trained models. Therefore, we have also developed some baselines using pre-trained models. The **Linear Probe** method adds a classifier on top of a fixed CLIP image encoder. As new tasks arrive, the classifier head is incrementally expanded, and its parameters are retrained to classify the new categories. The **Continual-CLIP** method (Thengane et al. 2022) relies solely on the excellent generalization capabilities of CLIP itself, without adding any additional parameters or further training.
The **L2P** (Wang et al. 2022) method uses a dynamic prompt selection approach as input to a pre-trained
Transformer. Unlike the original implementation that uses ImageNet-21K pre-trained weights as the backbone, we replaced it with the CLIP image encoder since CIFAR-100 and ImageNet-100 have classes that overlap with the 21K ImageNet classes, which would bias the results.
### Comparison with the State-of-the-art
**Evaluation on CIFAR-100.** In Figure 3 we plot the performance of our model on each task under different settings. We see that our model outperforms others in almost all settings and for every task. This shows that our model not only exploits the performance brought by the large-scale pre-trained CLIP model, but also adapts to the continual learning scenario. It is worth noting that in the extremely challenging B0-50 setting, in which there are 50 continual learning tasks, our method does not show a clear advantage over other state-of-the-art models in the very early tasks. This is due to the fact that, in the early stages of continual learning, we directly replace a selected portion of parameters with the previous ones, which can cause some degree of underfitting in tasks with fewer classes. However, in later tasks our method can preserve knowledge from previous tasks much better, which mitigates forgetting and ultimately leads to superior performance compared to other methods. We refer the readers to the supplementary material for more detailed results.
**Evaluation on ImageNet-100.** Like the experiments on CIFAR-100, we follow DER [22] and evaluate on ImageNet-100 with the same settings to compare with existing methods. As shown in Table 1, the current state-of-the-art method DER outperforms iCaRL, End2End, and UCIR by a large margin. Our method improves over DER by a significant gain of up to 8 percentage points in the B0-10 setting and 7 percentage points in the B50-10 setting in the Avg metric. In general, our method yields larger gains in the B0 setting than in B50 compared to DER. It is worth noting that when employing CLIP pre-trained weights and utilizing an exemplar set of 2000, the improvement in L2P's performance over Continual-CLIP is negligible. In most settings, L2P's performance is lower than ours by about 1% to 2%.
**Evaluation on ImageNet-R.** We follow [14] using an exemplar of size 5000 for ImageNet-R. In Table 1, we see that our results surpass DER by 9%. While L2P outperforms Continual-CLIP by 2.1%, there still remains a gap of 1.9% compared to our results. This further demonstrates the superiority of our method in effectively utilizing pre-trained models to enhance the performance of continual learning.
### Ablation Study
We conducted an ablation study in the B0-10 setting on CIFAR-100. First, we evaluate different adapters, and then we look at the effects of parameter retention rates and other strategies for parameter retention. Finally, we analyze how the number of exemplars affects performance.
**Different adapter layers.** As shown in Table 2, Continual-CLIP leverages the zero-shot capabilities of CLIP and can exploit the pre-trained CLIP model without any further learning. The Linear Adapter is the best architecture we found for continual learning, and it requires only one additional fully-connected layer. We also evaluated more fully
Figure 3: Comparison of our method with the state-of-the-art on CIFAR-100 in different experimental settings.
connected layers (MLP), but it does not perform well. Self-attention can leverage more parameters than the Linear Adapter method, but does not improve the results. This may be due to the complexity of the self-attention architecture. Prompt Tuning learns additional text prompts, however in the continual learning setting these learned prompts can drift away, resulting in much worse results.
**Different parameter retention ratios.** In Table 3 we show the impact of parameter retention rates on performance. Note that \(\gamma=0\) means we retain no parameters and directly use the adapted ones for testing after training a new task. We see that retaining more parameters leads to better performance, but retaining more than \(\gamma=0.8\) causes the model to underfit to new tasks and leads to a decrease in performance. We use \(\gamma=0.8\) for all our experiments.
**Comparison of parameter retention strategies.** In Table 4 we compare our parameter retention strategy with Knowledge Distillation (KD) and Random Selection. KD distills the logits from the previous model to regularize model adaptation to new tasks, however it results in worse performance compared to the original method without it. Distillation may be useful in some models trained from scratch, since there are more adjustable parameters and more precise optimizations can be made. However, in our model with only an additional linear layer, complex distillation is less effective than simply retaining some useful parameters. As expected, if we retain parameters randomly instead of using our selection method, CIL completely fails since important parameters are not explicitly retained to balance stability and plasticity.
**Ablation on the number of exemplars.** Finally, we show the impact of the number of exemplars in Table 5. As expected, the more exemplars used, the better the performance. Reducing from 2000 to 500 exemplars degrades performance. It is notable that by using 500 exemplars, our method can achieve performance similar to Linear Probe with 2000, demonstrating the effectiveness of our proposed method.
## Conclusions
In this work we approached the problem of class incremental learning using a pre-trained vision-language model. We investigated three different adaptation strategies (a Linear Adapter, a Self-attention Adapter, and Prompt Tuning) to facilitate model adaptation for class incremental learning. We found that using a Linear Adapter is simple but effective compared to the other two methods. Adapting the CLIP Image Encoder alone yields significant performance gains over conventional methods and baselines with pre-trained models on all benchmarks and in all incremental learning scenarios. For future work we are exploring more effective
\begin{table}
\begin{tabular}{r|c|c|c|c|c|c|c|c|c|c} \cline{2-10} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{ImageNet-100 B0} & \multicolumn{4}{c|}{ImageNet-100 B50} & \multicolumn{4}{c}{ImageNet-R B0} \\ \cline{2-10} \multicolumn{1}{c|}{Method} & \multicolumn{4}{c|}{\# tasks} & \multicolumn{4}{c|}{\# tasks} & \multicolumn{4}{c}{tasks} \\ \cline{2-10} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{5} & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}{20} & \multicolumn{1}{c|}{5} & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}{10} \\ \cline{2-10} \multicolumn{1}{c|}{} & Avg & Last & Avg & Last & Avg & Last & Avg & Last & Avg & Last \\ \hline iCaRL & 78.1 & 65.2 & 74.1 & 58.5 & 69.0 & 50.9 & 60.7 & 44.7 & 57.3 & 44.4 & - \\ End2End & 75.5 & 64.0 & 70.1 & 50.3 & 68.3 & 48.9 & 61.0 & 52.1 & 58.5 & 52.2 & - \\ UCIR & 76.0 & 64.0 & 70.5 & 55.3 & 64.7 & 47.8 & 77.2 & 68.2 & 66.9 & 56.8 & - \\ PODNet & 78.2 & 66.2 & 72.3 & 57.2 & 66.7 & 48.9 & 80.3 & 73.5 & 79.0 & 70.8 & - \\ UCIR-DDE & 77.2 & 65.8 & 71.7 & 56.8 & 66.2 & 40.0 & 78.8 & 68.1 & 68.4 & 57.9 & - \\ RM & 75.5 & 62.2 & 70.4 & 53.2 & 65.4 & 45.7 & 56.9 & 41.8 & 57.7 & 37.3 & - \\ DER & - & - & 77.2 & 66.7 & - & - & - & - & 78.2 & 74.9 & 66.7 \\ \hline Continual-CLIP & 81.7 & 75.3 & 83.1 & 75.3 & 83.5 & 75.3 & 79.2 & 75.3 & 79.3 & 75.3 & 72.0 \\ Linear Probe & 79.4 & 64.5 & 81.8 & 67.4 & 84.0 & 72.0 & 81.4 & 67.2 & 83.1 & 72.0 & 57.5 \\ L2P & 81.3 & **76.3** & 80.1 & 75.8 & 80.7 & 76.1 & 74.8 & 74.1 & 71.9 & 72.5 & 74.1 \\ \hline Ours & **84.9** & 74.7 & **85.3** & **76.8** & **84.1** & **77.0** & **85.1** & **76.8** & **85.4** & **77.4** & **75.9** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of our method with the state-of-the-art on ImageNet-100 under different experimental settings.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Ratio (\(\gamma\)) & 0 & 0.2 & 0.4 & 0.6 & 0.8 & 0.9 \\ \hline Avg & 83.2 & 83.3 & 83.5 & 84.0 & 84.2 & 81.6 \\ Last & 73.2 & 73.5 & 74.1 & 74.7 & 76.9 & 76.5 \\ \hline \end{tabular}
\end{table}
Table 3: Our performance with different retention ratios on the CIFAR-100 B0-10 setting. The ratio is the percentage of parameters retained from the previous adapter.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Method & KD & Random & Ours \\ \hline Avg & 76.7 & 44.5 & 84.2 \\ Last & 67.0 & 42.2 & 76.9 \\ \hline \end{tabular}
\end{table}
Table 4: Our performance using different parameter retention strategies in the CIFAR-100 B0-10 setting.
\begin{table}
\begin{tabular}{c|c c c c} \hline \# Exemplars & 2000 & 1000 & 500 & Linear Probe (2000) \\ \hline Avg & 84.2 & 82.5 & 80.0 & 80.5 \\ Last & 76.9 & 71.9 & 68.2 & 66.8 \\ \hline \end{tabular}
\end{table}
Table 5: Our performance for different numbers of exemplars in the CIFAR-100 B0-10 setting.
ways of leveraging pre-trained models for incremental learning in order to take full advantage of the rapid development of multi-modal foundation models.
|
2309.05097 | Cohomological rank functions and surfaces of general type with $p_g=q=2$ | We classify minimal surfaces $S$ with $p_g=q=2$ and $K_S^2=5$ or $6$. | Jiabin Du, Zhi Jiang, Guoyun Zhang | 2023-09-10T17:55:15Z | http://arxiv.org/abs/2309.05097v2 | # Cohomological rank functions and surfaces of general type with \(p_{g}=q=2\)
###### Abstract.
We classify minimal surfaces \(S\) with \(p_{g}=q=2\) and \(K_{S}^{2}=5\) or \(6\).
Key words and phrases:Surfaces of general type, Generic vanishing, Polarized abelian surfaces 2020 Mathematics Subject Classification: 14J17, 14K05, 14K12
## 1. Introduction
We work over complex number field \(\mathbb{C}\) throughout this article.
A minimal surface \(S\) of general type with \(\chi(\mathcal{O}_{S})=\chi(\omega_{S})=1\) is in some sense the simplest surface of general type since the important birational invariant \(\chi(\mathcal{O}_{S})\) takes the minimal value. However, the classification of such surfaces is wildly open.
For these surfaces, Beauville pointed out in the appendix of [D] that \(p_{g}=q\leq 4\) and \(S\) is isomorphic to the product of two smooth projective curves of genus \(2\) when the equality holds.
When \(p_{g}=q=3\) or \(p_{g}=q=2\) and the Albanese morphism \(a_{S}\) is a fibration onto a curve, a full classification has been worked out due to the effort of several people (see [HP, Pg, Z, Pm]). The case when \(p_{g}=q=2\) and \(a_{S}\) is generically finite is already very complicated. By Debarre's inequality and Miyaoka-Yau's inequality, we know that \(4\leq K_{S}^{2}\leq 9\). When \(K_{S}^{2}=4\), we know that \(S\) is a double cover of a principally polarized abelian surface \((A,\Theta)\) branched over a divisor \(D\in|2\Theta|\) with negligible singularities (see [CMLP]). Indeed, the surface is on the Severi line and by the work of [BPS1] and [LZ], it should be a simple double cover of a principally polarized abelian surface.
Chen and Hacon constructed the first family of minimal surfaces \(S\) of general type with \(K_{S}^{2}>4\), \(p_{g}=q=2\), and \(a_{S}\) generically finite in [CH]. These surfaces are called Chen-Hacon surfaces. A Chen-Hacon surface \(S\) satisfies the property that \(K_{S}^{2}=5\), \(p_{g}=q=2\), and \(\deg a_{S}=3\). They have also been extensively studied in [PP1] and [AC].
Since Chen and Hacon's work, many surfaces with \(p_{g}=q=2\) and \(6\leq K_{S}^{2}\leq 8\) have been constructed (see [AC, Pm, PP2, PP3, PP4, PRR]). There are several different ways to construct these surfaces including taking double covers of an abelian surface branched over a curve with prescribed singularities, taking quotients of a product of curves, and taking triple or quadruple covers of abelian surfaces.
Nevertheless, a fine classification theory of surfaces with \(5\leq K_{S}^{2}\leq 9\) and \(p_{g}=q=2\) is still missing. Even an effective bound of the degrees of the Albanese maps of these surfaces is out of reach.
This paper solves two cases among the previously unknown five cases.
**Theorem 1.1**.: _A minimal surface \(S\) of general type with \(K_{S}^{2}=5\) and \(p_{g}=q=2\) is a Chen-Hacon surface._
This theorem answers questions raised in [PP1] and [AC] about the possible structures of minimal surfaces of general type with \(K^{2}=5\) and \(p_{g}=q=2\). We also determined the possible structures of minimal surfaces of general type with \(K^{2}=6\) and \(p_{g}=q=2\).
**Theorem 1.2**.: _Assume that \(S\) is a minimal surface of general type with \(K_{S}^{2}=6\) and \(p_{g}=q=2\), then \(2\leq\deg a_{S}\leq 4\). Moreover, surfaces with \(2\leq\deg a_{S}\leq 4\) are exactly those constructed by Penegini-Polizzi in [PP2], Alessandro-Catanese in [AC], and Penegini-Polizzi in [PP3]._
In order to prove Theorems 1.1 and 1.2, our idea is to study the possible structure of \(a_{S*}\omega_{S}\). We first show that the Chen-Jiang decomposition of \(a_{S*}\omega_{S}\) is in some sense simple and then cohomological rank functions developed in [JP] provide important information about \(S\). It is interesting to note that one needs to pick a polarization when considering cohomological rank functions but in this case, there is a canonical choice of polarization, which is the double dual of the Fourier-Mukai dual of the M-regular direct summand of \(a_{S*}\omega_{S}\).
The proof of Theorem 1.1 is a bit easier, since surfaces with \(K_{S}^{2}=5\) are on the line of Barja-Pardini-Stoppino inequality (see Subsection 2.6) in some sense and hence cohomological rank functions provide enough information to show that the Albanese morphisms are always of degree \(3\). The proof of Theorem 1.2 requires much more effort. Some main ingredients to get the bound of the degrees of the Albanese morphisms are Castelnuovo's inequality, the symmetries of the finite Heisenberg groups, and Clifford's lemma. The work of Alessandro-Catanese [AC] also provides us important inspiration.
**Acknowledgements.** The authors thank Prof. Catanese for various comments. The second author thanks Marti Lahoz for discussions on this topic
several years ago and thanks Songbo Lin for his interests about possible applications of the Chen-Jiang decomposition on this topic.
The second author is a member of the Key Laboratory of Mathematics for Nonlinear Science, Fudan University and he is supported by the Foundation for Innovative Research Groups of the Natural Science Foundation of China (No. 12121001), by the National Key Research and Development Program of China (No. 2020YFA0713200), and by the Natural Science Foundation of Shanghai (No.21ZR1404500, No.23ZR1404000).
## 2. Preliminaries
### Heisenberg groups
Let \((\hat{A},L)\) be a polarized abelian surface and we denote by \(A:=\operatorname{Pic}^{0}(\hat{A})\) the dual abelian surface. We know that if \(L\) is of type \((\delta_{1},\delta_{2})\), \(A\) carries a polarization of the same type (see [BL, 14.4]).
We recall a few facts about the kernel group and the finite Heisenberg group of \(L\) (see [BL, Section 6] or [M, Section 1]).
The kernel of the isogeny
\[\varphi_{L}:\hat{A}\to A,\quad x\mapsto t_{x}^{*}L\otimes L^{-1},\]
where \(t_{x}\) is the translation by \(x\), is denoted by \(K_{L}\). Note that if \(L\) is of polarization type \((\delta_{1},\delta_{2})\),
\[K_{L}\simeq(\mathbb{Z}_{\delta_{1}}\times\mathbb{Z}_{\delta_{2}})^{2}\]
and the corresponding finite Heisenberg group \(\mathscr{K}_{L}\) of \(L\) is a central extension of \(K_{L}\):
\[1\to\mu_{\delta_{2}}\to\mathscr{K}_{L}\to K_{L}\to 0.\]
The finite Heisenberg group can be interpreted as the group of isomorphisms \((\chi,a)\) of \(L\):
\[\chi:L\overset{\simeq}{\to}t_{a}^{*}L,\ \ a\in K_{L}.\]
The group \(\mathscr{K}_{L}\) is not commutative. Indeed, there exists a non-degenerate skew-symmetric pairing \(e^{L}:K_{L}\times K_{L}\to\mathbb{C}^{*}\) measuring the noncommutativity of \(\mathscr{K}_{L}\). Write \(A_{L}\) and \(B_{L}\) for two maximal isotropic subgroups of \(K_{L}\) with respect to the skew-symmetric pairing \(e^{L}\). Then \(A_{L}\simeq\mathbb{Z}_{\delta_{1}}\times\mathbb{Z}_{\delta_{2}}\) and \(B_{L}\simeq\widehat{A_{L}}:=\operatorname{Hom}(A_{L},\mathbb{C}^{*})\simeq \mathbb{Z}_{\delta_{1}}\times\mathbb{Z}_{\delta_{2}}\). As a group, \(\mathscr{K}_{L}\) is isomorphic to \(\mu_{\delta_{2}}\times A_{L}\times\widehat{A_{L}}\) with the group law
\[(\alpha_{1},x_{1},l_{1})\cdot(\alpha_{2},x_{2},l_{2})=(\alpha_{1}\alpha_{2} \cdot l_{2}(x_{1}),x_{1}+x_{2},l_{1}l_{2}).\]
The representation of \(\mathscr{K}_{L}\) on the vector space \(H^{0}(\hat{A},L)\) is isomorphic to the Schrodinger representation, which can be described as the irreducible representation of \(\mathscr{K}_{L}\) on
\[V_{L}:=\{\mathbb{C}\text{-valued functions on }A_{L}\}\]
via
\[(\alpha,x,l)\cdot f(z)=\alpha\cdot l(z)\cdot f(z+x).\]
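For instance, if \(L\) is of type \((1,2)\), then \(A_{L}\simeq\mathbb{Z}_{2}\) and \(V_{L}\simeq\mathbb{C}^{2}\) with basis the delta functions \(\delta_{0},\delta_{1}\); the generators \((1,\bar{1},1)\) and \((1,\bar{0},l)\), where \(l\) is the nontrivial character of \(\mathbb{Z}_{2}\), act respectively by
\[\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}1&0\\ 0&-1\end{pmatrix},\]
which anticommute, illustrating how \(e^{L}\) measures the noncommutativity of \(\mathscr{K}_{L}\).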
### Examples of surfaces of general type with \(p_{g}=q=2\) and \(K_{S}^{2}=5,6\)
In this subsection, we simply describe the constructions of Chen-Hacon surfaces (CH surfaces for short) constructed in [CH], Alessandro-Catanese surfaces (AC3 surfaces) constructed in [AC] and [CS] and Penegini-Polizzi surfaces (PP2 and PP4 surfaces) constructed in [PP2, PP3].
**Example 2.1** (CH surfaces).: Given a general polarized abelian surface \((\hat{A},L)\) of type \((1,2)\). Assume that \(L\) is symmetric. Then the Fourier-Mukai transform \(\mathscr{M}:=\widehat{L}^{\vee}\) is a vector bundle of rank \(2\) on \(A=\hat{\hat{A}}\). Let \(\varphi_{L}:\hat{A}\to A\) be the the isogeny induced by \(L\). By [M] and [CH] (see also [PP1]), there is a \(1\)-dimensional family of triple covers \(f:X\to A\) with Tschirnhausen bundle \(\mathcal{E}=\mathcal{F}^{\vee}\), corresponding to the \(2\)-dimensional vector space \(H^{0}(A,S^{3}\mathcal{F}\otimes\bigwedge^{2}\mathcal{F}^{\vee})\). It is known that \(X\) has \(1\) singular points, which are \(\frac{1}{3}(1,1)\) singularities. Let \(\mu:S\to X\) be the resolutions of singularities. Then \(S\) is a minimal surface of general type with \(K_{S}^{2}=5,\ p_{g}=q=2\), which is called a Chen-Hacon surface.
In [PP1], Penegini and Polizzi extended Chen and Hacon's construction and described in detail the moduli space, which is connected and irreducible, whose general point corresponds to Chen and Hacon's construction.
**Example 2.2** (PP2 surfaces).: Penegini and Polizzi classified minimal surfaces \(S\) of general type with \(K_{S}^{2}=6\), \(p_{g}(S)=q(S)=2\), and \(\deg a_{S}=2\) in [PP2]. For these surfaces, the Albanese variety is a polarized abelian surfaces \((A,L)\) of type \((1,2)\) and the Albanese morphism \(a_{S}:S\to A\) is branched over a divisor \(D\equiv 2L\) whose unique non-negligible singularity is an ordinary quadruple point. Penegini and Polizzi described the moduli space of surfaces constructed in this way, which has \(3\) connected irreducible components.
**Example 2.3** (PP4 surfaces).: The idea of the construction of PP4 surfaces is inspired by the construction of Chen-Hacon surfaces.
Start by considering a \((1,3)\)-polarized abelian surface \((\hat{A},L)\). Similarly, the Fourier-Mukai transform \(\mathscr{M}:=\widehat{L}^{\vee}\) is a vector bundle of rank \(3\) and \(\varphi_{L}^{*}\widehat{L}^{\vee}\cong H^{0}(L)^{\vee}\otimes L\). The idea is to consider quadruple covers of \(\hat{A}\) with Tschirnhausen bundle \(\varphi_{L}^{*}\mathscr{M}^{\vee}\) and identify the covers that descend to quadruple covers of \(A\) with Tschirnhausen bundle \(\mathscr{M}^{\vee}\).
Using the structure data of quadruple covers ([HM]), Penegini and Polizzi managed to construct quadruple covers \(\hat{a}:\hat{S}\to\hat{A}\) over a general \((1,3)\)-polarized abelian surface such that \(\hat{a}\) descends to a cover \(a_{S}:S\to A\),
where \(S\) is a surface of general type with \(K_{S}^{2}=6,\ p_{g}=q=2\) and \(a_{S}\) is the Albanese morphism of \(S\). The degree of \(a_{S}\) is \(4\).
**Example 2.4** (AC3 surfaces).: Let \((A,L)\) be a polarized abelian surface of type \((1,3)\). Let \(\varphi_{L}:A\to\hat{A}\) be the isogeny induced by \(L\) and its kernel \(K(L)\cong(\mathbb{Z}_{3})^{2}\). Put \(V:=H^{0}(A,L)\), then \(V\) is isomorphic to the Schrodinger representation of \(\mathscr{K}_{L}\). Hence there is an action of the Heisenberg group \(\mathscr{K}_{L}\) on \(\mathbb{P}^{2}=\mathbb{P}(V)\). Consider the surfaces \(S^{\prime}\subset\mathbb{P}^{2}\times A\) in the family
\[S^{\prime}:=\{(y,z)\in\mathbb{P}(V)\times A:\sum_{j}y_{j}x_{j}(z)=0,\sum_{i}y_ {i}^{2}y_{i+1}=0\}\]
where \(y:=(y_{1}:y_{2}:y_{3})\in\mathbb{P}^{2}\) and \(\{x_{1},x_{2},x_{3}\}\) is a canonical basis of \(V\). It was observed that indeed \(S^{\prime}\subset C\times A\), where \(C\) is the cubic curve
\[C:=\{y\in\mathbb{P}^{2}:\sum_{i}y_{i}^{2}y_{i+1}=0\}.\]
Since \(G:=K(L)\cong(\mathbb{Z}_{3})^{2}\) does not act by translations on \(C\), the quotient surface \(S:=S^{\prime}/G\), called an \(AC3\) surface, has \(p_{g}(S)=q(S)=2\), \(K_{S}^{2}=6\), and the Albanese morphism of \(S\) has degree \(3\).
### Fourier-Mukai transform
Let \(A\) be an abelian variety and \(\hat{A}=\operatorname{Pic}^{0}(A)\) be the dual abelian variety. Let \(\mathscr{P}\) be the normalized Poincare bundle on \(A\times\hat{A}\) and \(p_{1}\) and \(p_{2}\) be respectively the projection from \(A\times\hat{A}\) to \(A\) and \(\hat{A}\). Then the functor between the derived categories of bounded complexes of coherent sheaves
\[\Phi_{\mathscr{P}}:D^{b}(A)\to D^{b}(\hat{A}),\quad\star\mapsto Rp_{2*}(p_{1}^{*}(\star)\otimes\mathscr{P})\]
is called the Fourier-Mukai transform, which induces a derived equivalence between the two derived categories (see [Mu]). It is well-known that if \(L\) is an ample line bundle on \(A\), then \(\widetilde{L}:=\Phi_{\mathscr{P}}(L)\) is a negative vector bundle on \(\hat{A}\). Moreover, if we denote by \(\varphi_{L}:A\to\hat{A}\) the isogeny induced by \(L\), then \(\varphi_{L}^{*}\widetilde{L}\simeq H^{0}(L)\otimes L^{-1}\).
Let \(\mathscr{F}\) be a coherent sheaf on an abelian variety \(A\). Denote by
\[V^{i}(\mathscr{F}):=\{P\in\operatorname{Pic}^{0}(A)\mid H^{i}(A,\mathscr{F} \otimes P)\neq 0\}\]
the \(i\)-th cohomological support loci. We say that \(\mathscr{F}\) is \(\operatorname{IT}^{0}\) if \(V^{i}(\mathscr{F})=\emptyset\) for each \(i>0\). We say that \(\mathscr{F}\) is GV (resp. M-regular) if
\[\operatorname{codim}_{\operatorname{Pic}^{0}(A)}V^{i}(\mathscr{F})\geq i\]
(resp. \(\operatorname{codim}_{\operatorname{Pic}^{0}(A)}V^{i}(\mathscr{F})>i\)) for each \(i>0\).
It is easy to see that if \(\mathscr{F}\) is \(\operatorname{IT}^{0}\), then \(\Phi_{\mathscr{P}}(\mathscr{F})\) is a vector bundle on \(\hat{A}\). It is also known (see [PaPo2]) that \(\mathscr{F}\) is GV (resp. M-regular) iff \(\Phi_{\mathscr{P}}((\mathscr{F})^{\vee})=R^{g}\Phi_{\mathscr{P}}((\mathscr{F})^ {\vee})[-g]\) is a coherent sheaf (resp. torsion-free coherent sheaf).
Moreover, Pareschi and Popa showed in [PaPo1] that if \(\mathscr{F}\) is M-regular, it is continuously globally generated, which means that for any Zariski open subset \(U\subset\operatorname{Pic}^{0}(A)\), the evaluation map
\[\bigoplus_{P\in U}H^{0}(\mathscr{F}\otimes P)\otimes P^{-1}\to\mathscr{F}\]
is surjective.
### Generic vanishing theory
Given a morphism \(f:X\to A\) from a smooth projective variety to an abelian variety, Hacon proved in [H] that the push-forward of the canonical bundle \(f_{*}\omega_{X}\) is a GV sheaf. This important result was strengthened in [CJ] when \(f\) is generically finite and in [PPS] in general. We then know that there exist quotients of abelian varieties with connected fibers \(p_{i}:A\to A_{i}\), M-regular sheaves \(\mathscr{F}_{i}\) on \(A_{i}\), and torsion line bundles \(Q_{i}\in\operatorname{Pic}^{0}(A)\) such that
\[f_{*}\omega_{X}\simeq\bigoplus_{i}p_{i}^{*}\mathscr{F}_{i}\otimes Q_{i}. \tag{1}\]
This formula is called the Chen-Jiang decomposition of \(f_{*}\omega_{X}\).
### Cohomological rank functions
Let \(\mathscr{F}\) be a coherent sheaf on an abelian variety \(A\) and \(L\) an ample line bundle on \(A\). Recall that the \(i\)-th cohomological rank functions in [JP], which are exactly the continuous rank functions defined in [BPS2] when \(i=0\):
\[h^{i}_{\mathscr{F},L}:t\in\mathbb{Q}\to\frac{1}{M^{2\dim A}}h^{i}(A,\pi_{M}^{* }\mathscr{F}\otimes L^{M^{2}t}\otimes Q)\in\mathbb{Q},\]
where \(M\) is a sufficiently divisible integer, \(\pi_{M}:A\to A\) is the multiplication-by-\(M\) map, and \(Q\in\operatorname{Pic}^{0}(A)\) is general. We verify easily that \(h^{i}_{\mathscr{F},L}(t)\) is independent of the choice of \(M\). It is also known that \(h^{i}_{\mathscr{F},L}\) can be extended to a continuous function from \(\mathbb{R}\) to \(\mathbb{R}\) and in a small right or left neighborhood of a rational point \(x\in\mathbb{Q}\), \(h^{i}_{\mathscr{F},L}\) is locally a polynomial (see [JP, Theorem A]).
Cohomological rank functions of \(\mathscr{F}\) have deep connections with the positivity of \(\mathscr{F}\). We only mention that when \(\mathscr{F}\) is \(\mathrm{IT}^{0}\), \(h^{0}_{\mathscr{F},L}(t)=\chi(\mathscr{F}\otimes L^{\otimes t})\) for \(-1\ll t<0\) (see [JP, Theorem 5.2]).
### The eventual map
Let \(f:X\to A\) be a morphism from a smooth projective variety to an abelian variety. Let \(H\) be a line bundle on \(X\) such that
\[h^{0}(X,H\otimes f^{*}Q)>0\]
for \(Q\in\operatorname{Pic}^{0}(A)\). Barja, Pardini, and Stoppino defined in [BPS2] the eventual map of \(H\) with respect to \(f\). In [J2, Section 2.3], it is showed that there
exists a coherent subsheaf \((f_{*}H)_{c}\) of \(f_{*}H\) such that \((f_{*}H)_{c}\) is continuously globally generated and
\[H^{0}(A,(f_{*}H)_{c}\otimes Q)=H^{0}(A,f_{*}H\otimes Q)\]
for \(Q\in\operatorname{Pic}^{0}(A)\) general and the eventual map of \(H\) with respect to \(f\) is birationally equivalent to the relative evaluation map of \((f_{*}H)_{c}\) on \(f(X)\). We note that \((f_{*}\omega_{X})_{c}\) is nothing but the M-regular direct summand of \(f_{*}\omega_{X}\) in (1). The eventual map of \(K_{X}\) is called the eventual paracanonical map of \(X\).
We can also formally consider the eventual map of \(H\langle tL\rangle\) for \(t\in\mathbb{Q}\) as follows. We may take a positive integer \(M\) such that \(M^{2}t\in\mathbb{Z}\) and let
\(X_{M}:=X\times_{A,\pi_{M}}A\), together with the induced morphisms \(\mu_{M}:X_{M}\to X\) and \(f_{M}:X_{M}\to A\), be the Cartesian diagram obtained by base change along the multiplication-by-\(M\) map \(\pi_{M}:A\to A\). We say that the eventual map of \(H\langle tL\rangle\) with respect to \(f\) is birational (resp. of degree \(d\)) if the eventual map of \(\mu_{M}^{*}H\otimes f_{M}^{*}(L^{\otimes(M^{2}t)})\) with respect to \(f_{M}\) is birational (resp. of degree \(d\)). This definition is independent of the choice of \(M\) (see [1, Lemma 2.10]) and hence is well-defined.
### The Barja-Pardini-Stoppino inequality
Barja, Pardini, and Stoppino refined the classical Severi inequality in [1]. For a minimal surface \(S\) of maximal Albanese dimension, they showed that \(K_{S}^{2}\geq 5\chi(\omega_{S})\) if the Albanese morphism \(a_{S}\) of \(S\) is birationally onto its image (see [1, Corollary 5.6 and Proposition 6.14]). Since the surfaces we consider here are covers of abelian surfaces, we need to modify their arguments. We can now state a variant of Corollary 5.6 of [1].
**Proposition 2.5**.: _Let \(f:S\to A\) be a morphism from a minimal surface of general type to an abelian variety such that \(f\) is generically finite onto its image. Let \(L\) be an ample line bundle on \(A\) such that the eventual map of \(K_{S}\langle tL\rangle\) is birational onto its image whenever \(h^{0}_{f_{*}\omega_{S},L}(t)>0\) for rational \(t<0\). Then_
\[K_{S}^{2}\geq 5\chi(\omega_{S})\]
_and the equality holds iff_
\[h^{0}_{f_{*}\omega_{S}^{2},L}(2t)=6h^{0}_{f_{*}\omega_{S},L}(t)\]
_for \(t\leq 0\)._
Proof.: We can simply mimic the proof of Corollary 5.6 of [1]. Let \(F(t)=h^{0}_{f_{*}\omega_{S}^{2},L}(2t)\) and \(G(t)=h^{0}_{f_{*}\omega_{S},L}(t)\). At each rational point \(t_{0}\in\mathbb{Q}\), the left derivatives \(D^{-}F(t_{0})\) and \(D^{-}G(t_{0})\) are well-defined.
The assumption that the eventual map of \(K_{S}\langle tL\rangle\) is birational onto its image whenever \(h^{0}_{f_{*}\omega_{S},L}(t)>0\) for rational \(t<0\) makes sure that \(D^{-}F(t_{0})\geq 6D^{-}G(t_{0})\) for each rational number \(t_{0}<0\) (see [1, Proposition 5.4 (ii)]). Since \(F\) and \(G\) are zero functions for \(t\) sufficiently negative, \(F(0)\geq 6G(0)\) and equality holds iff \(F(t)=6G(t)\) for all \(t\leq 0\). Finally, \(F(0)=\chi(\omega_{S})+K_{S}^{2}\) and \(G(0)=\chi(\omega_{S})\) by generic vanishing.
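Explicitly, for \(Q\in\operatorname{Pic}^{0}(A)\) general, generic vanishing gives \(h^{i}(S,\omega_{S}\otimes f^{*}Q)=0\) for \(i>0\), so \(G(0)=h^{0}(S,\omega_{S}\otimes f^{*}Q)=\chi(\omega_{S})\), while Kawamata-Viehweg vanishing (as \(K_{S}\) is nef and big) and Riemann-Roch give
\[F(0)=h^{0}(S,\omega_{S}^{\otimes 2}\otimes f^{*}Q)=\chi(\omega_{S}^{\otimes 2})=\chi(\mathcal{O}_{S})+K_{S}^{2}=\chi(\omega_{S})+K_{S}^{2}.\]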
## 3. Chen-Jiang decomposition of \(a_{S*}\omega_{S}\)
For surfaces \(S\) with \(p_{g}=q=2\) and \(a_{S}\) generically finite, we now consider the Chen-Jiang decomposition of \(a_{S*}\omega_{S}\). In general, we have
\[a_{S*}\omega_{S}=\mathcal{O}_{A}\oplus\mathcal{M}\oplus\bigoplus_{i}(p_{i}^{*}M_{i}\otimes Q_{i}),\]
where \(\mathcal{M}\) is a M-regular sheaf on \(A\), \(p_{i}:A\to E_{i}\) is a fibration over an elliptic curve, \(M_{i}\) is an ample vector bundle on \(E_{i}\), and \(Q_{i}\in\operatorname{Pic}^{0}(A)\) is torsion for each \(i\). We will say that the Chen-Jiang decomposition of \(a_{S*}\omega_{S}\) is simple if \(a_{S*}\omega_{S}=\mathcal{O}_{A}\oplus\mathcal{M}\) is the direct sum of \(\mathcal{O}_{A}\) with an M-regular sheaf \(\mathcal{M}\).
Note that \(\chi(\omega_{S})=\chi(a_{S*}\omega_{S})=\chi(\mathcal{M})=1\).
**Theorem 3.1**.: _Let \(S\) be a smooth projective surface of general type such that \(p_{g}=q=2\) and \(a_{S}\) generically finite. Then the Chen-Jiang decomposition of \(a_{S*}\omega_{S}\) is simple, unless \(K_{S}^{2}=8\), and \(S\) is isogenous to a product of a smooth projective curve of genus \(3\) and a smooth projective curve of genus \(3\) or \(4\) or \(5\)._
Proof.: We assume that the Chen-Jiang decomposition of \(a_{S*}\omega_{S}\) contains a direct summand which is of the form \(p^{*}M\otimes Q\), where \(p:A\to E\) is a surjective morphism to an elliptic curve \(E\) with connected fibers and \(M\) is an ample vector bundle on \(E\), and \(Q\in\operatorname{Pic}^{0}(A)\). In particular, \(\deg a_{S}:=\kappa>2\).
We first remark that \(Q\in\operatorname{Pic}^{0}(A)\setminus p^{*}\operatorname{Pic}^{0}(E)\). Indeed, if \(Q=p^{*}Q^{\prime}\in p^{*}\operatorname{Pic}^{0}(E)\), we have \(R^{1}p_{*}a_{S*}\omega_{S}\) contains a direct summand \(M\otimes Q^{\prime}\). We denote by \(p\circ a_{S}:S\xrightarrow{h}C\xrightarrow{q}E\) the Stein factorization of \(p\circ a_{S}\). Note that \(R^{1}p_{*}a_{S*}\omega_{S}=q_{*}R^{1}h_{*}\omega_{S}=q_{*}\omega_{C}\). Thus \(C\) is a smooth projective curve of genus \(\geq 2\). This implies that the Albanese morphism \(a_{S}\) is not surjective, which is a contradiction.
Denote by \(k\) the order of \(Q\in\operatorname{Pic}^{0}(A)\setminus p^{*}\operatorname{Pic}^{0}(E)\). We claim that \(M\) is a line bundle and moreover there exist ample line bundles \(M_{i}\) on \(E\) where \(M_{1}=M\) such that
\[\mathcal{O}_{A}\oplus\bigoplus_{i=1}^{k-1}(p^{*}M_{i}\otimes Q^{\otimes i})\]
is a direct summand of \(a_{S*}\omega_{S}\). This is essentially a special case of [JLT, the Claim in Page 2991]. We simply state it using the language of Chen-Jiang decomposition.
After replacing \(Q\) by a line bundle of order \(k\) in \(Q\otimes p^{*}\operatorname{Pic}^{0}(E)\), we may simply assume that \(Q\) is of order \(k\) and denote by \(\pi:\widetilde{S}\to S\) the cyclic etale cover induced by \(Q\). The Stein factorization of \(p\circ a_{S}\circ\pi\) gives a commutative diagram \(p\circ a_{S}\circ\pi=q\circ h\), with \(h:\widetilde{S}\to C\) a fibration onto a smooth projective curve \(C\),
where \(q:C\to E\) is a cyclic \(\mathbb{Z}_{k}\)-cover such that \(q_{*}\omega_{C}=\mathscr{O}_{E}\oplus\bigoplus_{i=1}^{k-1}M_{i}\), since \(\omega_{C}=R^{1}h_{*}\omega_{\widetilde{S}}\) (see [JLT, the Claim in Page 2991]).
Note that \(g(C)=1+\sum_{i=1}^{k-1}\deg M_{i}\geq k\). On the other hand, \(\chi(h_{*}\omega_{\widetilde{S}})=\chi(\omega_{\widetilde{S}})+\chi(R^{1}h_{* }\omega_{\widetilde{S}})=k\chi(\omega_{S})+\chi(\omega_{C})=k+(g(C)-1)\). We denote by \(r\) and \(d\) respectively the rank and the degree of \(h_{*}\omega_{\widetilde{S}/C}\). Note that \(r=g(F)\), where \(F\) is a general fiber of \(h\) or \(p\circ a_{S}\). We also know that \(h_{*}\omega_{\widetilde{S}/C}\) is nef and hence \(d\geq 0\) (see [V]). By Riemann-Roch, we then have
\[\chi(h_{*}\omega_{\widetilde{S}})=r(g(C)-1)+d=k+g(C)-1.\]
Hence \(d+(r-1)(g(C)-1)=k\leq g(C)\). Thus we have the following possibilities
* \(d=1\), \(r=2\), and \(g(C)=k\);
* \(d=0\), \(r=2\) and \(g(C)=k+1\);
* \(d=0\), \(r=3\), and \(g(C)=k=2\).
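To see that these are the only possibilities, note that \(d\geq 0\), \(r=g(F)\geq 2\) (as \(\widetilde{S}\) is of general type), and \(g(C)\geq k\geq 2\), so
\[(r-2)(g(C)-1)=k-d-(g(C)-1)\leq k-g(C)+1\leq 1.\]
Hence either \(r=2\), in which case \(d=k-(g(C)-1)\in\{0,1\}\) and accordingly \(g(C)=k\) or \(g(C)=k+1\), or \(r=3\) and \(g(C)=2\), which forces \(d+2=k\leq 2\), that is, \(d=0\) and \(k=2\).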
We first remark that if \(d=0\), by [BHPV, Theorem 17.3], \(h\) is locally trivial and hence \(h\) is smooth and all fibers of \(h\) are isomorphic to \(F\). In this case, \(K_{S}^{2}=8\chi(\omega_{S})=8\). It is a bit more tricky to deal with the case that \(d=1\).
If \(d=1\), either \(h\) is isotrivial but not locally trivial or \(h\) has non-constant moduli. If \(h\) has non-constant moduli, since \(g(F)=r=2\), we apply results of Gang Xiao in [X1]. Let \(E^{\prime}\) be the kernel of \(p\). Then \(E^{\prime}\) can be seen as the fixed part of the Jacobian of fibers of \(h\). Moreover, the degree of \(F\) over \(E^{\prime}\), which equals \(\deg a_{S}=\kappa\), is \(>2\). Thus we can say that \(h:\widetilde{S}\to C\) is a fibration of genus \(2\) curves of type \((E^{\prime},\kappa)\) in the terminology of Xiao ([X1, Chapter 3]). Thus by [X1, Theoreme 3.10 and Corollaire in Page 47], there exists a semistable family \(f:S(E^{\prime},\kappa)\to X(\kappa)\) from a minimal surface to a smooth projective curve and a surjective morphism \(\varphi:C\to X(\kappa)\) such that \(h\) is the desingularization of the pull-back of \(f\) and moreover both \(f\) and \(h\) are
semistable. By [V, Lemma 3.1],
\[h_{*}\omega_{\widetilde{S}/C}=\varphi^{*}f_{*}\omega_{S(E^{\prime},\kappa)/X(\kappa)}.\]
The numerical invariants of \(S(E^{\prime},\kappa)\) and \(X(\kappa)\) were also computed in [X1, Theoreme 3.13]. Since
\[\deg f_{*}\omega_{S(E^{\prime},\kappa)/X(\kappa)}=\chi(\omega_{S(E^{\prime}, \kappa)})-g(X(\kappa))+1,\]
we see that \(\deg f_{*}\omega_{S(E^{\prime},\kappa)/X(\kappa)}\) is \(\geq 2\) when \(\kappa\geq 4\) and is equal to \(1\) when \(\kappa=3\) (see [X1, Page 48 and 52]). When \(\kappa=3\), \(X(3)=\mathbb{P}^{1}\), so \(\varphi:C\to\mathbb{P}^{1}\) has degree \(\geq 2\) because \(g(C)\geq 2\). Thus \(\deg h_{*}\omega_{\widetilde{S}/C}\geq 2\) in any case and we get a contradiction with \(d=1\).
When \(h\) is locally trivial or isotrivial and \(g(F)=r=2\), \(S\) is birational to a diagonal quotient \((F\times C^{\prime})/G\), where \(G\) acts faithfully on both \(F\) and \(C^{\prime}\) and both \(F/G\) and \(C^{\prime}/G\) are elliptic curves since \(q(S)=2\) and \(a_{S}\) is generically finite. It is elementary that a Galois cover from a genus \(2\) curve to an elliptic curve has to be a double cover. Thus \(\deg a_{S}=2\) and we have a contradiction.
The only remaining case is that \(d=0\), \(r=3\), and \(g(C)=k=2\). The fibration \(h\) is locally trivial and thus \(K_{S}^{2}=8\chi(\omega_{S})=8\). Thus \(\widetilde{S}\) is isogenous to a product of \(F\) with a cover \(C^{\prime}\) of \(C\). Checking Penegini's list [Pm, Theorem 1.1], we see that \(g(C^{\prime})=3\), \(4\), or \(5\).
**Example 3.2**.: Let \(a_{i}:C_{i}\to E_{i}\) be two \((\mathbb{Z}_{2}\times\mathbb{Z}_{2})\)-cover of elliptic curves such that \(a_{i*}\omega_{C_{i}}=\mathcal{O}_{E_{i}}\oplus L_{i}\oplus P_{i}\oplus(L_{i} \otimes P_{i})\) for \(i=1,2\), where \(L_{i}\) and \(P_{i}\) are respectively degree \(1\) and torsion line bundles. Note that \(a_{i}\) is nothing but a composition of a ramified double cover with an etale double cover.
We may also assume that \(\operatorname{Gal}(C_{1}/E_{1})=\operatorname{Gal}(C_{2}/E_{2})=G=\langle\sigma\rangle\times\langle\tau\rangle\) and that \(L_{1}\) (resp. \(P_{2}\)) is the \(\sigma\)-invariant part of \(a_{1*}\omega_{C_{1}}\) (resp. \(a_{2*}\omega_{C_{2}}\)) and \(P_{1}\) (resp. \(L_{2}\)) is the \(\tau\)-invariant part of \(a_{1*}\omega_{C_{1}}\) (resp. \(a_{2*}\omega_{C_{2}}\)). Let \(S\) be the diagonal quotient \((C_{1}\times C_{2})/G\) and \(a:S\to E_{1}\times E_{2}\) the natural morphism (which is also the Albanese morphism of \(S\)); then we have
\[a_{S*}\omega_{S}=\mathcal{O}_{E_{1}\times E_{2}}\oplus(L_{1}\boxtimes P_{2}) \oplus(P_{1}\boxtimes L_{2})\oplus((L_{1}\otimes P_{1})\boxtimes(L_{2} \otimes P_{2})).\]
## 4. Cohomological rank functions of surfaces of general type with \(p_{g}=q=2\)
We assume in this section that \(a_{S*}\omega_{S}=\mathcal{O}_{A}\oplus\mathcal{M}\) and \(\mathcal{M}\) is M-regular. It was shown in [CH] that \(\Phi_{\mathcal{P}_{A}}(\mathcal{M}^{\vee})[2]=L\otimes\mathcal{I}_{Z}\), where \(L\) is an ample line bundle on \(\operatorname{Pic}^{0}(A)\).
Then we have a short exact sequence by taking the inverse Fourier-Mukai transform and taking duality:
\[0\to\widehat{L|_{Z}}^{\vee}\to\widehat{L}^{\vee}\to\mathcal{M}\to 0. \tag{2}\]
We observe that \(\operatorname{rank}\,\mathcal{M}=\deg a_{S}-1\leq h^{0}(L)\) and equality holds iff \(Z=\emptyset\). It seems easier to work on \(\operatorname{Pic}^{0}(A)=\hat{A}\). Let \(\varphi_{L}:\hat{A}\to\hat{\hat{A}}=A\) be the isogeny induced by \(L\) and let \(\hat{S}:=S\times_{A}\hat{A}\). Then \(\varphi_{L}^{*}\widehat{L}\simeq H^{0}(L)\otimes L^{-1}\). We also remark that \(\widehat{L|_{Z}}^{\vee}\) is a successive extension of numerically trivial line bundles. Hence it is easy to compute the cohomological rank function of \(\hat{a}_{*}\omega_{\hat{S}}\).
**Lemma 4.1**.: _For \(-1\leq t\leq 0\), we have_
\[h^{0}_{\hat{a}_{*}\omega_{\hat{S}},L}(t) =h^{0}(L)^{2}(t+1)^{2}\] \[h^{1}_{\hat{a}_{*}\omega_{\hat{S}},L}(t) =(h^{0}(L)+1-\deg a_{S})h^{0}(L)t^{2}\] \[h^{2}_{\hat{a}_{*}\omega_{\hat{S}},L}(t) =h^{0}(L)t^{2}.\]
It is also clear that the Hilbert-Poincare polynomial
\[\chi(\hat{S},K_{\hat{S}}+t\hat{a}^{*}L)=\chi(\omega_{\hat{S}})+\frac{1}{2}(K_ {\hat{S}}\cdot\hat{a}^{*}L)t+\frac{1}{2}(\hat{a}^{*}L)^{2}t^{2}\]
is nothing but
\[h^{0}_{\hat{a}_{*}\omega_{\hat{S}},L}(t)-h^{1}_{\hat{a}_{*}\omega_{\hat{S}},L} (t)+h^{2}_{\hat{a}_{*}\omega_{\hat{S}},L}(t).\]
Comparing the coefficients of \(t\), we get the following equality.
**Lemma 4.2**.: \((K_{\hat{S}}\cdot\hat{a}^{*}L)=4h^{0}(L)^{2}\)_._
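Spelled out, only the first function in Lemma 4.1 contributes a linear term, so comparing the coefficients of \(t\) and of \(t^{2}\) in the two expressions for the Hilbert-Poincare polynomial gives
\[\frac{1}{2}(K_{\hat{S}}\cdot\hat{a}^{*}L)=2h^{0}(L)^{2},\qquad\frac{1}{2}(\hat{a}^{*}L)^{2}=(\deg a_{S})h^{0}(L);\]
the first equality is Lemma 4.2, and the second identity \((\hat{a}^{*}L)^{2}=2(\deg a_{S})h^{0}(L)\) will be used several times below.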
We believe that Lemma 4.1 is crucial in the classification of all surfaces with \(p_{g}=q=2\).
**Corollary 4.3**.: _Let \(S\) be a minimal surface with \(p_{g}=q=2\) and \(K_{S}^{2}=9\). Then \(\operatorname{length}(Z)\geq 2\) and in particular \(V^{1}(\omega_{S})\neq\emptyset\). Similarly, if \(S\) be a minimal surface with \(p_{g}=q=2\) and \(K_{S}^{2}=8\), then \(V^{1}(\omega_{S})\neq\emptyset\)._
Proof.: By Hodge index, we have
\[(K_{\hat{S}}\cdot\hat{a}^{*}L)\geq\sqrt{K_{\hat{S}}^{2}\cdot(\hat{a}^{*}L)^{2}}.\]
If \(K_{S}^{2}=9\), then \(4h^{0}(L)^{2}\geq\sqrt{9h^{0}(L)^{2}\cdot(2(\deg\hat{a})h^{0}(L))}\). Since \(\deg\hat{a}=1+h^{0}(L)-\operatorname{length}(Z)\), we have \(\operatorname{length}(Z)\geq 1+\frac{h^{0}(L)}{9}\), and hence \(\operatorname{length}(Z)\geq 2\) since it is an integer.
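For instance, in the case \(K_{S}^{2}=9\), since \(\hat{S}\to S\) is etale of degree \(h^{0}(L)^{2}\) we have \(K_{\hat{S}}^{2}=9h^{0}(L)^{2}\), and the Hodge index inequality reads
\[16h^{0}(L)^{4}=(K_{\hat{S}}\cdot\hat{a}^{*}L)^{2}\geq K_{\hat{S}}^{2}\cdot(\hat{a}^{*}L)^{2}=18(\deg\hat{a})h^{0}(L)^{3},\]
so that \(\deg\hat{a}\leq\frac{8}{9}h^{0}(L)\) and \(\operatorname{length}(Z)=1+h^{0}(L)-\deg\hat{a}\geq 1+\frac{1}{9}h^{0}(L)\).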
If \(K_{S}^{2}=8\), we may assume that the Chen-Jiang decomposition of \(a_{S*}\omega_{S}\) is simple, then we have \((K_{\hat{S}}\cdot\hat{a}^{*}L)\geq\sqrt{K_{\hat{S}}^{2}\cdot(\hat{a}^{*}L)^{2}}\). Similarly, we have \(\operatorname{length}(Z)\geq 1\).
**Lemma 4.4**.: _For \(-1\ll t\leq 0\),_
\[h^{0}_{\hat{a}_{*}\omega_{\hat{S}}^{2},L}(2t)=h^{0}(L)^{2}(1+K_{S}^{2})+3(K_{ \hat{S}}\cdot\hat{a}^{*}L)t+4(\deg a_{S})h^{0}(L)t^{2}.\]
Proof.: It is known that \(\hat{a}_{*}\omega_{\hat{S}}^{\otimes 2}\) is \(\mathrm{IT}^{0}\). Thus, by [JP, Theorem 5.2], \(h^{0}_{\hat{a}_{*}\omega_{\hat{S}}^{\otimes 2},L}(2t)\) is simply the Hilbert-Poincare polynomial
\[\chi(\hat{S},2K_{\hat{S}}+2t\hat{a}^{*}L)=h^{0}(L)^{2}(1+K_{S}^{2})+3(K_{\hat{S }}\cdot\hat{a}^{*}L)t+4(\deg a_{S})h^{0}(L)t^{2}\]
for \(-1\ll t\leq 0\).
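Concretely, Riemann-Roch on \(\hat{S}\) gives
\[\chi(\hat{S},2K_{\hat{S}}+2t\hat{a}^{*}L)=\chi(\mathcal{O}_{\hat{S}})+K_{\hat{S}}^{2}+3(K_{\hat{S}}\cdot\hat{a}^{*}L)t+2(\hat{a}^{*}L)^{2}t^{2},\]
and here \(\chi(\mathcal{O}_{\hat{S}})+K_{\hat{S}}^{2}=h^{0}(L)^{2}(1+K_{S}^{2})\) and \(2(\hat{a}^{*}L)^{2}=4(\deg a_{S})h^{0}(L)\), which gives the displayed polynomial.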
## 5. The eventual paracanonical maps
We still assume that the Chen-Jiang decomposition of \(a_{S*}\omega_{S}\) is simple in this section and then study the eventual paracanonical maps of \(S\) or \(\hat{S}\).
**Lemma 5.1**.: _Assume that \(\deg a_{S}\geq 3\), then the eventual map of \(K_{\hat{S}}\langle tL\rangle\) for \(-1<t\leq 0\) is birational onto its image._
Proof.: Indeed, by (2), for \(-1<t\leq 0\), the eventual map of \(K_{\hat{S}}\langle tL\rangle\) is birationally equivalent to the relative evaluation map of (the etale pull-back of) \(\varphi_{L}^{*}\mathscr{M}\). Let \(\Phi\) be the relative evaluation map and \(Z\) be its image. We have the following commutative diagram:
By construction, \(\deg(Z/\hat{A})\geq\mathrm{rank}(\mathscr{M})=\deg a_{S}-1\). Thus if \(\deg a_{S}\geq 3\), the morphism \(\Phi:\hat{S}\to Z\) is birational since \(\deg a_{S}=\deg(\hat{S}/\hat{A})=\deg(\hat{S}/Z)\deg(Z/\hat{A})\).
We also need to study the eventual paracanonical map more carefully. Let \(\varphi:a_{S}^{*}\mathscr{M}\to\omega_{S}\) be the evaluation map and let \(\omega_{S}\otimes\mathscr{I}_{W}\) be the image, where \(W\) is a subscheme of \(S\).
**Lemma 5.2**.: \(W\) _is supported on the exceptional locus of \(a_{S}\)._
Proof.: Let \(U\subset A\) be the finite locus of \(a_{S}\). Then by [CE, Theorem 2.1], \(a_{S}^{*}\mathscr{M}\to\omega_{S}\) is surjective over \(a_{S}^{-1}(U)\).
**Remark 5.3**.: Since \(a_{S*}\omega_{S}=\mathscr{O}_{A}\oplus\mathscr{M}\) and \(\mathscr{M}\) is M-regular and hence continuously globally generated, we conclude by Lemma 5.2 that, for any open subset \(V\) of \(\hat{A}\),
\[\bigcap_{P\in V}\mathrm{Bs}(|K_{S}+P|)\]
is supported on the exceptional locus of \(a_{S}\).
Let \(\rho:S^{\prime}\to S\) be a resolution of indeterminacy of the eventual paracanonical map of \(S\) (or equivalently the principalization of \(\mathscr{I}_{W}\)) and let \(\Phi^{\prime}:S^{\prime}\to\mathbb{P}_{A}(\mathscr{M})\) be the corresponding morphism. We have the following commutative diagram:
(3)
where the upper diagram is obtained via the etale base change of \(\varphi_{L}\). We then see that
\[\Phi^{\prime*}\mathcal{O}_{\pi}(1)=\rho^{*}(K_{S}-F_{1})-F_{2}, \tag{4}\]
where \(F_{1}\) is \(a_{S}\)-exceptional and \(F_{2}\) is \(\rho\)-exceptional. We note that \(\mathcal{O}_{\pi}(1)\simeq L\boxtimes\mathcal{O}_{\mathbb{P}^{h^{0}(L)-1}}(1)\) and thus we may write
\[\hat{\rho}^{*}(K_{\hat{S}}-\hat{F_{1}})-\hat{F_{2}}=\hat{\Phi}^{\prime*}(L\boxtimes\mathcal{O}_{\mathbb{P}^{h^{0}(L)-1}}(1)), \tag{5}\]
where \(\hat{F_{1}}\) and \(\hat{F_{2}}\) are the pull-back of \(F_{1}\) and \(F_{2}\) on \(\hat{S}^{\prime}\).
## 6. Characterizations of minimal surfaces with \(p_{g}=q=2\) and \(K_{S}^{2}=5\)
### Surfaces with \(p_{g}=q=2\) and \(\deg a_{S}=2\)
Suppose that the Albanese morphism \(a_{S}:S\to A:=\operatorname{Alb}(S)\) is of degree \(2\). Consider the Stein factorization of \(a_{S}\)
then \(f:\overline{S}\to A\) is a finite double cover. It is determined by a reduced even divisor \(B\) on \(A\) and its square root \(\delta\in\operatorname{Pic}(A)\) with \(\Cal{O}_{A}(B)\cong\delta^{2}\). Note that the normality of \(S\) implies the normality of \(\overline{S}\). Thus \(\overline{S}\) has at most isolated singularities.
After a minimal even resolution \(\rho:\widetilde{A}\to A\) of the singularities of \(B\), we obtain a double cover datum \((\widetilde{B},\widetilde{\delta})\) on \(\widetilde{A}\) whose image under \(\rho\) is \((B,\delta)\), where \(\widetilde{B}\) is a smooth reduced even divisor on \(\widetilde{A}\) with square root \(\widetilde{\delta}\in\operatorname{Pic}(\widetilde{A})\). More precisely, let \(\rho=\rho_{1}\circ\cdots\circ\rho_{n}\) be a composition of a series of blow-ups
\(\rho_{i}\), and let \(m_{i}\) be the multiplicity of the even and reduced inverse image of \(B\) at the center of \(\rho_{i}\). Define \(k_{i}=\lfloor\frac{1}{2}m_{i}\rfloor\); then one has
\[\widetilde{\delta}\sim\rho^{*}\delta-\sum_{i=1}^{n}k_{i}E_{i},\widetilde{B} \sim\rho^{*}B-2\sum_{i=1}^{n}k_{i}E_{i}.\]
where each \(E_{i}\) is the total inverse image of the center of \(\rho_{i}\) in \(\widetilde{A}\). Let \(\widetilde{f}:\widetilde{S}\to\widetilde{A}\) be the double cover associated to \((\widetilde{B},\widetilde{\delta})\). Then \(\widetilde{S}\) is smooth but may not be minimal. Note that
\[K_{\widetilde{S}}=\widetilde{f^{*}}(\rho^{*}(K_{A}+\delta)-\sum_{i=1}^{n}(k_{i }-1)E_{i}).\]
The numerical invariants of \(\widetilde{S}\) are
\[K_{\widetilde{S}}^{2}=2(K_{A}+\delta)^{2}-2\sum_{i=1}^{n}(k_{i}-1)^{2},\]
\[\chi(\mathcal{O}_{\widetilde{S}})=2\chi(\mathcal{O}_{A})+\frac{1}{2}(\delta^{ 2}+K_{A}\cdot\delta)-\frac{1}{2}\sum_{i=1}^{n}k_{i}(k_{i}-1).\]
Contracting the \((-1)\)-curves on \(\widetilde{S}\) yields a birational morphism \(\epsilon:\widetilde{S}\to S\). It is well-known that \(\epsilon\) is a composition of contractions of \((-1)\)-curves. We denote by \(l\) the numbers of contractions of \((-1)\)-curves of \(\epsilon\). Then
\[K_{S}^{2}=K_{\widetilde{S}}^{2}+l,\,\chi(\mathcal{O}_{\widetilde{S}})=\chi( \mathcal{O}_{S}).\]
**Lemma 6.1**.: _Let \(S\) be a minimal surface of general type with \(p_{g}=q=2\) and \(\deg a_{S}=2\). Then \(K_{S}^{2}\neq 5\)._
Proof.: By the above discussion, one has
\[\chi(\mathcal{O}_{\widetilde{S}})=\frac{1}{2}\delta^{2}-\frac{1}{2}\sum_{i=1} ^{n}k_{i}(k_{i}-1),K_{\widetilde{S}}^{2}=2\delta^{2}-2\sum_{i=1}^{n}(k_{i}-1) ^{2}.\]
Hence
\[K_{S}^{2}\geq K_{\widetilde{S}}^{2}=4+2\sum_{i=1}^{n}(k_{i}-1). \tag{6}\]
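Here \(\chi(\mathcal{O}_{\widetilde{S}})=\chi(\mathcal{O}_{S})=1\) gives \(\delta^{2}=2+\sum_{i=1}^{n}k_{i}(k_{i}-1)\), and substituting this into the expression for \(K_{\widetilde{S}}^{2}\) yields
\[K_{\widetilde{S}}^{2}=2\delta^{2}-2\sum_{i=1}^{n}(k_{i}-1)^{2}=4+2\sum_{i=1}^{n}\big(k_{i}(k_{i}-1)-(k_{i}-1)^{2}\big)=4+2\sum_{i=1}^{n}(k_{i}-1).\]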
Now suppose that \(K_{S}^{2}=5\). It follows from (6) that all \(k_{i}=1\) and \(K_{\widetilde{S}}=\widetilde{f}^{*}\rho^{*}\delta\), and thus all the blow-up centers of \(\rho\) are negligible singularities and \(\widetilde{S}\simeq S\) is minimal. In this case, \(K_{S}^{2}=K_{\widetilde{S}}^{2}=4\), a contradiction.
**Theorem 6.2**.: _Let \(S\) be a minimal surface of general type with \(p_{g}(S)=q(S)=2\) and \(K_{S}^{2}=5\). Then the Albanese map \(a_{S}\) is generically finite of degree \(3\)._
Proof.: We have already seen that \(\deg a_{S}\geq 3\) when \(K_{S}^{2}=5\). By Lemma 5.1, we can apply Proposition 2.5. Since \(K_{S}^{2}=5\chi(\omega_{S})\), we have \(h^{0}_{\hat{a}_{*}\omega_{\hat{S}}^{2},L}(2t)=6h^{0}_{\hat{a}_{*}\omega_{\hat{S}},L}(t)\) for \(t<0\). By Lemma 4.1 and Lemma 4.4, comparing the coefficient of \(t^{2}\), we have \(4\deg a_{S}=6h^{0}(L)\). On the other hand, \(\deg a_{S}-1=\operatorname{rank}\mathcal{M}\leq h^{0}(L)\). Thus \(h^{0}(L)\leq 2\). We then have \(h^{0}(L)=2\), \(\deg a_{S}=3\), and \(\mathcal{M}=\widehat{L}^{\vee}\).
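Explicitly, the \(t^{2}\)-coefficient of \(h^{0}_{\hat{a}_{*}\omega_{\hat{S}}^{2},L}(2t)\) is \(4(\deg a_{S})h^{0}(L)\) by Lemma 4.4, while that of \(6h^{0}_{\hat{a}_{*}\omega_{\hat{S}},L}(t)\) is \(6h^{0}(L)^{2}\) by Lemma 4.1, so the comparison reads
\[4(\deg a_{S})h^{0}(L)=6h^{0}(L)^{2},\qquad\text{i.e.,}\qquad 2\deg a_{S}=3h^{0}(L).\]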
**Corollary 6.3**.: _A minimal surface \(S\) of general type with \(p_{g}(S)=q(S)=2\) and \(K_{S}^{2}=5\) is a Chen-Hacon surface._
Proof.: Let \(a_{S}:S\to\overline{S}\xrightarrow{\overline{a}}A\) be the Stein factorization of \(a_{S}\). By Theorem 6.2, \(a_{S*}\mathcal{O}_{S}=\overline{a}_{*}\mathcal{O}_{\overline{S}}=\mathcal{O}_{A}\oplus\widehat{L}\) is a rank 3 vector bundle and \(K_{S}^{2}=3c_{1}(\widehat{L})^{2}-3c_{2}(\widehat{L})=5\). Thus, \(\overline{S}\) has negligible singularities by [PP1, Definition 1.5]. Hence by Theorem A of [PP1], \(S\) is a Chen-Hacon surface. We remark that Corollary 6.3 also follows from [AC, Theorem 4.1].
**Remark 6.4**.: By the results in this section, we see that the method to prove the Barja-Pardini-Stoppino inequality in [BPS2] is indeed sharp. By [J1, Proposition 2.12], the Barja-Pardini-Stoppino inequality is not sharp for those surfaces whose Albanese morphism is birational onto its image. Thus it is interesting to ask whether or not there exists a sharp Severi type inequality when the Albanese morphism is birational onto its image.
## 7. Minimal surfaces with \(p_{g}=q=2\) and \(K_{S}^{2}=6\)
The classification of surfaces \(S\) with \(p_{g}=q=2\) and \(K_{S}^{2}=6\) is more complicated. Note that when \(\deg a_{S}=2\), Penegini and Polizzi had classified the structure of \(S\) in [PP2]. Thus we will always assume that \(\deg a_{S}\geq 3\) in this section. In particular, the eventual paracanonical map of \(S\) is birational onto its image by Lemma 5.1.
The proof of Theorem 1.2 is quite complicated, so let us briefly outline its steps. We first show that the eventual paracanonical map of \(S\) is indeed the contraction onto its canonical model. It then turns out to be crucial to study \(\hat{S}\) via the morphism induced by the linear system \(|K_{\hat{S}}-\hat{a}^{*}L|\), from which we prove that \(\deg a_{S}=\deg\hat{a}\leq 7\). We finally apply the symmetry induced by the Heisenberg group and other delicate geometric considerations to finish the proof of Theorem 1.2.
**Proposition 7.1**.: _Assume that \(K_{S}^{2}=6\) and \(\deg a_{S}>2\). Then the eventual paracanonical map of \(\hat{S}\) is a morphism from \(\hat{S}\) onto its canonical model._
Proof.: We just need to show that \(F_{1}\) and \(F_{2}\) in (4) are \(0\) and equivalently \(\hat{F}_{1}\) and \(\hat{F}_{2}\) in (5) are zero. We know that \((K_{\hat{S}}^{2})=6h^{0}(L)^{2}\), \((K_{\hat{S}}\cdot\hat{a}^{*}L)=4h^{0}(L)^{2}\)
and \(\deg\hat{a}=\deg a=1+h^{0}(L)-r\), where \(r:=\text{length}(Z)\). Thus
\[(\hat{\Phi^{\prime}}^{*}\mathcal{O}_{\mathbb{P}^{h-1}}(1))^{2}\] \[= (\hat{\rho}^{*}(K_{\hat{S}}-\hat{F_{1}})-\hat{F_{2}}-\hat{\rho}^{* }\hat{a}^{*}L)^{2}\] \[= (K_{\hat{S}}^{2})-2(K_{\hat{S}}\cdot\hat{a}^{*}L)+(\hat{a}^{*}L)^ {2}-2(K_{\hat{S}}\cdot\hat{F_{1}})+(\hat{F_{1}})^{2}+(\hat{F_{2}})^{2}\] \[= 2(1-r)h^{0}(L)-2(K_{\hat{S}}\cdot\hat{F_{1}})+(\hat{F_{1}})^{2}+ (\hat{F_{2}})^{2}.\]
If \(F_{1}\neq 0\), we know that \((F_{1}^{2})<0\) and \(K_{S}\cdot F_{1}\geq 0\) since \(F_{1}\) is \(a_{S}\)-exceptional. We verify easily that \(-2(K_{S}\cdot F_{1})+(F_{1}^{2})\leq-2\). Thus \(-2(K_{\hat{S}}\cdot\hat{F_{1}})+(\hat{F_{1}})^{2}\leq-2h^{0}(L)^{2}\), which implies that \((\hat{\Phi^{\prime}}^{*}\mathcal{O}_{\mathbb{P}^{h^{0}(L)-1}}(1))^{2}<0\), which is impossible.
If \(F_{2}\neq 0\), the same argument implies that \(r=0\), \((F_{2}^{2})=-1\), \((\hat{F_{2}})^{2}=-h^{0}(L)^{2}\), and \(h^{0}(L)=2\). In this case, \(F_{2}\) is supported on the exceptional locus of \(S^{\prime}\to S\) and \(\hat{S^{\prime}}\) is birational onto a hypersurface \(W\) of \(\hat{A}\times\mathbb{P}^{1}\). Since \(\deg\hat{a}=\deg a_{S}=3\) in this case, \(W\in|M\boxtimes\mathcal{O}_{\mathbb{P}^{1}}(3)|\), where \(M\) is a line bundle on \(\hat{A}\). Moreover, from the construction of \(\hat{\Phi}^{\prime}\), we know that \(\hat{\Phi}^{\prime*}(L\boxtimes\mathcal{O}_{\mathbb{P}^{1}}(1))=\hat{\rho}^{* }K_{\hat{S}}-\hat{F_{2}}\). Since \(\hat{S}^{\prime}\) is a smooth model of \(W\) and \(W\) is Gorenstein, \(K_{\hat{S}^{\prime}}\) is a subsheaf of \(\hat{\Phi}^{\prime*}(M\boxtimes\mathcal{O}_{\mathbb{P}^{1}}(1))\). Thus \(\rho^{*}\hat{a}^{*}(M-L)-\hat{F_{2}}\) is effective. Thus \(M-L\) is a non-zero effective line bundle on \(\hat{A}\). On the other hand,
\[5h^{0}(L)^{2}=(\hat{\rho}^{*}K_{\hat{S}}-\hat{F_{2}})^{2}=(L\boxtimes\mathcal{ O}_{\mathbb{P}^{1}}(1))^{2}\cdot(M\boxtimes\mathcal{O}_{\mathbb{P}^{1}}(3)).\]
We deduce that \((L\cdot M)=4\) and thus by Hodge index, \(M\equiv L\) and we have a contradiction.
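Here the intersection number on \(\hat{A}\times\mathbb{P}^{1}\) is computed as
\[(L\boxtimes\mathcal{O}_{\mathbb{P}^{1}}(1))^{2}\cdot(M\boxtimes\mathcal{O}_{\mathbb{P}^{1}}(3))=3(L^{2})+2(L\cdot M)=12+2(L\cdot M),\]
using \((L^{2})=2h^{0}(L)=4\); together with \(5h^{0}(L)^{2}=20\) this is how \((L\cdot M)=4\) is obtained.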
**Remark 7.2**.: This proposition is quite important in the classification of minimal surfaces with \(p_{g}=q=2\) and \(K^{2}=6\). Indeed, it verifies [AC, Assumption 0.3] in this case.
The structure of the eventual paracanonical morphism
\[\hat{\Phi}:\hat{S}\to\mathbb{P}_{\hat{A}}(\varphi_{L}^{*}\mathcal{M})\subset \hat{A}\times\mathbb{P}^{h^{0}(L)-1}\]
plays an important role in the classification of \(S\). Let \(\psi:\hat{S}\to\mathbb{P}^{h^{0}(L)-1}\) be the natural morphism. We set \(\eta=\psi^{*}\mathcal{O}_{\mathbb{P}^{h^{0}(L)-1}}(1)\), so that \(\eta=K_{\hat{S}}-\hat{a}^{*}L\). Note that \((\eta^{2})=2(1-r)h^{0}(L)\geq 0\). Thus the following lemma is clear.
**Lemma 7.3**.: _We have \(0\leq r\leq 1\) and \(\eta\) is big iff \(r=0\) and the image of \(\psi\) is a curve when \(r=1\)._
We have the following commutative diagram
**Lemma 7.4**.: _The morphism \(\psi\) is induced by the complete linear system \(|\eta|\)._
Since \(\hat{a}_{*}\omega_{\hat{S}}=\mathscr{O}_{\hat{A}}\oplus\varphi_{L}^{*}\mathscr{M}\), and
\[0\to\varphi_{L}^{*}\widehat{L|_{Z}}^{\vee}\to H^{0}(L)^{\vee}\otimes L\to \varphi_{L}^{*}\mathscr{M}\to 0,\]
we have \(H^{0}(\hat{S},\eta)=H^{0}(\hat{S},K_{\hat{S}}-\hat{a}^{*}L)\simeq H^{0}(L)^{ \vee}\).
**Proposition 7.5**.: _The morphism \(\psi\) is \(\mathscr{K}_{L}\)-equivariant and the image of \(\psi\) is \(K_{L}\)-invariant. Moreover, the representation of \(\mathscr{K}_{L}\) on \(H^{0}(\eta)\) is isomorphic to the dual of the Schrodinger representation._
Proof.: We have already seen in Lemma 7.4 that \(\psi\) is induced by the complete linear series \(|K_{\hat{S}}-\hat{a}^{*}L|\). Note that \(\mathscr{K}_{L}\) acts on \(\hat{a}^{*}L\) via the action of \(\mathscr{K}_{L}\) on \(L\). Since \(\hat{S}\to S\) is a \(K_{L}\)-etale cover, we have the regular action of \(K_{L}\) on \(K_{\hat{S}}\), which induces an action of \(\mathscr{K}_{L}\) on \(K_{\hat{S}}\). Thus we have a linear action of \(\mathscr{K}_{L}\) on \(\eta\). Thus the morphism \(\psi\) is \(\mathscr{K}_{L}\)-equivariant and since the kernel of \(\mathscr{K}_{L}\to K_{L}\) acts on \(H^{0}(\eta)\) by multiplication-by-scalar, the image of \(\psi\) is \(K_{L}\)-invariant.
Since \(\hat{a}_{*}\omega_{\hat{S}}=\mathscr{O}_{\hat{A}}\oplus\varphi_{L}^{*}\mathscr{ M}\) and by the exact sequence (2), we have
\[0\to\varphi_{L}^{*}\widehat{L|_{Z}}^{\vee}\to H^{0}(\eta)\otimes L\to\varphi_ {L}^{*}\mathscr{M}\to 0.\]
For \(Q\in\operatorname{Pic}^{0}(A)\) general, we see that
\[H^{0}(\hat{S},\eta)\otimes\hat{a}^{*}H^{0}(\hat{A},L\otimes\varphi_{L}^{*}Q) \subset H^{0}(K_{\hat{S}}\otimes\hat{a}^{*}\varphi_{L}^{*}Q)\]
and there exists a unique \(\mathscr{K}_{L}\)-invariant section of
\[H^{0}(\hat{S},\eta)\otimes\hat{a}^{*}H^{0}(\hat{A},L\otimes\varphi_{L}^{*}Q),\]
which descends to the unique section (up to scalar) of
\[H^{0}(\hat{A},\varphi_{L}^{*}\mathscr{M}\otimes\varphi_{L}^{*}Q)^{K_{L}}=H^{0 }(A,\mathscr{M}\otimes Q)\subset H^{0}(K_{S}\otimes a_{S}^{*}Q).\]
Since the representation of \(\mathscr{K}_{L}\) on \(H^{0}(\hat{A},L\otimes\varphi_{L}^{*}Q)\) is isomorphic to the Schrodinger representation, the representation of \(\mathscr{K}_{L}\) on \(H^{0}(\eta)\) is isomorphic to the dual of the Schrodinger representation.
### The case where \(r>0\)
**Theorem 7.6**.: _When \(r=1\), we have \(h^{0}(L)=3\) and the image of \(\psi\) is a genus \(1\) curve \(C\subset\mathbb{P}^{2}\)._
Proof.: Let \(C\) be the normalization of the image of \(\psi\). Then \(\hat{S}\) is the minimal smooth model of a divisor of \(\hat{A}\times C\).
Since \(r=1\), \(Z\) is a reduced point and we have a short exact sequence
\[0\to P\to H^{0}(\eta)\otimes L\to\varphi_{L}^{*}\mathcal{M}\to 0,\]
where \(P\in\operatorname{Pic}^{0}(\hat{A})\) is a torsion line bundle. In particular,
\[q(\hat{S})=\begin{cases}3,&\text{if $P$ is trivial,}\\ 2,&\text{if $P$ is non-trivial.}\end{cases}\]
Since \(\hat{a}\) is generically finite, the Albanese morphism \(a_{\hat{S}}:\hat{S}\to A_{\hat{S}}\) of \(\hat{S}\) is also generically finite onto its image. Note that the morphism \(\hat{S}\to C\subset JC\) factors through \(a_{\hat{S}}\) and \(A_{\hat{S}}\to JC\) is surjective. Thus \(g(C)\leq 2\). Moreover, if \(g(C)=2\), by the universal property of \(a_{\hat{S}}\), \(q(\hat{S})=3\) and we have the following commutative diagram
We then conclude that the image of \(a_{\hat{S}}\) is an elliptic curve bundle over \(C\). On the other hand, the morphism \(\Phi:\hat{S}\to\hat{A}\times C\to\hat{A}\times\mathbb{P}^{h^{0}(L)-1}\) is birational onto its image. We conclude that \(\hat{S}\) is birational to an elliptic bundle over \(C\), which is a contradiction since \(\hat{S}\) is of general type.
Thus \(g(C)\leq 1\). If \(C=\mathbb{P}^{1}\), then \(\hat{S}\) is birational to a hypersurface of \(\hat{A}\times\mathbb{P}^{1}\). This hypersurface is the canonical model of \(\hat{S}\) and thus is a section of \(L\boxtimes\mathcal{O}_{\mathbb{P}^{1}}(h^{0}(L)-1)\). But then \((K_{\hat{S}}^{2})<6h^{0}(L)^{2}\), which is a contradiction. Thus \(C\) is of genus \(1\), \(C\subset\mathbb{P}^{h^{0}(L)-1}\) and \(\deg_{C}(\mathcal{O}_{\mathbb{P}^{h^{0}(L)-1}}(1))=h^{0}(L)\geq 3\).
Since the canonical model of \(\hat{S}\) is an ample divisor of the abelian threefold \(\hat{A}\times C\), \(q(\hat{S})=3\) and hence \(P=\mathcal{O}_{\hat{A}}\) is trivial.
Note that \(C\subset\mathbb{P}^{h^{0}(L)-1}\) is \(K_{L}\)-invariant. From the Schrodinger representation, we see that \(K_{L}\) acts on \(C\) faithfully and hence \(K_{L}\) is a subgroup of \(\operatorname{Aut}(C)\). Since \(q(S)=2\), \(C/K_{L}\) is rational.
We know that \(\operatorname{Aut}(C)\) is the semidirect product of \(C\) with
\[\begin{cases}\langle\sqrt{-1}\rangle\simeq\mathbb{Z}_{4},&\text{if $C\simeq\mathbb{C}/( \mathbb{Z}\oplus\mathbb{Z}\sqrt{-1})$,}\\ \langle-e^{\frac{2\pi i}{3}}\rangle\simeq\mathbb{Z}_{6},&\text{if $C\simeq\mathbb{C}/( \mathbb{Z}\oplus\mathbb{Z}e^{\frac{2\pi i}{3}})$,}\\ \langle-1\rangle\simeq\mathbb{Z}_{2},&\text{otherwise.}\end{cases}\]
We then verify that the only possibility is that \(K_{L}\simeq\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) or \(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\). Since \(h^{0}(L)\geq 3\), \(h^{0}(L)=3\).
By [AC, Theorem 0.8], Proposition 7.1, and Proposition 7.5, we conclude that \(S\) is an AC3-surface.
### Boundedness of \(h^{0}(L)\)
In this subsection, we assume that \(r=0\) and hence \(\varphi_{L}^{*}\mathscr{M}=H^{0}(L)^{\vee}\otimes L\) is locally free, \((\eta^{2})=2h^{0}(L)>0\). Thus \(h^{0}(L)\geq 3\). Moreover, \((\hat{a}^{*}L)^{2}=2h^{0}(L)(h^{0}(L)+1)\) and \((\hat{a}^{*}L\cdot\eta)=2h^{0}(L)(h^{0}(L)-1)\). We also note that \(q(\hat{S})=2\) and thus \(\hat{a}\) is the Albanese morphism of \(\hat{S}\). Let \(W\subset\mathbb{P}^{h^{0}(L)-1}\) be the image of \(\psi\). We now list the possible structure of \(W\).
**Theorem 7.7**.: _Assume that \(r=0\), we have \(h^{0}(L)\leq 6.\) More precisely, there are the following possibilities of \(\psi\):_
* \(h^{0}(L)=3\) _and_ \(\psi:\hat{S}\to W=\mathbb{P}^{2}\) _is of degree_ \(6\)_;_
* \(h^{0}(L)=4\) _and_ \(\psi:\hat{S}\to W\subset\mathbb{P}^{3}\) _is birational onto an octic;_
* \(h^{0}(L)=4\) _and_ \(\psi:\hat{S}\to W\subset\mathbb{P}^{3}\) _is of degree_ \(2\) _over a quartic;_
* \(h^{0}(L)=4\) _and_ \(\psi:\hat{S}\to W\subset\mathbb{P}^{3}\) _is of degree_ \(4\) _onto a quadric;_
* \(h^{0}(L)=6\) _and_ \(\psi:\hat{S}\to W\subset\mathbb{P}^{5}\) _is of degree_ \(3\) _onto a surface of degree_ \(4\)_._
Proof.: Since \(W\) is non-degenerate, \(\deg(W)\geq h^{0}(L)-2\).
If \(\deg(\hat{S}/W)\geq 3\), we have
\[2h^{0}(L)=(\eta^{2})=\deg(\hat{S}/W)\deg(W)\geq 3(h^{0}(L)-2).\]
Thus \(h^{0}(L)\leq 6\). The only possible cases are that \(h^{0}(L)=6\) and \(\psi\) is of degree \(3\) onto its image or \(h^{0}(L)=4\) and \(\psi\) is of degree \(4\) onto its image or \(h^{0}(L)=3\) and \(\psi\) is of degree \(6\) onto its image.
If \(\psi:\hat{S}\to W\subset\mathbb{P}^{h^{0}(L)-1}\) is birational onto its image, then \(h^{0}(L)\geq 4\). Let \(C\) be a general member of \(|\eta|\). Since \((K_{\hat{S}}\cdot\eta)=2h^{0}(L)^{2}\), \(g(C)=h^{0}(L)^{2}+h^{0}(L)+1\). By the proof of [B1, Lemma 5.1], we have \(\frac{2h^{0}(L)+4}{3}\geq h^{0}(L)-1\) and hence \(h^{0}(L)\leq 7\). To be more precise, we apply Castelnuovo's analysis (see [ACGH, Page 116]). Let \(m:=\lfloor\frac{2h^{0}(L)-1}{h^{0}(L)-3}\rfloor=2+\lfloor\frac{5}{h^{0}(L)-3}\rfloor\) and write
\[2h^{0}(L)-1=m(h^{0}(L)-3)+\epsilon.\]
Then Castelnuovo's inequality implies that
\[h^{0}(L)^{2}+h^{0}(L)+1=g(C)\leq\frac{m(m-1)}{2}(h^{0}(L)-3)+m\epsilon.\]
We then verify that the only possibility is \(h^{0}(L)=4\), and in this case \(C\) is isomorphic to a smooth plane octic.
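For the reader's convenience, the comparison reads as follows, where the last row is the Castelnuovo bound \(\frac{m(m-1)}{2}(h^{0}(L)-3)+m\epsilon\):
\[\begin{array}{c|cccc}h^{0}(L)&4&5&6&7\\\hline g(C)=h^{0}(L)^{2}+h^{0}(L)+1&21&31&43&57\\ \frac{m(m-1)}{2}(h^{0}(L)-3)+m\epsilon&21&16&15&15\end{array}\]
so the bound fails for \(h^{0}(L)=5,6,7\) and is attained exactly when \(h^{0}(L)=4\).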
In the rest of the proof, we assume that \(\psi:\hat{S}\to W\) is of degree \(2\) and then \(\deg(W)=h^{0}(L)\). We denote by \(\tau\) the corresponding involution on \(\hat{S}\) and we have
\[\hat{S}\xrightarrow{\iota}\hat{S}/\langle\tau\rangle\xrightarrow{\rho}W\subset\mathbb{P}^{h^{0}(L)-1}.\]
If \(\hat{S}/\langle\tau\rangle\) is of general type, let \(D\) be a general member of \(|\eta^{\prime}|\), where \(\eta^{\prime}:=\rho^{*}\mathcal{O}_{W}(1)\). Then \(D\) is a smooth projective curve and \(\eta^{\prime}|_{D}\) is a special divisor since \(K_{D}=(K_{\hat{S}/\langle\tau\rangle}+\eta^{\prime})|_{D}\). Then Clifford's lemma implies that
\[\deg(\eta^{\prime}|_{D})=h^{0}(L)\geq 2(h^{0}(L)-2)\]
and hence \(h^{0}(L)\leq 4\).
Thus we may assume in the following that \(\hat{S}/\langle\tau\rangle\) is not of general type. We then write \(\iota_{*}\mathcal{O}_{\hat{S}}=\mathcal{O}_{\hat{S}/\langle\tau\rangle}\oplus V^{-1}\) and \(K_{\hat{S}}=\iota^{*}(K_{\hat{S}/\langle\tau\rangle}+V)\), where \(V\) is a divisor and \(2V\) is linearly equivalent to the branch divisor of \(\iota\). We conclude that \(\hat{a}^{*}L\sim\iota^{*}(K_{\hat{S}/\langle\tau\rangle}+V-\eta^{\prime})\). Moreover, since \(K_{\hat{S}/\langle\tau\rangle}-\eta^{\prime}\) has no global sections, we see that the morphism on \(\hat{S}\) induced by the complete linear system \(|\hat{a}^{*}L|\) also factors birationally through \(\psi\).
We can also study \(|\hat{a}^{*}L|\) via the morphism \(\hat{a}\). Since
\[\hat{a}_{*}\mathcal{O}_{\hat{S}}=\mathcal{O}_{\hat{A}}\oplus(H^{0}(L)\otimes L ^{-1}),\]
we see that the linear system \(|\hat{a}^{*}L|\) separates general fibers of \(\hat{a}\). By Reider's theorem (see [1, Section 10.4]), we know that if \(h^{0}(L)\geq 5\) and there exists no elliptic curve \(E\) on \(\hat{A}\) such that \((L\cdot E)\leq 2\), \(L\) is a very ample divisor on \(\hat{A}\), which implies that \(|\hat{a}^{*}L|\) induces a birational morphism of \(\hat{S}\). Thus we have \(h^{0}(L)\leq 4\) or there exists an elliptic curve \(E\) on \(\hat{A}\) such that \((L\cdot E)\leq 2\).
We now argue that we still have \(h^{0}(L)\leq 4\) in the latter case. Assume the contrary that \(h^{0}(L)\geq 5\). Then there exists a unique elliptic curve \(E\) such that \(d:=(L\cdot E)\leq 2\) by Poincare's reducibility. Let \(\hat{A}\to\hat{A}/E\) be the quotient and we have a commutative diagram
It is easy to verify that \(q\) is also a fibration since \(\hat{a}\) is the Albanese morphism of \(\hat{S}\) and we denote by \(C\) a general fiber of \(q\). We consider the morphism \(\hat{a}|_{C}:C\to E\). By base change, we have
\[(\hat{a}|_{C})_{*}\omega_{C}=\mathcal{O}_{E}\oplus(L|_{E})^{\oplus h^{0}(L)}.\]
Thus \(g(C)=1+dh^{0}(L)\). On the other hand, \(\deg(\hat{a}^{*}L|_{C})=d(h^{0}(L)+1)\) and \(h^{0}(C,\hat{a}^{*}L|_{C})=h^{0}(L)+d\). It is clear that \(\hat{a}^{*}L|_{C}\) is special. Thus by Clifford's lemma, \(d=2\). We apply Reider's theorem again to see that \(|L|\) induces a morphism of \(\hat{A}\) of degree \(2\) and we denote by \(\tau_{A}\) the corresponding
involution on \(\hat{A}\). The quotient \(\hat{A}/\langle\tau_{A}\rangle\) is a \(\mathbb{P}^{1}\)-bundle over \(\hat{A}/E\). We then have the commutative diagram
(7)
We again take \(D\) to be a general member of \(|\eta^{\prime}|\). Consider the short exact sequence
\[0\to\mathcal{O}_{\hat{S}/\langle\tau\rangle}\to\eta^{\prime}\to\eta^{\prime}|_ {D}\to 0.\]
Note that
\[H^{1}(\hat{S}/\langle\tau\rangle,\eta^{\prime}) = H^{1}(\hat{S},\eta)^{\tau}=H^{1}(\hat{S},K_{\hat{S}}-\hat{a}^{*}L)^{\tau}\] \[\simeq (H^{1}(\hat{S},\hat{a}^{*}L)^{\vee})^{\tau}=(H^{1}(\hat{A},L\oplus\mathcal{O}_{\hat{A}}^{\oplus h^{0}(L)})^{\vee})^{\tau_{A}}. \tag{8}\]
Thus \(h^{1}(\hat{S}/\langle\tau\rangle,\eta^{\prime})=h^{0}(L)q(\hat{A}/\langle\tau _{A}\rangle)=h^{0}(L)\). We thus conclude that \(h^{1}(\eta^{\prime}|_{D})>0\) and hence \(\eta^{\prime}|_{D}\) is again special. We conclude again by Clifford's lemma that
\[h^{0}(L)=\deg(\eta^{\prime}|_{D})\geq 2(h^{0}(\eta^{\prime}|_{D})-1)\geq 2(h^{0} (L)-2)\]
and hence \(h^{0}(L)\leq 4\).
### The case that \(h^{0}(L)=6\) cannot occur
We exclude the case that \(h^{0}(L)=6\) via the symmetry induced by the finite Heisenberg group.
**Lemma 7.8**.: _The case that \(h^{0}(L)=6\) cannot occur._
Proof.: In this case \(W\subset\mathbb{P}^{5}\) is a surface of minimal degree \(4\). Thus we know that \((W,\mathcal{O}_{\mathbb{P}^{5}}(1)|_{W})\) is isomorphic to one of the following (see [EH, Theorem 1]):
* a cone over a rational quartic curve;
* \((\mathbb{P}_{\mathbb{P}^{1}}(\mathcal{O}_{\mathbb{P}^{1}}(1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(3)),H)\), where \(H\) is the relative hyperplane divisor;
* \((\mathbb{P}^{1}\times\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(1)\boxtimes \mathcal{O}_{\mathbb{P}^{1}}(2))\);
* \((\mathbb{P}^{2},\mathcal{O}_{\mathbb{P}^{2}}(2))\).
We shall apply the \(\mathcal{K}_{L}\)-equivariant structure to rule out all possibilities.
Note that \(W\) admits a \((\mathbb{Z}_{6}\times\mathbb{Z}_{6})\)-action. Let \(\sigma\) and \(\tau\) be two generators of \(\mathcal{K}_{L}\) such that \(\sigma\) acts by the cyclic permutation
\[z_{0}\to z_{1}\to z_{2}\to\cdots\to z_{4}\to z_{5}.\]
and \(\tau\) induces
\[z_{i}\to\zeta^{i}z_{i},\ 0\leq i\leq 5,\]
where \(\zeta\) is a primitive \(6\)-th root of unity.
We see that \(K_{L}\) is an automorphism group of the pair \(W\subset\mathbb{P}^{5}\) and \(K_{L}\) has no fixed points.
We also recall that the automorphism group \(\mathrm{PGL}_{2}\) of \(\mathbb{P}^{1}\) has only \(\mathbb{Z}_{r}\), \(D_{r}\), \(A_{4}\), \(S_{4}\) and \(A_{5}\) as finite subgroups and there is only one conjugacy class for each finite subgroup (see [B2]).
In the first case, the only singular point of \(W\) is preserved by \(K_{L}\), which would imply that the Schrodinger representation is reducible. This is absurd.
In the second case, \(W\) contains a unique line \(D\) with self-intersection \(-2\), corresponding to the quotient \(\mathcal{O}_{\mathbb{P}^{1}}(1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(3)\to \mathcal{O}_{\mathbb{P}^{1}}(1)\). Thus \(K_{L}\) preserves \(D\) and the ideal sheaf of \(D\) is preserved by \(\mathcal{K}_{L}\), which again contradicts the irreducibility of the Schrodinger representation.
In the third case, since \(K_{L}\) moves lines to lines, it preserves the second projection \(\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{1}\). We now consider the action of \(K_{L}\) on the base, which induces a homomorphism \(\gamma:K_{L}\to\mathrm{PGL}_{2}\). If the image is a cyclic group, this cyclic group has a fixed point and hence \(K_{L}\) preserves a line of \(\mathbb{P}^{5}\), which is impossible as we have seen above. Thus the image of \(\gamma\) has to be \(D_{2}\) according to the above classification of subgroups of \(\mathrm{PGL}_{2}\). But then the kernel of \(\gamma\), which is isomorphic to \(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\) and acts faithfully on the first factor, shows that \(\mathrm{PGL}_{2}\) contains \(\mathbb{Z}_{3}\times\mathbb{Z}_{3}\), which is again impossible.
In the last case, \(K_{L}\) preserves \(W\) and hence we have an injective homomorphism \(\gamma:K_{L}\to\mathrm{PGL}_{3}=\mathrm{Aut}(\mathbb{P}^{2})\). By the classification of finite subgroups of \(\mathrm{PGL}_{3}\) (see [DI, Corollary 4.6, Theorem 4.7, Theorem 4.8]), the image of \(\gamma\) is intransitive and hence is conjugate to groups of automorphisms generated by diagonal matrices
\[(x_{0}:x_{1}:x_{2})\to(\zeta^{a_{1}}x_{0}:\zeta^{a_{2}}x_{1}:x_{2})\] \[(x_{0}:x_{1}:x_{2})\to(\zeta^{b_{1}}x_{0}:\zeta^{b_{2}}x_{1}:x_{2}),\]
where \(a_{1}\), \(a_{2}\), \(b_{1}\) and \(b_{2}\) are integers. However, we can then easily verify that the representation of \(\mathcal{K}_{L}\) on \(H^{0}(L)\) is not compatible with the action of \(K_{L}\) on \(|\mathcal{O}_{\mathbb{P}^{2}}(2)|\).
### Remarks on the case that \(h^{0}(L)=3\)
When \(h^{0}(L)=3\), \(\deg a_{S}=4\). We consider the Stein factorization \(a_{S}:S\to\overline{S}\xrightarrow{\rho}A\). Since \(a_{S*}\omega_{S}=\mathcal{O}_{A}\oplus\widehat{L^{-1}}\) is locally free, \(a_{S*}\mathcal{O}_{S}=\rho_{*}\mathcal{O}_{\overline{S}}=(a_{S*}\omega_{S})^{\vee}\) is locally free and \(R^{1}a_{S*}\mathcal{O}_{S}=0\). Thus \(\overline{S}\) has rational singularities and \(\rho\) is a finite flat quadruple cover. By the main result of [HM], we know that \(\rho\) is determined by a totally decomposable section of \(H^{0}(A,\wedge^{2}S^{2}\mathcal{M}\otimes\det\mathcal{M}^{-1})\). These totally decomposable sections are determined by Penegini and Polizzi in [PP3, Proposition 2.3 and 2.4] as \(K_{L}\)-invariant totally decomposable sections of \(H^{0}(\hat{A},\varphi_{L}^{*}(\wedge^{2}S^{2}\mathcal{M}\otimes\det\mathcal{M}^{-1}))\simeq H^{0}(\hat{A},L)^{\oplus 15}\). To be more precise,
\[\dim H^{0}(\hat{A},\varphi_{L}^{*}(\wedge^{2}S^{2}\mathcal{M}\otimes\det \mathcal{M}^{-1}))^{K_{L}}=5\]
and there is a smooth conic \(C_{A,L}\) inside
\[|H^{0}(\hat{A},\varphi_{L}^{*}(\wedge^{2}S^{2}\mathscr{M}\otimes\det\mathscr{M}^ {-1}))^{K_{L}}|\]
parametrizing totally decomposable sections.
Thus a minimal surface with \(p_{g}=q=2\) and \(K_{S}^{2}=6\) corresponds to a point in \(C_{A,L}\) such that the corresponding quadruple cover \(\overline{S}\) has suitable singularities.
When \(A\) is general in the moduli space of \((1,3)\)-polarized abelian surfaces, Penegini and Polizzi determined the points in \(C_{A,L}\) such that \(a_{S}\) is finite (see [PP3, Proposition 2.6]).
### Remarks on the case that \(h^{0}(L)=4\)
In this case there are two possibilities. The polarization can be of type \((1,4)\) or \((2,2)\). The two different polarizations have different symmetries and need to be treated differently.
**Lemma 7.9**.: _The case that \(W\subset\mathbb{P}^{3}\) is an octic cannot occur._
Proof.: Assume the contrary that \(W\subset\mathbb{P}^{3}\) is an octic. In this case, we have seen in the proof of Theorem 7.7 that a general hyperplane section of \(W\) is a smooth plane octic. Thus \(W\) is smooth in codimension \(1\) and thus as a divisor in \(\mathbb{P}^{3}\), \(W\) is normal. Hence \(\psi_{*}\mathscr{O}_{\hat{S}}=\mathscr{O}_{W}\). Since \(h^{1}(\mathscr{O}_{\hat{S}})=2\) and \(h^{1}(\mathscr{O}_{W})=0\), \(R^{1}\psi_{*}\mathscr{O}_{\hat{S}}\) is supported on singular points, where \(W\) does not have rational singularities. Since the length of \(R^{1}\psi_{*}\mathscr{O}_{\hat{S}}\) is \(2\), the action of \(K_{L}\) on \(\mathbb{P}^{3}\) has a fixed point or a fixed line, which is a contradiction.
**Lemma 7.10**.: \(W\) _cannot be a quadric in \(\mathbb{P}^{3}\)._
Proof.: It is easy to verify that there exists no \(K_{L}\)-invariant quadrics when \(L\) is of \((1,4)\) type. However, if \(L\) is of type \((2,2)\), we get no contradictions from the symmetry of \(K_{L}\). In this case, \(K_{L}\simeq(\mathbb{Z}_{2})^{\oplus 4}\) and we may assume that \(W\simeq\mathbb{P}^{1}\times\mathbb{P}^{1}\) is a smooth quadric. By the classification of finite subgroups of \(\operatorname{Aut}(\mathbb{P}^{1}\times\mathbb{P}^{1})\) (see [1, Theorem 4.9]), after a conjugation, we may assume that \(K_{L}\simeq K_{1}\times K_{2}\), where \(K_{i}\simeq\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) acting faithfully on the \(i\)-th factor of \(W=\mathbb{P}^{1}\times\mathbb{P}^{1}\) and acting trivially on the \((3-i)\)-th factor for \(i=1,2\). Let \(\eta_{i}=p_{i}^{*}\mathscr{O}_{\mathbb{P}^{1}}(1)\), where \(p_{i}:\hat{S}\to\mathbb{P}^{1}\) is the natural morphism onto the \(i\)-th factor of \(W\). It is easy to verify that \(p_{i}\) is a fibration. Note that \(\eta_{1}+\eta_{2}=\eta\) and recall that
\[(K_{\hat{S}}\cdot(\eta_{1}+\eta_{2}))=(K_{\hat{S}}^{2})-(K_{\hat{S}}\cdot \hat{a}^{*}L)=32.\]
Let \(S_{i}:=\hat{S}/K_{i}\) and \(B_{i}:=\hat{A}/K_{i}\) for \(i=1,2\). Note that \(p_{3-i}\) is preserved by \(K_{i}\) and we thus have a factorization \(p_{3-i}:\hat{S}\to S_{i}\overset{q_{i}}{\to}\mathbb{P}^{1}\) and the following
commutative diagram:
We see that a general fiber of \(p_{3-i}\) is an etale \(K_{i}\)-cover of the corresponding fiber \(C_{i}\) of \(q_{i}\). In particular, \((K_{\hat{S}}\cdot\eta_{3-i})=4(2g(C_{i})-2)\) for \(i=1,2\). Note that \(g(C_{i})>2\), since otherwise \(q_{i}\) is birationally trivial (see [D, Lemme in the appendix]), which is obviously impossible. We thus conclude that \(g(C_{1})=g(C_{2})=3\).
Note that \(S_{i}/K_{3-i}=S\) and the action of \(K_{3-i}\) on \(\mathbb{P}^{1}\) is conjugate to the subgroup generated by two involutions
\[(x:y)\to(y:x),\ (x:y)\to(x:-y).\]
We have the following commutative diagram:
Thus \(S\) admits two fibrations \(t_{1}\) and \(t_{2}\) onto \(\mathbb{P}^{1}\), whose general fibers still denoted by \(C_{i}\) are genus \(3\) curves. Let \(f_{1}\in H^{0}(K_{S})\) be a section by pulling-back the canonical divisor of \(A\) and let \(f_{2}\in H^{0}(K_{S})\) be a section coming from \(H^{0}(A,\mathscr{M})\). By [X2, the proof of Theorem 1], the image of
\[H^{0}(S,K_{S})\to H^{0}(C_{i},K_{C_{i}})\]
is of dimension \(1\). Thus, for \(t\in\mathbb{C}\) general, the divisor of \(f_{1}-tf_{2}\) contains a general fiber of \(t_{1}\) and \(t_{2}\). This implies that \(K_{\hat{S}}-4\eta\) is effective. This is a contradiction, since \(h^{0}(K_{\hat{S}})=17\) and \(h^{0}(\mathbb{P}^{1}\times\mathbb{P}^{1},\mathscr{O}_{\mathbb{P}^{1}}(4)\boxtimes\mathscr{O}_{\mathbb{P}^{1}}(4))=25\).
**Lemma 7.11**.: _The case that \(W\) is a quartic in \(\mathbb{P}^{3}\) cannot occur._
Proof.: We apply the same argument as in the proof of Theorem 7.7.
When \(W\) is a quartic, \(\psi:\hat{S}\to W\) is of degree \(2\). Let \(\tau\) and \(\tau_{A}\) be respectively the involution on \(\hat{S}\) and \(\hat{A}\). In this case, the morphism induced by \(|\hat{a}^{*}L|\) also factors birationally through \(\psi\). The linear system \(|L|\) also factors
through the quotient of \(\tau_{A}\). We recall the commutative diagram (7):
If \(L\) is a \((1,4)\)-polarization, we apply the results in [BLvS]. By [BLvS, Section 2], we know that the morphism induced by \(|L|\) is not birational onto its image iff there exists an elliptic curve \(E\) on \(\hat{A}\) such that \((E\cdot L)=2\) and in this case the morphism induced by \(|L|\) is birationally equivalent to the quotient of \(\hat{A}\) by \(\tau_{A}\), where \(\hat{A}/\langle\tau_{A}\rangle\) is a \(\mathbb{P}^{1}\)-bundle over \(\hat{A}/E\). Thus \(q(\hat{S}/\tau)=1\). We now apply the same argument below (7). Let \(\eta^{\prime}=\rho^{*}\mathcal{O}_{W}(1)\) and \(D\) a general section of \(\eta^{\prime}\). By the same computation in (8), we see that \(h^{1}(\eta^{\prime}|_{D})\geq h^{1}(\eta^{\prime})-1\geq 3\). On the other hand, since \(\rho\) is birational, \(D\) is birational onto a quartic in \(\mathbb{P}^{2}\). Thus \(\eta^{\prime}|_{D}-K_{D}\) is effective, which is a contradiction.
If \(L=2\Theta\) is a \((2,2)\)-polarization and \(\Theta\) is an irreducible principal polarization, \(|L|\) defines an embedding of \(\hat{A}/\langle(-1)_{\hat{A}}\rangle\) in \(\mathbb{P}^{3}\). Thus \(\tau_{A}=(-1)_{\hat{A}}\). Note that the canonical bundle of \(\hat{A}/\langle\tau_{A}\rangle\) is trivial and \(K_{\hat{S}/\langle\tau\rangle}\) is a subsheaf of \(\rho^{*}K_{W}=\mathcal{O}_{\hat{S}/\langle\tau\rangle}\). Hence \(K_{\hat{S}/\langle\tau\rangle}\) is also trivial, so \(a_{\tau}\) is quasi-etale, which implies that \(\hat{a}\) is also quasi-etale, which is impossible. We then assume that \((\hat{A},\Theta)\simeq(E_{1},\Theta_{1})\times(E_{2},\Theta_{2})\) is a split PPAV. Then \(\tau_{A}\) could be \((-1)_{\hat{A}}\), or \((-1_{E_{1}})\times\operatorname{Id}_{E_{2}}\), or \(\operatorname{Id}_{E_{1}}\times(-1_{E_{2}})\). In the first case, we apply the same argument as when \(\Theta\) is irreducible to get a contradiction. In the last two cases, we can argue exactly as in the \((1,4)\)-polarization case to get a contradiction.
|
2306.17570 | ELM-based Timing Synchronization for OFDM Systems by Exploiting
Computer-aided Training Strategy | Due to the implementation bottleneck of training data collection in realistic
wireless communications systems, supervised learning-based timing
synchronization (TS) is challenged by the incompleteness of training data. To
tackle this bottleneck, we extend the computer-aided approach, with which the
local device can generate the training data instead of generating learning
labels from the received samples collected in realistic systems, and then
construct an extreme learning machine (ELM)-based TS network in orthogonal
frequency division multiplexing (OFDM) systems. Specifically, by leveraging the
rough information of channel impulse responses (CIRs), i.e., root-mean-square
(r.m.s) delay, we propose the loose constraint-based and flexible
constraint-based training strategies for the learning-label design against the
maximum multi-path delay. The underlying mechanism is to improve the
completeness of multi-path delays that may appear in the realistic wireless
channels and thus increase the statistical efficiency of the designed TS
learner. By this means, the proposed ELM-based TS network can alleviate the
degradation of generalization performance. Numerical results reveal the
robustness and generalization of the proposed scheme against varying
parameters. | Mintao Zhang, Shuhai Tang, Chaojin Qing, Na Yang, Xi Cai, Jiafan Wang | 2023-06-30T11:45:15Z | http://arxiv.org/abs/2306.17570v1 | # ELM-based Timing Synchronization for OFDM Systems by Exploiting Computer-aided Training Strategy
###### Abstract
Due to the implementation bottleneck of training data collection in realistic wireless communications systems, supervised learning-based timing synchronization (TS) is challenged by the incompleteness of training data. To tackle this bottleneck, we extend the computer-aided approach, with which the local device can generate the training data instead of generating learning labels from the received samples collected in realistic systems, and then construct an extreme learning machine (ELM)-based TS network in orthogonal frequency division multiplexing (OFDM) systems. Specifically, by leveraging the rough information of channel impulse responses (CIRs), i.e., root-mean-square (r.m.s) delay, we propose the loose constraint-based and flexible constraint-based training strategies for the learning-label design against the maximum multi-path delay. The underlying mechanism is to improve the completeness of multi-path delays that may appear in the realistic wireless channels and thus increase the statistical efficiency of the designed TS learner. By this means, the proposed ELM-based TS network can alleviate the degradation of generalization performance. Numerical results reveal the robustness and generalization of the proposed scheme against varying parameters.
Timing synchronization, orthogonal frequency division multiplexing, computer-aided, extreme learning machine.
## I Introduction
Orthogonal frequency division multiplexing (OFDM), as a core technology of modern wireless and mobile communication systems, has been subject to extensive research efforts not only in 5G field communication systems [1] but also in wireless local area networks (WLANs) [2] and Internet-of-Things (IoT) systems [3]. For OFDM systems, timing synchronization (TS) plays a pivotal role in overall system performance, since it aims to locate the start of the receiver fast Fourier transform (FFT) window within the inter-symbol-interference (ISI)-free region [4]. Extensive studies have been conducted so far to improve the timing metric for OFDM systems [5, 6, 7, 8, 9, 10]. However, classic TS schemes inevitably encounter nonlinear effects, e.g., channel fading and hardware imperfections, which pose a great challenge to avoiding ISI in the TS stage and affect the subsequent signal processing, e.g., channel estimation [11] and symbol detection [12].
Due to its powerful capability in coping with nonlinear problems [13], machine learning (ML) is a promising solution to tackle the challenge of nonlinear effects in the TS stage, and it has also aroused extensive interest in the design of physical-layer communications [14, 15, 16, 17, 18, 19] over the past few years. In ML-based applications, substantial research has shown that the predictive performance of supervised learning outperforms that of unsupervised learning, because supervised methods can directly exploit label feedback for prediction [20]. As for synchronization, several supervised learning-based schemes [21, 22, 23, 24, 25] have been investigated. For example, in [21], a deep neural network (DNN)-based estimator was constructed to estimate the timing offset and carrier frequency offset (CFO). On the basis of DNN-based auto-encoders, the authors in [22] and [23] implemented timing synchronization and sampling time synchronization, respectively. With DNN approaches, the authors in [24] investigated packet detection. In [25], an extreme learning machine (ELM) was employed to estimate the residual symbol timing offset (STO). These studies [21, 22, 23, 24, 25] suggest that performance improvements for synchronization can be achieved by employing supervised learning.
Although supervised learning-based TS can provide performance improvements, an implementation bottleneck is inevitably encountered in training data collection. For classification tasks such as TS, the training data set consists of the collected samples and their corresponding learning labels (e.g., labels given by STOs). In realistic wireless communication systems, collecting training data is time- and labor-consuming, especially for supervised learning-based TS. For the supervised learning-based TS learner, the implementation bottleneck of training data collection and its possible solutions mainly lie in the following aspects:
1. Precisely assigning the ground truth labels for TS is impractical at the receiver in realistic wireless communication systems. For one thing, due to unknown attributes of wireless channels, e.g., STOs, CFOs, and channel impulse responses (CIRs), the availability and the optimal timing index of the received samples cannot be judged correctly. For another, conventional digital receivers distinguish the start point of collected samples according to the sampling grid; the received samples that fall off the grid are bound to be lost, making it challenging to precisely label the collected samples. As a result, the correlation between the estimation-based labels and the ground truth labels is weakened, leading to the degradation of learning efficiency for the supervised learning-based TS learner. _To solve this issue, the desired training samples and their corresponding learning labels can be generated by computers, tackling the challenge of generating the ground truth labels in TS. Unlike the collected data, the computer-generated training data come with perfectly known generation parameters and methods (e.g., STO, CFO, path delays, etc.)._
2. The completeness of training data collection is hindered in realistic wireless communication systems. In the long term, realistic wireless channel environments are mainly affected by climate and weather factors, such as the alternation of spring, summer, autumn, and winter, the temperature variation between morning and evening, and the difference between rainy and sunny days. In the short term, realistic wireless channel environments are mainly affected by obstacles in the cell, such as vehicles and pedestrians. Thus, the maximum multi-path delay is time-varying, which makes it impractical to collect complete training data (including training samples and their corresponding learning labels) from realistic wireless channels. _To alleviate this challenge, more complete training samples can be generated by computers [26], alleviating the completeness problem of data collection in realistic wireless communications. The insufficiency of training data may reduce the statistical efficiency of the supervised learning-based TS learner, while computers can generate rich data that capture how the TS task changes with the environment. Namely, by simulating a large number of wireless communication scenarios, the computer-aided approach can generate data more completely than collecting the desired data from realistic wireless communication systems. In particular, the generated data features can be obtained faster and more conveniently than those that cannot be directly obtained by digital receivers [27], avoiding time and labor consumption._
3. Storing and transmitting the collected large-scale training data imposes substantial burdens. On the one hand, the supervised learning-based TS learner requires sufficient data to improve its adaptability to changing wireless environments, yet large-scale data collection inevitably leads to great difficulty in storage capacity. On the other hand, large-scale training data transmission would impose unacceptable bandwidth requirements on current wireless communication systems. Given the rapidly increasing demand for bandwidth resources, the limited transmission bandwidth is mainly occupied by users' business data; it is therefore unlikely that valuable bandwidth resources would be set aside merely for sending massive collected samples for network learning. _To mitigate the pressure of storing and transmitting the collected large-scale training data for TS, a computer-aided approach can deploy the data-generation algorithm at the local devices. By providing the constraints on the data features, the desired data can be generated rapidly and discarded promptly after use, which alleviates the pressure of huge storage requirements. Besides, it is not necessary to transmit data for network training, avoiding the large bandwidth requirement of data transmission for supervised learning-based TS learners._
In addition, the current supervised learning-based TS learners relying on DNN models (e.g., [22, 23, 24]) have achieved remarkable success in improving the TS performance. However, several open problems remain to be solved, such as complex parameter tuning, long training time, and convergence to local optima [28]. Unlike the DNN-based approaches, ELM is a kind of single hidden layer feed-forward neural network (SLFN) that does not require gradient back-propagation [29], which may result in a much shorter training time for the TS learner.
Inspired by the advantages of computer-aided approaches and ELM networks, we propose the computer-aided training strategy for ELM-based TS in this paper, which aims to tackle the bottleneck faced in training data collection and to enhance the generalization performance of the designed TS learner. The main contributions of our work are summarized as follows:
1. We develop a computer-aided approach to generate the training data for the ELM-based TS network in OFDM systems. As a study case in this paper, by exploring partial properties of CIRs, some important data features of realistic wireless channels can be rapidly captured, e.g., the root-mean-square (r.m.s.) delay. Compared with classic TS schemes for training data collection, the proposed scheme reduces the cost of extracting the available and desired training data from the raw received data, i.e., it is less time- and labor-consuming. Meanwhile, due to the good controllability of computer-aided approaches, as many realistic situations as possible can be simulated with the assistance of computers, which increases the completeness of training data sets. In addition, by generating training data at the local device, the proposed scheme reduces the burden of storage space and transmission bandwidth in the training data collection.
2. We exploit the rough features of CIRs (i.e., the expected maximum multi-path delay) to design the training strategy for the ELM-based TS network. Particularly, the loose constraint- and flexible constraint-based training strategies are derived from exploring the statistical information of multi-path delays, which aims to increase the generalization performance of TS learners and the correlation between the generated learning labels with the ground truth labels. The proposed training strategies inherit the advantage of the training strategy of [30] (i.e., increasing the possibility of locating STOs into the ISI-free region) and also well adapt themselves to the volatile ground truth labels (i.e., offering a strong correlation between the generated learning labels and the ground truth labels). With this, the designed training strategy can improve the generalization performance of the designed TS learner compared with that of [30]. Moreover, sufficient training data can be rapidly produced with the
assistance of the computer-aided approach, which avoids collecting the training data from realistic systems.
3. We form a lightweight TS framework for OFDM systems by fusing the classic synchronizer and the conventional ELM, i.e., the ELM-based TS network. Particularly, the classic TS scheme is performed to directly extract the relevant features of timing information from the raw received samples. By this means, the ELM can easily learn the transformation between the input features and the learning labels, even without deeper hidden layers for fusing implicit data features.
The rest of this paper is organized as follows. For notation clarity, the meaning of symbols that have been adopted in this paper is listed in Table I. Section II briefly describes the system model and problem formulation. Section III provides the computer-aided training strategies for the ELM-based TS network and the implementation of the proposed ELM-based TS network. Section IV comprehensively evaluates the TS performance of the proposed computer-aided training strategy for ELM-based TS network, and is followed by conclusions in Section V.
## II System Model
We consider an OFDM system with \(N\) sub-carriers, which is plotted in Fig. 1. According to the inverse fast Fourier transform (IFFT), the frequency-domain symbol \(\mathbf{d}\in\mathbb{C}^{N\times 1}\) is first transformed into the time domain, and the OFDM symbol \(\mathbf{s}\in\mathbb{C}^{N\times 1}\) in the time domain is expressed as
\[\mathbf{s}=\mathbf{F}_{N}\mathbf{d}, \tag{1}\]
where \([\mathbf{s}]_{k,1}=s_{k},k\in\{0,1,\cdots,N-1\}\) and \(\mathbb{E}\{|s_{k}|^{2}\}=\sigma_{d}^{2}\) with \(\sigma_{d}^{2}\) being the data power. \(\mathbf{F}_{N}\in\mathbb{C}^{N\times N}\) stands for the \(N\)-by-\(N\) IDFT matrix with \([\mathbf{F}_{N}]_{k,l}=\frac{1}{\sqrt{N}}e^{j\frac{2\pi kl}{N}}\), \(k,l\in\{0,1,\cdots,N-1\}\).
\[\mathbf{x}=\mathbf{G}_{\mathrm{cp}}\mathbf{s}, \tag{2}\]
where \(\mathbf{x}\) is the \(N_{u}\times 1\) transmitted signal vector with \(N_{u}=N+L_{c}\), and
\[\mathbf{G}_{\mathrm{cp}}=\left[\begin{array}{cc}\mathbf{0}_{L_{c}\times(N-L_{c})}&\mathds{I}_{L_{c}}\\ \multicolumn{2}{c}{\mathds{I}_{N}}\end{array}\right]. \tag{3}\]
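To make the signal model concrete, the following NumPy sketch generates one OFDM symbol according to (1)-(3); the values of \(N\) and \(L_{c}\) and the QPSK mapping are illustrative assumptions rather than parameters prescribed in this paper.

```python
import numpy as np

N, L_c = 64, 16                      # number of sub-carriers and CP length (illustrative)
rng = np.random.default_rng(0)

# Frequency-domain symbol d with unit data power (QPSK)
d = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

# (1): s = F_N d with [F_N]_{k,l} = exp(j*2*pi*k*l/N) / sqrt(N)
k = np.arange(N).reshape(-1, 1)
F_N = np.exp(1j * 2 * np.pi * k * k.T / N) / np.sqrt(N)
s = F_N @ d                          # equivalently: np.sqrt(N) * np.fft.ifft(d)

# (3): CP-addition matrix G_cp of size (N + L_c) x N
G_cp = np.vstack([np.hstack([np.zeros((L_c, N - L_c)), np.eye(L_c)]), np.eye(N)])

# (2): x = G_cp s, i.e., the last L_c samples of s are copied in front of s
x = G_cp @ s
assert np.allclose(x, np.concatenate([s[-L_c:], s]))
```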
By denoting \(\mathbf{x}=[x_{0},x_{1},\cdots,x_{N_{u}-1}]^{T}\), the received signal \(r_{n}\) over the multi-path fading channel is described as
\[r_{n}=\sum_{l=1}^{L}h_{l}x_{n-\tau-\eta_{l}}e^{j\frac{2\pi}{N}\varepsilon n}+w_{n},\quad n=0,1,\cdots,N_{u}-1, \tag{4}\]
where \(L\), \(h_{l}\), \(\tau\), \(\eta_{l}\), and \(\varepsilon\) stand for the number of channel taps, the channel coefficient of the \(l\)th path, the unknown STO, the path delay of the \(l\)th path, and the CFO normalized by the sub-carrier spacing, respectively. In (4), \(w_{n}\) is the additive white noise following an independent circularly symmetric complex Gaussian (CSCG) distribution with zero mean and variance \(\sigma^{2}\). Without loss of generality, all multi-path delays are assumed to be sorted in ascending order and shorter than the length of the CP, i.e., \(\eta_{1}<\eta_{2}<\cdots<\eta_{L}\leq L_{c}\) [32]. In the TS stage, the STO estimation is conducted, with the estimate denoted as \(\widehat{\tau}\). Particularly, we combine an ELM network with the classic synchronizer to enhance the TS for OFDM systems.
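As a sanity check of (4), the following self-contained NumPy sketch passes a stand-in transmitted vector through an \(L\)-tap multi-path channel with an STO, a CFO, and additive CSCG noise; the tap gains, path delays, STO, CFO, and SNR below are illustrative assumptions only, and samples shifted outside the observation window are simply zeroed instead of being taken from neighboring symbols.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L_c = 64, 16
N_u = N + L_c
x = np.sqrt(0.5) * (rng.standard_normal(N_u) + 1j * rng.standard_normal(N_u))  # stand-in for (2)

h = np.array([1.0, 0.6, 0.3]) * np.exp(1j * rng.uniform(0, 2 * np.pi, 3))  # path gains h_l
eta = np.array([0, 3, 7])        # path delays eta_l (all shorter than L_c)
tau, eps, snr_db = 5, 0.1, 10    # STO, normalized CFO, and SNR in dB

n = np.arange(N_u)
r = np.zeros(N_u, dtype=complex)
for h_l, eta_l in zip(h, eta):
    m = n - tau - eta_l                       # delayed indices n - tau - eta_l in (4)
    valid = (m >= 0) & (m < N_u)
    r[valid] += h_l * x[m[valid]]
r *= np.exp(1j * 2 * np.pi * eps * n / N)     # CFO rotation e^{j 2 pi eps n / N}
sigma2 = np.mean(np.abs(r) ** 2) / 10 ** (snr_db / 10)
r += np.sqrt(sigma2 / 2) * (rng.standard_normal(N_u) + 1j * rng.standard_normal(N_u))
```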
One goal of our work is to generate more complete and accurate learning labels for TS than the traditional labeling of collected training data in realistic wireless communication scenarios. Another goal is to avoid the heavy occupation of bandwidth resources and storage space caused by the transmission and storage of collected training data. The computer-aided approach can specify the training data generation method at local devices, without additionally transmitting the collected data over the wireless channel. In addition, the training data can be generated in a timely manner and discarded promptly without long-term storage. In this way, the occupation of bandwidth resources and extra storage space can be avoided, which inspires us to develop computer-aided solutions.
Fig. 1: System model.
## III Computer-Aided Training Strategy for ELM-Based TS Network
In this section, we first characterize the existing strategy from the perspective of obtaining learning labels, then elaborate the computer-aided strategy and finally present the designed TS framework for the OFDM systems. For the sake of clarity, a \(K\)-length learning label, denoted as \(\mathbf{T}\in\mathbb{C}^{K\times 1}\), can be represented by using a time-indexed sequence, i.e.,
\[\mathbf{T}=[T_{0},T_{1},\cdots,T_{j},\cdots,T_{K-1}]^{T}. \tag{5}\]
### _Insights of Existing Strategy_
Deploying ML has become a promising solution to enhance or replace conventional signal processing [33]. However, ML-based signal processing still faces significant challenges in determining the learning labels of training samples, and the ELM-based TS network is no exception.
#### Iii-A1 Case I
If the information of the CIRs is perfectly known to the receiver, three types of existing learning-label designs are mentioned and compared in [30], which are listed as follows:
* _One-hot Coding:_ When the expected maximum timing offset is known to be less than the length of the OFDM symbol (i.e., \(N\)), the correct start location of the preamble is employed to design the learning label. By utilizing one-hot coding, the label value is given by [34] \[T_{j}=\left\{\begin{array}{ll}1,&j=\tau+L_{c}\\ 0,&\text{others}\end{array}\right.,\] (6) where \(j\in\{0,1,\cdots,K-1\},K=N_{u}=N+L_{c}\), due to the existence of the CP. The vector form of this learning label is \(\mathbf{T}=[\mathbf{0}_{1\times\tau+L_{c}},1,\mathbf{0}_{1\times K-\tau-L_{c}-1}]^{T}\). Since the TS of an OFDM system only requires that the STO estimate falls into the ISI-free region, the timing instants within the ISI-free region can be regarded as the ground truth labels. Without the use of a cyclic suffix, the nonzero label feature points to the edge of the ISI-free region, which results in a weak correlation between the generated learning labels and the ground truth labels.
* _Labeling ISI-free Midpoint:_ With the use of the known timing offset and maximum multi-path delay, the midpoint of the ISI-free region is exploited to design the learning label [30]. The midpoint of the ISI-free region is denoted as \(p_{c}=\lceil(\tau_{P}+L_{c})/2\rceil\), which is used to design the label value [30] \[T_{j}=\left\{\begin{array}{ll}1,&j=\tau+p_{c}\\ 0,&\text{others}\end{array}\right.,\] (7) and its vector form is \(\mathbf{T}=[\mathbf{0}_{1\times\tau+p_{c}},1,\mathbf{0}_{1\times K-\tau-p_{c}-1}]^{T}\). By allocating a nonzero value to the midpoint of the ISI-free region, this label increases the correlation between the nonzero label feature and the ground truth label, which increases the probability that the estimated STO resides in the ISI-free region.
* _Labeling ISI-free Region:_ By completely exploring the prior information of ISI-free region, the label value is given by [30] \[T_{j}=\left\{\begin{array}{ll}1,&\tau+\tau_{P}\leq j\leq\tau+L_{c}\\ 0,&\text{others}\end{array}\right.,\] (8) and its vector form can be expressed as \(\mathbf{T}=[\mathbf{0}_{1\times\tau+\tau_{P}},\mathbf{1}_{1\times L_{c}-\tau_ {P}+1},\mathbf{0}_{1\times K-\tau-L_{c}-1}]^{T}\). By setting nonzero label values corresponding to the whole ISI-free region, this label provides extra information to increase the correlation between the nonzero label features and the timing indexes of ISI-free region, which greatly enhances the ELM learning. As a result, the ELM-based TS network combined with the label in (8) has a lower error probability of TS than that combined with the label in (7).
However, the complete real-time multi-path delays are difficult to know in advance at the receiver, so the completeness of the learning labels in (7) and (8) faces challenges when the maximum multi-path delay varies.
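For illustration, the three label designs of Case I can be generated directly from their definitions; the short sketch below builds the one-hot label (6), the ISI-free-midpoint label (7), and the ISI-free-region label (8), where the values of \(\tau\), \(\tau_{P}\), \(N\), and \(L_{c}\) are assumptions chosen only for this example.

```python
import numpy as np

N, L_c, tau, tau_P = 128, 32, 10, 20     # assumed illustrative parameters
K = N + L_c                               # label length, K = N_u

# (6) one-hot coding at j = tau + L_c
T_onehot = np.zeros(K); T_onehot[tau + L_c] = 1

# (7) labeling the midpoint of the ISI-free region, p_c = ceil((tau_P + L_c)/2)
p_c = int(np.ceil((tau_P + L_c) / 2))
T_mid = np.zeros(K); T_mid[tau + p_c] = 1

# (8) labeling the whole ISI-free region, tau + tau_P <= j <= tau + L_c
T_region = np.zeros(K); T_region[tau + tau_P: tau + L_c + 1] = 1
```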
#### Iii-A2 Case II
If the information of the CIRs cannot be perfectly known at the receiver, employing a classic TS scheme to collect the training data is a simple and natural option. Taking the TS scheme of [5] as an example, the learning labels can be obtained according to the timing metric. With the received samples \(r_{n}\), the timing metric \(M_{j}\) at the \(j\)th lag of correlation is given by
\[M_{j}=\frac{\left|\sum\limits_{d=0}^{\frac{N}{2}-1}r_{j+d}^{*}r_{j+\frac{N}{2}+d}\right|^{2}}{\left|\sum\limits_{d=0}^{\frac{N}{2}-1}r_{j+\frac{N}{2}+d}^{*}r_{j+\frac{N}{2}+d}\right|^{2}}, \tag{9}\]
and then \(\widehat{\tau}\) is obtained by \(\widehat{\tau}=\mathop{\arg\max}\limits_{0\leq j\leq K-1}\left\{M_{j}\right\}\). With the use of \(\widehat{\tau}\), the generated learning label can be expressed as
\[T_{j}=\left\{\begin{array}{ll}1,&j=\widehat{\tau}\\ 0,&\text{others}\end{array}\right.. \tag{10}\]
This label is direct, and its vector form is denoted as \(\mathbf{T}_{\text{CT}}\in\mathbb{R}^{K\times 1}\) and given by \(\mathbf{T}_{\text{CT}}=[\mathbf{0}_{1\times\widehat{\tau}},1,\mathbf{0}_{1\times K-\widehat{\tau}-1}]^{T}\). We assume that \(N_{t}\) samples of \(\mathbf{r}_{m}\in\mathbb{C}^{(K+N)\times 1}\) are required for the model training. In addition, \(\widehat{\tau}_{m}\) and \(\mathcal{R}_{\text{ISI-free}}^{(m)}\) denote the estimated STO and the ISI-free region for \(\mathbf{r}_{m}\), wherein \(\mathcal{R}_{\text{ISI-free}}^{(m)}\) is given by \(\mathcal{R}_{\text{ISI-free}}^{(m)}=\tau^{(m)}+\{\tau_{P}^{(m)},\tau_{P}^{(m)}+1,\cdots,L_{c}\}\). Then, the metric for the accuracy of the estimated learning labels can be expressed as
\[P_{\text{label}}=\frac{1}{N_{t}}\sum\limits_{m=1}^{N_{t}}\theta\left(\widehat{ \tau}_{m}\right), \tag{11}\]
where
\[\theta\left(\widehat{\tau}_{m}\right)=\left\{\begin{array}{ll}1,&\widehat{ \tau}_{m}\in\mathcal{R}_{\text{ISI-free}}^{(m)}\\ 0,&\widehat{\tau}_{m}\notin\mathcal{R}_{\text{ISI-free}}^{(m)}\end{array}\right.. \tag{12}\]
The above \(\theta\left(\widehat{\tau}_{m}\right)\) indicates whether the estimated STO (i.e., \(\widehat{\tau}_{m}\) used for generating the learning label) is correct or not.
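A compact sketch of this data-collection procedure (Case II) is given below: the timing metric (9) is evaluated over a window of received samples, the STO estimate is taken as its argmax, the one-hot label (10) is formed, and the label accuracy (11)-(12) is accumulated over the collected set. The helper names and the small numerical guard in the denominator are assumptions introduced only for this illustration.

```python
import numpy as np

def timing_metric(r, N):
    """Timing metric M_j of (9) for every admissible lag j."""
    K = len(r) - N
    M = np.zeros(K)
    for j in range(K):
        P = np.sum(np.conj(r[j:j + N // 2]) * r[j + N // 2:j + N])
        R = np.sum(np.abs(r[j + N // 2:j + N]) ** 2)
        M[j] = np.abs(P) ** 2 / (R ** 2 + 1e-12)
    return M

def collect_label(r, N, K):
    """Estimated STO and one-hot learning label (10) from the classic TS scheme."""
    tau_hat = int(np.argmax(timing_metric(r, N)[:K]))
    T = np.zeros(K); T[tau_hat] = 1
    return tau_hat, T

def label_accuracy(tau_hats, isi_free_regions):
    """P_label of (11)-(12): fraction of estimates falling inside the ISI-free region."""
    hits = [1 if t in region else 0 for t, region in zip(tau_hats, isi_free_regions)]
    return float(np.mean(hits))
```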
**Remark 1**.: _According to (12), the obtained labels are partially flawed due to TS errors. That is, when the estimated \(\widehat{\tau}\) points to the edge or outside of the ISI-free region, the correlation between the estimated labels (i.e., \(\widehat{\tau}\)) and the timing indexes of the ISI-free region is seriously impaired. The same issue can be found in other classic TS schemes. To guarantee the efficiency of the training data, a large number of training samples must be collected for selection. For example, if the accuracy of the estimated learning labels is \(P_{\text{label}}\), at least \(N_{t}/P_{\text{label}}\) training samples are required to obtain \(N_{t}\) effective learning labels. As a result, by leveraging the classic TS scheme, the training data collection is unfriendly to limited storage space and transmission bandwidth resources._
To alleviate the above challenge, we develop computer-aided training strategies, which are presented in Section III-B in detail.
### _Computer-Aided Strategy_
It can be seen from (12) that the accuracy of the estimated learning labels is heavily influenced by TS errors. This is because the performance of classic TS schemes is vulnerable to the impacts of the CIRs and noise. To avoid incorrectly labeling the training samples, computers can be utilized to generate the training data (including the training samples and their corresponding learning labels) at the local device, since computer-generated training data is controllable and known to the transceiver. In the following, we mainly discuss the strategy of improving the completeness with respect to varied maximum multi-path delays.
#### Iii-B1 **Loose Constraint**
The completeness of training data is mainly impaired by the profile of the CIRs. In the TS stage, the key insight is to avoid ISI, i.e., to locate the timing indexes in the ISI-free region. To this end, we propose a loose constraint on the profile of the CIRs to improve the completeness of training data. In other words, the loose constraint is utilized to generate the training samples and design the learning labels. The key idea of the proposed loose constraint is motivated by two factors: 1) the realistic maximum multi-path delay dynamically changes within a certain time range; 2) statistical information of multi-path delays in a specific geographical environment is easier to obtain than the realistic multi-path delays themselves. The main motivation is that the r.m.s. delay is usually measured prior to the establishment of terrestrial mobile communication systems, e.g., the wireless communication standards for the 4G and 5G systems [35, 36], so that the r.m.s. delay can constrain the maximum multi-path delay to a specific time range with high probability.
Here, we denote \(\mathcal{L}\) as the loose constraint for the maximum multi-path delay in the training stage, which is utilized to generate the maximum multi-path delays via computer-aided data generation. Specifically, to avoid ISI, \(\tau_{P}<\mathcal{L}<L_{c}\) is imposed, and \(\mathcal{L}\) is sufficient to tolerate an increase of \(\tau_{P}\). Although the underlying mechanism of \(\mathcal{L}\) is similar to that of the CP in terms of avoiding ISI, this loose constraint is only used to generate training data in the training stage, and thus requires no extra overhead to change the frame structure. For the required \(N_{t}\) samples \(\{\mathbf{r}_{m}\}_{m=1}^{N_{t}}\), \(\mathcal{R}_{\text{ISI-free}}^{(m)}=\tau^{(m)}+\{\mathcal{L},\mathcal{L}+1,\cdots,L_{c}\}\) is denoted as the target ISI-free region (i.e., the target STOs for designing the learning label) for each sample \(\mathbf{r}_{m}\). Correspondingly, the ground truth labels for the realistic communication systems are denoted as \(\mathcal{R}_{\text{truth}}=\tau+\{\tau_{P},\tau_{P}+1,\cdots,L_{c}\}\). Then, for the case where \(\tau^{(m)}=\tau\), the correlation between \(\mathcal{R}_{\text{ISI-free}}^{(m)}\) and \(\mathcal{R}_{\text{truth}}\) is described as
\[\mathcal{R}_{\text{ISI-free}}^{(m)}\subset\mathcal{R}_{\text{truth}},\ \forall m\in\{1,2,\cdots,N_{t}\}. \tag{13}\]
According to (13), since \(\mathcal{L}\) is sufficient for the possible maximum multi-path delays, \(\mathcal{R}_{\text{ISI-free}}^{(m)}\) always contains correct learning labels that correspond to the ground truth labels. This improves the completeness of the training data. For example, when \(L_{c}=32\) and \(\tau_{P}\) possibly ranges from 16 to 23, \(\mathcal{L}\) can be set to \(26\) for generating each training sample.
The work in [30] verified that the ELM network with the learning label in (7) has good generalization against the multi-path delays, so we mainly focus on the improvement of the learning label in (8). With the use of \(\mathcal{L}\), the improved learning label is written as
\[T_{j}=\left\{\begin{array}{ll}1,&\tau+\mathcal{L}\leq j\leq\tau+L_{c}\\ 0,&\text{others}\end{array}\right., \tag{14}\]
and its vector form is denoted as \(\mathbf{T}_{\text{LC}}\in\mathbb{R}^{K\times 1}\) and given by \(\mathbf{T}_{\text{LC}}=[\mathbf{0}_{1\times\tau+\mathcal{L}},\mathbf{1}_{1\times L_{c}-\mathcal{L}+1},\mathbf{0}_{1\times K-\tau-L_{c}-1}]^{T}\). Since \(\tau_{P}\) is smaller than \(\mathcal{L}\), the supervised learning-based TS learner combined with \(\mathbf{T}_{\text{LC}}\) can actively avoid the biased behaviors that would degrade its generalization performance.
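As a minimal sketch of the loose-constraint label (14), the lines below build \(\mathbf{T}_{\rm LC}\) for an assumed timing offset; with \(L_{c}=32\) and \(\mathcal{L}=26\), the nonzero segment covers \(L_{c}-\mathcal{L}+1=7\) entries.

```python
import numpy as np

K, L_c, L_loose, tau = 160, 32, 26, 10   # assumed illustrative parameters
T_LC = np.zeros(K)
T_LC[tau + L_loose: tau + L_c + 1] = 1   # (14): ones on [tau + L, tau + L_c]
```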
However, according to (13), the loose constraint-based strategy for training data generation lacks a sufficient number of training labels that have a strong correlation with the ground truth labels, since \(\mathcal{R}_{\text{truth}}\neq\mathcal{R}_{\text{ISI-free}}^{(m)}\) or \(\mathcal{R}_{\text{truth}}\not\subset\mathcal{R}_{\text{ISI-free}}^{(m)}\). On the one hand, if \(\mathcal{R}_{\text{truth}}\equiv\mathcal{R}_{\text{ISI-free}}^{(m)}\), the training labels show a strong correlation with the ground truth labels, which facilitates the performance of TS learners. On the other hand, if \(\mathcal{R}_{\text{truth}}\subset\mathcal{R}_{\text{ISI-free}}^{(m)}\), the training data set can be regarded as the complete set of possible maximum multi-path delays, which improves the statistical efficiency of the TS learner. As a result, without these two properties, the quality and efficiency of the supervised learning-based TS learner are degraded.
#### Iii-B2 **Flexible Constraint**
A flexible constraint has the ability to improve the adaptability to various demands in practical applications, such as the flexible constraint-length Viterbi decoder [37]. For the supervised learning-based TS learner, the key insight of introducing the flexible constraint into the design of the training strategy is that increasing the correlation between the training labels and the ground truth labels helps to improve the statistical power of the supervised learning-based TS learner. To this end, we propose the flexible constraint for generating the training data on the basis of the loose constraint.
By denoting \(\widetilde{\mathcal{L}}\) as the flexible constraint for the maximum multi-path delay in the training stage, the target STOs for designing the learning label are denoted by \(\mathcal{R}_{\text{ISI-free}}^{(m)}=\tau^{(m)}+\{\widetilde{\mathcal{L}}^{(m)},\widetilde{\mathcal{L}}^{(m)}+1,\cdots,L_{c}\}\), with \(\widetilde{\mathcal{L}}^{(m)}\) being distributed as \(\widetilde{\mathcal{L}}^{(m)}\sim\mathcal{U}(L_{c}/2,\mathcal{L})\). Similarly, we denote \(\mathcal{R}_{\text{truth}}=\tau+\{\tau_{P},\tau_{P}+1,\cdots,L_{c}\}\) as the ground truth labels for the realistic communication systems. Then, for the cases where \(\tau_{P}>L_{c}/2\) and \(\tau^{(m)}=\tau\), the correlation between \(\mathcal{R}_{\text{ISI-free}}^{(m)}\) and \(\mathcal{R}_{\text{truth}}\) is expressed as
\[\mathcal{R}_{\text{ISI-free}}^{(m)}\supseteq\mathcal{R}_{\text{truth}},\ \forall m \in\{1,2,\cdots,N_{t}\}. \tag{15}\]
With this, if the amount of training samples is sufficient, the given \(\mathcal{R}_{\text{ISI-free}}^{(m)}\) can be approximately regarded as the complete
set of possible maximum multi-path delays in some cases. As for the cases where \(\tau_{P}<L_{c}/2\) and \(\tau^{(m)}=\tau\), the correlation given in (15) can be rewritten as \(\mathcal{R}_{\text{ISI-free}}^{(m)}=\mathcal{R}_{\text{truth}},~{}\exists m\in\{1,2,\cdots,N_{t}\}\), wherein \(\tau_{P}<L_{c}/2\) can be regarded as a special case of \(\mathcal{L}\), i.e., \(\mathcal{L}=L_{c}/2\). Therefore, the proposed flexible constraint inherits the advantages of the proposed loose constraint and also improves the correlation between the training labels and the ground truth labels.
By leveraging \(\widetilde{\mathcal{L}}\), the improved learning label is given by
\[T_{j}=\left\{\begin{array}{l}1,~{}\tau+\widetilde{\mathcal{L}}\leq j\leq\tau+L_{c}\\ 0,~{}\text{others}\end{array}\right., \tag{16}\]
and its vector form is denoted as \(\mathbf{T}_{\text{FC}}\in\mathbb{R}^{K\times 1}\) and given by \(\mathbf{T}_{\text{FC}}=[\mathbf{0}_{1\times\tau+\widetilde{\mathcal{L}}},\mathbf{1}_{1\times L_{c}-\widetilde{\mathcal{L}}+1},\mathbf{0}_{1\times K-\tau-L_{c}-1}]^{T}\). Due to the fact that \(\widetilde{\mathcal{L}}\sim\mathcal{U}\left(L_{c}/2,\mathcal{L}\right)\), the supervised learning-based TS learner can learn from more of the training data that may appear in practical environments. This improves the correlation between the training labels and the ground truth labels, and thus improves the statistical power of the TS learner. Moreover, instead of collecting sufficient data features from realistic systems, a large number of rich and diverse data features can be rapidly generated through \(\widetilde{\mathcal{L}}\) at local devices. Thus, not only are the occupations of storage space and transmission bandwidth caused by frequently collecting the desired data features avoided, but more memory traces of the changing CIR profile can also be retained by the neural network.
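Analogously, a sketch of the flexible-constraint label (16) draws \(\widetilde{\mathcal{L}}^{(m)}\sim\mathcal{U}(L_{c}/2,\mathcal{L})\) independently for every training sample, so that the generated set spans a range of effective maximum multi-path delays; the sample count below is an arbitrary illustrative choice.

```python
import numpy as np

K, L_c, L_loose, tau, N_t = 160, 32, 26, 10, 1000   # assumed parameters
rng = np.random.default_rng(0)

labels = np.zeros((N_t, K))
for m in range(N_t):
    L_flex = int(rng.integers(L_c // 2, L_loose + 1))   # draw L~ in [L_c/2, L]
    labels[m, tau + L_flex: tau + L_c + 1] = 1           # (16)
```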
**Remark 2**.: _It is impractical to completely collect training data from realistic systems, since the received samples are unavailable before the completion of synchronization. For the supervised learning-based TS learner, local devices (i.e., employing the central processing unit) can instead generate efficient and more complete training data. By this means, once the computer-aided approach is adopted, the training samples and their corresponding learning labels can be generated and released in a timely manner. Compared with the training data collected by the classic TS schemes, more complete and accurate training data can be generated at the local device, which greatly reduces the storage consumption and bandwidth occupation of collecting large-scale data from realistic systems. It is worth mentioning that the more complete the training data features are, the higher the statistical efficiency of the neural network will be. Thus, a good generalization capability of the ELM-based TS network and low occupations of storage space and transmission bandwidth can be achieved by the proposed loose constraint and flexible constraint strategies._
### _ELM-Based TS Network Architecture_
The proposed ELM-based TS network consists of two phases, i.e., a pre-processing phase of feature extraction and an ELM batch learning phase. In the pre-processing phase, the received signals are processed via a classic TS scheme for feature extraction. The timing metrics calculated from the raw received samples are regarded as the extracted features, which are then fed into the ELM batch learning phase. The extracted timing-metric features should be normalized by the \(\ell_{2}\)-norm. Following the pre-processing phase, the ELM batch learning phase is carried out to learn the mapping relationship between the input features and the designed learning labels. To this end, the training data sets are generated with the proposed computer-aided training strategies (i.e., the loose constraint- and flexible constraint-based strategies in Section III-B), which are controllable, distinct, and reliable for the ELM learning.
For convenience of expression, the preliminary preparation of the ELM-based TS network is characterized below. Selecting \(N_{h}\) hidden-layer nodes, we denote \(\boldsymbol{\mathcal{D}}_{\text{ca}}=\{(\mathbf{r}_{m},\mathbf{T}_{m})|\mathbf{r}_{m}\in\mathbb{C}^{N_{w}\times 1},\mathbf{T}_{m}\in\mathbb{R}^{K\times 1},m=1,2,\cdots,N_{t}\}\) as the original training data set, wherein \(N_{w}\) denotes the size of the observation window for the received signals, and \(K\) is the size of the observation window for the timing metrics. Specifically, the received samples \(\{\mathbf{r}_{m}\}_{m=1}^{N_{t}}\) are obtained according to (1)-(4) without the interference of CFO and noise. The generation of the learning labels (i.e., \(\{\mathbf{T}_{m}\}_{m=1}^{N_{t}}\)) is given in Section III-B. Besides, the architecture of the adopted ELM network is presented in Table II.
#### Iii-C1 Offline Training
Algorithm 1 summarizes the process of offline training for the ELM-based TS network, and the detailed description is elaborated as follows.
**Step 1) Initial Feature Extraction:** The features are initially extracted using the received samples \(\{\mathbf{r}_{m}\}_{m=1}^{N_{t}}\) from the given training data set \(\mathcal{D}_{\text{ca}}\). Taking the classic synchronizer (i.e., the TS scheme in [5]) as an example, the specific preamble consists of two identical parts. In addition, the observed sub-vector of \(\mathbf{r}_{m}\) at the \(j\)th lag of correlation, denoted as \(\mathbf{r}_{m,j}=[\mathbf{r}_{m}]_{j:j+N-1}\), is expressed as
\[\mathbf{r}_{m,j}=[r_{m,j},\cdots,r_{m,j+d},\cdots,r_{m,j+N-1}]^{T}, \tag{17}\]
where \(j\leq N_{w}-N+1\) and \(N_{w}=2N_{u}\) with the aim of observing a complete preamble at the receiver. Then, the timing metric is employed as the ELM input and given by
\[M_{m,j}=\frac{{|P_{m,j}|}^{2}}{\frac{1}{4}{|R_{m,j}|}^{2}}, \tag{18}\]
where \(P_{m,j}\) and \(R_{m,j}\) stand for the autocorrelation factor [5] and the normalization factor [38], respectively, which are defined as
\[\left\{\begin{array}{l}P_{m,j}=\sum\limits_{d=0}^{\frac{N}{2}-1}r_{m,j+d}^{ *}r_{m,j+d+\frac{N}{2}}\\ R_{m,j}=\sum\limits_{d=0}^{N-1}r_{m,j+d}^{*}r_{m,j+d}\end{array}\right.. \tag{19}\]
The iteration formula for the timing metric is derived in [5] to reduce the time complexity, i.e.,
\[M_{m,j+1}=\frac{\left|P_{m,j}+r_{m,j+\frac{N}{2}}^{*}r_{m,j+N}-r_{m,j}^{*}r_{m,j +\frac{N}{2}}\right|^{2}}{\frac{1}{4}\Big{|}R_{m,j}+r_{m,j+N}^{*}r_{m,j+N}-r_{m,j}^{*}r_{m,j}\Big{|}^{2}}. \tag{20}\]
By buffering \(K\)-samples of \(M_{m,j}\), the timing metric vector \(\mathbf{M}_{m}\in\mathbb{R}^{K\times 1}\) is constructed by
\[\mathbf{M}_{m}=[M_{m,0},M_{m,1},\cdots,M_{m,K-1}]^{T},\ K=N_{u}. \tag{21}\]
To increase the accuracy of the classification and the learning efficiency, \(\mathbf{M}_{m}\) is normalized by \(\ell_{2}\)-norm, which is expressed as
\[\widetilde{\mathbf{g}}_{m}=\frac{\mathbf{M}_{m}}{\left\|\mathbf{M}_{m}\right\| _{2}}, \tag{22}\]
where \(\widetilde{\mathbf{g}}_{m}\in\mathbb{R}^{K\times 1}\) denotes the normalized timing metric vector. For the subsequent ELM batch learning, the above (1)-(4) and (17)-(22) are repeated until \(N_{t}\) samples of \(\widetilde{\mathbf{g}}_{m}\) are obtained.
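Putting Step 1 together, the sketch below evaluates the timing-metric vector (18)-(21) directly at each lag for one received sample and normalizes it by its \(\ell_{2}\)-norm as in (22); the recursive update (20) is omitted here for brevity, and the small guard added to the denominator is an assumption for numerical safety.

```python
import numpy as np

def extract_feature(r_m, N, K):
    """Normalized timing-metric vector g~_m of (18)-(22); assumes len(r_m) >= K + N."""
    M = np.zeros(K)
    for j in range(K):
        P = np.sum(np.conj(r_m[j:j + N // 2]) * r_m[j + N // 2:j + N])   # (19)
        R = np.sum(np.abs(r_m[j:j + N]) ** 2)                            # (19)
        M[j] = np.abs(P) ** 2 / (0.25 * R ** 2 + 1e-12)                  # (18)
    return M / np.linalg.norm(M)                                         # (22)
```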
**Step 2) ELM Batch Learning:** Denote the \(N_{t}\) samples \(\{(\widetilde{\mathbf{g}}_{m},\mathbf{T}_{m})|\widetilde{\mathbf{g}}_{m}\in\mathbb{R}^{K\times 1},\mathbf{T}_{m}\in\mathbb{R}^{K\times 1},m=1,2,\cdots,N_{t}\}\) as the training data for the ELM learning. To initialize the network, the input weight matrix \(\mathbf{W}\in\mathbb{R}^{N_{h}\times K}\) and hidden bias vector \(\mathbf{b}\in\mathbb{R}^{N_{h}\times 1}\) are randomly drawn from the standard Gaussian distribution \(\mathcal{N}(0,1)\). For the \(m\)th training sample, the hidden-layer output vector is \(\mathbf{h}_{m}=\sigma(\mathbf{W}\widetilde{\mathbf{g}}_{m}+\mathbf{b})\), where \(\sigma(\cdot)\) denotes the activation function. By stacking \(\{\mathbf{h}_{m}^{T}\}_{m=1}^{N_{t}}\) row-wise into the hidden-layer output matrix \(\mathbf{H}\in\mathbb{R}^{N_{t}\times N_{h}}\) and \(\{\mathbf{T}_{m}^{T}\}_{m=1}^{N_{t}}\) into the target matrix \(\mathbf{T}\in\mathbb{R}^{N_{t}\times K}\), the output weight matrix is obtained in closed form as \(\boldsymbol{\beta}=\mathbf{H}^{\dagger}\mathbf{T}\), where \(\mathbf{H}^{\dagger}\) denotes the Moore-Penrose generalized inverse of \(\mathbf{H}\).
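The closed-form batch learning of Step 2 can be sketched as follows: the hidden-layer parameters are drawn once from \(\mathcal{N}(0,1)\), the hidden-layer output matrix is formed, and the output weights are obtained through the Moore-Penrose pseudo-inverse. The sigmoid activation and the function names are assumptions made only for this illustration; online TS then takes the argmax of the network output.

```python
import numpy as np

def elm_train(G, T, N_h, seed=0):
    """Batch ELM training: G is the (N_t, K) input matrix, T the (N_t, K) label matrix."""
    rng = np.random.default_rng(seed)
    K = G.shape[1]
    W = rng.standard_normal((N_h, K))           # input weights ~ N(0, 1)
    b = rng.standard_normal(N_h)                # hidden biases ~ N(0, 1)
    H = 1.0 / (1.0 + np.exp(-(G @ W.T + b)))    # hidden-layer outputs (sigmoid assumed)
    beta = np.linalg.pinv(H) @ T                # output weights via the pseudo-inverse
    return W, b, beta

def elm_predict(g, W, b, beta):
    """Online TS: the estimated STO is the argmax of the ELM output for one feature g."""
    h = 1.0 / (1.0 + np.exp(-(W @ g + b)))
    return int(np.argmax(h @ beta))
```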
Unless otherwise specified, the default value of \(\tau_{P}\) is set as 20 sample delays. Correspondingly, the default loose constraint (i.e., \(\mathcal{L}\)) is set as 26 sample delays, and the flexible constraint (i.e., \(\widetilde{\mathcal{L}}\)) satisfies \(\widetilde{\mathcal{L}}\sim\mathcal{U}(16,26)\).
Some definitions in the simulations are given as follows. The signal-to-noise-ratio (SNR) in decibel (dB) [28] is considered in the following simulations, i.e.,
\[\mathrm{SNR}_{p}=10\log_{10}\frac{\sigma_{d}^{2}}{\sigma_{n}^{2}}. \tag{28}\]
An erroneous TS means that the estimated STO points outside the ISI-free region. The error probability of TS, denoted as \(P_{e}\), is defined as
\[P_{e}=\frac{Q_{0}}{Q_{1}}, \tag{29}\]
where \(Q_{0}\) denotes the number of arriving preambles with error estimated STOs, and \(Q_{1}\) denotes the total number of transmitted preambles.
For the sake of clarity, we denote "DatCol" as the existing strategy for collecting training data (including the training samples and the estimated labels), in which the collection of training data is conducted by the classic TS scheme (e.g., [8]) with \(\mathrm{SNR}_{p}=20\) dB, and "Ref [30]" as the strategy of [30] for generating learning labels. Correspondingly, "Prop_LC" and "Prop_FC" denote the proposed loose constraint-based strategy and the proposed flexible constraint-based strategy for generating training data, respectively. In addition, we adopt the TS scheme of [8], denoted as "Corr", as the baseline in the simulations. For a fair comparison, except for the parameters under comparison, the given TS methods in each figure (e.g., Fig. 2 to Fig. 7) have the same parameter settings.
### _Effectiveness Analysis_
In Fig. 2(a) and Fig. 2(b), the accuracy and completeness of learning labels are validated, respectively. Additionally, the storage consumption and bandwidth occupation of different strategies for obtaining training data are given in Table III.
From Fig. 2(a), the error probability of "DatCol" is higher than those of the proposed "Prop_LC" and "Prop_FC". This is due to the fact that the collected training data set includes incorrect learning labels according to (12). Therefore, the existing strategy of training data collection (i.e., "DatCol") inevitably deteriorates the performance of the ELM-based TS network, increasing the error probability of TS. By contrast, the proposed strategies "Prop_LC" and "Prop_FC" achieve smaller error probabilities of TS than "DatCol". This indicates that the proposed strategies for training data generation avoid incorrectly estimated learning labels, owing to the good controllability of computer-aided data generation.
To present the merits of the proposed TS methods in improving the TS correctness under changing wireless scenarios, Fig. 2(b) depicts the error probability of TS for different testing maximum multi-path delays. From Fig. 2(b), when \(\tau_{P,\text{test}}>\tau_{P,\text{train}}\), the ELM-based TS networks trained with the strategies of "DatCol" and "Ref [30]" reach higher error probabilities than those of "Prop_LC" and "Prop_FC", due to the impact of the impaired completeness of training data. Especially for "Ref [30]", over the whole SNR region its error probabilities significantly increase when \(\tau_{P,\text{test}}>\tau_{P,\text{train}}\). For example, when \(\tau_{P,\text{test}}=24\), the error probability of "DatCol" is higher than 0.25 for each given value of SNR, and the error probabilities of "Ref [30]" are higher than 0.15 for the relatively high SNRs. As a result, the ELM-based TS
Fig. 2: The error probability of TS vs. SNRs, where \(N=128\), \(L_{e}=32\), and \(\eta=0.2\). \(\mathcal{L}\) and \(\widetilde{\mathcal{L}}\) denote the proposed loose constraint-based strategy and the proposed flexible constraint-based strategy for generating the training data, respectively. \(\tau_{P,\text{train}}\) and \(\tau_{P,\text{test}}\) stand for the training maximum multi-path delay and the testing maximum multi-path delay, respectively.
networks trained with the existing strategies of "DatCol" and "Ref [30]" are hindered from practical application by the impaired completeness of training data. By contrast, after training the TS network, the error probabilities of "Prop_LC" and "Prop_FC" are much smaller than those of "DatCol" and "Ref [30]" when \(\tau_{P,\text{test}}=24\), for all given SNRs. This reflects that the proposed strategies of "Prop_LC" and "Prop_FC" can generate more complete training data. This is because "Prop_LC" and "Prop_FC" utilize channel models with randomly generated multi-path delays (i.e., \(\tau_{P,\text{train}}=\mathcal{L}\) for "Prop_LC" and \(\widetilde{\mathcal{L}}\sim\mathcal{U}(16,26)\) for "Prop_FC") for model training; the diverse constraints generate more power delay profiles (PDPs), which increases the diversity and thus the completeness of the training data. Then, good completeness of the training data facilitates the performance of the ELM-based TS networks, reducing the error probability of TS. For example, when \(\mathrm{SNR}_{p}\geq 10\) dB and \(\tau_{P,\text{test}}=24\), the error probabilities of "Prop_LC" and "Prop_FC" are lower than \(0.0583\). Therefore, compared with the existing strategies of "DatCol" and "Ref [30]" for obtaining training data, the proposed computer-aided strategies for generating training data (i.e., "Prop_LC" and "Prop_FC") obtain a more complete training data set, providing improvement in the reduction of TS errors against the parameter \(\tau_{P,\text{test}}\).
According to (12), the estimated learning labels critically depend on the error probability of TS during the training data collection. Assuming the training data are collected under \(\mathrm{SNR}_{p}=20\) dB as shown in Fig. 2, the error probability of TS equals about 0.257, so that the accuracy of the estimated learning labels is about \(0.743\). To collect \(10^{5}\) effective training samples (i.e., \(10^{5}\) correct learning labels), at least \(1.35\times 10^{5}\) received samples are required. Since the collected samples need to be transmitted from the data-collection device to the Central Processing Unit, both storage and bandwidth are occupied. From Table III, we can see that the storage cost of the proposed strategies (i.e., "Prop_LC" and "Prop_FC") is less than that of "DatCol", since the method of data generation is known to the transceiver. That is, the training data can be generated and released in a timely manner, leading to less storage consumption compared with "DatCol". Furthermore, there are no incorrect learning labels, and the data transmission (i.e., bandwidth occupation) is avoided. If the transmission bandwidth is given by \(100\) MHz, "DatCol" requires 50.6822 sec for transmitting the collected samples, as shown in Table III. By contrast, "Prop_LC" and "Prop_FC" avoid bandwidth occupation for transmitting training samples over the wireless link, i.e., the bandwidth occupation in seconds equals 0. _This is because the training data is generated at the local receiver, which does not occupy the transmission bandwidth._ As a result, compared with "DatCol", the proposed "Prop_LC" and "Prop_FC" significantly reduce the storage consumption and bandwidth occupation.
In summary, compared with "DatCol", the error probability of TS reveals that the proposed strategies of "Prop_LC" and "Prop_FC" can avoid incorrect learning labels and improve the completeness of training data. In addition, compared with "DatCol" in Table III, the proposed strategies of "Prop_LC" and "Prop_FC" require less storage consumption and bandwidth occupation, which is friendly to practical implementation.
### _Generalization Analysis_
The more complete the training data is, the stronger the statistical efficiency of the neural network will be. Hence, we utilize the error probability of TS to validate the generalization performance of the designed TS learner (i.e., the ELM-based TS network with the training strategies of "Prop_LC" and "Prop_FC"). In addition, we do not discuss the generalization performance of "DatCol", since the TS performance of "DatCol" is mainly affected by incorrect learning labels in the training data collection. Except for the parameters under discussion, the other basic parameters remain the same as those adopted in Section IV-B.
From Fig. 3, for each given value of \(\tau_{P,\text{test}}\), the error probability of "Prop_FC" is less than those of "Corr" and "Ref [30]" over the whole SNR region. This is due to the fact that the proposed strategy of "Prop_FC" can generate more complete training data, which improves the generalization performance against the impact of \(\tau_{P,\text{test}}\). For \(\tau_{P,\text{test}}=22\), "Ref [30]" reaches a lower error probability of TS than "Prop_LC". Even so, as \(\tau_{P,\text{test}}\) increases to \(24\), the error probability of "Ref [30]" becomes significantly higher than that of "Prop_LC" for the relatively high SNRs. As a result, compared with the existing strategy of "Ref [30]", the proposed strategies of "Prop_LC" and "Prop_FC" are better able to deal with varied maximum multi-path delays.
### _Robustness Analysis_
This subsection presents the robustness analysis of the proposed TS scheme against the changed parameters such as the length of CP (i.e., \(L_{c}\)), the size of OFDM symbol (i.e., \(N\)), and the decay factor (i.e., \(\eta\)), etc. In addition, different TS schemes (e.g., [5, 6]) are adopted as the feature
Fig. 3: The generalization analysis with respect to the maximum multi-path delay, where \(N=128\), \(L_{c}=32\), \(\eta=0.2\), and different testing values of \(\tau_{P,\text{test}}\) (i.e., \(\tau_{P,\text{test}}=22\) and \(\tau_{P,\text{test}}=24\)) are considered.
extraction, which aims to reveal the robustness against the choice of feature extraction. The error probabilities of the proposed TS scheme are plotted in Fig. 4 to Fig. 7. For the sake of clarity, "Ref [5]" and "Ref [6]" denote that the TS schemes in [5] and [6] are utilized for the feature extraction, respectively. The ELM-based TS network that directly learns from the received samples is denoted as "DS_Learn", i.e., leveraging the received samples as the ELM input without the procedure of feature extraction. In the following simulations, unless otherwise specified, the parameters remain the same as those mentioned in Section IV-B.
#### Iv-B1 Robustness Against the TS Scheme for Feature Extraction
Fig. 4 illustrates the error probability of TS to present the robustness against the impact of feature extraction. From Fig. 4, it can be observed that the error probability of "DS_Learn" is higher than that of "Corr", even for the relatively high SNRs. This reflects that the ELM-based TS network cannot work well without the process of feature extraction. For each method of feature extraction, the error probabilities of "Prop_LC" and "Prop_FC" are smaller than that of "Corr". This is due to the fact that the TS schemes used for feature extraction successfully extract the hidden state information about the STOs, and thus improve the correctness of estimating the STOs. In addition, the ELM-based TS networks with different feature extractions exhibit different performance improvements in reducing the error probability of TS. Several reasons may explain this result. 1) The TS scheme of [6] is vulnerable to the impact of channel fading. 2) The TS scheme of [6] provides a sharper correlation peak than that of [5], but the timing index of the correlation peak is largely independent of the ISI-free region. 3) The TS scheme in [5] has more robust features highlighting the ISI-free region than that in [6]. Therefore, the ELM-based TS network assisted by the TS scheme of [5] outperforms that assisted by [6] for the relatively high SNRs. To sum up, compared with "Corr" and "DS_Learn", the ELM-based TS network with feature extraction reaches significantly lower error probabilities of TS.
#### Iv-B2 Robustness Against \(L_{c}\)
To reveal the robustness of the proposed TS scheme against the impact of \(L_{c}\), Fig. 5 illustrates the error probability of TS by changing \(L_{c}\). From Fig. 5, for each given value of \(L_{c}\), the error probabilities of "Prop_LC" and "Prop_FC" are smaller than that of "Corr" in the whole SNR region. It is worth noting that, by decreasing \(L_{c}\), the error probabilities of "Prop_LC" and "Prop_FC" are enlarged. Especially, "Prop_FC" reaches a smaller error probability than "Prop_LC". The reason is that "Prop_FC" owns a more complete profile of CIRs, which facilitates the statistical efficiency of the TS network. Although the ISI-free region is shortened by narrowing the length of CP, the proposed "Prop_LC" and "Prop_FC" still obtain a robust improvement in reducing the error probability of TS.
#### Iv-B3 Robustness Against \(N\)
Fig. 6 depicts the error probability of TS against the impacts of varying \(N\). From Fig. 6, for each given value of \(N\), the error probability of "Corr" is higher than those of "Prop_LC" and "Prop_FC" in the whole SNR region. Among the proposed strategies of "Prop_LC" and "Prop_FC", "Prop_FC" reveals a remarkable improvement in the reduction of error probability relative to "Prop_LC", for each given \(N\). This is due to the fact that, "Prop_FC" has the ability to improve the correlation between the training labels and the ground truth labels, and thus provides improvements in reducing error probability of TS. For example, when \(\mathrm{SNR}_{p}=12\)dB and \(N=256\), the error probability of "Prop_LC" is around \(0.084\), while it achieves 0.014 for "Prop_FC". Although the error probability of TS increases with the increase of \(N\), the proposed "Prop_LC" and "Prop_FC" still outperform "Corr". As a result, the proposed strategies of "Prop_LC" and "Prop_FC" achieve robust improvements in the reduction of TS errors against the impact of \(N\).
Fig. 4: The Error probability of TS vs. SNRs, where \(N=128\), \(L_{c}=32\), \(\eta=0.2\), \(\tau_{P}=20\), and different TS schemes are used for feature extraction.
#### Iv-B4 Robustness Against \(\eta\)
To reflect the robustness of the proposed TS scheme against the impact of \(\eta\), Fig. 7 depicts the error probability of TS where \(\eta\) varies from 0.05 to 0.35 with an interval of 0.15. In Fig. 7, the error probability of "Corr" decreases with the decrease of \(\eta\). This is because the correlation platform/peaks obtained by "Corr" are enhanced by a large value of \(\eta\), so that the timing instant of the maximum peak appears in advance, within the ISI region of the CP. For each given value of \(\eta\) in Fig. 7, the error probability of "Corr" is higher than those of "Prop_LC" and "Prop_FC" over the whole SNR region. For the relatively high SNRs, the error probability of "Prop_FC" is smaller than that of "Prop_LC". This indicates the superiority of "Prop_FC" in reducing TS errors. It can also be observed that the error probabilities of "Prop_LC" and "Prop_FC" decrease with the increase of \(\eta\). This is likely because an enlarged \(\eta\) strengthens the power of the first propagation path; the stronger the first propagation path is, the stronger the timing features will be, which improves the learning efficiency of the ELM-based TS network. In addition, although the differences among these error probabilities are not obvious as \(\eta\) varies, the ELM-based TS networks with the proposed strategies of "Prop_LC" and "Prop_FC" still outperform "Corr" in reducing TS errors. In a word, compared with "Corr", the proposed strategies of "Prop_LC" and "Prop_FC" provide a robust and effective improvement in reducing the error probability of TS.
## V Conclusion
In this paper, we propose two computer-aided strategies to generate the training data and construct an ELM-based TS network for OFDM systems. Compared with training data collected from realistic systems, the desired training data can be generated in a timely manner by our proposed strategies at the local device, which avoids the occupation of storage space and transmission bandwidth during training data collection. Furthermore, the proposed computer-aided training strategies increase the completeness of the training data against the maximum multi-path delay, thereby improving the statistical efficiency of the ELM-based TS network. Numerical results exhibit that the proposed ELM-based TS scheme is superior in reducing the error probability of TS and possesses good generalization performance against the maximum multi-path delay.
## VI Acknowledgement
This work is supported in part by the Sichuan Science and Technology Program (Grant No. 2021JDRC0003, 2023YFG0316, 2021YFG0064), the Demonstration Project of Chengdu Major Science and Technology Application (Grant No. 2020-YFG9-00048-SN), the Special Funds of Industry Development of Sichuan Province (Grant No. zyf-2018-056), and the Industry University Research Innovation Fund of China University (Grant No. 2021ITA10016).
|
2307.16547 | On the 'Loose' Constraint from IceCube Neutrino Non-Detection of GRB
230307A | The recent extremely bright gamma-ray burst (GRB), GRB 230307A from a binary
neutron star merger may offer a good probe for the production of GRB-neutrinos.
Within the constraint from IceCube neutrino non-detection, the limits for key
physical parameters of this burst are extracted in different scenarios
including the fireball, Poynting-flux-dominated (PFD) and hybrid jet. Different
from the former nearby `monsters' and due to its smaller isotropic equivalent
radiated energy ($E_{\gamma,\rm iso}\sim4\times10^{52}$ erg), the constraint
seems loose if non-thermal neutrinos produced from photomeson interactions are
the only consideration. However, a quasi-thermal neutrino emission from
hadronuclear processes is constrained in this neutron-rich post-merger
environment, and the upper limit of the allowed nucleon loading factor is about
a few. Based on this, a discussion is presented on the possible prompt emission
mechanism and jet composition for GRB 230307A in the context of multi-messenger
astrophysics. It is worth noting that till now no GRB-neutrinos have been ever
detected, even for the two brightest nearby GRBs ever observed (GRB 221009A and
GRB 230307A) which have different dissipation mechanisms. | Xin-Ying Song | 2023-07-31T10:27:36Z | http://arxiv.org/abs/2307.16547v7 | # On the 'Loose' Constraint from IceCube Neutrino Non-Detection of GRB 230307A
###### Abstract
The recent extremely bright gamma-ray burst (GRB), GRB 230307A from a binary neutron star merger may offer a good probe for the production of GRB-neutrinos. Within the constraint from IceCube neutrino non-detection, the limits for key physical parameters of this burst are extracted in different scenarios including the fireball, Poynting-flux-dominated (PFD) and hybrid jet. Different from the former nearby'monsters' and due to its smaller isotropic equivalent radiated energy (\(E_{\gamma,{\rm iso}}\sim 4\times 10^{52}\) erg), the constraint seems loose if non-thermal neutrinos produced from photomeson interactions are the only consideration. However, a quasi-thermal neutrino emission from hadronuclear processes is constrained in this neutron-rich post-merger environment, and the upper limit of the allowed nucleon loading factor is about a few. Based on this, a discussion is presented on the possible prompt emission mechanism and jet composition for GRB 230307A in the context of multi-messenger astrophysics. **It is worth noting that till now no GRB-neutrinos have been ever detected, even for the two brightest nearby GRBs ever observed (GRB 221009A and GRB 230307A) which have different dissipation mechanisms.**
Gamma-ray bursts (629); Neutrino astronomy (1100)

Xin-Ying Song
## 1 Introduction
For decades, GRBs have been proposed to be ultra-high-energy cosmic-ray (UHECR) accelerators and potential sources of astrophysical high-energy neutrinos in the GRB fireball scenario (e.g. Paczynski & Xu, 1994; Waxman, 1995; Vietri, 1995; Hummer et al., 2012). There are mainly two mechanisms for the production of neutrinos relevant to GRBs: 1) the photomeson (\(p\gamma\)) process (e.g. Waxman & Bahcall, 1997), in which the photons of the GRB are the guaranteed target photons; as predicted in model-dependent theories, the prompt emission sites are different, which could affect the fluence of this kind of non-thermal (NT) neutrinos (e.g. Zhang & Kumar, 2013). 2) Hadronuclear (\(pn\), \(pp\) and \(nn\)) interactions (e.g. Bahcall & Meszaros, 2000; Koers & Giannios, 2007; Murase et al., 2013; Bartos et al., 2013), which produce quasi-thermal (QT) neutrinos during the expansion of the hot jet material of the outflow; neutrons could be decoupled from protons if the bulk Lorentz factor (\(\Gamma\)) is large enough, so that inelastic collisions could occur and pions could be produced. Neutrinos of 5-10 GeV could be generated after the pions decay; alternatively, if the coasting occurs earlier than \(np\) decoupling, the dissipation of neutrons via internal collisions between the compound flows may happen, which could also generate QT neutrinos in the sub-TeV range (e.g. Murase et al., 2013). Note that the neutrino emission could occur during the prompt phase of a GRB as well as during the (choked) jet's propagation within the macronova/kilonova ejecta or a massive progenitor envelope (e.g. Meszaros & Waxman, 2001; Murase & Ioka, 2013; Senno et al., 2016; Kimura et al., 2018), as a precursor of the GRB. However, till now, no evident correlation between neutrino events and observed GRBs has been found (e.g. Aartsen et al., 2017; Abbasi et al., 2023).
It is usually proposed that long GRBs originate from the collapses of massive stars, while short GRBs are produced by mergers of compact stars. However, there are some exceptions, such as GRB 060614 (e.g. Gehrels et al., 2006) and GRB 211211A (e.g. Yang et al., 2022). Recently, another exception, GRB 230307A, associated with kilonova emission (Levan et al., 2023; Fermi GBM Team, 2023; Xiong et al., 2023; Svinkin et al., 2023), has arisen. The isotropic-equivalent radiated energy \(E_{\gamma,{\rm iso}}\sim 4\times 10^{52}\) erg of GRB 230307A is smaller compared with other
nearby bright GRBs (e.g., \(E_{\gamma,\rm iso}\sim 10^{54}\) erg for GRB 130427A and \(E_{\gamma,\rm iso}\sim 2\times 10^{54}\) erg for GRB 221009A); thus it could be inferred that the features of the burst cannot be well constrained if the neutrino emission from \(p\gamma\) interactions is the only consideration. However, with the favoured redshift \(z\sim 0.065\) (e.g. O'Connor et al., 2023; Levan et al., 2023), \(d_{L}^{2}\) of GRB 230307A is about 6 times smaller than that of GRB 221009A, and 37 times smaller than that of GRB 130427A, where \(d_{L}\) is the luminosity distance of the GRB source. Therefore it may still be a good probe for GRB-neutrinos.
It is worth noting that the IceCube Neutrino Observatory provides an upper limit (U.L.) on muon neutrinos (\(\nu_{\mu}\)) of 1.0 GeV cm\({}^{-2}\) at 90% C.L. (IceCube Collaboration, 2023), which seems less constraining than the limit of \(3.9\times 10^{-2}\) GeV cm\({}^{-2}\) at 90% C.L. for GRB 221009A (IceCube Collaboration, 2022). Besides, there exists a significant photospheric emission in the prompt phase of GRB 230307A, which is different from the synchrotron radiation-dominated GRB 221009A, whose jet is dominated by the Poynting flux (Yang et al., 2023). The emission mechanism as well as the jet composition could affect the production of the neutrinos (Hummer et al., 2012; Gao et al., 2012; Pitik et al., 2021); thus one may wonder about the limits on the key features of this burst within this 'loose' constraint, in different scenarios of emission mechanism and jet composition. **The neutrino annihilation mechanism and magnetohydrodynamic processes may both affect the jet composition, which could occur in the scenarios of massive-star collapse (a single progenitor or binary progenitors, e.g. in a binary-driven hypernova model in** Aimuratov et al. (2023)) **and mergers of compact stars.**
The paper is organized as follows: in Section 2, the limits for key physical parameters of GRB 230307A are extracted within the constraint of the neutrino fluence in different GRB-neutrino scenarios; in Section 3, basic properties of the prompt emission for GRB 230307A are introduced and discussed; the discussion and summary are given in Section 4.
## 2 The Possible Neutrino Emissions from GRB 230307A
In this section, we consider the possible neutrino emissions via hadronuclear processes as well as \(p\gamma\) interactions. For convenience, we use spectral parameters of GRB 230307A in \([2.2,11.7]\) s after the trigger time: \(\alpha=-0.8\), \(\beta=-8\) and \(E_{\rm peak}=1.1\) MeV. The duration is taken to be 40 s and \(E_{\gamma,\rm iso}=4\times 10^{52}\) erg. For comparison, GRB 221009A is also analyzed, with parameters of \(\alpha=-1.0\), \(\beta=-2.3\), \(E_{\rm peak}=1.0\) MeV, a duration time of 327 s and \(E_{\gamma,\rm iso}=2\times 10^{54}\) erg. The convention \(Q=10^{n}Q_{n}\) is adopted for CGS units hereafter.
### photomeson (\(p\gamma\)) process
The decay chains of photomeson interactions for single-pion production are
\[\rm p+\gamma\to p+\pi^{0}, \tag{1}\] \[\pi^{0}\to\gamma+\gamma, \tag{2}\]
and
\[\rm p+\gamma\to n+\pi^{+}, \tag{3}\] \[\pi^{+}\to\mu^{+}+\nu_{\mu},\] (4) \[\mu^{+}\to e^{+}+\nu_{e}+\overline{\nu}_{\mu}. \tag{5}\]
In the processes shown in (3)-(5), one pion could produce four leptons. Before deducing formulae, some important definitions are introduced as below:
* the fractions of energy dissipated in protons, random magnetic fields (with a strength \(B\)), and radiated as \(\gamma\)-rays are \(\epsilon_{p}\), \(\epsilon_{\rm B}\) and \(\epsilon_{\gamma}\). The proton energy is estimated via \(E_{\rm proton}=(\epsilon_{p}/\epsilon_{e})E_{\gamma,\rm iso}\), with the CR loading factor \(\xi_{\rm CR}=\epsilon_{p}/\epsilon_{e}\); note that the mark (\({}^{\prime}\)) in the superscript denotes that the quantity is in the comoving frame, while an overline denotes the rest frame of the proton, hereafter; one has \(B^{\prime}=\left[\frac{L_{\rm tot}(\epsilon_{B}/\epsilon_{e})}{2R_{\rm d}^{2}\Gamma^{2}c}\right]^{1/2}\) in the comoving frame, where \(L_{\rm tot}\) (or \(E_{\rm tot}\)) is the total luminosity (or energy);
* the energy distribution of protons accelerated by random magnetic fields is \(\frac{dN_{p}}{dE_{p}}\propto E_{p}^{-p}\), where the index \(p=2\) is assumed; \(E_{p,\rm max}\) and \(E_{p,\rm min}\) represent the maximum and minimum energies of the accelerated protons in the observer's rest frame; \(\frac{dN_{p}}{dE_{p}}=\frac{(e_{p}/\epsilon_{e})E_{\rm tot}}{\ln(E_{p,\rm max}/E_{p,\rm min})}E_{p}^{-p}\), where \(E_{\rm proton}=e_{p}E_{\rm tot}\); the minimum energy of a proton is set to be \(E_{p,\rm min}=\Gamma m_{p}c^{2}\), assuming it is at rest in the comoving frame; the maximum energy \(E_{p,\rm max}\) is the energy of protons accelerated by the magnetic field, estimated by equating the dynamical timescale \(t^{\prime}_{\rm dyn}\sim R/\Gamma c\) with the proton acceleration timescale \(t^{\prime}_{\rm acc}\sim E^{\prime}_{p}/(eB^{\prime}c)\);
* \(R_{\rm d}\) is the dissipation radius: for photospheric (PH) emissions, \(R_{\rm d,PH}\sim 10^{11}\) cm; for IS mechanism, \(R_{\rm d,IS}\sim 10^{12-13}\) cm; for ICMART mechanism, \(R_{\rm d,ICMART}\sim 10^{15}\) cm; they could be estimated from the parameters of the burst;
* the cross section of the \(p\gamma\) interaction is \(\sigma_{p\gamma}=5\times 10^{-28}\) cm\({}^{2}\); the \(\pi^{+}\) decay timescale is \(\tau_{\pi^{+}}=2.8\times 10^{-8}\) s; in the comoving frame, the Lorentz factor of \(\pi^{+}\) is denoted as \(\gamma^{\prime}_{\pi^{+}}\). \(\varepsilon_{\gamma}\) denotes the specific energy of a photon; \(\gamma_{p}\) and \(\beta_{p}\) are the Lorentz factor and velocity of a specific proton in the comoving frame, and so on;
* the width of the Breit-Wigner form of the \(\Delta(1232)\) resonance is \(\Delta\overline{\varepsilon}_{\rm pk}\sim 0.2\) GeV; \(\overline{\varepsilon}_{\rm pk}\simeq 0.3\) GeV is the photon energy at the resonance peak in the photomeson process.
The pion production rate is deduced to calculate the neutrino spectrum1. \(\sigma_{p\gamma}\) is given as a function of the photon energy in the proton-rest frame, \(\overline{\varepsilon}_{\gamma}=\gamma^{\prime}_{p}\varepsilon^{\prime}_{\gamma}(1-\beta^{\prime}_{p}\mu)\), where \(\gamma^{\prime}_{p}=\varepsilon^{\prime}_{p}/(m_{p}c^{2})\), \(\beta^{\prime}_{p}\), \(\varepsilon^{\prime}_{\gamma}\), and \(\mu=\cos\theta_{p}\) are the Lorentz factor of the proton, the proton velocity, the photon energy, and the cosine of the angle between the directions of the interacting proton and photon in the comoving frame, respectively. The energy loss rate of protons by pion production is written as
Footnote 1: The following deduction can be found in many works (e.g. Kimura, 2022) and references therein.
\[t_{p\gamma}^{-1}=c\int d\Omega\int d\varepsilon^{\prime}_{\gamma}(1-\beta^{ \prime}_{p}\mu)n_{\gamma}(\varepsilon^{\prime}_{\gamma},\ \Omega)\sigma_{p\gamma}(\overline{\varepsilon}_{\gamma})\kappa_{p\gamma}( \overline{\varepsilon}_{\gamma}), \tag{6}\]
where \(n_{\gamma}(\varepsilon^{\prime}_{\gamma},\ \Omega)=dN/(d\varepsilon^{\prime}_{ \gamma}dVd\Omega)\) and \(\kappa_{p\gamma}\) is the inelasticity of the \(p\gamma\) interactions. This rate is approximately equivalent to the pion production rate. Then, to convert the integration variable from \(\mu\) to \(\overline{\varepsilon}_{\gamma}\), the photomeson production rate is represented as
\[t_{p\gamma}^{-1}=\frac{c}{2\gamma^{\prime 2}_{p}}\int_{\varepsilon_{\rm th}}^{ \infty}d\overline{\varepsilon}_{\gamma}\sigma_{p\gamma}\kappa_{p\gamma} \overline{\varepsilon}_{\gamma}\int_{\overline{\varepsilon}_{\gamma}/(2 \gamma^{\prime}_{p})}^{\infty}\frac{d\varepsilon^{\prime}_{\gamma}}{ \varepsilon^{\prime 2}_{\gamma}}n_{\varepsilon^{\prime}_{\gamma}}, \tag{7}\]
where we use \(n_{\gamma}(\varepsilon^{\prime}_{\gamma},\ \Omega)=n_{\varepsilon^{\prime}_{\gamma}}/(4\pi)\), \(n_{\varepsilon^{\prime}_{\gamma}}=dN/(d\varepsilon^{\prime}_{\gamma}dV)\), and \(\beta^{\prime}_{p}\sim 1\). Note that the energy threshold \(\varepsilon_{\rm th}\) is determined by the Breit-Wigner mass and width of the \(\Delta(1232)\) resonance. We have
\[\sigma_{p\gamma}\kappa_{p\gamma}\approx\sigma_{\rm pk}\kappa_{\rm pk}\Delta \overline{\varepsilon}_{\rm pk}\delta(\overline{\varepsilon}_{\gamma}- \overline{\varepsilon}_{\rm pk}). \tag{8}\]
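Substituting the resonance approximation (8) into (7) removes the integral over \(\overline{\varepsilon}_{\gamma}\) and leaves a single integral over the target photon spectrum,

\[t_{p\gamma}^{-1}\simeq\frac{c\,\sigma_{\rm pk}\kappa_{\rm pk}\Delta\overline{\varepsilon}_{\rm pk}\,\overline{\varepsilon}_{\rm pk}}{2\gamma^{\prime 2}_{p}}\int_{\overline{\varepsilon}_{\rm pk}/(2\gamma^{\prime}_{p})}^{\infty}\frac{d\varepsilon^{\prime}_{\gamma}}{\varepsilon^{\prime 2}_{\gamma}}n_{\varepsilon^{\prime}_{\gamma}},\]

which makes explicit that the rate is controlled by the number of target photons above \(\overline{\varepsilon}_{\rm pk}/(2\gamma^{\prime}_{p})\).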
The pion production efficiency, or the fraction of CR protons producing pions, is given by \(f_{p\gamma}=\min(1,t_{p\gamma}^{-1}/t_{\rm dyn}^{-1})\). The distribution of the target photons is assumed in the form of
\[n_{\gamma}(\varepsilon_{\gamma}) = \frac{dN_{\gamma}(\varepsilon_{\gamma})}{d\varepsilon_{\gamma}} \tag{9}\] \[= n_{\gamma,b}\begin{cases}\varepsilon^{-\alpha}_{\gamma,b} \varepsilon^{\alpha}_{\gamma},\ \varepsilon_{\gamma}<\varepsilon_{\gamma,b}\\ \varepsilon^{-\beta}_{\gamma,b}\varepsilon^{\beta}_{\gamma,}\varepsilon_{ \gamma}\geq\varepsilon_{\gamma,b},\end{cases}\]
where \(\varepsilon_{\gamma,\rm b}=\frac{(\alpha-\beta)E_{\rm peak}}{\alpha+2}\) and \(n_{\gamma,b}\) is the specific photon number at \(\varepsilon_{\gamma}=\varepsilon_{\gamma,\rm b}\). About half of the \(p\gamma\) inelastic interactions produce charged pions with an energy of \(\varepsilon_{\pi}\approx 0.2\varepsilon_{p}\). The high-energy pions lose their energies by either synchrotron radiation or dynamical expansion before they decay to neutrinos. Considering the average energy fraction in a pion decaying to four leptons, \(\varepsilon_{\nu}=\varepsilon_{\pi}/4\), and that half of the \(p\gamma\) inelastic interactions produce charged pions, the differential total muon neutrino energy is
\[E_{\nu}^{2}N_{E_{\nu}}\approx\frac{1}{8}f_{p\gamma}f_{\pi,\rm sup}E_{p}^{2}N_{E _{p}}, \tag{10}\]
where \(f_{\pi,\rm sup}\) is the pion cooling suppression factor, approximated as
\[f_{\pi,\rm sup}\approx 1-\exp(-\frac{t^{\prime}_{\pi,\rm cl}}{t^{\prime}_{\pi, \rm dec}}), \tag{11}\]
where \(t^{\prime}_{\pi,\rm cl}=6\pi m_{\pi}^{4}c^{3}/(m_{e}^{2}\sigma_{T}B^{\prime 2}\varepsilon^{\prime}_{\pi})\) is the pion cooling time and \(t^{\prime}_{\pi,\rm dec}=\varepsilon^{\prime}_{\pi}\tau_{\pi}/(m_{\pi}c^{2})\) is the decay time of charged pions, with \(\tau_{\pi}\) the charged-pion rest-frame lifetime. If \(\varepsilon^{\prime}_{\pi}<\varepsilon^{\prime}_{\pi,\rm cooling}\) (where \(\varepsilon^{\prime}_{\pi,\rm cooling}=\frac{\Gamma}{1+z}\sqrt{\frac{6\pi m_{\pi}^{5}c^{5}}{m_{e}^{2}\sigma_{T}\tau_{\pi}B^{\prime 2}}}\) is the critical cooling energy estimated by equating \(t^{\prime}_{\pi,\rm cl}\) and \(t^{\prime}_{\pi,\rm dec}\)), the cooling effect is not efficient and \(f_{\pi,\rm sup}\approx 1\), while \(f_{\pi,\rm sup}\approx(\varepsilon^{\prime}_{\pi,\rm cooling}/\varepsilon^{\prime}_{\pi})^{2}\) for \(\varepsilon^{\prime}_{\pi}>\varepsilon^{\prime}_{\pi,\rm cooling}\).
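As a sanity check on this suppression factor, a minimal numerical sketch is given below. It simply evaluates \(t^{\prime}_{\pi,\rm cl}\), \(t^{\prime}_{\pi,\rm dec}\) and Equation (11) in cgs units; the magnetic field and pion energy in the example call are illustrative assumptions rather than values adopted in this work.

```python
import numpy as np

# Minimal sketch of the pion cooling suppression factor f_{pi,sup} (Eq. 11).
# Physical constants are in cgs units; the example field and pion energy
# below are illustrative assumptions only.
SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
M_PI = 2.488e-25      # charged-pion mass [g]
M_E = 9.109e-28       # electron mass [g]
C = 2.998e10          # speed of light [cm/s]
TAU_PI = 2.603e-8     # charged-pion rest-frame lifetime [s]

def f_pi_sup(eps_pi_erg, B_prime_gauss):
    """Suppression factor for a comoving pion energy [erg] in a field B' [G]."""
    t_cool = 6 * np.pi * M_PI**4 * C**3 / (M_E**2 * SIGMA_T * B_prime_gauss**2 * eps_pi_erg)
    t_dec = eps_pi_erg * TAU_PI / (M_PI * C**2)   # gamma'_pi * tau_pi
    return 1.0 - np.exp(-t_cool / t_dec)

# e.g. a 10^5 GeV pion (~1.6e2 erg) in a 10^4 G comoving field:
# f_pi_sup(1.602e-3 * 1e5, 1e4)
```

For pion energies well below \(\varepsilon^{\prime}_{\pi,\rm cooling}\) this function returns \(\approx 1\), and it falls off as \((\varepsilon^{\prime}_{\pi,\rm cooling}/\varepsilon^{\prime}_{\pi})^{2}\) above it, reproducing the two limits quoted above.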
Considering the cooling process, which causes the second break in the neutrino spectrum2, together with Equations (7)-(10), the neutrino spectrum can be described as a power law with two breaks,
Footnote 2: The second break is caused by \(E_{p,\rm max}\) if the pion cooling is not significant.
\[n_{\nu}(E_{\nu})=\frac{dN_{\nu}(E_{\nu})}{dE_{\nu}}=n_{\nu,1}\begin{cases}\varepsilon^{\alpha_{\nu}}_{\nu,1}E^{-\alpha_{\nu}}_{\nu},&E_{\nu}<\varepsilon_{\nu,1},\\ \varepsilon^{\beta_{\nu}}_{\nu,1}E^{-\beta_{\nu}}_{\nu},&\varepsilon_{\nu,1}\leq E_{\nu}<\varepsilon_{\nu,2},\\ \varepsilon^{\beta_{\nu}}_{\nu,1}\varepsilon^{\gamma_{\nu}-\beta_{\nu}}_{\nu,2}E^{-\gamma_{\nu}}_{\nu},&E_{\nu}\geq\varepsilon_{\nu,2},\end{cases} \tag{12}\]
and one has
\[\alpha_{\nu}=p+1+\beta,\ \beta_{\nu}=p+1+\alpha,\ \gamma_{\nu}=\beta_{\nu}+2. \tag{13}\]
\(\varepsilon_{\nu,1}\) could be derived in two different cases according to \(\tau_{p\gamma}\). If \(\tau_{p\gamma}<5\), one has
\[\varepsilon_{\nu,1}=\varepsilon^{0}_{\nu,1}=6.33\times 10^{5}\rm GeV(1+z)^{-2}( \Gamma/300)^{2}(\frac{\varepsilon_{\gamma,\rm b}}{1MeV})^{-1}. \tag{14}\]
If \(\tau_{p\gamma}\) increases above 5, \(f_{p\gamma}\) exceeds 0.5 and quickly approaches 1, so the neutrino flux no longer increases significantly with \(\tau_{p\gamma}\); in this case \(\varepsilon_{\nu,1}\) takes the form
\[\varepsilon_{\nu,1}=\varepsilon^{0}_{\nu,1}\times(\frac{\tau^{\rm peak}_{p \gamma}}{3})^{1+\beta}, \tag{15}\]
where \(\tau_{p\gamma}^{\rm peak}\) is the peak \(p\gamma\) optical depth (that of protons with energy \(E_{p}^{\rm peak}\) interacting with photons at the peak energy), given by
\[\tau_{p\gamma}^{\rm peak}=8.9L_{\rm GRB,52}\left(\frac{\Gamma}{300}\right)^{-2}R _{13}^{-1}\left(\frac{\epsilon_{\gamma,b}}{\rm MeV}\right)^{-1}. \tag{16}\]
\(f_{p\gamma\to N\pi}\) is the fraction of proton energy going into \(\pi\) production, and one has \(f_{p\gamma\to N\pi}=1-(1-\langle\chi_{p\to\pi}\rangle)^{\tau_{p\gamma}}\), where \(\langle\chi_{p\to\pi}\rangle=0.2\) is the average fraction of energy transferred from protons to pions3.
Footnote 3: \(\tau_{p\gamma}=3\) is used in Zhang & Kumar (2013), while \(f_{p\gamma\to N\pi}\) is much closer to 100% with \(\tau_{p\gamma}=5\), as shown in Ai & Gao (2023).
From Equations (14) and (15), \(\varepsilon_{\nu,1}\) can be written as
\[\varepsilon_{\nu,1}=\varepsilon_{\nu,1}^{0}\times\min\left\{1,\left(\frac{\tau _{p\gamma}^{\rm peak}}{5}\right)^{1+\beta}\right\}. \tag{17}\]
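For orientation, a minimal sketch evaluating Equations (14), (16) and (17) is given below; the luminosity, radius, redshift and high-energy photon index in the example call are illustrative assumptions, not the values adopted in this work.

```python
def eps_nu_1(L_gamma_52, Gamma, R_13, eps_gamma_b_MeV, z, beta=-2.2):
    """First break energy of the neutrino spectrum [GeV], from Eqs. (14), (16), (17)."""
    tau_peak = 8.9 * L_gamma_52 * (Gamma / 300.0)**-2 * R_13**-1 * eps_gamma_b_MeV**-1  # Eq. (16)
    eps0 = 6.33e5 * (1 + z)**-2 * (Gamma / 300.0)**2 * eps_gamma_b_MeV**-1              # Eq. (14)
    return eps0 * min(1.0, (tau_peak / 5.0)**(1.0 + beta))                              # Eq. (17)

# e.g. eps_nu_1(L_gamma_52=1.0, Gamma=400, R_13=1.0, eps_gamma_b_MeV=1.0, z=0.065)
```

Since \(1+\beta<0\) for a typical high-energy photon index, the factor in Equation (17) only lowers \(\varepsilon_{\nu,1}\) once \(\tau_{p\gamma}^{\rm peak}\) exceeds the saturation value, as described above.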
\(\varepsilon_{\nu,2}\) is related to \(\varepsilon_{\pi,\rm cooling}^{\prime}\) by
\[\varepsilon_{\nu,2}=\frac{1}{4}D\varepsilon_{\pi,\rm cooling}^{\prime}, \tag{18}\]
where \(D\) is the Doppler factor, with \(D\simeq 2\Gamma\) for a head-on observation. The amplitude \(n_{\nu,1}\) can be derived by integrating \(E_{p}\frac{dN_{p}}{dE_{p}}dE_{p}\) from \(E_{p,\rm min}\) to \(E_{p,\rm max}\), giving
\[n_{\nu,1}=\frac{1}{8}f_{p}f_{p\gamma\to N\pi}\frac{\epsilon_{\rm p}/\epsilon_ {\rm e}E_{\gamma,\rm iso}}{\ln(\varepsilon_{\nu,2}/\varepsilon_{\nu,1})} \varepsilon_{\nu,1}^{-2}, \tag{19}\]
where \(f_{\rm p}=\frac{\ln(\varepsilon_{\nu,2}/\varepsilon_{\nu,1})}{\ln(E_{p,\rm max}/E_{p,\rm min})}\). The observed fluence of the muon neutrinos is
\[E_{\nu}^{2}\phi_{\nu}(E_{\nu})=\frac{E_{\nu}^{2}n_{\nu}(E_{\nu})}{4\pi d_{\rm L }^{2}}. \tag{20}\]
With \(\Gamma=400\), \(\xi_{\rm CR}=\epsilon_{p}/\epsilon_{\gamma}=3\) and \(\epsilon_{B}=\epsilon_{\gamma}\) for each emission mechanism, the predicted neutrino spectra from GRB 230307A and 221009A are shown in Figure 1 (a) and (b). Assuming photospheric emission, the fluence level of GRB 230307A is the highest among the three mechanisms, yet it is still about two orders of magnitude smaller than the U.L. of 1 GeV cm\({}^{-2}\), and one order of magnitude smaller than that of GRB 221009A. Furthermore, as shown in Figure 1 (c) and (d), \(\xi_{\rm CR}\) in the excluded region is extremely large (even with a small \(R_{d}=10^{11}\) cm in Figure 1 (c)), which seems unreasonable for GRBs. The case is similar, and \(\xi_{\rm CR}\) remains large (\(>10\)), even with a U.L. one order of magnitude lower (0.1 GeV cm\({}^{-2}\)). Thus, neither the emission mechanisms nor \(\xi_{\rm CR}\) are constrained by the U.L. through neutrino production via \(p\gamma\) interactions.
### hadronuclear process
#### 2.2.1 neutron-proton (\(np\)) decoupling
In the neutron-rich environment, an admixture of baryons exists in the outflow besides photons and \(e^{\pm}\). During the expansion of the outflow, the electrons can exchange energy with photons via Compton scattering and with protons via Coulomb collisions; for the neutrons, however, only protons are massive enough to collide with them and establish a collisional \(np\) coupling (e.g. Rossi et al., 2006). This means that, if the neutron-to-proton ratio (denoted as \(\zeta_{n}\)) is so large that there are not enough protons, or if \(\Gamma\) is large enough (above the critical value \(\Gamma_{np}\)), the neutrons cannot be accelerated as fast as the other outflow components, even if they have drifted into the outflow in the early stage of acceleration. In this scenario \(np\) decoupling happens. After decoupling, both \(n\) and \(p\) continue to coast due to inertia, and their relative velocity can reach the threshold for inelastic collisions, whose decay chains are
\[{\rm p+n\to p+n+\pi^{0}}, \tag{21}\] \[{\rm p+n\to n+n+\pi^{+}},\] (22) \[{\rm p+n\to p+p+\pi^{-}}. \tag{23}\]
The charged pions and their secondary muons can produce leptons via the same decay chains as those in processes (4)-(5). The detected fluence of muon neutrinos from \(np\) decoupling can be described as
\[\Phi_{N_{\nu_{\mu}}}=\frac{N_{n}N_{\nu_{\mu}}\tau_{np}}{4\pi d_{L}^{2}}, \tag{24}\]
where \(N_{n}\) denotes the number of neutrons contained in the outflow, \(N_{\nu_{\mu}}\) denotes the number of muon neutrinos created per inelastic \(np\) interaction, accounting for neutrino flavour mixing, and \(\tau_{np}\) denotes the optical depth for \(np\) collisions, which can be approximated as the inelastic \(np\) interaction probability.
In the acceleration phase, the radius \(R\) and \(\Gamma(R)\) are related by
\[\Gamma(R)=\Gamma_{0}\left(\frac{R}{r_{0}}\right)^{m}, \tag{25}\]
where \(r_{0}\) is the initial radius of acceleration and \(\Gamma_{0}\) is the initial Lorentz factor. \(r_{0}\) is taken to be \(10^{8}\) cm; \(m\) ranges from 1/3 to 1. For the rapid acceleration of a fireball, \(m=1\), while for slow acceleration by a Poynting flux, \(m\) could be 1/3. Note that in a collapsar
scenario of a massive star, \(2/3<\zeta_{n}\simeq 1\), while in a NS-NS merger scenario, \(\zeta_{n}\geq 1\) (e.g. Bahcall & Meszaros, 2000). In the neutron-rich environment after the NS-NS merger of GRB 230307A, the electron (or proton) to nucleon ratio is \(Y_{e}=\frac{1}{1+\zeta_{n}}\sim 0.15\) for the fast dynamical ejecta and \(\sim 0.3\) for the slow disk-wind ejecta, as reported in Bulla et al. (2023), which corresponds to \(2.3<\zeta_{n}<5.7\). The neutrons could penetrate into, and remain contained in, the outflow (e.g. Beloborodov, 2003).
First, the fireball scenario is considered with \(m=1\). The \(np\) collision time scale is \(t^{\prime}_{np}\approx 1/(n^{\prime}_{p}\sigma_{np}c)\), where \(\sigma_{np}\approx 3\times 10^{-26}\) cm\({}^{2}\) is the approximate \(np\) cross section, \(n^{\prime}_{p}\approx L_{p}/(4\pi R^{2}\Gamma\eta m_{p}c^{3})\) is the proton density, \(L_{p}\) is the luminosity of the proton outflow, and \(\eta\) is the maximum Lorentz factor reached if all the energy is used to accelerate the protons. By equating \(t^{\prime}_{np}\) and the dynamical time scale \(t^{\prime}_{\rm dyn}\approx R/(\Gamma c)\), the decoupling radius is estimated to be \(r_{\rm dec}\approx 8.7\times 10^{11}\) cm \(L_{p,53}^{1/3}r_{0,11}^{2/3}\Gamma_{0,1}^{-2/3}\eta_{2.9}^{-1/3}\), at which the Lorentz factor becomes (Bahcall & Meszaros, 2000; Murase et al., 2022)
\[\Gamma_{n,{\rm dec}}^{0}\approx 65\ L_{p,53}^{1/3}r_{0,11}^{-1/3}\Gamma_{0,1}^{ 1/3}\eta_{2.9}^{-1/3}. \tag{26}\]
Note that \(\Gamma\) must be greater than \(\Gamma_{np}\) which is
\[\Gamma_{np}\approx\left(\frac{\sigma_{np}L_{p}\Gamma_{0}}{4\pi m_{p}c^{3}r_{0}}\right)^{1/4}=477L_{p,53}^{1/4}\Gamma_{0}^{1/4}r_{0,8}^{-1/4}. \tag{27}\]
Thus the Lorentz factor of the neutrons at the decoupling time is
\[\Gamma_{n,{\rm dec}}=\min(\Gamma_{n,{\rm dec}}^{0},\Gamma_{np}) \tag{28}\]
Figure 1: (a) and (b) the non-thermal neutrino spectra with \(\Gamma=400\) for GRB 230307A and GRB 221009A of different emission mechanisms. For GRB 221009A, \(f_{\rm p}\) is calculated to be about 0.3, 0.5 and 0.3 for PH, ICMART and IS models respectively; for GRB 230307A, \(f_{\rm p}\) is calculated to be about 0.3, 0.7 and 0.5 for PH, ICMART and IS models respectively. (c) and (d) are allowed ranges of \(\Gamma\), \(R_{d}\) and \(\xi_{\rm CR}\). The solid red line denotes the U.L. at 90% C.L. of GRB 230307A. The blue dotted line denotes one order of magnitude lower than U.L. = 1 GeV cm\({}^{-2}\). The allowed ranges are below the corresponding lines, while the excluded ranges are above them, the same below.
\(\Gamma_{n,{\rm dec}}\) is the neutron Lorentz factor at the decoupling radius by definition, and the energy of QT neutrinos is
\[E_{\nu}^{\rm QT}\approx 0.1\Gamma_{n,{\rm dec}}m_{p}c^{2}/(1+z), \tag{29}\]
which predicts \(\sim 1\)-\(10\) GeV neutrinos for \(m_{p}c^{2}=0.938\) GeV and a mild \(\Gamma_{n,{\rm dec}}\) (\(\lesssim 100\)). The neutrino energy fluence is estimated to be
\[E_{\nu_{\mu}}^{2}\phi_{\nu_{\mu}}\approx\frac{1}{12}\frac{(1+z)}{4\pi d_{L}^{2} }\zeta_{n}\left(\frac{\Gamma_{n,{\rm dec}}}{\Gamma}\right)E_{\rm proton}, \tag{30}\]
where \(\phi_{\nu_{\mu}}=\frac{1}{4\pi d_{L}^{2}}\frac{dN_{\nu_{\mu}}}{dE_{\nu_{\mu}}}\). \(E_{\rm proton}\approx\xi_{N}E_{\gamma}^{\rm iso}\) is the kinetic energy of the proton outflow and \(\xi_{N}\) is the proton or nucleon loading factor. The coefficient \(1/12\) comes from the product of inelasticity (0.5), the fraction of charged pions (\(2/3\)) and the energy fraction of \(\nu_{\mu}\) after the flavour mixing (\(1/4\)).
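A minimal numerical sketch of Equations (29)-(30) is given below; all example values (redshift, luminosity distance, isotropic energy, and the Lorentz factors) are rounded, illustrative assumptions and not the exact numbers adopted in this work.

```python
import numpy as np

M_P_C2_GEV = 0.938        # proton rest energy [GeV]
ERG_PER_GEV = 1.602e-3    # 1 GeV expressed in erg

def qt_neutrino(Gamma_n_dec, Gamma, zeta_n, xi_N, E_gamma_iso_erg, z, d_L_cm):
    """Return (E_nu [GeV], E_nu^2 * phi_nu_mu [GeV cm^-2]) from Eqs. (29)-(30)."""
    E_nu = 0.1 * Gamma_n_dec * M_P_C2_GEV / (1.0 + z)                        # Eq. (29)
    fluence_erg = (1.0 / 12.0) * (1.0 + z) / (4.0 * np.pi * d_L_cm**2) \
        * zeta_n * (Gamma_n_dec / Gamma) * xi_N * E_gamma_iso_erg            # Eq. (30)
    return E_nu, fluence_erg / ERG_PER_GEV

# e.g. qt_neutrino(Gamma_n_dec=65, Gamma=400, zeta_n=5.7, xi_N=3,
#                  E_gamma_iso_erg=3e52, z=0.065, d_L_cm=9.0e26)
```

With mild decoupling Lorentz factors this yields few-GeV neutrinos and fluences below the quoted U.L. for loading factors of order a few, consistent with the allowed ranges discussed next.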
Figure 2 (a) and (b) show the allowed ranges of \(\xi_{\rm N}\), \(\Gamma\) and \(\zeta_{n}\) within the constraint from the U.L. of the neutrino fluence. With \(\Gamma=400\) and \(\zeta_{n}=5.7\), the U.L. of \(\xi_{\rm N}\) could reach up to 3. From Figure 2 (b), the U.L. of \(\xi_{\rm N}\) decreases with \(\zeta_{n}\). With a U.L. one order of magnitude lower than 1 GeV cm\({}^{-2}\), the U.L. of \(\xi_{N}\lesssim 1\), as shown by the dotted lines in Figure 2 (a) and (b). Compared with the largest allowed \(\xi_{\rm N}\lesssim 0.1\) for GRB 221009A (with \(\zeta_{n}=1\)), it seems that a heavier baryon loading could be allowed for GRB 230307A.
If instead the Poynting-flux-dominated case (\(\sigma_{0}>10\), where \(\sigma_{0}\) is the Poynting-to-kinetic flux ratio) is considered, the neutrino fluence could be smaller. This is because \(\tau_{np}\) in Equation (24) takes the form
\[\tau_{np}(m)=\int_{\chi_{\pi}}^{\infty}\left(1-\frac{0.75}{\ln\chi}\right)(\chi^{-1}-\chi^{-3})\chi^{-1/m}\,d\chi, \tag{31}\]
Figure 2: (a), (b) are allowed and excluded ranges of \(\Gamma\), \(\xi_{\rm N}\) and \(\zeta_{n}\) in _np_ decoupling mechanism. The blank around lower \(\Gamma\) and higher \(\xi_{\rm N}\) is the _np_ coupling region where \(\Gamma<\Gamma_{np}\). The red dashed line denotes that of GRB 221009A. The gray vertical lines denote the \(\Gamma=400\) for (a) and \(\zeta_{n}=2.3\) and \(5.7\) for (b). (c) The allowed and excluded ranges of \(\xi_{\rm N}\) in the _np_ collision mechanism. The vertical line denotes \(\tau_{np}=1\).
where \(\chi\) is the ratio of the proton Lorentz factor to that of the neutrons, and \(\chi_{\pi}\simeq 2.15\) is the threshold for pion production (e.g. Landau and Lifshitz, 1971; Koers and Giannios, 2007). For the Poynting-flux-dominated (PFD) case, \(m=1/3\) and the saturation radius is much larger than the pion creation radius, so \(\tau_{np}\simeq 0.01\zeta_{n}^{1/2}\) is several times smaller than that of the fireball (\(\tau_{np}\simeq 0.05\)-0.1). Thus, the corresponding U.L. of \(\xi_{N}\) is even larger than that obtained in the fireball scenario, which seems too large under the assumption that the Poynting flux dominates. Therefore, the U.L. cannot provide a stringent constraint in the PFD scenario.
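The dimensionless integral in Equation (31) can be evaluated numerically; a minimal sketch is shown below. It evaluates only the integrand as written (any overall normalization in front of the integral is not included), with \(\chi_{\pi}\simeq 2.15\) as the lower limit.

```python
import numpy as np
from scipy.integrate import quad

CHI_PI = 2.15   # threshold for pion production

def tau_np_integral(m):
    """Dimensionless integral of Eq. (31) for acceleration index m."""
    integrand = lambda chi: (1.0 - 0.75 / np.log(chi)) * (chi**-1 - chi**-3) * chi**(-1.0 / m)
    value, _ = quad(integrand, CHI_PI, np.inf)
    return value

# Compare the fireball (m = 1) and Poynting-flux-dominated (m = 1/3) cases:
# tau_np_integral(1.0), tau_np_integral(1.0 / 3.0)
```

The faster decline of the integrand for \(m=1/3\) is what makes \(\tau_{np}\) several times smaller in the PFD case than in the fireball case, as stated above.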
A third scenario, a hybrid jet with \(1\lesssim\sigma_{0}<10\), can be considered as well. If the decoupling occurs during the rapid acceleration, the case is similar to the fireball scenario; as in the above discussion, a U.L. of \(\xi_{N}\sim\) a few is obtained, which seems consistent with the hybrid scenario if we assume a Poynting flux whose energy is comparable with \(E_{\gamma,\rm iso}\). Otherwise, the case is similar to the PFD scenario.
#### 2.2.2 collisions in neutron-loaded flows
The \(pn\) collisions occur not only in the case of \(np\) decoupling before the coasting regime, but also in the case of internal collisions within the compound flow if \(\Gamma_{np}\) is larger than \(\eta\) and the coasting occurs earlier than the decoupling (e.g. Murase et al., 2013). As discussed in Murase et al. (2013), the energy of the produced neutrinos is
\[E_{\nu}^{\rm qt}\approx 0.1\Gamma\Gamma_{\rm rel}^{\prime}m_{p}c^{2}/(1+z), \tag{32}\]
where \(\Gamma_{\rm rel}^{\prime}\sim 2\) is the relative Lorentz factor of the interacting flow and sub-TeV neutrinos are expected for \(\Gamma\sim 10^{2}\)-\(10^{3}\). The neutrino energy fluence is
\[E_{\nu}^{2}\phi_{\nu_{\mu}}\approx\frac{1}{12}\frac{(1+z)}{4\pi d_{L}^{2}}\tau _{pn}\xi_{N}E_{\gamma,\rm iso}. \tag{33}\]
Internal collisions between neutron-loaded flows may be relevant for sub-photospheric dissipation (e.g. Beloborodov, 2010). \(\xi_{N}\sim 4\)-20 is predicted in this scenario (e.g. Vurm et al., 2011). As shown in Figure 2 (c), the U.L. of \(\xi_{\rm N}\) could reach up to \(\sim 5\) with \(\tau_{np}=1\), which could be consistent with this scenario. A U.L. of \(\xi_{N}\sim 0.5\) is obtained under the tighter constraint of U.L. = 0.1 GeV cm\({}^{-2}\), which seems too small. However, it may be reasonable for a hybrid jet with an appreciable Poynting flux, because a considerable part of \(E_{\gamma,\rm iso}\) could be contributed by the dissipation of the Poynting flux via, e.g., the ICMART mechanism.
### Other processes: \(pp\) collisions, multi-pion and kaon production in \(pp\) collisions and \(p\gamma\) interactions
Other processes, such as \(pp\) collisions and multi-pion and kaon production in \(pp\) collisions and \(p\gamma\) interactions (e.g. Wang and Dai, 2009; Gao et al., 2012), could also contribute to the neutrino emission. However, compared with the estimates above, their contributions are not much larger and do not affect the conclusion.
### Implications
The U.L. (1 GeV cm\({}^{-2}\)) of the neutrino fluence for GRB 230307A seems too loose to constrain a specific GRB emission mechanism if non-thermal neutrino production via \(p\gamma\) interactions is the only consideration. For QT neutrinos, the U.L. of \(\xi_{N}\) is \(\sim\) a few, which is higher than that (\(\xi_{N}\lesssim 0.1\)) of GRB 221009A. If the baryon loading is really appreciable, the jet is not PFD as in GRB 221009A. Therefore, it may be necessary to directly investigate the prompt emission mechanism from the prompt-phase spectra to see whether something is different.
## 3 Basic spectral properties of GRB 230307A
The GBM NaI detector (NaI 10) and the GBM BGO detector (BGO 1) recorded the brightest flux; however, dead time (DT) and pile-up instrumental (PUI) effects occur in \([T_{0}+2.5,\,T_{0}+11.0]\) s (GCN 33551, Dalessi and Fermi GBM Team, 2023), as shown by the red shaded region in the light curve in Figure 3 (a). Fortunately, data from GECAM-B are not distorted by the PUI effect. Therefore, the data analysis is performed with data from the brightest detector (GRD 4) of GECAM-B in \([T_{0}+2.2,\,T_{0}+11.7]\) s, while in other time bins data from the two brightest detectors (NaI 10 and BGO 1) of GBM are used. A polynomial is fitted to all the energy channels and then interpolated into the signal interval to yield the background photon count estimate for the GRB data. Markov Chain Monte Carlo (MCMC) fitting is performed to find the parameters with the maximum Poisson likelihood.
From Figure 3 (a), values of \(\alpha\gtrsim-2/3\) (the synchrotron death line, Preece et al., 1998) concentrate in the first 7 s, which implies that the emission at the beginning may have a photospheric origin. The fit results are shown in Table A1 in Appendix A. After about 7 s, \(\alpha\) gradually decreases to about \(-1.6\), which means that the non-thermal emission may gradually become dominant afterwards. Typical QT spectra in the first few seconds are shown in Figure 3 (c)-(e); they are narrow, with a maximum of \(\alpha\sim-0.3\), and resemble a broadened Planck spectrum after subtracting the non-thermal emission, which is modeled by a power-law (PL) function. The photospheric emission is a natural consequence of
the fireball (e.g., Cavallo & Rees, 1978; Goodman, 1986; Paczynski, 1986). The Planck spectrum related to the photospheric emission could be broadened by geometrical broadening (Pe'er, 2008; Lundman et al., 2013), as well as by dissipations below the photosphere (e.g., Rees & Meszaros, 2005), such as magnetic reconnection (e.g., Giannios & Spruit, 2005), and hadronic collision shocks (e.g., Vurm et al., 2011).
From Figure 3 (f)-(h) and the decrease of \(\alpha\) with time, it is found that the QT spectral shape becomes broader. This may be caused by dissipation below the photosphere that is enhanced with time. The details of the spectral evolution are not the main goal of this letter and will not be discussed further. In summary, a significant photospheric emission component is observed in the prompt phase of GRB 230307A, which is greatly different from the case of GRB 221009A.
## 4 Discussion and Summary
_I. Constraints from the possible neutrino emission on GRB 230307A_
\(E_{\gamma,\rm iso}\) for GRB 230307A is not large enough compared with the nearby, extremely bright GRB 130427A and GRB 221009A. The U.L. seems so loose that the emission mechanisms for this burst are not well constrained, assuming that neutrino production from \(p\gamma\) interactions is the only consideration. The case is similar even if the U.L. is one order of magnitude smaller. It is proposed that QT neutrinos from hadronuclear processes may contribute to the neutrino emission. The U.L. of the allowed nucleon loading factor is about a few in the neutron-rich post-merger environment of GRB 230307A.
_II. The nucleon loading and emission mechanism in GRB 230307A_
The neutrino emission is related to the emission mechanism and the jet composition of GRBs. Three different jet compositions are considered in the estimation of the QT neutrino fluence. The U.L. of the nucleon loading is \(\xi_{\rm N}\sim\) a few for GRB 230307A, while \(\xi_{\rm N}\lesssim 0.1\) for GRB 221009A, which may imply that a heavier nucleon loading is allowed in GRB 230307A than in GRB 221009A.
For GRB 221009A, the allowed U.L. of \(\xi_{N}\lesssim 0.1\) is very small, which is very consistent with its jet composition and emission properties inferred from spectral modeling (Yang et al., 2023): a PFD jet with synchrotron-dominated prompt emission. The prompt emission of GRB 230307A seems quite different from that of GRB 221009A. A significant photospheric emission dominates the beginning of the prompt phase, and the spectrum becomes broader with time, which could be interpreted as enhanced sub-photospheric dissipation via magnetic reconnection or hadronic collisions. Dissipation via hadronic collisions could occur in a fireball or a hybrid jet, while magnetic dissipation may happen in a hybrid or PFD jet. If the jet is PFD, it is inferred that there must be thermalization via magnetic dissipation below the photosphere to explain the observed photospheric emission.
GRB 211211A is another nearby GRB (\(z=0.076\)) with the same origin as GRB 230307A (Yang et al., 2022). There is no report on its neutrino emission; however, \(\sigma_{0}\) provides some information about the nucleon loading. \(\sigma_{0}\sim 10\)-\(100\) is estimated for GRB 211211A (Chang et al., 2023), which means that the jet of GRB 211211A is PFD and the nucleon loading is small. Besides, there is neither a significant photospheric emission nor evident spectral evolution (from photospheric to non-thermal) in its prompt phase. It may be inferred that the jet composition of GRB 230307A is somewhat different from those of GRB 211211A and GRB 221009A, and a heavier baryon loading may account for it.
_III. Assuming a more stringent constraint from neutrinos detection_
A data set of high-energy track events is used in the IceCube analysis to extract the U.L. of the neutrino fluence, and the sensitivity can vary greatly with declination (\(\delta\)) (Abbasi et al., 2021, 2023). GRB 230307A is located in the southern hemisphere with \(\delta=-75.4^{\circ}\), while \(\delta=+19.8^{\circ}\) for GRB 221009A. The U.L. for non-detection of GRB 230307A might be about one order of magnitude lower if it were in the northern hemisphere with a reduced background. Thus, assuming the same sensitivity as in the northern hemisphere, a U.L. of \(0.1\) GeV cm\({}^{-2}\) is also used in the analysis, as shown in Section 2. However, this smaller U.L. still cannot provide a stringent constraint on GRB emission mechanisms via \(p\gamma\) interactions, as shown in Figure 2. For the QT neutrino emission, a smaller U.L. on \(\xi_{N}\) is obtained; however, it is still much larger than that of GRB 221009A or GRB 211211A. The conclusion is similar to that obtained with 1 GeV cm\({}^{-2}\) as the U.L.
**We note that no GRB neutrinos have been detected so far, even for the two brightest nearby GRBs ever observed (GRB 221009A and GRB 230307A), which have different representative dissipation mechanisms. This may hint that only a brighter and/or more nearby GRB can be expected to yield a detection of GRB neutrinos, and that for future neutrino detectors, sub-TeV channels and dedicated searches that take into account
Figure 3: (a) The light curves (logarithmic scale) of the GBM NaI detector and the \(\alpha\), \(E_{\rm p}\) values, with a sub-plot (b) for the first 1 s. Red circles and blue triangles denote \(\alpha\) values from the fits with the BAND and CPL functions (the same below), respectively. Green diamonds denote the \(E_{\rm p}\) values from the fits. The sub-plot shows the \(\alpha\) and \(E_{\rm p}\) values of the first 1 s. (c)-(h) Time-resolved spectra modeled with multi-blackbody (mBB) or mBB+PL functions.
the neutrino spectrum shape are of great importance.**
The authors acknowledge support from the National Program on Key Research and Development Project (2021YFA0718500) and the National Natural Science Foundation of China (grant No. 12303052). This work is partially supported by the International Partnership Program of the Chinese Academy of Sciences (Grant No. 113111KYSB20190020). The author is very grateful for the suggestions from Jessie Thwaites and the IceCube Collaboration. Dr. Xin-Ying Song thanks the public GRB data groups of Fermi/GBM and GECAM-B. I am very grateful for the comments and suggestions from the anonymous referees. I thank Dr. Rui Qiao for his suggestions on generating the response matrix for the GECAM-B data. I thank Dr. Michael S. Briggs and Dr. Michelle Hui for their suggestions on the GBM data. I thank Dr. Xi-Lu Wang and Prof. Shuang-Nan Zhang for their suggestions.
## Appendix A The fit results of the prompt emission of GRB 230307A
The fit results from modeling with mBB or mBB+PL models and empirical functions are listed in Table A1.
|
2309.09444 | Investigating Zero- and Few-shot Generalization in Fact Verification | In this paper, we explore zero- and few-shot generalization for fact
verification (FV), which aims to generalize the FV model trained on
well-resourced domains (e.g., Wikipedia) to low-resourced domains that lack
human annotations. To this end, we first construct a benchmark dataset
collection which contains 11 FV datasets representing 6 domains. We conduct an
empirical analysis of generalization across these FV datasets, finding that
current models generalize poorly. Our analysis reveals that several factors
affect generalization, including dataset size, length of evidence, and the type
of claims. Finally, we show that two directions of work improve generalization:
1) incorporating domain knowledge via pretraining on specialized domains, and
2) automatically generating training data via claim generation. | Liangming Pan, Yunxiang Zhang, Min-Yen Kan | 2023-09-18T02:53:12Z | http://arxiv.org/abs/2309.09444v1 | # Investigating Zero- and Few-shot Generalization in Fact Verification
###### Abstract
We explore _zero- and few-shot generalization for fact verification (FV)_, which aims to generalize the FV model trained on well-resourced domains (_e.g._, Wikipedia) to low-resourced domains that lack human annotations. To this end, we first construct a benchmark dataset collection which contains 11 FV datasets representing 6 domains. We conduct an empirical analysis of generalization across these FV datasets, finding that current models generalize poorly. Our analysis reveals that several factors affect generalization, including dataset size, length of evidence, and the type of claims. Finally, we show that two directions of work improve generalization: 1) incorporating domain knowledge via pretraining on specialized domains, and 2) automatically generating training data via claim generation.
## 1 Introduction
With a rise in deliberate disinformation, _Fact Verification (FV)_ has become an important NLP application. FV aims to verify given claims with the evidence retrieved from plain text. Rapid progress has been made by training large neural models (Zhou et al., 2019; Liu et al., 2020; Zhong et al., 2020) on the FEVER dataset (Thorne et al., 2018), containing more than 100K human-crafted (evidence, claim) pairs based on Wikipedia. Fact verification is also needed in other domains, including news, social media, and scientific documents. This has spurred the creation of a large number of FV datasets, such as COVID-Fact (Saakyan et al., 2021), SciFact (Wadden et al., 2020), and Climate-FEVER (Diggelmann et al., 2020).
However, considering that human annotation is time-consuming, costly, and often biased, it is difficult to collect reliable human-labeled data in every domain that demands fact verification. We need to investigate how to build a generalizable fact verification system that adapts to new domains with zero or few samples. Critically, how can we leverage valuable (evidence, claim, label) annotations from rich-resourced domains (_e.g._, Wikipedia) to aid fact verification in the low-resourced ones (_e.g._, scholarly documents, and social media)? Although FV datasets have been recently created in different domains, little analysis has shown whether FV models generalize across them and to what extent existing datasets can be leveraged to improve performance in these new domains.
In this paper, we bridge this gap by conducting a comprehensive investigation of _zero- and few-shot generalization in fact verification_. By conducting a holistic study of FV datasets to date, we first carefully select 8 datasets that have artificial or natural claims, human-annotated evidence, and two or three-class labels for our study. We then standardize their data formats as (evidence, claim, label) pairs and create dataset variants with different granularity of evidence, which gives us a total of 11 datasets. We then conduct a thorough empirical study of generalization and transfer across these 11 datasets. We train models on a source dataset, and then evaluate their performance on a target dataset, either without any additional target training examples (_zero-shot setting_) or with a few additional target examples (_few-shot setting_).
We find that RoBERTa-based FV models tend to overfit the particular training set, generalizing poorly to other datasets. Our in-depth analysis shows generalization is related to several key factors, including dataset size, length of evidence, and the claim type. In particular, we find that Wikipedia-based artificial claims (_e.g._, FEVER) generalize well to natural claims in real-world domains with the growth of dataset size, in contrast to prior work that criticized crowd-sourced claims as having strong annotation bias and being unrepresentative of real-world misinformation (Schuster
et al., 2019). Our few-shot generalization experiment further shows that fine-tuning on a small amount of target training data can substantially improve performance.
Armed with the above insights, we explore two ways to improve the generalization of fact verification models. 1) _Domain-specific Pretraining_: initializing the FV model with language models pretrained on specialized domains, and 2) _Data Augmentation_: automatically generating training data for the target domain. Results show that these methods can noticeably improve generalization but still leave unsolved challenges such as inflexibility, high cost, and label consistency.
To the best of our knowledge, this is the first work to perform a thorough investigation of generalization and transfer in fact verification. We open-sourced our dataset collection and codes to support future research towards a universal and robust fact verification system1.
Footnote 1: [https://github.com/teacherpeterpan/Fact-Checking-Generalization](https://github.com/teacherpeterpan/Fact-Checking-Generalization)
## 2 Dataset Curation
In this section, we describe the 11 fact verification datasets included in our study. We first describe the criteria for dataset selection (SS 2.1), and then we introduce the dataset processing (SS 2.2). We show the key characteristics of the datasets in Table 1.
### Dataset Selection
A large number of datasets have recently been introduced to study various tasks for fact-checking, _e.g._, claim detection, evidence retrieval, fact verification, justification production, etc. Our focus, fact verification, in particular, takes a textual _claim_ and a piece of _evidence_ as input to predict the _label_ for the claim. Let's define these aspects:
\(\bullet\)**Claim**: Claims for fact verification are often textual, sentence-level statements, which are categorized into: 1) _real-world natural claims_ crawled from dedicated websites, textbooks, forums, etc. 2) _artificial claims_ written by crowd-workers.
\(\bullet\)**Evidence**: Evidence is the relevant information source for validating the claim. Textual sources, such as news articles, academic papers, and Wikipedia documents, are one of the most commonly used types of evidence. Based on the granularity, we categorize the evidence in existing datasets into: 1) _document-level evidence_ such as the Wikipedia page (Thorne et al., 2018), news articles (Hanselowski et al., 2019), and scientific papers (Wadden et al., 2020). 2) _sentence-level evidence_ annotated by human experts in the relevant documents to support or refute each claim. 3) _no evidence_ is given for each claim; the model needs to retrieve evidence from a large knowledge source.
\(\bullet\)**Label**: The label definition for the claim also varies across datasets. The most common definitions are the binary label (supports/refutes) and the three-class label, _i.e._, supports/refutes/not enough info.
Some works (Wang, 2017; Augenstein et al., 2019) also employ multi-class labels for more fine-grained degrees of truthfulness (_e.g._, true, mostly-true, mixture, etc.), where the number of labels varies greatly, ranging from 4 to 27.
Selection Criteria. We employ the following criteria to select the datasets for our study.
\(\bullet\) We consider both natural and artificial claims in various domains.
\(\bullet\) We consider the datasets with human-annotated document-level and sentence-level evidence. We exclude datasets without evidence or which provide only non-textual evidence; _i.e._, tables, knowledge bases, etc.
\(\bullet\) We only consider datasets with the binary or the three-class label annotation due to the difficulty of canonicalizing such multi-class labels.
By conducting a holistic study of fact-checking datasets to date, eight different data sources meet our requirements. The full list of candidate datasets we investigate is given in Appendix A.
### Dataset Processing
We then process the selected datasets as follows. 1) We convert each dataset to the unified format of claim-evidence-label triples \((c_{i},e_{i},l_{i})_{i=1}^{N}\). The simplicity of this format allows us to focus on out-of-domain generalization instead of other, orthogonal challenges of fact-checking. 2) We create separate dataset variants by pairing each claim with evidence of different granularities. This enables us to study the impact of evidence length on generalization. After processing, we obtain the final selection of 11 datasets used in our study. We now briefly introduce the nature of each dataset and its specific processing.
Group I: datasets with _artificial claims_. These are based on Wikipedia articles and are often large in size. However, crowd-sourced claims are often written with minimal edits to reference sentences, leading to lexical biases such as the overuse of explicit negation (Schuster et al., 2019).
\(\bullet\)**FEVER**Thorne et al. (2018) asks crowd-workers to mutate sentences from Wikipedia articles to create claims. We use the Wikipedia paragraph associated with each claim as its document-level evidence to construct the _FEVER-para_ dataset. We then use the sentence-level gold evidence for the supports and refutes claims to build the _FEVER-sent_ dataset. However, since sentence-level evidence is not available for NEI claims, we use the system of Malon (2018) to retrieve the evidence sentences, following Atanasova et al. (2020) and Pan et al. (2021).
\(\bullet\)**VitaminC**Schuster et al. (2021) creates contrastive evidence pairs for each claim, in which evidence pairs are nearly identical in language and content, with the exception that one supports a claim while the other does not.
\(\bullet\)**FoolMeTwice**Eisenschlos et al. (2021) designs a multi-player game that leads to diverse strategies for crafting claims (_e.g._, temporal inference) based on Wikipedia, resulting in more complex claims with less lexical overlap with the evidence.
Group II: datasets with _natural_ claims. These claims are collected from the Internet and then manually verified by professional fact-checkers. They represent real-world claims and originate from diverse domains, such as scholarly documents, news articles, and forums. However, due to the difficulty and high cost of manually verifying real-world claims, these datasets are limited in scale.
\(\bullet\)**Climate-FEVER**Diggelmann et al. (2020) consists of 1,535 real-life claims regarding climate-change collected from the Internet. The top five most relevant sentences from Wikipedia are retrieved as the evidence. Humans then annotate each sentence as supporting, refuting, or not enough information to validate the claim. We use the sentence-level annotation as the evidence for each claim to build the _Climate-FEVER-sent_. We construct the document-level evidence for each claim by putting together all of its evidence sentences, which gives us the _Climate-FEVER-para_ version.
\(\bullet\)**Sci-Fact**Wadden et al. (2020) consists of 1.4K expert-written scientific claims paired with evidence-containing abstracts annotated with labels and sentence-level rationale. We use the annotated rationale as the sentence-level evidence to build the _Sci-Fact-sent_. We construct the _Sci-Fact-para_ version by using the evidence-containing abstract as the document-level evidence for each claim.
\(\bullet\)**PubHealth**Kotonya and Toni (2020) contains 11.8K claims accompanied by journalist-crafted, gold-standard judgments that support or refute the claims. The claims are collected from five fact-checking websites, news headlines, and news reviews. We use the judgment texts as evidence to pair with each claim.
\(\bullet\)**COVID-Fact**Saakyan et al. (2021) consists of 4,086 claims concerning the COVID19 pandemic crawled from the _r/COVID19_ subreddit. We use their sentence-level evidence annotated by crowd-workers as the evidence.
\(\bullet\)**FAVIQ**Park et al. (2022) contains 26k claims converted from natural ambiguous questions posed by real users. The answer-containing Wikipedia paragraph is provided as the document-level evidence for each claim.
Many of the original datasets do not release their test set. Therefore, we use their original split of train/dev sets as our training and evaluation sets.
\begin{table}
\begin{tabular}{l l|l|l|l|l|l|l|l l} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Domain} & \multirow{2}{*}{Claim} & \multirow{2}{*}{Evidence} & \multirow{2}{*}{Label} & \multicolumn{2}{c|}{\# Claims} & \multicolumn{2}{c}{Avg. \# tokens} \\ \cline{5-10} & & & & & \multicolumn{2}{c|}{Train} & \multicolumn{2}{c|}{Test} & \multicolumn{2}{c}{Claim} & \multicolumn{2}{c}{Evid.} \\ \hline \multirow{4}{*}{1} & FEVER-sent & Wikipedia & artificial & sent-level & S (52\%), R (22\%), N (26\%) & 145,327 & 19,972 & 9.4 & 35.9 \\ & FEVER-para & Wikipedia & artificial & doc-level & S (52\%), R (22\%), N (26\%) & 145,327 & 19,972 & 9.4 & 368.7 \\ & VitaminC & Wikipedia & artificial & sent-level & S (50\%), R (35\%), N (15\%) & 370,653 & 63,054 & 12.6 & 29.5 \\ & FoolMeTwice & Wikipedia & artificial & sent-level & S (49\%), R (51\%) & 10,419 & 1,169 & 15.3 & 37.0 \\ \hline \multirow{4}{*}{2} & Climate-FEVER-sent & Climate & natural & sent-level & S (52\%), R (11\%), N (64\%) & 6,140 & 1,535 & 22.8 & 33.8 \\ & Climate-FEVER-para & Climate & natural & doc-level & S (47\%), R (19\%), N (34\%) & 1,103 & 278 & 22.9 & 168.9 \\ & Sci-Fact-sent & Science & natural & sent-level & S (43\%), R (22\%), N (35\%) & 868 & 321 & 13.8 & 61.9 \\ \multirow{4}{*}{II} & Sci-Fact-para & Science & natural & doc-level & S (43\%), R (22\%), N (35\%) & 868 & 321 & 13.8 & 257.3 \\ & PubHealth & Health & natural & sent-level & S (60\%), R (36\%), N (4\%) & 8,370 & 1,050 & 15.7 & 137.6 \\ \cline{1-1} & COVID-Fact & Forum & natural & sent-level & S (32\%), R (68\%) & 3,268 & 818 & 12.4 & 82.5 \\ \cline{1-1} & FAVIQ & Question & natural & doc-level & S (50\%), R (50\%) & 17,008 & 4,260 & 15.2 & 304.9 \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of the 11 fact verification datasets for our study and their characteristics.
We also standardize the naming of labels as supports, refutes, and NEI. We visualize the global structure of the datasets with tSNE (van der Maaten and Hinton, 2008) and analyze the domain divergence in Appendix B.
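As a concrete illustration of the standardization above, a minimal sketch is given below. The raw field names and label spellings differ across the source datasets, so the mapping and helper shown here are illustrative assumptions rather than the exact loader code.

```python
# Map the heterogeneous label names of the source datasets to a common scheme.
LABEL_MAP = {
    "SUPPORTS": "supports", "SUPPORT": "supports",
    "REFUTES": "refutes", "CONTRADICT": "refutes",
    "NOT ENOUGH INFO": "NEI", "NEI": "NEI",
}

def to_triple(claim, evidence, label):
    """Return one standardized (claim, evidence, label) record."""
    # Document-level variants concatenate all evidence sentences into one passage.
    if isinstance(evidence, list):
        evidence = " ".join(evidence)
    return {"claim": claim, "evidence": evidence, "label": LABEL_MAP[label.upper()]}
```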
## 3 Zero/Few-shot Generalization
We now explore the generalization ability of fact verification models across the 11 datasets. We first formulate the task of zero/few-shot generalization.
Task Formulation. Given a claim \(\mathcal{C}\) and a piece of evidence \(\mathcal{P}\) as inputs, a _fact verification_ model \(\mathcal{F}\) predicts a label \(\mathcal{Y}\) indicating whether \(\mathcal{C}\) is supported, refuted, or cannot be verified by the information in \(\mathcal{P}\). In the _zero-shot generalization_ setting, we train a model on one source FV dataset and then evaluate its performance on a target test set, without any additional training data from the target dataset. In the _few-shot generalization_ setting, we assume access to a small number of target training examples.
Fact Verification Model. We use RoBERTa-large (Liu et al., 2019) as the benchmark model for our study since it has achieved state-of-the-art results on many FV datasets. We concatenate the claim and evidence ([CLS] claim [SEP] evidence) and use it as the input of a classification task to predict the label of the claim. We use the roberta-large (355M parameters) model provided by the HuggingFace library2.
Footnote 2: [https://huggingface.co/roberta-large](https://huggingface.co/roberta-large)
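A minimal sketch of this classifier, using the HuggingFace transformers API, is shown below. The label ordering and the inference-only usage are illustrative assumptions; in practice the model is fine-tuned on the (claim, evidence, label) triples of the training set.

```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

LABELS = ["supports", "refutes", "NEI"]

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaForSequenceClassification.from_pretrained("roberta-large", num_labels=len(LABELS))

claim = "A claim to verify."
evidence = "A piece of evidence paired with the claim."

# Passing the two texts as a pair lets the tokenizer insert the separator
# tokens, yielding the claim-[SEP]-evidence input described above.
inputs = tokenizer(claim, evidence, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(LABELS[logits.argmax(dim=-1).item()])
```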
### Zero-shot generalization results
Table 2 shows the zero-shot generalization results in macro-averaged F1 for the 3-class fact verification task on all the datasets that have supports/refutes/NEI labels, partitioned by dataset group: _Group I, top_ (datasets with artificial claims); _Group II, bottom_ (datasets with natural claims). In general, the RoBERTa model generalizes poorly in this zero-shot setup. Compared with the in-domain performance (training and testing on the same dataset), the best zero-shot generalization performance shows a large drop of 20.80% on average. This shows that the FV model overfits the particular dataset it is trained on and generalizes poorly to unseen datasets. This validates prior work showing that neural models are brittle when encountering out-of-distribution data. Taking a closer look, we further explore several research questions specific to fact verification behind this general trend.
Do artificial claims and natural claims generalize to each other? The bottom left of Table 2 shows that models trained on natural claims generalize badly to datasets with artificial claims, with an average F1 drop of 72% relative to the in-domain performance on the three artificial datasets. In contrast, when evaluated on other natural-claim datasets3, these models generalize better, with an average F1 drop of 56% (bottom right). This observation supports the argument that artificial and natural claims differ substantially, _e.g._, Wikipedia vs. real-world domains, high vs. low lexical overlap, and simple vs. diverse reasoning types, as discussed in §2.2 and related works (Wadden et al., 2020; Saakyan et al., 2021).
Footnote 3: For fair comparison, we don’t count the dataset pairs with the same data source, _e.g._, (SciFact-sent, SciFact-para)
However, a surprising and counter-intuitive observation is that the model trained on artificial claims generalizes quite well to natural claims. As shown by the top right section, the average F1 drop narrows to 36.9% when generalizing from artificial to natural claims, markedly better than when generalizing between natural claim datasets (56% average drop). In particular, when trained on _FEVER-sent_, the model achieves the best generalization results on 3 out of 5 datasets with natural claims. However, we will show in the following that the large size of the artificial claim datasets contributes a lot to this good generalization performance.
Does generalization improve with more data? To examine whether the good generalization of the FEVER and VitaminC datasets comes from their large size, we conduct an experiment controlling for data size. Here, we take only 800 examples from each dataset to train the model. We show the zero-shot generalization results between the five datasets with sentence-level evidence in Table 4. The results on all datasets are shown in Table 10 in Appendix C.
We find that, in this controlled setting, the model trained on natural claim datasets (Group II) generalizes to other natural claims slightly better than the model trained on artificial claim datasets (Group I). This confirms that dataset size contributes a lot to generalization ability. Tables 2 and 4 together show that Wikipedia-based artificial claims still generalize well to natural claims in real-world domains as the dataset size grows, although crowd-sourced claims have been criticized for having strong annotation bias and for not representing real-life misinformation (Schuster et al., 2019).
Which type of label is more difficult to verify? Table 5 shows the breakdown of the class-wise F1 scores. For each dataset, we show the average class-wise F1 when training the model on other datasets (zero-shot) and the class-wise F1 when training on the same dataset (in-domain). The results show that refutes claims have the worst prediction score (in bold) for almost all datasets, in both the zero-shot and the in-domain setting. The in-domain results are in line with the empirical observation of Jiang et al. (2020) that it is often ambiguous to differentiate between refutes and NEI claims, even for trained human annotators. This difficulty persists in the zero-shot setting and harms the generalization results.
What is the impact of evidence length? From Table 2, we find that fact verification on a dataset with document-level evidence is more difficult than on the same dataset with sentence-level evidence (an average 13.29% drop of in-domain F1). This is understandable since document-level evidence requires the model to additionally filter out
\begin{table}
\begin{tabular}{l|c c|c c c} \hline \hline Train\(\downarrow\) Test\(\rightarrow\) & \begin{tabular}{c} FEVER \\ -sent \\ \end{tabular} & \begin{tabular}{c} Vita \\ minC \\ \end{tabular} & \begin{tabular}{c} C-FEVER \\ -sent \\ \end{tabular} & \begin{tabular}{c} SciFact \\ -sent \\ \end{tabular} &
\begin{tabular}{c} Pub \\ Health \\ \end{tabular} \\ \hline FEVER-sent & — & 22.20 & 13.66 & 19.64 & 24.57 \\ VitaminC & 16.93 & — & 13.78 & 20.04 & **24.98** \\ \hline C-FEVER-sent & 16.63 & 8.36 & — & 17.24 & 2.51 \\ SciFact-sent & 27.43 & **26.80** & **30.50** & — & 13.06 \\ PubHealth & **28.60** & 26.69 & 18.04 & **22.22** & — \\ \hline \hline \end{tabular}
\end{table}
Table 4: Macro-F1 of 3-class fact verification for all datasets with sentence-level evidence in a zero-shot generalization setup. _The size of training data is controlled to 800 samples for all datasets._
\begin{table}
\begin{tabular}{l|c c c|c c c c c c} \hline \hline Train\(\downarrow\) Test\(\rightarrow\) & \begin{tabular}{c} FEVER \\ -para \\ \end{tabular} & \begin{tabular}{c} FEVER \\ -sent \\ \end{tabular} & VitaminC & \begin{tabular}{c} C-FEVER \\ -para \\ \end{tabular} & \begin{tabular}{c} C-FEVER \\ -sent \\ \end{tabular} & \begin{tabular}{c} SciFact \\ -para \\ \end{tabular} & \begin{tabular}{c} SciFact \\ -sent \\ \end{tabular} &
\begin{tabular}{c} SciFact \\ -sent \\ \end{tabular} & PubHealth \\ \hline FEVER-para & — & **72.81** & 43.87 & 20.83 & 40.90 & 22.09 & 28.10 & 9.05 \\ FEVER-sent & **55.57** & — & **62.11** & 44.98 & **48.70** & **44.98** & **56.15** & 21.61 \\ VitaminC & 52.04 & 65.32 & — & 42.32 & 44.40 & 44.14 & 50.55 & **21.97** \\ \hline C-FEVER-para & 17.86 & 20.04 & 10.59 & — & 42.02 & 29.93 & 31.62 & 5.29 \\ C-FEVER-sent & 17.87 & 24.47 & 20.25 & **54.59** & — & 25.84 & 39.39 & 8.47 \\ SciFact-para & 23.96 & 27.09 & 28.37 & 29.85 & 28.63 & — & 44.68 & 6.78 \\ SciFact-sent & 16.86 & 24.50 & 29.22 & 20.61 & 32.50 & 29.00 & — & 4.49 \\ PubHealth & 35.21 & 34.41 & 30.67 & 34.12 & 24.18 & 40.34 & 42.03 & — \\ \hline SELF & 85.58 & 89.28 & 86.76 & 44.61 & 62.54 & 52.25 & 54.27 & 72.10 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Macro-F1 of **3-class fact verification** on the evaluation set for all datasets in a zero-shot generalization setup. Rows correspond to the training dataset and columns to the evaluated dataset.
\begin{table}
\begin{tabular}{l|c c c c|c c c c c c c} \hline \hline Train\(\downarrow\) Test\(\rightarrow\) & \begin{tabular}{c} FEVER \\ -para \\ \end{tabular} & \begin{tabular}{c} FEVER \\ -sent \\ \end{tabular} & VitaminC & \begin{tabular}{c} FoolMe \\ Twice \\ \end{tabular} & \begin{tabular}{c} C-FEVER \\ -para \\ \end{tabular} & \begin{tabular}{c} C-FEVER \\ -sent \\ \end{tabular} & \begin{tabular}{c} SciFact \\ -para \\ \end{tabular} & \begin{tabular}{c} SciFact \\ -sent \\ \end{tabular} & PubHealth &
\begin{tabular}{c} COVID \\ -Fact \\ \end{tabular} & FAVIQ \\ \hline FEVER-para & — & **94.91** & 71.56 & 72.50 & **77.40** & 76.04 & 72.29 & 75.92 & 44.75 & 56.82 & 55.09 \\ FEVER-sent & **89.08** & — & **79.79** & 84.02 & 74.71 & **80.21** & **75.12** & **87.37** & 58.25 & 63.99 & 61.64 \\ VitaminC & 84.62 & 94.46 & — & **84.57** & 62.80 & 54.59 & 62.37 & 69.31 & 55.32 & **70.32** & **62.98** \\ FoolMeTwice & 82.58 & 91.46 & 78.56 & — & 71.38 & 78.24 & 69.22 & 84.19 & 56.81 & 58.68 & 59.23 \\ \hline C-FEVER-para & 33.37 & 33.72 & 52.56 & 34.15 & — & 56.66 & 39.77 & 40.89 & 38.61 & 25.50 & 33.52 \\ C-FEVER-sent & 51.33 & 62.00 & 55.91 & 50.69 & 75.85 & — & 66.72 & 72.08 & 55.54 & 43.04 & 36.53 \\ SciFact-para & 33.38 & 33.57 & 46.73 & 34.42 & 43.35 & 49.23 & — & 41.27 & 38.69 & 26.64 & 33.69 \\ SciFact-sent & 33.40 & 33.64 & 36.91 & 33.77 & 42.63 & 42.46 & 44.02 & — & 43.35 & 26.64 & 33.51 \\ PubHealth & 65.68 & 64.69 & 53.57 & 53.55 & 53.92 & 61.78 & 68.75 & 71.01 & — & 40.95 & 50.89 \\ COVID-Fact & 70.94 & 76.16 & 37.22 & 63.02 & 44.13 & 51.71 & 63.60 & 76.29 & **60.06** & — & 46.93 \\ FAVIQ & 74.57 & 73.80 & 59.14 & 59.67 & 64.92 & 60.49 & 59.08 & 52.64 & 40.15 & 50.25 & — \\ \hline \hline \end{tabular}
\end{table}
Table 3: F1 of **binary fact verification** on the evaluation set for all datasets in a zero-shot generalization setup. Rows correspond to the training dataset and columns to the evaluated dataset.
irrelevant information. Climate-FEVER suffers the largest F1 drop of 31.86%, compared with the slight performance drop on FEVER (4.3%) and SciFact (3.72%). A possible reason is that Climate-FEVER's document-level evidence consists of different (even contradictory) evidence sentences, which requires the model to reason over multiple sentences instead of just selecting the most relevant one.
In terms of generalization, datasets with sentence-level evidence generally achieve better generalization results than their document-level versions. For example, C-FEVER-sent generalizes better than C-FEVER-para on 5 of the 6 datasets excluding themselves. Models trained on sentence-level datasets generalize well to document-level datasets, but the converse is not true. These results indicate that training the FV model on more fine-grained evidence yields better generalization. This is consistent with the intuition that providing fine-grained evidence eases model learning in FV, and it shows the importance of accurate evidence retrieval.
### Zero-shot generalization for binary FV
Many works (Jiang et al., 2020; Saakyan et al., 2021) do not consider NEI claims due to their ambiguity. To explore whether our previous observations also hold for the task of _binary fact verification_, we evaluate the generalization results for all 11 datasets using only the supports and refutes claims for training and evaluation, shown in Table 3. In this setting, artificial claims also generalize well to natural claims in other domains. For 6 of the 7 datasets with natural claims, the best generalization score is obtained by a model trained on artificial claims. The observation on evidence length also holds: datasets with sentence-level evidence tend to generalize better than document-level datasets. Finally, compared with the three-class results in Table 2, generalization improves a lot on Climate-FEVER, SciFact, and PubHealth. The reason is that the model struggles to distinguish between refutes and NEI claims in these datasets, as reflected in Table 5. Therefore, they benefit a lot from removing the NEI label.
### Few-shot generalization results
We now consider the few-shot generalization setting, assuming access to a small number of examples from a target dataset (50 for each class in our experiment). We pre-train a model on a source dataset and then fine-tune it on the target dataset. Our goal is to analyze whether pre-training improves performance compared to training on the target alone.
Table 6 shows the macro-F1 on the evaluation sets of all datasets. The rows "SELF-few-shot" and "SELF-full" show the performance of directly training on the 150 samples of the target dataset and on the full target training set, respectively (without pre-training on the source dataset). Generally, pre-training on a source FV dataset and fine-tuning on the target outperforms "SELF-few-shot" on all 5 datasets and "SELF-full" on 3 out of 5 datasets. This shows that pre-training on a related FV dataset helps to reduce the demand for human-annotated training data in the target domain.
Second, _FEVER-sent_ obtains good generalization performance on all evaluation datasets. This strengthens our finding in Section 3.1 that FEVER generalizes well to datasets with natural claims in real-world domains. Last, after fine-tuning, we see a dramatic improvement in performance compared to Table 2. This highlights that current models overfit the data they are trained on, and that small amounts of data from the target distribution can overcome this generalization gap.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Train\(\downarrow\) Test\(\rightarrow\)} & C-fever & C-fever & SciFact & SciFact & Pub \\ & -para & -sent & -para & -sent & Health \\ \hline FEVER-para & 50.04 & 45.99 & 59.91 & 68.18 & 42.81 \\ FEVER-sent & **55.13** & 51.84 & **66.12** & **76.39** & 40.90 \\ VitaminC & 50.41 & 49.80 & 58.27 & 68.59 & 37.84 \\ \hline SELF-few-shot & 22.74 & 10.75 & 17.24 & 33.38 & 43.62 \\ SELF-full & 44.61 & **62.54** & 52.25 & 54.27 & **72.10** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Macro-F1 of three-class fact verification for all datasets in a **few-shot generalization setup**.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{Zero-shot} & \multicolumn{3}{c}{In-domain} \\ \cline{2-7} & S & R & N & S & R & N \\ \hline FEVER-para & 33.75 & **25.85** & 34.41 & 87.05 & 85.89 & **83.81** \\ FEVER-sent & 42.52 & **28.24** & 44.38 & 91.32 & **89.42** & 87.10 \\ VitaminC & 50.06 & **20.44** & 25.96 & 94.44 & 89.42 & **76.42** \\ \hline C-FEVER-para & 34.14 & **24.09** & 47.76 & 68.50 & **13.56** & 51.76 \\ C-FEVER-sent & 28.29 & **19.72** & 64.00 & 62.27 & **46.40** & 78.96 \\ SciFact-para & 34.33 & **15.35** & 51.60 & 59.29 & **23.02** & 74.44 \\ SciFact-sent & 47.43 & **22.23** & 55.70 & 61.87 & **18.49** & 82.45 \\ PubHealth & 20.96 & **4.54** & 7.79 & 91.07 & 82.00 & **43.24** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Class-wise F1 of 3-class fact verification for the zero-shot generalization setup (left) and the in-domain training setup (right). S: supports; R: refutes; N: NEI.
## 4 Improving Generalization
We then investigate two ways to improve the generalization ability of fact verification: 1) incorporating domain knowledge via pretraining on specialized domains, and 2) automatically generating training data via data augmentation.
### Pretraining on Specialized Domains
In-domain knowledge is essential for fact-checking in specialized domains. For example, virology background knowledge is required to verify scientific claims regarding COVID19 (Wadden et al., 2020). When generalizing an FV model from one domain to another, how to endow the model with such in-domain knowledge is a challenging subject worthy of long-term study. Here we explore one simple solution: initializing the model with language models pretrained on specialized domains.
In Table 7, we show the zero-shot generalization performance when initializing the FV model with BioBERT (Lee et al., 2020) (pretrained on biology literature) and SciBERT (Beltagy et al., 2019) (pretrained on scholarly documents). Our goal is to explore whether pretraining on specialized domains helps the generalization. To eliminate the impact of other factors such as the model size, we use the BERT model (Devlin et al., 2019) as the baseline, since BioBERT and SciBERT are both based on the BERT model.
We find that BioBERT and SciBERT both outperform BERT on the generalization scores in Climate-FEVER, SciFact, and PubHealth, with an average improvement of 21.39% and 12.69% in F1, respectively. However, their performance on Wikipedia-based datasets (FEVER and VitaminC) is relatively worse than BERT's (-2.6% and -17.7% for BioBERT and SciBERT, respectively). This confirms that the generalization of FV in certain domains (_e.g._, science) can be improved with language models pretrained on relevant domains (_e.g._, scientific papers). We have similar observations for the few-shot generalization setting, shown in Table 11 in Appendix D. Despite the positive results, a suitable pretraining model in certain domains (_e.g._, tweets) is often unavailable. Moreover, this requires re-training the FV model during domain transfer. Therefore, how to develop a more accessible and less expensive way to incorporate in-domain knowledge required for fact-checking still requires further investigation.
### Data Augmentation
Another direction we explore is improving generalization via data augmentation, which has recently shown promising results in other NLP tasks such as question answering (Yue et al., 2021) and machine translation (Cheng et al., 2020). We first train a _claim generation_ model based on BART (Lewis et al., 2020), using the (evidence, claim, label) triples _in the source domain_ as training data. We use the format [LABEL] _label_ [NER] _NERS_ [EVIDENCE] _evidence_ as the input, and use the _claim_ as the target output for training, where _NERS_ are the entities appearing in the claim (we add _NERS_ to guide the model to generate more specific claims). We then apply the trained model to generate claims with different labels in the target domain by separately assigning supports, refutes, NEI as the label prefix of the evidence, and we randomly assign an entity from the evidence as the _NERS_4 to guide the claim generation. We name this method **BART-gen**.
Footnote 4: Since no ground-truth claim is available for the target domain, the entity cannot come from the claim.
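For concreteness, a simplified sketch of how such inputs can be assembled is shown below; the exact string formatting, entity handling, and example texts are illustrative assumptions rather than our precise implementation.

```python
# Sketch of assembling BART-gen inputs. During training, the label and entities
# come from the gold source-domain claim; for target-domain augmentation, the
# label is assigned and an entity is sampled from the evidence (footnote 4).
import random

LABELS = ["supports", "refutes", "NEI"]

def make_input(label, ners, evidence):
    return f"[LABEL] {label} [NER] {' ; '.join(ners)} [EVIDENCE] {evidence}"

# Training pair (source domain): the target output is the gold claim text.
train_input = make_input("refutes", ["Example Entity"],
                         "Example evidence sentence about the entity.")

# Target-domain augmentation: sweep over labels with a sampled evidence entity.
def augmentation_inputs(evidence, entities):
    return [make_input(label, [random.choice(entities)], evidence)
            for label in LABELS]
```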
We train BART-gen on the _FEVER-sent_ dataset
\begin{table}
\begin{tabular}{c c|c c c|c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Train\(\downarrow\) Test\(\rightarrow\)} & FEVER & FEVER & \multirow{2}{*}{VitaminC} & C-FEVER & C-FEVER & SciFact & SciFact & \multirow{2}{*}{PubHealth} \\ & & -para & -sent & & -para & -sent & -para & -sent & \\ \hline \multirow{3}{*}{BERT} & FEVER-para & — & 64.04 & 33.82 & 18.15 & 29.71 & 18.53 & 18.19 & 3.20 \\ & FEVER-sent & **66.97** & — & **54.75** & 35.39 & 26.49 & 39.27 & 39.72 & 25.95 \\ & VitaminC & 54.12 & 63.28 & — & 39.57 & 34.93 & 40.80 & 45.51 & 22.21 \\ \hline \multirow{3}{*}{BioBERT} & FEVER-para & — & 67.89 & 42.41 & 24.22 & 38.94 & 37.69 & 35.85 & 8.24 \\ & FEVER-sent & 57.18 & — & 51.95 & 40.58 & 39.01 & 36.83 & 38.36 & 37.61 \\ & VitaminC & 51.03 & 60.34 & — & **40.60** & **39.72** & 43.38 & **50.71** & 19.44 \\ \hline \multirow{3}{*}{SciBERT} & FEVER-para & — & **68.49** & 39.73 & 20.43 & 33.84 & 28.90 & 35.53 & 6.37 \\ & FEVER-sent & 52.95 & — & 51.84 & 35.50 & 35.68 & 34.24 & 39.46 & **36.46** \\ & VitaminC & 50.20 & 58.74 & — & 37.99 & 38.79 & **43.55** & 45.69 & 20.66 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Zero-shot generalization performance (macro-F1) when **initialized with different pretraining models**.
and generate synthetic training (evidence, claim, label) triples for other datasets. For each piece of evidence in the target domain, we generate six claims with different (_label_, _NERS_) combinations. Table 8 shows the zero- and few-shot generalization results for BART-gen and other baselines: 1) _FEVER-full_: the model is trained on the original FEVER-sent dataset; 2) _FEVER-control_: the model is trained on a random subset of FEVER-sent which has the same amount of data with the generated data; 3) _BART-gen_: the model is trained on the generated data. For the few-shot setting, the model is further fine-tuned with 150 in-domain samples.
For the zero-shot setting results in Table 8, _BART-gen_ consistently improves the generalization performance compared with _FEVER-full_ (+24.9% on average) and _FEVER-control_ (+47.4% on average). The results show that training with generated target data is in general more effective than directly generalizing a model trained on the source data. This is better reflected by comparing _BART-gen_ with _FEVER-control_, in which the data amount is the same. The improvement is especially noticeable for _PubHealth_, probably because it lacks the NEI claims in its original training set. Data augmentation addresses this by generating a sufficiently balanced number of claims for each label.
However, our human evaluation in Appendix E shows that around 30% of generated claims suffer the _label inconsistency_ problem, _i.e._, the BART-gen often generates a fluent claim that does not match our desired label (for example, we want to generate a refutes claim, but the generated claim is actually NEI). Label inconsistency may introduce conflicting information between the pretraining and fine-tuning stages, which we hypothesize is the cause for the lower level of improvement in fine-tuning the model on the generated data, compared with fine-tuning the model on FEVER. Therefore, although data augmentation is a promising direction to improve generalization, it remains a challenging problem regarding how to generate high-quality claims with consistent labels.
## 5 Related Work
To overcome the proliferation of misinformation, a great amount of progress has been made in the area of automated fact verification. For modeling, pretraining-based models Nie et al. (2019); Stammbach and Neumann (2019); Zhao et al. (2020); Soleimani et al. (2020) have been used for better text representation and have achieved promising performance. Graph-based models Zhou et al. (2019); Liu et al. (2020); Zhong et al. (2020) are used to facilitate the reasoning over multiple pieces of evidence. However, most existing models rely on large-scale in-domain training data, which is often unrealistic for every domain that demands fact-checking. In this paper, we aim to address this by working towards a generalizable fact verification system that can adapt to different domains with zero or few samples in the target domain.
For datasets, various fact-checking datasets representing different real-world domains have been proposed, including both naturally occurring Augenstein et al. (2019); Gupta and Srikumar (2021); Saakyan et al. (2021); Lu et al. (2023) and human-crafted Thorne et al. (2018); Sathe et al. (2020); Schuster et al. (2021); Atanasova et al. (2022) fact-checking claims. While these FV datasets focus on different domains, there is still a substantial overlap in the abilities required to verify claims across these datasets. However, little analysis has been done on whether they generalize to one another, and on the extent to which existing datasets can be leveraged for improving performance on new ones. Similar studies have been done in other NLP tasks Talmor and Berant (2019); Hardalov et al. (2021), whereas this question is less investigated in fact verification. In this paper, we bridge this gap by conducting a comprehensive study of generalization and transfer across existing FV datasets, revealing several key factors for better generalization.
## 6 Conclusion and Future Work
In this work, we perform a thorough empirical investigation of zero- and few-shot generalization
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method\(\downarrow\) Test\(\rightarrow\) & \begin{tabular}{c} C-FEVER \\ -para \\ \end{tabular} & \begin{tabular}{c} C-FEVER \\ -sent \\ \end{tabular} & \begin{tabular}{c} SciFact \\ -para \\ \end{tabular} & \begin{tabular}{c} SciFact \\ -sent \\ \end{tabular} & \begin{tabular}{c} Pub \\ Health \\ \end{tabular} \\ \hline \multicolumn{6}{l}{**Zero-shot setting**} \\ FEVER-full & 44.98 & 48.70 & 44.98 & **56.15** & 21.61 \\ FEVER-control & 37.14 & 45.64 & 32.42 & 47.57 & 20.69 \\ BART-gen & **46.51** & **51.67** & **47.80** & 54.63 & **69.82** \\ \hline \multicolumn{6}{l}{**Few-shot setting**} \\ FEVER-full & **55.13** & 51.84 & **66.12** & 76.39 & 40.90 \\ FEVER-control & 37.17 & 48.13 & 62.72 & **77.41** & 47.03 \\ BART-gen & 46.94 & **52.80** & 50.10 & 59.00 & **70.45** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Macro-F1 of three-class fact verification **with data augmentation** (BART-gen) and other baselines. The (top, bottom) shows results for (zero-, few-)shot generalization, respectively.
over 11 fact verification datasets. Moreover, we conduct an exhaustive analysis and highlight the most important factors influencing the generalization performance. We further empirically explore two ways to improve generalization in fact verification. We highlight several practical takeaways:
\(\bullet\) Overall, the FV model generalizes poorly to unseen datasets compared with in-domain evaluation. However, performance is largely improved by fine-tuning on the target data.
\(\bullet\) Artificial claims can also generalize well to natural claims with an increase of dataset size.
\(\bullet\) Models trained on sentence-level evidence generalize better than those trained on document-level evidence.
\(\bullet\) The refutes claims are the most difficult to verify among the three labels.
\(\bullet\) Domain-specific pretraining and data augmentation consistently improve generalization performance, but they also leave unsolved challenges.
In future work, we plan to experiment with more datasets, including non-English ones. We will also explore the generalization of other sub-tasks in fact-checking, _e.g._, claim detection, evidence retrieval.
## Limitations
In our study, we primarily focused on assessing the generalization capabilities of Transformer-based models, such as RoBERTa. However, we did not extend our evaluation to include zero- and few-shot learning performance on large language models (LLMs) like InstructGPT (Ouyang et al., 2022) and GPT-4 (OpenAI, 2023), due to the high experimental costs. Recently, these LLMs have demonstrated impressive few-shot learning capacities across a variety of natural language processing tasks, including few-shot fact-checking (Pan et al., 2023). However, they are API-based and function as black-box models. This restricts our ability to delve deeper into their behavior, given that we cannot access their model weights or fine-tune them directly. In contrast, Transformer-based models are open-sourced and replicable, providing ample opportunity for deeper insights into our study and paving the way for future research.
## Acknowledgements
This research / project is supported by the Ministry of Education, Singapore, under its MOE AcRF TIER 3 Grant (MOE-MOET32022-0001). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.
|
2309.06777 | Quantum Optical Induced-Coherence Tomography by a Hybrid Interferometer | Quantum interferometry based on induced-coherence phenomena has demonstrated
the possibility of undetected-photon measurements. Perturbation in the optical
path of probe photons can be detected by interference signals generated by
quantum mechanically correlated twin photons propagating through a different
path, possibly at a different wavelength. To the best of our knowledge, this
work demonstrates for the first time a hybrid-type induced-coherence
interferometer that incorporates a Mach-Zehnder-type interferometer for visible
photons and a Michelson-type interferometer for infrared photons, based on
double-pass pumped spontaneous parametric down-conversion. This configuration
enables infrared optical measurements via the detection of near-visible photons
and provides methods for characterizing the quality of measurements by
identifying photon pairs of different origins. The results verify that the
induced-coherence interference visibility is approximately the same as the
heralding efficiencies between twin photons along the relevant spatial modes.
Applications to both time-domain and frequency-domain quantum-optical
induced-coherence tomography for three-dimensional test structures are
demonstrated. The results prove the feasibility of practical undetected-photon
sensing and imaging techniques based on the presented structure. | Eun Mi Kim, Sun Kyung Lee, Sang Min Lee, Myeong Soo Kang, Hee Su Park | 2023-09-13T07:54:31Z | http://arxiv.org/abs/2309.06777v1 | # Quantum Optical Induced-Coherence Tomography by a Hybrid Interferometer
###### Abstract
Quantum interferometry based on induced-coherence phenomena has demonstrated the possibility of undetected-photon measurements. Perturbation in the optical path of probe photons can be detected by interference signals generated by quantum mechanically correlated twin photons propagating through a different path, possibly at a different wavelength. To the best of our knowledge, this work demonstrates for the first time a hybrid-type induced-coherence interferometer that incorporates a Mach-Zehnder-type interferometer for visible photons and a Michelson-type interferometer for infrared photons, based on double-pass pumped spontaneous parametric down-conversion. This configuration enables infrared optical measurements via the detection of near-visible photons and provides methods for characterizing the quality of measurements by identifying photon pairs of different origins. The results verify that the induced-coherence interference visibility is approximately the same as the heralding efficiencies between twin photons along the relevant spatial modes. Applications to both time-domain and frequency-domain quantum-optical induced-coherence tomography for three-dimensional test structures are demonstrated. The results prove the feasibility of practical undetected-photon sensing and imaging techniques based on the presented structure.
Keywords: quantum optics, induced coherence, entangled photons, ghost imaging +
Footnote †: journal: Journal of Quantum Optics
## Introduction
Photonic quantum sensing and imaging technologies based on entanglement offer unique possibilities [1-5]. Examples include spatial super-resolution beating the classical limit [3] and photo-detection sensitivity below the standard quantum limit [4]. Moreover, a set of schemes including this work, classified as ghost imaging in a broad sense [2], can separate the probe light in contact with samples and the detection light carrying the measurement information to a detector. In cases where the mere existence of detection photons can infer the correlation between probe-detection photon pairs, the detection of probe photons can even be neglected [5]. Applying this technique to non-degenerate entangled pairs can overcome the optical measurement limitation caused by the intrinsic quantum efficiency of detector materials in specific wavelength bands because the detection wavelength of light can be shifted to fit the available high-quality detectors.
These undetected-photon sensing and imaging techniques rely on the quantum correlation between photon pairs generated by a coherent combination of multiple photon-pair sources. Their archetypal implementation includes probe (_idler_) photons lying on the overlapped path of two sources where the photons can be generated only by the constructive interference of the two generation possibilities. Their twin detection (_signal_) photons are projected onto a coherent superposition state of distinct signal modes from the two sources [6,7]. Therefore, changes in the probe photon path by a target are measurable with interferometry for the signal photons, and the probe photons need not be measured eventually because their existence is guaranteed through
energy conservation. This induced-coherence phenomenon, observed since 1991 [8, 9], has led to the demonstration of optical measurement techniques with ever-increasing complexity, such as optical on-off imaging [5], spectroscopy [10, 11], optical coherence tomography (OCT) [12, 13], microscopy [14, 15], Fourier-transform infrared spectroscopy [16, 17] and holography [18].
A major advantage of applying the induced-coherence technique to OCT is that the non-destructive testing of structures requiring low-energy photons is facilitated by the detection of higher-energy photons that minimize noise. Infrared (IR) optical depth profiling based on time-domain (TD) OCT [12] and wavelength-domain OCT [13, 19] has been demonstrated based on classical types of measurements in the visible wavelength range through induced coherence. These previous quantum optical induced-coherence tomography (QICT) experiments used Michelson-type interferometers, where all the pump, signal, and idler paths passed twice through a single spontaneous parametric down-conversion (SPDC) crystal by the respective back-reflecting mirrors. In contrast, our work constructs a hybrid-type interferometer in which a pump laser and a probe photon propagate through similar Michelson interferometers, while a detection photon path comprises a Mach-Zehnder-type interferometer. We show that this structure, explained theoretically in a review article [20] and realized for the first time to our knowledge, allows us to characterize the heralding efficiencies between the twin photons and hence correctly anticipate the interference visibility. We experimentally verify that the interference visibility, which is critical in maximizing the signal-to-noise ratio (SNR) of QICT, is approximately the average of the heralding efficiencies from signal to idler photons.
The interferometer setup is applied to frequency-domain (FD) QICT in the IR wavelength range of approximately 1.55 \(\upmu\)m, through spectral measurement of visible photons near wavelength of 810 nm. The depth profiles of reference samples made of sapphire, silicon, and glass plates are reconstructed by Fourier transform of the spectral interference fringes with all the optical parts fixed. The test patterns of a metal-coated glass plate under a silicon plate cover that generally blocks the detection wavelength (\(\sim\) 810 nm) are imaged by relative transverse scanning of the probe and sample.
## Principle
We analyze induced-coherence interferometry using a simplified schematic model shown in figure 1. Two SPDC crystals (DC\({}_{1}\) and DC\({}_{2}\)) produce a major part of the signal and idler photons along the spatial modes denoted by \(s_{n}\) and \(i_{n}\) (\(n\)\(=\) 1,2). Modes \(t_{n}\)'s and \(j_{n}\)'s are loss channels containing the photons that are not coupled to the major modes. Although such unpaired or stray photons generally scatter into multiple modes, regarding them as single modes is sufficient for the current analysis. The generated photon pairs are initially in the following state:
\[\left|\psi\right\rangle=\left[C_{1}\left(p_{1}\hat{s}_{1}^{\dagger}\hat{i}_{1}^{\dagger}+q_{1}\hat{s}_{1}^{\dagger}\hat{j}_{1}^{\dagger}+r_{1}\hat{t}_{1}^{\dagger}\hat{i}_{1}^{\dagger}\right)+C_{2}\left(e^{i\phi}p_{2}\hat{s}_{2}^{\dagger}\hat{i}_{2}^{\dagger}+e^{i\phi}q_{2}\hat{s}_{2}^{\dagger}\hat{j}_{2}^{\dagger}+r_{2}\hat{t}_{2}^{\dagger}\hat{i}_{2}^{\dagger}\right)\right]\left|0\right\rangle, \tag{1}\]
where \(C_{1,2}\), \(p_{1,2}\), \(q_{1,2}\), and \(r_{1,2}\) are complex constants, and \(\hat{x}_{n}^{\dagger}\) is the creation operator for mode \(x_{n}\) (\(x=s\), \(i\), \(t\), \(j\); \(n=1,2\)). \(\phi\) is the phase added by the phase shifter to the \(s_{2}\) path in figure 1. The phase accumulated by the idler path between the two crystals can be considered as an offset of \(\phi\) without loss of generality. The heralding efficiency \(\mu_{s\to i,n}\) (\(\mu_{i\to s,n}\)) that a signal (idler) photon heralds an idler (signal) photon in the major modes by DC\({}_{n}\) (\(n=1,2\)) is expressed as:
\[\mu_{s\to i,n}\!=\!\frac{\left|p_{n}\right|^{2}}{\left|p_{n}\right|^{2}+\left| q_{n}\right|^{2}}\,,\,\,\,\mu_{i\to s,n}\!=\!\frac{\left|p_{n}\right|^{2}}{\left|p_{n} \right|^{2}+\left|r_{n}\right|^{2}}\,. \tag{2}\]
Two lossless beam splitters (BS\({}_{2}\) and BS\({}_{3}\)) in the middle of the interferometer arms have variable transmittances \(\eta_{s}\) and \(\eta_{i}\). These BSs are sets of neutral-density filters used in the experiments to verify the validity of the calculations below. Although equation (1) assumes a single spectral/temporal mode for each of the pump, signal, and idler photons, the general conclusions are still valid for photon pairs in frequency-entangled states from cw-laser-pumped SPDC, as shown from early experiments [8]. This is because the interfering signal paths cannot be distinguished by the spectral/temporal measurement of the idler photons when the idler photon paths are overlapped. Unlike other quantum interferometers using multiple photon-pair sources, photon-wise indistinguishability or purity based on a factorable joint spectrum is not required, and moreover, a high level of frequency entanglement is rather desirable for practical sensing applications.
The final state after propagation through all the components is calculated as:
Figure 1: Model of the induced-coherence interferometer. Major correlated modes participating in induced-coherence interference are \(s_{n}\) (signal) and \(i_{n}\) (idler), and modes \(t_{n}\) and \(j_{n}\) are occupied by unpaired photons (\(n=1,2\)). DC: down-conversion crystal, BS: beam splitter (1-\(\eta_{s}\), 1-\(\eta_{i}\): reflectance corresponding to added attenuation in experiments), M: mirror.
\[\left|\psi\right\rangle=\left[C_{1}\left(\begin{array}{l}p_{1}\left(\sqrt{\eta_{s}}\left(\hat{s}_{1}^{\dagger}+\hat{s}_{2}^{\dagger}\right)/\sqrt{2}+\sqrt{1-\eta_{s}}\,\hat{s}_{3}^{\dagger}\right)\left(\sqrt{\eta_{i}}\,\hat{i}_{1}^{\dagger}+\sqrt{1-\eta_{i}}\,\hat{i}_{3}^{\dagger}\right)\\ +\,q_{1}\left(\sqrt{\eta_{s}}\left(\hat{s}_{1}^{\dagger}+\hat{s}_{2}^{\dagger}\right)/\sqrt{2}+\sqrt{1-\eta_{s}}\,\hat{s}_{3}^{\dagger}\right)\hat{j}_{1}^{\dagger}\\ +\,r_{1}\,\hat{t}_{1}^{\dagger}\left(\sqrt{\eta_{i}}\,\hat{i}_{1}^{\dagger}+\sqrt{1-\eta_{i}}\,\hat{i}_{3}^{\dagger}\right)\end{array}\right)+C_{2}\left(\begin{array}{l}p_{2}e^{i\phi}\left(\hat{s}_{1}^{\dagger}-\hat{s}_{2}^{\dagger}\right)\hat{i}_{2}^{\dagger}/\sqrt{2}\\ +\,q_{2}e^{i\phi}\left(\hat{s}_{1}^{\dagger}-\hat{s}_{2}^{\dagger}\right)\hat{j}_{2}^{\dagger}/\sqrt{2}\\ +\,r_{2}\,\hat{t}_{2}^{\dagger}\hat{i}_{2}^{\dagger}\end{array}\right)\right]\left|0\right\rangle \tag{3}\]
Tracing out the idler states from the above state after equating the \(i_{1}\) and \(i_{2}\) modes yields the observed signal-photon state. Therefore, the single count rate at the output detector is proportional to:
\[R_{s}=\frac{1}{2}\left(\left|C_{1}p_{1}\right|^{2}\eta_{s}+\left|C_{2}p_{2}\right|^{2}+\left|C_{1}q_{1}\right|^{2}\eta_{s}+\left|C_{2}q_{2}\right|^{2}\right)+\left|C_{1}C_{2}p_{1}p_{2}\right|\sqrt{\eta_{s}}\sqrt{\eta_{i}}\sin\left(\phi+\phi_{0}\right), \tag{4}\]
where \(\phi_{0}\) is a constant phase offset. The visibility \(\gamma\) of the final induced-coherence interference fringes is
\[\gamma=\frac{2\left|C_{1}C_{2}p_{1}p_{2}\right|\sqrt{\eta_{s}}\sqrt{\eta_{i}}}{\left(\left|p_{1}\right|^{2}+\left|q_{1}\right|^{2}\right)\left|C_{1}\right|^{2}\eta_{s}+\left(\left|p_{2}\right|^{2}+\left|q_{2}\right|^{2}\right)\left|C_{2}\right|^{2}}. \tag{5}\]
When the two SPDC processes are symmetric (\(\left|C_{1}\right|=\left|C_{2}\right|\), \(\left|p_{1}\right|=\left|p_{2}\right|\), \(\left|q_{1}\right|=\left|q_{2}\right|\), \(\left|r_{1}\right|=\left|r_{2}\right|\)), the visibility formula simplifies to \(\gamma=\mu_{s\to i}\cdot 2\sqrt{\eta_{s}\eta_{i}}/(1+\eta_{s})\). This result signifies the major implication of our analysis: for lossless interferometer arms (\(\eta_{s}=\eta_{i}=1\)), the visibility is the same as the signal-to-idler heralding efficiency.
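To make this dependence explicit, the short numerical sketch below (our own illustration, not part of the measurement analysis) evaluates equation (5) and its symmetric-source simplification for a few illustrative parameter values.

```python
# Numerical sketch of equation (5); all parameter values below are illustrative.
import numpy as np

def visibility(C1, C2, p1, p2, q1, q2, eta_s, eta_i):
    """Induced-coherence fringe visibility, equation (5)."""
    num = 2 * abs(C1 * C2 * p1 * p2) * np.sqrt(eta_s) * np.sqrt(eta_i)
    den = (abs(p1) ** 2 + abs(q1) ** 2) * abs(C1) ** 2 * eta_s \
        + (abs(p2) ** 2 + abs(q2) ** 2) * abs(C2) ** 2
    return num / den

mu = 0.6                      # assumed signal-to-idler heralding efficiency
p, q = np.sqrt(mu), np.sqrt(1 - mu)
for eta_s, eta_i in [(1.0, 1.0), (1.0, 0.5), (0.5, 1.0)]:
    gamma = visibility(1, 1, p, p, q, q, eta_s, eta_i)
    print(eta_s, eta_i, round(gamma, 3))
# For eta_s = eta_i = 1 the output reproduces gamma = mu, i.e. the visibility
# equals the signal-to-idler heralding efficiency of the symmetric sources.
```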
For QICT, samples with multiple reflection surfaces are placed in the idler path between the two SPDC crystals shown in figure 1. Each photon has a finite coherence time or bandwidth, and interference appears only when the relevant group delays of signal and idler photons are matched at the detectors. When the pump laser is monochromatic, this group delay matching condition states that the propagation time of the \(i_{1}\)-mode photon from DC\({}_{1}\) to DC\({}_{2}\) (\(\tau_{0}\)) must be the same as the propagation time difference between the \(s_{1}\)-mode photon from DC\({}_{1}\) to BS\({}_{4}\) (\(\tau_{1}\)) and the \(s_{2}\)-mode photon from DC\({}_{2}\) to BS\({}_{4}\) (\(\tau_{2}\)) in figure 1, that is, \(\tau_{0}-(\tau_{1}-\tau_{2})=0\)[9]. When the idler photon path through the sample is fixed and the signal path length is varied, the interference fringes in equation (4) appear only within a delay smaller than the coherence length of the signal photons. Resolving the depth of the layers by scanning the variable delay embodies TD QICT.
The reflection from a sample surface can be modeled as placing \(r_{i}e^{i2\omega_{i}d/c}\) in place of \(\sqrt{\eta_{i}}\) in equation (3), where \(r_{i}\) is the amplitude reflection coefficient, \(\omega_{i}\) is the angular frequency of idler photons, and \(d\) is the optical thickness (geometrical thicknesses multiplied by refractive indices for relevant sections) with respect to a reference plane. With fixing the lengths of the interferometer arms, interference fringes according to the wavelength (\(=2\pi c/\omega_{s}\)) are the measurement results of the FD QICT. Retaining all the contributions of \(s_{1}\), \(s_{2}\), and \(i_{1}\) modes to the phase of the interference and using the relation that the sum of signal frequency and idler frequency is constant, the magnitude of interference fringe can be expressed as:
\[\gamma\sin\left(\omega_{s}\left(\tau_{1}-\tau_{2}\right)+\omega_{i}\tau_{0}+\phi_{0}\right)=\gamma\sin\left(\omega_{s}\left(\tau_{1}-\tau_{2}-\tau_{0}\right)+\phi_{0}^{\prime}\right) \tag{6}\]
where \(\lambda_{s0}\) and \(\Delta\lambda_{s}\) are the center wavelength and the relative wavelength of signal photons, respectively, and \(\phi_{0}\)' is a phase constant. \(n_{gi}\) and \(d_{i}\) are the group index and the thickness of the sample, respectively. \(\Delta(2n_{gi}d_{i})\) is the relative optical path length through the idler path, calculated as twice the thickness multiplied by the group index summed over all the sections, and becomes zero at the perfect time delay matching condition (\(\tau_{0}-\tau_{1}+\tau_{2}=0\)). When the sample has multiple reflection surfaces, the spectrum of signal photons contains multiple frequency components given by equation (6), and the Fourier transform of the spectrum reconstructs the position and magnitude of the reflection peaks.
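As an illustration of this reconstruction step, the following sketch (ours; simplified, with uniform sampling in optical frequency and purely illustrative numbers) simulates a background-subtracted signal-photon spectrum containing two reflection components and recovers their optical depths by a Fourier transform.

```python
# Schematic FD QICT processing: spectrum -> Fourier transform -> depth profile.
# All numbers are illustrative; real data would replace the simulated spectrum.
import numpy as np

c = 3e8                                      # speed of light (m/s)
nu = np.linspace(368e12, 372e12, 2048)       # signal optical frequencies (Hz)
surfaces = [(0.40e-3, 1.0), (0.85e-3, 0.5)]  # (optical depth in m, amplitude)

# Each reflecting surface contributes a sinusoidal fringe whose frequency is
# proportional to its round-trip optical path 2d (cf. equation (6)).
spectrum = sum(a * np.sin(4 * np.pi * nu * d / c) for d, a in surfaces)
spectrum += np.random.normal(scale=0.05, size=nu.size)   # shot-noise-like term

profile = np.abs(np.fft.rfft(spectrum * np.hanning(nu.size)))
delay = np.fft.rfftfreq(nu.size, d=nu[1] - nu[0])         # conjugate variable (s)
depth_axis = delay * c / 2                                # optical depth (m)

peak_bin = np.argmax(profile[1:]) + 1                     # skip the DC bin
print(f"strongest reflection at ~{depth_axis[peak_bin] * 1e3:.2f} mm optical depth")
```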
### Experimental setup
The experimental configuration used in our work is shown in figure 2. A periodically poled lithium niobate (PPLN) crystal (length 5 mm) is pumped by a continuous-wave laser
Figure 2: Schematic of the experimental setup. A combination of a flat mirror and a shutter is used only for characterization of the interferometer. SMF: single-mode fiber, DM: dichroic mirror, PPLN: periodically poled lithium niobate crystal, CM: concave mirror, L: lens, NPBS: non-polarizing beam splitter, SPAD: single-photon avalanche diode, QICT: quantum-optical induced-coherence tomography, FD: frequency-domain, CW: continuous wave.
(wavelength 532 nm) for non-degenerate type-0 SPDC. Photon pairs with signal and idler wavelengths at \(\sim\)810 nm and \(\sim\)1550 nm, respectively, are generated along both the forward and backward directions by a pump beam (1/\(e^{2}\)-diameter 300 \(\upmu\)m) that is focused by a lens (L1, focal length 1000 mm) and refocused by a concave mirror (focal length 200 mm). The forward-propagating signal photons pass through a variable delay line and are superposed with the backward-propagating signal photons at a non-polarizing beam splitter (NPBS), completing a Mach-Zehnder-type interferometer. The forward-propagating idler photons are back-reflected at the sample surface and co-propagate with the backward-propagating idler photons to form a Michelson-type interferometer.
The signal photons exiting the NPBS are finally coupled to a single-mode fiber (SMF; Thorlabs 780HP) and detected by a silicon single-photon avalanche diode (SPAD). Although the measurement of idler photons is unnecessary for induced-coherence interferometry, we add a detector for the idler photons to characterize the setup. The spatial intensity distributions of the idler photons heralded by the signal photons are first measured on the transverse planes (planes 1 and 2 in figure 2). An SMF (Corning SMF-28) that best fits the measured profiles is inserted together with a relevant coupling aspheric lens (focal length 11 mm) and connected to an InGaAs SPAD. This geometry enables us to measure all the heralding efficiencies between the relevant signal and idler photons by comparing the coincidence counts and the single counts of the two SPADs while blocking the relevant signal, idler, and pump paths with optical shutters. This feature differentiates this study from the previous studies. Final measurements, including QICT experiments, record only single counts using the silicon SPAD, selectively incorporating bandpass filtering when necessary.
The collimation lenses (L2 and L3 in figure 2) for signal and idler beams have focal lengths of 300 mm and 250 mm, respectively. The aspheric lens collecting the signal photons into the SMF has a focal length of 6.2 mm, therefore the detected signal photons nominally have a 1/\(e^{2}\) mode field diameter of 240 \(\upmu\)m at the SPDC crystal. To verify the spectral and spatial overlaps of the two photon-pair generation possibilities, we measure the spectrum of signal photons and the spatial profiles of idler photons. Overlapping the spatial modes of light along a shared path inside the SPDC crystal achieves spectral identity determined by the phase-matching condition. Figure 3(a) shows the measured spectra of the forward- and backward-generated signal photons, which are indistinguishable within the experimental uncertainty.
At planes 1 and 2 in figure 2, which were 600 mm apart from each other, the spatial distribution was measured by transverse scanning by a step-index multimode fiber tip (Thorlabs M42L02, core diameter 50 \(\upmu\)m) to collect the idler photons, as shown in figure 3(b). The unit step for the scanning was 60 \(\upmu\)m along both the x- and y-directions. The measured 1/\(e^{2}\) beam diameters were 2.2 mm and 2.9 mm for forward- and backward-pumped idler photons, respectively, and the beam divergences were \(<\) 0.1 mrad. The SMF and lens collecting the idler photons nominally match with an incoming beam diameter of 2.5 mm and project a 210-\(\upmu\)m diameter spot on the SPDC crystal. Both the forward- and backward-pumped idler photons are aligned individually to maximize the coupling efficiencies to the SMF. Note again that this SMF coupling is independent of the QICT experiments and is used only to verify the relations regarding interference visibility.
The heralding efficiencies among photon pairs are measured with signal and idler photons coupled to the respective SMF cables. After compensating for the detection efficiencies of the SPADs and the back-reflection losses at the fiber end faces, the signal-to-idler (idler-to-signal) heralding efficiency \(\mu_{s\to i}\) (\(\mu_{i\to s}\)) was 63% (43%) for the forward-pumped SPDC. The backward-pumped SPDC showed \(\mu_{s\to i}\) (\(\mu_{i\to s}\)) of 60% (49%). Departure from the ideal 100% efficiency can be attributed to imperfect spatial mode matching, possibly by phase nonuniformity that is not resolvable in the intensity profiles in figure 3(b), and transmission losses of other optical components. Although there is no evidence that the SMF collects the main common idler mode (\(i_{1}\) and \(i_{2}\) in figure 1), we expect that the spatial mode defined by the SMF is a good approximation because the mode field diameter (2.1 mm) is close to that of heralded idler photons (2.2 mm and 2.9 mm), and the heralding efficiencies for forward and backward pumped photon pairs are close to each other.
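For clarity, the bookkeeping behind these values can be summarized by the simple relation sketched below; the count rates and detector efficiencies in the example are placeholders rather than measured numbers.

```python
# Heralding efficiency from singles and coincidence counts, corrected for the
# partner detector efficiency. All numbers are placeholders, not measured data.
def heralding_efficiency(coincidences, heralding_singles, partner_detector_eff):
    """Probability that a detected photon heralds its twin in the major mode."""
    return coincidences / (heralding_singles * partner_detector_eff)

coinc = 1.2e4            # coincidence counts per second (placeholder)
singles_signal = 8.0e4   # Si-SPAD singles per second (placeholder)
singles_idler = 5.5e4    # InGaAs-SPAD singles per second (placeholder)
eff_si_spad = 0.6        # detector efficiency near 810 nm (placeholder)
eff_ingaas_spad = 0.2    # detector efficiency near 1550 nm (placeholder)

mu_s_to_i = heralding_efficiency(coinc, singles_signal, eff_ingaas_spad)
mu_i_to_s = heralding_efficiency(coinc, singles_idler, eff_si_spad)
print(mu_s_to_i, mu_i_to_s)
```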
### Induced-coherence interference visibility
To investigate the interference visibility, we first place a mirror at the sample position, as shown in figure 2. An interference fringe burst appears according to the position of the signal delay line, as shown in figure 4(a). Figure 4(b) shows the magnified fringes observed by fine-tuning the idler reference mirror position with a piezoelectric actuator with a
Figure 3: Spectral and spatial characteristics of photons. (a) Spectra of forward (red) and backward (blue) pumped signal photons. (b) Spatial intensity distributions of heralded idler photons measured at planes 1 and 2 in figure 2. The blue lines in the cross-sections are Gaussian fit (with constant background) to the measured data (red dots).
signal delay fixed at the center of the interference fringe profile. The measured interference visibility was 64%.
The spectral-domain interference fringes in equation (6), with an offset delay of 1 mm in the signal path, are shown in figure 4(c). Here, the average counts in the absence of interference were measured blocking one idler path and subtracted from the measurement data. This subtraction removes an unwanted zero-delay (dc-component) artifact peak in the following results. Figure 4(d) is the Fourier transform of the spectrum in figure 4(c), where the optical path length \(\Delta d_{s}\) is calculated by equating the phase factor in equation (6) to \(2\pi\Delta d_{s}\Delta\lambda_{s}/(\lambda_{s0}\lambda_{s})\). The calculated value clearly follows the actual position of the signal delay, as shown in figure 4(e).
We note that the measured visibility (64%) is close to the heralding efficiencies \(\mu_{s\to i}\) (63% and 60%) and verifies the relation in equation (5). We further confirm the relation experimentally by placing neutral-density filters in one of the signal or idler interferometer arms (adjacent to the signal delay line or the sample). Figures 5(a) and 5(b) show the visibility according to the transmission, and confirm the theoretical calculation in equation (5). The linear dependence of the visibility in figure 5(a) can be interpreted as follows: among the detected signal photons, the unpaired signal photons, not heralding idler photons into the major spatial mode, constitute a constant background noise. The ratio of the paired signal photons to the total signal photons is approximately the same as the visibility. Figure 5(b) shows the typical behavior of conventional two-beam interferometers for the variable transmittance of one arm.
## Applications to tomography and imaging
We describe the QICT and imaging configuration. The basic characteristics of the current interferometer setup for optical depth profiling are shown in figure 6. The axial resolution in figure 6(a) is the full width at half-maximum of the peaks in figure 4(d) and inversely proportional to the spectral width of photons. For FD QICT, peaks in the positive and negative optical delays are overlapped and not distinguished. This aliasing effect narrows the peaks with near-zero optical delay, as shown in figure 6(a). During measurements of real samples, we avoid the aliasing ambiguity by locating a reference position such that all the reflection surfaces lie at positive or negative delay positions. The average half-maximum resolution was 0.194 mm, which agrees with the theoretical estimation (0.198 mm) from the spectrum in figure 3(a) [21]. The range of measurable optical depth was determined by the resolution (0.07 nm) of spectral measurements. The sensitivity measured by the amplitude of the spectral peak in figure 4(d) decreases according to the
Figure 4: Quantum optical induced-coherence interference fringes in the time and frequency domains. (a) Interference fringes according to the optical delay for signal photons without (red) and with (blue) blocking the idler path between the two crystals. (b) Fine-scanning results using piezoelectric control of the mirror position at the idler path. (c) Interference fringe in the spectral domain with the optical delay for signal photons fixed at 1 mm that is greater than the coherence length. (d) Optical depth profile derived from the Fourier-transform of (c). (The depth resolution defined as the overall profile width in (a) corresponds to the width of amplitude profile, hence an additional factor of 1/\(\sqrt{2}\).) (e) Measured depth (green dots) of the variable delay with respect to the zero-delay position determined in (d). The lines plot the absolute values.
Figure 5: Interference visibility according to the transmission loss of (a) one idler arm and (b) one signal arm shown in figure 1. The red dots are experimental data, and the blue lines are theoretical predictions.
Figure 6: Tomography characteristics of the induced-coherence interferometer. (a) Depth resolution according to the optical delay. Small reduction at the zero delay is caused by the aliasing effect of Fourier transform. (b) Relative sensitivity according to the optical delay. (c) Signal-to-noise ratio with the integration time. Red dots: experiments, blue lines: theory.
optical delay, as shown in figure 6(b) [22]. The SNR is proportional to the integration time for unit measurement, as shown in figure 6(c), which indicates that the noise is mainly governed by the Poissonian photon number fluctuation (shot noise) of photon counting.
Two types of reference samples were applied for depth profiling. Sample 1 has three reflection planes from sapphire-air-silicon layers within the detection range, as shown in figure 7(a). The TD and FD QICT results are shown in figures 7(b) and 7(c), respectively. Considering the group index of sapphire (1.77) at the idler wavelength (1550 nm), the thicknesses of the air and sapphire layers corresponding to the results in figure 7(c) are 0.431 mm and 0.442 mm, respectively, and are within 1% of the values measured from the microscope image in figure 7(a). Sample 2 is a combination of a double-side-polished silicon plate and a sapphire plate and has four major reflection components including boundaries with air and double reflection within the highly reflective silicon, as shown in figure 7(d). The TD and FD QICT results in figures 7(e) and 7(f) clearly resolve these four peaks. The group index of silicon is 3.61, and the thicknesses of silicon and sapphire layers estimated in figure 7(f) were 0.251 mm and 0.489 mm, respectively, which agreed within 8% with the microscope image. The additional peaks appearing in the FD QICT can be attributed to unidentified multiple reflections or interference among them. For example, the peak at near-zero depth corresponds to a path composed of double roundtrips in both the silicon and sapphire plates. The peak between cases c and case d (near an optical depth of 1 mm) can be attributed to double roundtrips inside the silicon plate plus triple roundtrips inside the sapphire plate. Other peaks can also appear owing to interference between different reflection components within the sample path when the sample surfaces are highly reflective. Such peaks can be removed from the detection range by placing the reference plane on the other side of the sample. (The current reference position was chosen to better resolve the weakly reflecting cases.)
To demonstrate the imaging capability for targets that are normally invisible at visible wavelengths, we prepared a resolution test target (1951 USAF, Thorlabs R3L3S1N) patterned on a chrome-coated soda-lime glass substrate covered with a polished silicon wafer, as shown in figure 8(a). A lens (focal length of 35 mm) focuses the idler photons onto the sample and re-collimates the reflected photons. The half-maximum spot diameter was estimated to be 17 \(\upmu\)m (20 \(\upmu\)m) for the x (y) direction considering the measured idler mode size (figure 3(b)). Measuring the edge response at the target yields the line spread function, from which the transverse resolution was determined as 11.2 \(\upmu\)m (13.2 \(\upmu\)m) along the x (y) direction, as shown in figure 8(b). The resolution is smaller than the spot size by approximately \(\sqrt{2}\) because when a sample reflects a fraction \(f\) of the incident transverse beam profile, only \(f\) of the beam returns to the collecting mode because of reduced mode overlap. The slight asymmetry can be attributed to the residual asymmetry of our setup along the horizontal and vertical directions of the SPDC configuration.
Figure 8: Transverse imaging with FD QICT. (a) Optical micrograph of a 1951 USAF resolution target with negative patterns on a chrome-coated glass. The target is covered with a silicon plate that is normally opaque for visible light, as shown in the inset photograph. (b) Transverse resolution for FD QICT. Edge response functions are measured (red dots) and fitted to error functions (blue lines). The line spread functions as their derivatives lead to transverse resolution along the x- (y-) direction of 11.2 (13.2) \(\upmu\)m. (c) Optical depth profile of the reflected idler photons. Peaks 1 and 2 show reflections at the outer surface of the silicon plate and at the chrome coating, respectively. Peak 3 arises from multiple reflections between the silicon plate and the chrome coating. (d) Transverse image measured through the silicon cover plate.
Figure 7: Depth profiling of test samples. (a) Structure of sample 1. (b) TD QICT and (c) FD QICT measurements for sample 1. (d) Structure of sample 2. (e) TD QICT and (f) FD QICT measurements for sample 2.
FD QICT obtains images with relative transverse scanning of the lens and the sample. Figure 8(c) compares the axial optical-depth profiles for the cases where the focused idler photons impinge on either the uncoated patterns or the coated background. Peaks 1 and 2 correspond to the major reflections at the silicon plate and the chrome coating, respectively, and peak 3 is caused by multiple reflections at the two surfaces. The uncoated area shows a smaller reflection at the silicon-glass interface at the peak 2 position. Extracting the signal intensity distribution at the peak 2 position generates the images shown in figure 8(d).
## Conclusion and outlook
In summary, this work has verified the feasibility of a hybrid-type induced-coherence interferometer for applications to IR optical tomography based on the detection of near-visible photons. The demonstrated configuration provides versatile independent control over the signal and idler paths, thereby matching any idler-path geometry designed for samples by adjusting the signal delay length. We believe that this feature opens up possibilities for applications to other sensing technologies, such as spectroscopic gas sensing, retaining the advantage of undetected-photon measurements. The theoretical analysis and the experimental verification have emphasized the importance of heralding efficiencies between photon pairs in terms of the induced-coherence interference visibility. Both the TD and FD QICT experiments have successfully reconstructed the three-dimensional structures of test samples.
Regarding further improvement of the performances of optical tomography, first, the depth-profiling resolution can be sharpened by increasing the wavelength bandwidth satisfying the phase matching condition. A shorter-length SPDC crystal and/or tighter focusing of a pump beam are straightforward directions because they reduce the magnitude of phase mismatch for a given wavelength and increase the chance of phase matching by widening the range of longitudinal wavenumber. Photon-pair generation efficiencies and heralding efficiencies must be considered for trade-offs. An aperiodically poled crystal has also been used to realize ultra-broadband SPDC in a highly non-degenerate configuration [23]. Secondly, the data acquisition time and the overall range of depth profiling depend on the measurement capabilities. Single-photon-level spectrometers based on an SPAD array [24] instead of a single SPAD with a scanning bandpass filter can reduce the measurement time. Narrowing the bandwidth of unit spectral measurement increases the depth range of tomography. As the induced-coherence technique is generally compatible with various existing optical sensing technologies, incorporating advanced OCT techniques for wavelength-swept light sources and signal processing routines is worth considering to further enhance the practical advantage of QICT.
## Acknowledgments
This research was supported by the National Research Council of Science & Technology(NST) grant (CAP22051-000), the KRISS project (GP2023-0013-07), and the education and training program of the Quantum Information Research Support Center funded through the NRF, all by the Korean government (MSIT). We thank Dongha Kim of KAIST for the help in preparing the measurement samples.
|
2309.16564 | Augment to Interpret: Unsupervised and Inherently Interpretable Graph
Embeddings | Unsupervised learning allows us to leverage unlabelled data, which has become
abundantly available, and to create embeddings that are usable on a variety of
downstream tasks. However, the typical lack of interpretability of unsupervised
representation learning has become a limiting factor with regard to recent
transparent-AI regulations. In this paper, we study graph representation
learning and we show that data augmentation that preserves semantics can be
learned and used to produce interpretations. Our framework, which we named
INGENIOUS, creates inherently interpretable embeddings and eliminates the need
for costly additional post-hoc analysis. We also introduce additional metrics
addressing the lack of formalism and metrics in the understudied area of
unsupervised-representation learning interpretability. Our results are
supported by an experimental study applied to both graph-level and node-level
tasks and show that interpretable embeddings provide state-of-the-art
performance on subsequent downstream tasks. | Gregory Scafarto, Madalina Ciortan, Simon Tihon, Quentin Ferre | 2023-09-28T16:21:40Z | http://arxiv.org/abs/2309.16564v1 | # Augment to Interpret: Unsupervised and Inherently Interpretable Graph Embeddings
###### Abstract
Unsupervised learning allows us to leverage unlabelled data, which has become abundantly available, and to create embeddings that are usable on a variety of downstream tasks. However, the typical lack of interpretability of unsupervised representation learning has become a limiting factor with regard to recent transparent-AI regulations. In this paper, we study graph representation learning and we show that data augmentation that preserves semantics can be learned and used to produce interpretations. Our framework, which we named INGENIOUS, creates inherently interpretable embeddings and eliminates the need for costly additional post-hoc analysis. We also introduce additional metrics addressing the lack of formalism and metrics in the understudied area of unsupervised-representation-learning interpretability. Our results are supported by an experimental study applied to both graph-level and node-level tasks and show that interpretable embeddings provide state-of-the-art performance on subsequent downstream tasks.1
Footnote 1: Our code is available at [https://github.com/euranova/Augment_to_Interpret](https://github.com/euranova/Augment_to_Interpret).
interpretability; unsupervised learning; graph embedding; contrastive loss
## 1 Introduction
Graphs are powerful data representations that model objects as well as the relationships between them. Much like images, they are often complex to process. A popular solution is to project such data into a latent space, thus creating vector embeddings, which can later be used by any classical machine learning training and inference pipeline. These embeddings could be learned with supervision, but labelled data is often difficult to obtain. This has resulted in an increased interest in the learning of semantic-preserving embeddings, which can later be used for a variety of downstream tasks. There are numerous ways to learn graph representations (Hamilton et al., 2017) without label supervision, but using graph neural networks (GNN) has become the go-to approach. In particular, graph contrastive learning (GCL) aims at creating graph embeddings without supervision by leveraging augmented views of available samples to extract meaningful information. This approach provided state-of-the-art results when applied to diverse domains ranging from chemistry (Duvenaud et al., 2015) to social sciences (Fan et al., 2019).
Chen et al. (2020) demonstrated that the use of data augmentation in self-supervised contrastive learning can lead to the creation of semantic-preserving embeddings. Similar re
sults have been achieved on graph data by leveraging carefully crafted augmentation (You et al., 2020). Despite promising results, the underlying GNNs act as black-boxes. They are difficult to interpret, debug and trust, which raises the question: _What input features do GNNs focus on when generating representations?_ Using edge and node dropping as graph augmentation can lead to the creation of sub-graphs that retain discriminative semantic information while reaching better results (You et al., 2020) than other augmentation schemes. Interestingly, this objective correlates with the aim of GNN interpretability, which is to identify highly influential sparse subsets of nodes and edges with the largest impact on the model's behaviour (Luo et al., 2020). This observation leads us to the hypothesis that a well-learned augmentation can serve to produce interpretations of the information embedded in GNN representations. Our main contributions follow.
* We propose INGENIOUS (INherently INterpretable Graph and Node Unsupervised embeddings), a framework that generalises over existing approaches based on learned augmentation (Suresh et al., 2021; Gao et al., 2022; Miao et al., 2022) and we introduce new losses and a module to produce useful and interpretable embeddings.
* We show that embeddings produced by INGENIOUS are relevant to a variety of downstream tasks, matching or exceeding results obtained by state-of-the-art approaches.
* We show that carefully designed learned graph augmentation can be used to produce interpretations. We assess their quality along different axes introduced by Nauta et al. (2023): correctness, completeness, continuity, compactness and coherence. To this end, we introduce new metrics.
* We conduct a hyperparameter study, highlighting the importance of sparsity when using augmented views as interpretations.
To our knowledge, this is the first attempt at assessing the link between learned augmentation and interpretability. We characterise it on graph-level and node-level tasks. The structure of the paper is as follows: We first position our work in Section 2 and present our framework in detail in Section 3. In Section 4, we assess the utility and interpretability of the framework by introducing necessary metrics and we perform an in-depth study of the properties of the framework. Finally, we discuss limitations and conclude.
## 2 Related Work
Previous works on other modalities have shown the superiority of augmentation-based contrastive learning (Chen et al., 2020). However, adapting these works so that they work reliably on graph data raised new challenges, such as defining the right augmentation techniques amongst existing augmentation techniques (Ding et al., 2022). Early methods, including GraphCL (You et al., 2020) and InfoCL (Wang et al., 2022), focus on random perturbations produced by dropping edges or nodes. Other methods like graphMVP (Liu et al., 2022) and MICRO-Graph (Subramonian, 2021) use domain knowledge to perform better than random augmentation. More recent approaches such as AD-GCL (Suresh et al., 2021) and MEGA (Gao et al., 2022) use a multilayer perceptron (MLP) as an augmentation
learner. They create augmented views by learning to drop low-relevance edges. Similarly, RGCL (Li et al., 2022) adds a repulsive dynamic to the augmented views.
In parallel, considerable work has been done to interpret graph neural networks post training. A backpropagation of gradient-like signals (Simonyan et al., 2014), perturbations of the input graph (Ying et al., 2019; Luo et al., 2020), and the training of a surrogate model (Vu and Thai, 2020) are examples of such techniques. These post-hoc techniques have been shown to be biased and unrepresentative of the true interpretations (Rudin, 2019), leading to an increased interest in inherently interpretable models (Miao et al., 2022; Feng et al., 2022). Furthermore, the aforementioned methods focus on interpreting predictions of models learned with supervision. Zheng et al. (2022) have proposed USIB, the first post-hoc method to interpret unsupervised graph representations, but USIB faces the same faithfulness limitation as other post-hoc techniques.
Given existing limitations and the power of augmentation-based learning, we investigate how augmentation learning can faithfully and intrinsically provide interpretability.
## 3 Materials and Methods
In this section, we present INGENIOUS. We begin with a general overview and then provide a description of the augmentation and learning processes.
### Framework
We propose an inherently interpretable framework that produces graph representations in an unsupervised manner by leveraging learned graph augmentation, as shown in Figure 1. Learned edge-dropping augmentation has been used both in contrastive learning by AD-GCL (Suresh et al., 2021) and MEGA (Gao et al., 2022), two recent state-of-the-art approaches, and in interpretability by GSAT (Miao et al., 2022) and PGExplainer (Luo et al., 2020). Inspired by said contrastive approaches, INGENIOUS is trained with a contrastive loss, following a dual-branch approach as recommended by Chen et al. (2020). INGENIOUS is based on two modules: an embedding module that produces node embeddings (\(\phi_{u}\)) and graph embeddings (\(\phi\)), and an edge-selection module (\(\theta\)) that produces semantic-preserving sub-graphs by stochastically dropping uninformative edges. In this study, the embedding module is implemented as a graph neural network (GNN) encoder and the edge-selection module as an MLP, as described in Section 4.1.
The embedding module first generates an embedding \(\phi_{u}\) for each node \(u\) of the input graph. Using these node embeddings, the edge-selection module \(\theta\) produces a keeping probability \(p_{uv}\in[0,1]\) for each edge \((u,v)\) of the graph. Then, the Gumbel-max reparameterisation trick is used to partially remove edges according to their keeping probability \(p_{uv}\) in a differentiable manner. Each edge of an augmented view therefore has a weight \(w_{uv}\in[0,1]\).
This edge-selection process is applied twice to obtain two positively augmented views. A negatively augmented view is also produced by giving each edge the flipped weight \(1-w_{uv}\) from the first positively augmented view. The stochastic aspect of this data augmentation increases the robustness of the model, as it shows more diverse data to the model. Additionally, this aspect is needed to apply the dual-branch contrastive loss, since this loss necessitates at least two distinct augmented views for each graph.
Each augmented view of the input graph is then embedded using once again2 the embedding module \(\phi\). Finally, a simclr loss is back-propagated to pull the embeddings of corresponding positively augmented views together while keeping the embeddings of other graphs' positively augmented views distant in the embedding space. The negatively augmented view is also used as a part of the final loss, as described below.
Footnote 2: When a module includes batch normalisation layers, it is tricky to use this same module for distinct steps in a model, because the input distribution of the batch normalisation layers may be different for each step the module is used for. This aspect was not considered in previous approaches and may have perturbed their results and conclusions. More details are given in supplementary materials.
At inference time, the Gumbel-max sampling is no longer used. Keeping probabilities \(p_{uv}\) are therefore not used as probabilities anymore but are used directly as weights on edges of the original graph, i.e. \(w_{uv}=p_{uv}\), in order for the augmentation to be deterministic. The full-graph embedding is defined as the embedding of the augmented view, to be used in any downstream task. This augmented view serves as the interpretation of the embedding.
#### 3.1.1 Embedding Module and Augmented-Graph Generation
In this work, we consider attributed graphs \(G=(V,E)\) where \(V\) is the set of nodes and \(E\subset\{(u,v)|u\in V,v\in V\}\) is the set of edges of the graph. By giving an arbitrary order to \(V\), each node can be represented by a single integer \(u\in\mathbb{N}\). \(G\) can then be described by its node-feature matrix \(X\in\mathbb{R}^{|V|\times d}\) and its adjacency matrix \(A\in\mathbb{R}^{|V|\times|V|}\), with \(|V|\) the total number of nodes in the graph, \(d\) the number of node features, \(X_{ui}\) the value of the \(i^{th}\) feature of node \(u\), and \(A_{uv}=1\) if \((u,v)\in E\) else \(0\). Node embeddings can be computed by the embedding module as \(z_{u}=\phi_{u}\left(A,X\right)\). Graph embeddings can be computed as \(z=\phi\left(A,X\right)=Pooling(\{z_{u}\ \forall\ u\in V\})\), with \(Pooling\) an aggregation function on sets of vectors, such as the mean or the sum.
Augmented views of a graph \(G\) can be produced in two different ways given the embedding \(z_{u}\) of each of its nodes \(u\). The first way, which was described briefly above, is this: The edge-selection module \(\theta\) produces a keeping probability \(p_{uv}=\theta([z_{u},z_{v}])\) for each edge \((u,v)\), with \([\cdot,\cdot]\) the concatenation operator. Then, edges are sampled following the Gumbel-max reparameterization (Gumbel, 1954), giving each edge its weight
\[w_{uv}=Sigmoid\left((Logit(p_{uv})+\epsilon_{uv})/\tau\right) \tag{1}\]
Figure 1: Schema of the INGENIOUS framework, as described in the text.
with \(Logit\) the inverse function of the \(Sigmoid\) function, \(\epsilon_{uv}=\log(\alpha_{uv})-\log(1-\alpha_{uv})\) a random noise defined using \(\alpha_{uv}\sim Uniform(0,1)\), and \(\tau\in\mathbb{R}^{+}\) a temperature hyperparameter we set to 1. The higher the value of \(\tau\), the more uniform the weight distribution.
The second way is to produce a keeping probability \(p_{u}=\theta(z_{u})\) for each node, sample nodes by giving them a weight \(w_{u}=Gumbel(p_{u})\) with \(Gumbel\) the Gumbel-max trick described above, and then lift the weights from nodes to edges following \(w_{uv}=w_{u}\cdot w_{v}\).
Although both could be used for both graph and node embeddings, in our approach we use the first way for graph embeddings and the second way for node embeddings, since it works best empirically. These edge weights are then used to weigh messages during the message-passing operation of the GNN encoder. At train time, independent samplings are used to generate two distinct positively augmented views. The negatively augmented view is obtained by flipping the first positively augmented view, that is, each edge of the negatively augmented view has a weight \(w_{uv}=1-w_{uv}^{+}\), with \(w_{uv}^{+}\) the weight of the edge \((u,v)\) in the first positively augmented view. Finally, the two positively augmented views and the negatively augmented view are embedded by the embedding module, which produces embeddings \(z_{1}^{+}\), \(z_{2}^{+}\) and \(z^{-}\). At inference time, the deterministic process described previously is used.
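To make the sampling step concrete, the snippet below sketches Eq. (1) in PyTorch. It is our own illustration rather than the authors' code; the function name, tensor shapes and the toy keeping probabilities are assumptions.

```python
import torch

def sample_edge_weights(keep_prob: torch.Tensor, tau: float = 1.0, training: bool = True) -> torch.Tensor:
    """Gumbel-max-style relaxation of edge dropping (Eq. 1).

    keep_prob: keeping probabilities p_uv in (0, 1), one entry per edge.
    Returns one weight w_uv in [0, 1] per edge; at inference the probability is used directly.
    """
    if not training:
        return keep_prob                                    # deterministic weights at inference
    p = keep_prob.clamp(1e-6, 1 - 1e-6)
    alpha = torch.rand_like(p).clamp(1e-6, 1 - 1e-6)        # alpha_uv ~ Uniform(0, 1)
    eps = torch.log(alpha) - torch.log(1 - alpha)           # epsilon_uv
    logits = torch.log(p) - torch.log(1 - p)                # Logit(p_uv)
    return torch.sigmoid((logits + eps) / tau)

# Two independent samplings give the two positive views; the negative view flips the first one.
p = torch.sigmoid(torch.randn(10))                          # toy keeping probabilities
w_pos1, w_pos2 = sample_edge_weights(p), sample_edge_weights(p)
w_neg = 1.0 - w_pos1
```

The same helper also covers the node-level variant once the node weights are lifted to edges via \(w_{uv}=w_{u}\cdot w_{v}\).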
#### 3.1.2 Losses
In this section, we define the INGENIOUS loss. It is composed of four losses with diverse objectives.
The first loss is defined as the dual-branch approach of the simclr loss (Chen et al., 2020), which leverages two augmented views rather than an anchor and an augmented view, as proposed by Zhu et al. (2021). This is the main loss of the framework as it structures the embedding space. Specifically, given augmented-view-embedding triplets coming from \(N\) distinct graphs and represented as \(\{Z_{1}^{+},Z_{2}^{+},Z^{-}\}\subset\mathbb{R}^{N\times D}\), with \(D\) the embedding dimension, we first normalise each embedding so that it has an L2-norm of 1. Then, the loss is defined as
\[\mathcal{L}_{simclr}(Z_{1}^{+},Z_{2}^{+})=-\frac{1}{2N}\sum_{i=1}^{N}\Big{(} \log(l_{i}(Z_{1}^{+},Z_{2}^{+}))+\log(l_{i}(Z_{2}^{+},Z_{1}^{+}))\Big{)} \tag{2}\]
\[l_{i}(Z_{1}^{+},Z_{2}^{+})=\frac{\exp(\mathrm{sim}(Z_{1i}^{+},Z_{2i}^{+})/\beta)}{\sum _{k=1}^{N}\Big{(}\exp(\mathrm{sim}(Z_{1i}^{+},Z_{2k}^{+})/\beta)+\mathbb{I}_{[k\neq i] }\exp(\mathrm{sim}(Z_{1i}^{+},Z_{1k}^{+})/\beta)\Big{)}} \tag{3}\]
with \(\mathrm{sim}(\cdot,\cdot)\) a similarity operation such as the dot product, \(\mathbb{I}_{[\text{condition}]}\) an indicator function with a value of 1 when the condition is met, else 0, and \(\beta\) a hyperparameter we set to 0.07 as in Chen et al. (2020).
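For concreteness, a minimal PyTorch sketch of this dual-branch loss is given below; it is our own re-implementation of Eqs. (2)–(3) under the assumption of dot-product similarity on L2-normalised embeddings, not the authors' code.

```python
import torch
import torch.nn.functional as F

def simclr_loss(z1: torch.Tensor, z2: torch.Tensor, beta: float = 0.07) -> torch.Tensor:
    """Dual-branch contrastive loss (Eqs. 2-3) for two (N, D) batches of positive-view embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)

    def one_direction(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        pos = torch.exp((a * b).sum(dim=1) / beta)          # exp(sim(a_i, b_i) / beta)
        cross = torch.exp(a @ b.t() / beta).sum(dim=1)      # sum_k exp(sim(a_i, b_k) / beta)
        intra = torch.exp(a @ a.t() / beta)
        intra = intra.sum(dim=1) - intra.diagonal()         # sum_{k != i} exp(sim(a_i, a_k) / beta)
        return -torch.log(pos / (cross + intra))

    return 0.5 * (one_direction(z1, z2) + one_direction(z2, z1)).mean()

loss = simclr_loss(torch.randn(8, 16), torch.randn(8, 16))  # toy usage
```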
The second loss, which we call the \(negative\) loss, complements the first loss in structuring the latent space. It leverages the negatively augmented views of the normalised triplets:
\[\mathcal{L}_{-}(Z_{1}^{+},Z^{-})=\sum_{i=1}^{N}\mathcal{L}_{simclr}(Z_{i}^{*},Z_{i}^{*}) \tag{4}\]
with \(Z_{i}^{*}=\{Z_{1i}^{+},Z_{i}^{-}\}\in\mathbb{R}^{2\times D}\). The motivation behind this idea stems from the observation that the simclr loss has an attracting effect between corresponding augmented-view embeddings and a repelling effect between everything else. If corresponding augmented-view
embeddings are always an embedding paired with itself, there is no attracting effect anymore, as something cannot attract itself. Thus, only the repelling effect of the simclr loss remains, which here acts between the positively-augmented-view embedding and the negatively-augmented-view embedding.
The third loss is the information loss from GSAT. It is used as regularisation. This loss minimises the mutual information between the positively augmented view and its original graph anchor following the graph information bottleneck (GIB) principle (Miao et al., 2022). It favours sparser augmented views as it prevents the model from collapsing and selecting all edges as important. The information loss is defined as:
\[\mathcal{L}_{info}(G,P)=\frac{1}{|E|}\sum_{(u,v)\in E}p_{uv}\log\frac{p_{uv}}{r }+(1-p_{uv})\log\frac{1-p_{uv}}{1-r} \tag{5}\]
with \(P=\{p_{uv}\forall(u,v)\in E\}\) the keeping probabilities of all edges, \(G=(V,E)\) a batch of graphs as one big graph, \(|E|\) the total number of edges and \(r\in[0,1]\) a hyperparameter that represents the prior of the probability to keep any given edge. Similarly to GSAT, the hyperparameter \(r\) is initially set to 0.9 and gradually decays to the tuned value of 0.7. As proposed in GSAT, the weights \(w_{uv}\) are used in practice instead of the probabilities \(p_{uv}\).
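A possible implementation of this regulariser is sketched below; it is our own helper, assuming the sampled edge weights are passed in place of the probabilities, as stated above.

```python
import torch

def info_loss(edge_weights: torch.Tensor, r: float = 0.7, eps: float = 1e-8) -> torch.Tensor:
    """Information loss of Eq. (5): per-edge KL divergence towards a Bernoulli prior with parameter r."""
    p = edge_weights.clamp(eps, 1 - eps)
    kl = p * torch.log(p / r) + (1 - p) * torch.log((1 - p) / (1 - r))
    return kl.mean()                                        # average over all edges in the batch
```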
The fourth and last loss is the watchman loss. Wang et al. (2022) have shown that edge-dropping augmentation can degrade the quality of the latent space. Additionally, we have observed that learned-augmentation-based methods sometimes have unstable training. Preliminary experiments supporting this claim are given in supplementary materials. The watchman loss mitigates these issues by forcing the embeddings to contain graph-level information. In detail, a watchman module \(\psi\) uses augmented-view embeddings \(Z_{1}^{+}\) to predict \(\Lambda_{k}(G_{i})\forall i\), the top \(k\) eigenvalues of the Laplacian matrix of each graph \(G_{i}\) composing the batch \(G\). The loss is defined as
\[\mathcal{L}_{\text{Wm}}(G,Z_{1}^{+})=\frac{1}{N}\sum_{i=1}^{N}MSE(\psi(Z_{1i}^ {+}),\Lambda_{k}(G_{i})) \tag{6}\]
with \(MSE(\cdot,\cdot)\) the squared euclidean distance between two vectors. This loss is meaningful for graph-embedding tasks only, as node embeddings have no reason to contain information about the full graph.
Finally, the INGENIOUS loss is defined on a batch \(G\) of \(N\) graphs as:
\[\mathcal{L}=\mathcal{L}_{simclr}(Z_{1}^{+},Z_{2}^{+})+\mathcal{L}_{-}(Z_{1}^ {+},Z^{-})+\mathcal{L}_{info}(G,W)+\lambda_{\text{Wm}}\cdot\mathcal{L}_{ \text{Wm}}(G,Z_{1}^{+}) \tag{7}\]
with \(W=\{w_{uv}\forall(u,v)\in E\}\) the edge weights of the first positively augmented view and \(\lambda_{\text{Wm}}\) a hyperparameter we set to 0 for node-embedding tasks and to 0.3 for graph-embedding tasks, based on preliminary empirical results.
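Putting the pieces together, the total objective of Eq. (7) could be assembled as sketched below. The helper names refer to the sketches above; the eigenvalue targets \(\Lambda_{k}(G_{i})\) and the watchman predictions are assumed to be precomputed tensors of shape \((N,k)\).

```python
import torch

def ingenious_loss(z1_pos, z2_pos, z_neg, edge_weights, eig_pred, eig_true,
                   r: float = 0.7, lambda_wm: float = 0.3) -> torch.Tensor:
    """Total training objective of Eq. (7), composed of the four terms described above."""
    l_main = simclr_loss(z1_pos, z2_pos)
    # Negative term (Eq. 4): simclr loss on each (positive view, negative view) pair of embeddings.
    l_neg = torch.stack([
        simclr_loss(torch.stack([z1_pos[i], z_neg[i]]), torch.stack([z1_pos[i], z_neg[i]]))
        for i in range(z1_pos.size(0))
    ]).sum()
    l_info = info_loss(edge_weights, r=r)
    l_wm = ((eig_pred - eig_true) ** 2).sum(dim=1).mean()   # watchman loss (Eq. 6)
    return l_main + l_neg + l_info + lambda_wm * l_wm
```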
## 4 Experiments
In this section, we evaluate INGENIOUS both in terms of utility and interpretability by comparing its performance to the performance of its unsupervised competitors AD-GCL and MEGA on graph-classification tasks and we conduct an ablation study on the loss terms.
However, these competitors have not been extended to handle node-classification tasks, and to the best of our knowledge, there is no competitor that uses learned augmentation for these tasks. For both graph and node classification tasks, we use GSAT not as a direct competitor, as it is a supervised model, but as an upper bound on the obtainable utility.
### Experimental Setup
**Datasets.** For graph classification, we use two synthetic datasets (_BA2Motifs_ (Luo et al., 2020), _SPMotifs.5_ (Wu et al., 2022)) and one real-world dataset (_Mutag_ (Morris et al., 2020)). For node classification, we use two synthetic datasets (_Tree-grid_, _Tree-cycle_ (Ying et al., 2019)) and one real-world dataset (_Cora_ (McCallum et al., 2000)). All of them are used with a batch size of 256 except for _Cora_ which is used with a batch size of 128. For _Mutag_ and _BA2Motifs_, we randomly split the data into train (80%), validation (10%) and test (10%) sets. For the other datasets, we use the same split as in their original papers.
**Model.** For the embedding module \(\phi\), we use a GIN (Xu et al., 2019) model with 3 layers. For graph-classification tasks, a final pooling layer is added to the model. For the edge-selection module \(\theta\), we use a 2-layer MLP. For the watchman module \(\psi\), we use a 3-layer MLP. We use a learning rate of 1e-3 and a dropout ratio of 0.3, and we train the whole model for 150 epochs. For all approaches, the metrics defined in Sections 4.2 and 4.3 are measured on the model that obtains the best downstream ACC (see Section 4.2) on the validation set, amongst all epochs, as in GSAT. All results are averaged over 3 random seeds.
### Utility
The utility of embeddings learned without supervision is traditionally (Suresh et al., 2021) approximated by the performance of simple models trained on these embeddings and their labels across various downstream tasks. Similarly, we feed the representations to be evaluated to a linear model and we train the latter on the classification task associated with the available labels. The final metric is the accuracy of the model, which we call **downstream ACC**. As visible in Figure 2, the accuracy of INGENIOUS is similar to the accuracy of its competitors (AD-GCL and MEGA) in the case of graph classification. Furthermore, it is close to the supervised upper bound (GSAT) despite being unsupervised.
Figure 2: Downstream ACC and interpretability AUC. INGENIOUS outperforms its unsupervised competitors, AD-GCL and MEGA.
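The linear-probe protocol can be sketched as follows; using scikit-learn logistic regression as the linear classifier is our own assumption, made only for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def downstream_acc(train_emb, train_y, test_emb, test_y) -> float:
    """Train a linear classifier on frozen embeddings and report its test accuracy."""
    clf = LogisticRegression(max_iter=1000).fit(train_emb, train_y)
    return accuracy_score(test_y, clf.predict(test_emb))

rng = np.random.default_rng(0)                              # toy usage with random embeddings
emb, y = rng.normal(size=(100, 32)), rng.integers(0, 2, size=100)
print(downstream_acc(emb[:80], y[:80], emb[80:], y[80:]))
```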
### Interpretability
Interpreting a representation can be defined as finding the informative parts of the input that led to that representation. Like most graph-interpretability methods (Yuan et al., 2020), INGENIOUS focuses on topology and produces sub-graphs as interpretations. Examples of such interpretations are visible in Figure 3. Interpretability and how to measure it have been extensively studied in the supervised setting (Nauta et al., 2023), but little work has focused on interpretability for unsupervised representation learning. Monotony and localisation metrics have been introduced by Wickstrom et al. (2023) and Zheng et al. (2022), but most of the major categories of the taxonomy proposed by Nauta et al. (2023) have not been covered yet. Building on previous work, we introduce five metrics (_Faithfulness_, _Opposite faithfulness_, _Wasserstein distance_, _Bi-modality_ and _Interpretability AUC_), all based on desirable characteristics for interpretations.
**Correctness and Completeness.** Correctness (how much the interpretation truly describes the behaviour of the model) and completeness (how much of the model behaviour is described by the interpretation) (Nauta et al., 2023) are studied by evaluating whether an interpretation aligns with the edges that truly impact the embedding. The canonical approach to evaluating it is the deletion (resp. insertion) experiment introduced by RISE (Petsiuk et al., 2018), which involves removing elements in order (resp. opposite order) of importance and studying the impact on model outputs. Usually, this impact is measured as a difference in predicted scores. However, in the unsupervised setting, outputs are not predicted scores, but embeddings. Therefore, we propose to extend the metric by measuring this impact as the Euclidean distance between the embedding of the original graph and the embeddings of the perturbed graphs.
To evaluate correctness, we use an adapted faithfulness metric. Given a graph, we delete its edges in decreasing order of importance weights given by the edge-selection module. To achieve this, we group edges of similar importance into bins. In our experiments, we use 30 bins. Then, we successively remove bins of edges from the graph in decreasing order of importance, going from the full graph to a graph without any edge. The perturbed graphs are then embedded using the embedding module on its own, taking the importance weights \(w_{uv}\) of the remaining edges into account during the message-passing operation of the module. The final metric is the area under the curve (AUC) of dissimilarity scores across different percentages of edges removed. For graph classification, the dissimilarity score is
Figure 3: Illustrative examples of interpretations produced by INGENIOUS on four _BA2Motifs_ graphs selected randomly. Nodes included in the class-specific motifs are shown in red. Edge importance weights produced by INGENIOUS are represented through edge thickness. Therefore, thick edges between red nodes indicate that INGENIOUS focuses on the informative motifs.
the Euclidean distance between the embeddings of the pruned graphs and the embedding of the initial graph. For node classification, we perform this experiment for each node, taking only the \(k\)-neighbourhood (\(k\) being the depth of the model) of that node into consideration, and we take the average dissimilarity score. In order to ease comparisons between models, these dissimilarity scores are normalised by scaling, to be equal to 1 when all edges are removed. Removing important edges should change the embedding. Hence, removing them first should result in a larger AUC.
To evaluate completeness, we use the opposite faithfulness. It seeks to assess the complementary statement to faithfulness: Are the edges marked as unimportant truly unimportant? To measure it, we apply the same procedure as for the faithfulness metric, except edges are removed in increasing order of predicted importance. In theory, removing unimportant edges should not impact the embedding. The AUC should therefore be small.
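The deletion-curve computation can be sketched as follows (our own helper; the embedding function and the handling of empty graphs are assumptions). Passing `ascending=True` removes the least important edges first and yields the opposite faithfulness instead.

```python
import numpy as np

def deletion_auc(embed_fn, edges, weights, n_bins: int = 30, ascending: bool = False) -> float:
    """Area under the normalised embedding-dissimilarity curve while removing edges bin by bin.

    embed_fn: callable mapping a list of kept edges to an embedding (np.ndarray).
    ascending=False removes the most important edges first (faithfulness);
    ascending=True removes the least important edges first (opposite faithfulness).
    """
    order = np.argsort(weights)
    if not ascending:
        order = order[::-1]                                  # most important edges first
    z_full = embed_fn(edges)
    dists = [0.0]
    for b in range(1, n_bins + 1):
        kept = [edges[i] for i in order[int(b / n_bins * len(edges)):]]
        dists.append(np.linalg.norm(embed_fn(kept) - z_full))
    dists = np.array(dists) / (dists[-1] + 1e-12)            # scale so the all-edges-removed point equals 1
    return float(((dists[1:] + dists[:-1]) / 2).mean())      # trapezoidal AUC over the removed fraction
```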
To summarise the results intuitively, we focus on the faithfulness gap, that is, the difference in value between the faithfulness and the opposite faithfulness. A larger difference is better. Results are presented in Table 2. Faithfulness and opposite faithfulness results are presented in supplementary material. The model we propose has a clear advantage, achieving a high faithfulness gap in all cases, even higher than the supervised GSAT model. These results demonstrate the superior correctness and completeness of INGENIOUS.
**Continuity.** Continuity refers to the smoothness of the interpretability function (Nauta et al., 2023). We seek to evaluate whether inputs for which the model response is similar, that is, inputs with similar embeddings, also have similar interpretations. However, much like faithfulness, no metrics have been defined in the literature to evaluate this in an unsupervised setting. To evaluate it anyway, we need to define _(dis)similarity_, both between embeddings and between interpretations. While embeddings lie in a Euclidean space, where the L2-norm can be used as a dissimilarity measure, the interpretations lie in the space of graphs. There is no consensual method to compare any two graphs or sub-graphs. Indeed, a large variety of methods have been proposed (McCabe et al., 2021). Amongst them, we choose to use the node-degree-based Wasserstein distance as a dissimilarity measure, as it is amongst the simplest to compute and the most intuitive. In line with the literature on graph interpretability and augmentation, which focuses solely on graph topology, this
| Method | Faithfulness Gap (BA2Motifs) | Faithfulness Gap (Mutag) | Faithfulness Gap (SPMotifs.5) | Wasserstein Gap (BA2Motifs) | Wasserstein Gap (Mutag) | Wasserstein Gap (SPMotifs.5) |
| --- | --- | --- | --- | --- | --- | --- |
| GSAT | \(.34\pm.00\) | \(.21\pm.08\) | \(.06\pm.02\) | \(.04\pm.00\) | \(\mathbf{.02\pm.00}\) | \(.04\pm.00\) |
| AD-GCL | \(.01\pm.13\) | \(.03\pm.09\) | \(.08\pm.06\) | \(\mathbf{.10\pm.01}\) | \(.01\pm.00\) | \(.03\pm.00\) |
| MEGA | \(.06\pm.02\) | \(.39\pm.12\) | \(\mathbf{.68\pm.03}\) | \(.04\pm.01\) | \(.01\pm.00\) | \(.01\pm.00\) |
| INGENIOUS | \(.60\pm.02\) | \(.41\pm.18\) | \(.45\pm.03\) | \(.06\pm.01\) | \(.01\pm.00\) | \(\mathbf{.05\pm.01}\) |
| INGENIOUS - Info | \(\mathbf{.68\pm.02}\) | \(\mathbf{.48\pm.03}\) | \(.48\pm.06\) | \(.05\pm.02\) | \(.01\pm.00\) | \(\mathbf{.05\pm.01}\) |
| INGENIOUS - Negative | \(.13\pm.02\) | \(.18\pm.00\) | \(.22\pm.06\) | \(.02\pm.01\) | \(.01\pm.00\) | \(.02\pm.01\) |
| INGENIOUS - Negative - Info | \(.61\pm.10\) | \(.15\pm.09\) | \(.32\pm.00\) | \(.03\pm.01\) | \(.01\pm.00\) | \(.03\pm.00\) |

Table 1: Faithfulness and Wasserstein results. A higher faithfulness gap means the interpretations are more faithful, and a higher Wasserstein gap means the embedding space is more continuous in terms of interpretations.
measure disregards node features. It also has the advantage of enabling its use with any graph dataset. The continuity metric we propose could be adapted with any other similarity metric with minor changes, but the adaptation would not significantly alter its theoretical analysis. The study of possible adaptations is therefore left for future work.
Formally, given a graph \(G\), we obtain its interpretation \(G^{exp}\) by applying a hard-threshold. We keep a fixed percentage of the most important edges, arbitrarily set to 10%. However, for datasets for which there is a prior on the number of important edges for a downstream task, we use that prior number instead of 10%.
The Wasserstein distance (WD) between two graphs \(G_{1}\) and \(G_{2}\) serves as the dissimilarity between their interpretations and is defined as \(d_{W}(G_{1},G_{2})=W_{1}(\nu(G_{1}^{exp}),\nu(G_{2}^{exp}))\), with \(\nu(G)\) the distribution of node degrees in \(G\) and \(W_{1}\) the 1-Wasserstein metric. Let \(\pi(G)\) be the local neighbourhood of a graph \(G\), defined as the 10% closest graphs of the dataset in the embedding space according to the Euclidean distance. To evaluate the continuity of interpretations, we compare two values:
* \(W_{1}^{local}=\mathbb{E}_{i}\left[\mathbb{E}_{G_{j}\in\pi(G_{i})}\left[d_{W}(G _{i},G_{j})\right]\right]\), the expected WD between a graph and another graph in the local neighbourhood of the first one;
* \(W_{1}^{global}=\mathbb{E}_{i}\left[\mathbb{E}_{j}\left[d_{W}(G_{i},G_{j}) \right]\right]\), the expected WD between any two graphs.
To save time, the expected values are computed on a subset of 6 batches of graphs, i.e. 1536 graphs. A big difference between these two values would be a good indicator of continuity, as it would mean that graphs that are similar in the embedding space have interpretations that are more similar to each other than to the interpretations of other graphs. To summarise the results intuitively, we focus on the difference \(W_{1}^{global}-W_{1}^{local}\); a larger gap indicates a stronger continuity. We report these differences in Table 2. We see the advantage provided by the negative term of the INGENIOUS loss to produce an embedding space with continuous interpretations, as visible by the higher scores of INGENIOUS and INGENIOUS - Info compared to INGENIOUS - Negative and INGENIOUS - Negative - Info, on _BA2Motifs_ and _SPMotifs.5_. As opposed to _Mutag_, these datasets have clear distinct motifs by class and therefore most benefit from the repulsive effect of the negative loss.
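A compact sketch of this continuity measure is given below; it relies on `scipy.stats.wasserstein_distance` applied to the node-degree sequences of the thresholded interpretations, and the helper names are our own.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def continuity_gap(embeddings: np.ndarray, degree_seqs: list, frac: float = 0.10) -> float:
    """W1^global - W1^local over node-degree distributions of the interpretations (larger is better)."""
    n = len(embeddings)
    k = max(1, int(frac * n))
    d_emb = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    local, global_ = [], []
    for i in range(n):
        neighbours = np.argsort(d_emb[i])[1:k + 1]           # closest graphs, excluding graph i itself
        local += [wasserstein_distance(degree_seqs[i], degree_seqs[j]) for j in neighbours]
        global_ += [wasserstein_distance(degree_seqs[i], degree_seqs[j]) for j in range(n) if j != i]
    return float(np.mean(global_) - np.mean(local))
```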
**Compactness and readability.** Previous work has shown that sparsity is an important factor in human understanding (Nauta et al., 2023), as it is hindered by human cognitive capacity limitations, and that interpretations should be sparse so as not to overwhelm the user. Moreover, for inherently interpretable methods such as INGENIOUS, sparsity is related to faithfulness, as non-sparse interpretations would mean that all features are used by the model to produce the embeddings. Nevertheless, we argue that sparsity alone can be misleading, since it can favour interpretations with globally lower weights. An ideal interpretation should be bi-modal, with weights of 1 for important edges and 0 for unimportant ones. We, however, relax this assumption and consider that an interpretation is easy to read if the distribution of weights has two modes. To estimate this, we approximate the density function of edge weights by using a histogram with ten bins between 0 and 1. We also define the **sparsity index** as the average of the edge importance weights.
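Both readability measures are straightforward to compute; a small NumPy sketch (with our own helper names) is given below.

```python
import numpy as np

def sparsity_index(edge_weights: np.ndarray) -> float:
    """Average edge-importance weight; lower values indicate sparser interpretations."""
    return float(np.mean(edge_weights))

def weight_density(edge_weights: np.ndarray, n_bins: int = 10) -> np.ndarray:
    """Histogram density of edge weights over [0, 1]; a readable interpretation shows two separated modes."""
    density, _ = np.histogram(edge_weights, bins=n_bins, range=(0.0, 1.0), density=True)
    return density
```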
Figure 4 shows that on simple datasets such as _BA2Motifs_, all methods produce sparse and bimodal interpretations, even without regularisation. However, more complex datasets
such as _Mutag_ and _SPMotifs.5_ highlight the need for regularisation. On _Mutag_ for example, adding the regularisation lowers the proportion of scores between 0.9 and 1 from 90% to 30%. Both AD-GCL and GSAT tend to collapse to high-value, narrow-support densities, as opposed to MEGA, which has importance scores closer to extreme values.
**Coherence.** Coherence assesses the alignment with domain knowledge. Sometimes, in the supervised setting, human prior can be used to define expected interpretations that can be compared with generated interpretations. In the literature, this is referred to as _the clicking game_ (Wickstrom et al., 2023). However, it relies on the prior that the model focuses on the same features as the domain expert would, which is a strong assumption (Petsiuk et al., 2018). In an unsupervised setting, it is even harder to define relevant expected interpretations for embeddings. A way to approach this is to use synthetic datasets only; we use _BA2Motifs_, _SPMotifs.5_, _Tree-grid_ and _Tree-cycle_. In these datasets, base graphs are generated from the same distribution. Then, different motifs are added to each random graph (Wu et al., 2022; Ying et al., 2019; Luo et al., 2020) depending on the dataset. For graph classification datasets, the motif also depends on the label of the graph. Therefore, we hypothesise that the base graph can be considered as noise and that all the distinctive information of each graph is in the motifs.
Under this hypothesis, any good unsupervised embedding method should rely on the motifs only. This is partially validated by the good accuracy shown in Section 4.2, as it is impossible to have good accuracy on the synthetic datasets if the embeddings contain no information about the motifs. The motifs can therefore be used as expected interpretations.
To measure how close a produced interpretation is to the expected one, we consider an edge-classification problem. An edge is labelled as important if it is part of the expected interpretation. The predicted probability of an edge being important is given by the produced importance weight of that edge. The area under the ROC curve is reported in Figure 2. In this paper, we call this metric the **interpretability AUC**.
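Since this reduces to scoring edges against their motif membership, the metric can be computed with a standard ROC-AUC routine; the toy labels and weights below are purely illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

edge_in_motif = np.array([1, 1, 0, 0, 1, 0])                 # hypothetical expected interpretation
edge_importance = np.array([0.9, 0.7, 0.2, 0.4, 0.8, 0.1])   # importance weights produced by the model
interpretability_auc = roc_auc_score(edge_in_motif, edge_importance)
```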
In terms of interpretability AUC, INGENIOUS outperforms its unsupervised competitors. The supervised approach GSAT is better as expected. The ablation study shows that both regularisation losses are needed to reach a better interpretability AUC, although their advantage is less significant on node classification tasks. This could find an explanation in the link between interpretability and sparsity. We study this link in Section 4.4.
Figure 4: Density function of edge weights. Optimal interpretations should have two density peaks and maximal separation. INGENIOUS results in importance values that are much sparser and closer to extreme values.
### In-Depth Analysis of the Sparsity
**Impact of the sparsity on faithfulness.** In addition to being a desirable characteristic for interpretations, sparsity impacts contrastive learning by increasing the difference between the augmented views. Figure 5 shows a significant positive correlation between opposite faithfulness (red dots) and sparsity, while there is a negative correlation between faithfulness (blue dots) and sparsity. This suggests there may be a link between higher sparsity and better faithfulness gap. This conclusion applies to all graph-classification tasks. For node-classification tasks, the sparsity has a smaller range of values and presents no correlation with the faithfulness but still a small correlation with the opposite faithfulness. Further results with a random baseline are presented in supplementary materials. Overall, the faithfulness gap is improved by the sparsity.
**Impact of hyperparameters on sparsity.** This section deals with the impact of the temperature of the Gumbel-max sampling and the impact of the edge-keeping-probability prior \(r\). A high temperature drives the sampling probability closer to a uniform distribution, which increases the diversity of augmented views. Figure 6 shows that sparsity decreases with an increasing temperature value until it reaches a plateau. On the synthetic dataset _BA2Motifs_, a temperature sweet-spot for interpretability AUC is visible at a temperature equal to 20; too high or too low of a temperature is harmful. However, for the other experiments of this paper, to ensure the comparability of our results with those of GSAT, we use the same temperature value as in GSAT, i.e. \(\tau=1\), even though this choice might not result in optimal performances.
The first column of Figure 7 shows that \(r\) is positively correlated with the sparsity of INGENIOUS. Lower \(r\) leads to a better faithfulness gap, meaning more complete and correct interpretations. Sparser augmented views lead the model to use fewer edges to produce embeddings, which results in a slight drop in downstream ACC. This phenomenon is less marked on simple datasets where all methods reach a perfect downstream ACC. This trade-off between downstream ACC and interpretability has already been documented in previous work (Jacovi and Goldberg, 2020). Similarly, the sparsity positively impacts the
Figure 5: (Opposite) faithfulness with respect to the sparsity index. A lower sparsity results in a lower opposite faithfulness (lower is better) and a marginally higher faithfulness, which means a better faithfulness gap. The grey area represents the 95% confidence interval for a smoothed conditional mean.
continuity of the interpretability function, as can be seen by the increasing Wasserstein gap. This impact, however, is less strong on _Mutag._ In INGENIOUS, the sparsity can be controlled to an extent through regularisation and hyperparameter tuning.
## 5 Limitations and Future Work
Like most GNN-interpretability methods (Luo et al., 2020; Ying et al., 2019; Yuan et al., 2020), we limit our interpretability analysis to topology. However, early results in supplementary materials show the possibility of extending INGENIOUS to node features. As defined by Doshi-Velez and Kim (2018), we limit our study to a functionally grounded evaluation (which does not require human subjects). Both human-grounded and application-grounded evaluations, which are more specific and costly, are left as future work.
## 6 Conclusion
We unify previous works on augmentation-based graph representation learning to introduce a new unsupervised embedding approach. Our method, INGENIOUS, introduces new losses for better regularisation. We show their positive impact through an ablation study. Aiming for interpretability, we introduce new metrics to evaluate the quality of interpretations of graph representations learned without supervision, building upon previous work in the supervised setting. Thanks to the approach and metrics we propose, we show that it is
Figure 6: Sparsity and interpretability AUC on _BA2Motifs_ with different temperatures. A sweet spot is visible at 20; too high or too low temperatures worsen the results.
Figure 7: Analysis of the hyperparameter \(r\). A lower \(r\) hyperparameter can improve the sparsity, resulting in interpretations that are more correct, complete and robust, with only a slight impact on the downstream ACC.
possible to produce inherently interpretable embeddings that are also useful for various downstream tasks and that augmented views, when sparse, can be used as interpretations. INGENIOUS compares favourably with state-of-the-art unsupervised models in terms of utility of the embeddings and correctness, completeness, continuity and readability of the interpretations. By tuning a reduced set of two hyperparameters, the user can manage the sparsity/utility trade-off. Our work paves the way for novel approaches of intrinsically interpretable unsupervised embeddings for modalities as complex as graph data. |
2301.13558 | Lidar Upsampling with Sliced Wasserstein Distance | Lidar became an important component of the perception systems in autonomous
driving. But challenges of training data acquisition and annotation have
emphasized the role of sensor-to-sensor domain adaptation. In this work, we
address the problem of lidar upsampling. Learning on lidar point clouds is
rather a challenging task due to their irregular and sparse structure. Here we
propose a method for lidar point cloud upsampling which can reconstruct
fine-grained lidar scan patterns. The key idea is to utilize edge-aware dense
convolutions for both feature extraction and feature expansion. Additionally
applying a more accurate Sliced Wasserstein Distance facilitates learning of
the fine lidar sweep structures. This in turn enables our method to employ a
one-stage upsampling paradigm without the need for coarse and fine
reconstruction. We conduct several experiments to evaluate our method and
demonstrate that it provides better upsampling. | Artem Savkin, Yida Wang, Sebastian Wirkert, Nassir Navab, Federico Tombar | 2023-01-31T11:16:21Z | http://arxiv.org/abs/2301.13558v1 | # Lidar Upsampling with Sliced Wasserstein Distance
###### Abstract
Lidar became an important component of the perception systems in autonomous driving. But challenges of training data acquisition and annotation have emphasized the role of sensor-to-sensor domain adaptation. In this work, we address the problem of lidar upsampling. Learning on lidar point clouds is rather a challenging task due to their irregular and sparse structure. Here we propose a method for lidar point cloud upsampling which can reconstruct fine-grained lidar scan patterns. The key idea is to utilize edge-aware dense convolutions for both feature extraction and feature expansion. Additionally, applying a more accurate Sliced Wasserstein Distance facilitates learning of the fine lidar sweep structures. This in turn enables our method to employ a one-stage upsampling paradigm without the need for coarse and fine reconstruction. We conduct several experiments to evaluate our method and demonstrate that it provides better upsampling.
## I Introduction
Lidar can significantly improve the quality and reliability of the perception systems of autonomous vehicles. As an active sensor, lidar takes accurate measurements of the surrounding environment and provides spatial information about the traffic scene in form of point clouds. It is robust to challenging light conditions and can significantly contribute to the functional safety of fully autonomous systems. Also, the development and manufacturing of lidar have been democratized considerably in recent years. Democratization enabled the equipment of the serial vehicles with this sensor. As a result, lidar became a crucial component of state-of-the-art autonomous driving.
At the same time, machine perception on point clouds is still challenging since this kind of data is unstructured and irregular. However, in computer vision, neural networks have already proved superior to traditional algorithms.
Their advantage is especially evident in the tasks of segmentation or detection where _understanding_ of the semantics of the underlying scene is involved. A wide variety of methods dedicated to 3D object detection, tracking, and segmentation successfully adapted deep learning techniques to point clouds to tackle the challenges mentioned above [26, 34, 29, 43].
In practical applications such as autonomous driving, the data acquisition and annotation required for training such perception methods is an expensive and inflexible process. This rigidity manifests as a model's performance degradation when the data modality changes, e.g., sensor viewpoint, resolution, etc. [7]. For example, [3] demonstrates that lidar segmentation methods suffer a significant performance drop when evaluated on samples from a lidar whose mounting position, number of scan lines, or intensity calibration differs from the training data. Table II shows similar behavior for object detection networks operating on lidar samples with different resolutions. The phenomenon causing this degradation is commonly referred to as domain shift [31] and is addressed through transfer learning. Transfer-learning approaches aim to tackle the domain-shift problem by learning domain-invariant features or by learning a direct mapping between source and target domains. Still, models are expected to remain compatible with novel sensor generations and, thus, with new data modalities or domains. In this work, we address such sensor-to-sensor domain adaptation as a mechanism to avoid re-acquiring or re-annotating data. In particular, we focus on the adaptation between low- and high-resolution lidars, the latter distinguished by a higher number of scan lines in the vertical field of view. The overarching goal is thus to sample data points from the target distribution or, in other words, point clouds which resemble the characteristics of samples from the target domain. Given a low-resolution lidar scan, our task is to generate a point cloud with more scan lines. We consider both modalities as separate domains and employ a point-cloud transfer technique to bridge them.
Existing methods for lidar generation mainly rely on distance measures such as Chamfer Distance (CD) and Earth Mover's Distance (EMD) [44, 47]. Although several works [11, 1] establish the superiority of EMD over CD as more discriminative for point cloud learning, CD, being simply nearest-neighbor based, is still more efficient. Yet, from our observations, both measures fail to capture the fine
Fig. 1: Results of \(2\times\) upsampling for \(8129\) points patches from KITTI dataset.
structure of lidar point clouds. CD fails to distribute additional points evenly across the lidar scan, thus erroneously favoring the known or source point locations. The standard EMD approximation based on the auction algorithm [11] fails to reconstruct the fine scan pattern of lidar sweeps accurately. This phenomenon is also confirmed by the experiments in [40]. Multiple works have addressed the computational complexity of EMD and the whole family of Wasserstein distances (WD) [17, 15, 39, 22] in various tasks.
The contribution of this work is twofold. Firstly, it provides an extensive analysis of the applicability of established distance metrics such as the Chamfer distance, approximated EMD, and SWD for lidar scan generation and upsampling. Secondly, based on this analysis, it proposes a new lidar upsampling method, which is dedicated to the problem of cross-sensor domain adaptation between low-resolution and high-resolution lidars. We argue that, unlike existing point cloud upsampling methods, our edge-aware, one-stage (no coarse reconstruction) approach based on the Sliced Wasserstein distance can reconstruct fine details of lidar scans and surpass existing upsampling techniques.
## II Related work
The lidar super-resolution or lidar upsampling task has been addressed by recent learning-based methods, which can be categorized into grid-based [28] and point-based [18] methods.
### _Grid-based lidar generation_
Given an input condition, e.g., a vector, the generation task is to produce an \(N\times d\) point cloud of size \(N\) and dimensionality \(d\). Grid-based methods utilize the range image representation of point clouds projected into a cylindrical coordinate system. [4] adapts generative adversarial models applied on such range images to produce large-scale lidar scans. Unlike the lidar GAN approach, which uses real lidar scans for the discriminator training, [28] trains exclusively on simulated data to produce scans comparable with a real high-resolution lidar. [16] improves range image upscaling by utilizing the widely acclaimed attention mechanism. Finally, [14] integrates an uncertainty estimation mechanism into the upsampling process to meet real-time constraints. In this work, we aim to develop a rather generic domain adaptation technique for lidar upsampling, which falls into the latter category of point-set-based methods.
### _Point-based lidar generation_
Typical point-based generation methods such as AtlasNet [13], and FoldingNet [41] operate directly on the point sets where they approach the problem by creating 2D patches and aligning them to the underlying point cloud shape. With the emergence of generative neural nets, several works like PCGAN [1], GCN-GAN [35], and TreeGAN [30] applied adversarial training frameworks to the point clouds. Luo _et al._[21] employed a concept similar to image diffusion for the point cloud generation. More recent generative methods utilize normalizing flows - PointFlow [40], gradient fields - ShapeGF [5] and autoregression - PointGrow [32].
Compared to generation, upsampling aims to compute an \(rN\times d\) output given an \(N\times d\) point cloud, with upsampling ratio \(r\), so that it resembles the spatial characteristics of the underlying 3D shape. Methods introduced by Yu _et al._[45] and Zhang _et al._[47] were pioneering deep learning-based approaches for that task. PU-Net [45] relied on a PointNet++ encoder [26] for feature extraction from patches, then applied two sets of linear layers onto features replicated from \(N\times C\) to \(N\times rC\), which are then merely reshaped into \(rN\times C\). PU-Net is followed by EC-net [44], which introduced a loss function that aimed to minimize the point-to-edge distance to make the network more edge-aware. Later, 3PU [42] explored multi-scale progressive patch-based upsampling by breaking down an \(r\)-factor upsampling network into several \(2\times\) upsampling networks. It introduces dense connections or blocks for the feature extraction and defines local neighborhoods in a feature space via kNN based on feature similarity. PU-GAN [18] adopts an adversarial framework for point cloud upsampling by formulating a discriminator network that learns to evaluate generated high-resolution point clouds. It introduces an _up-down-up_ expansion unit, which enables a so-called self-correction of expanded features. More recent methods include PU-GCN [27] and Dis-PU [19].
Another task worth mentioning is point cloud completion, whose goal is to generate complete point clouds and create dense scans from incomplete sparse ones. Here pioneering learning method was PCN [46]. PCN first learns the global feature of the underlying shape from partial observation and then reconstructs a complete but coarse point cloud of lower resolution, which is up-sampled with folding operations in the second phase. TopNet [33] method utilizes a tree structure for generating complete dense point clouds. Following the
Fig. 2: Overview of the network with feature extraction (_Ext_), feature expansion (_Exp_) and coordinate reconstruction (_Rec_).
two-stage coarse-to-fine point cloud generation paradigm, MSN [20] uses the morphing principle and a joint CD and EMD loss. The two-staged ECG [24] improves the generation of edges by exploiting edge-preserved pooling. Finally, CDN [37] extends a traditional GAN framework to guarantee that local areas follow the same pattern as the ground truth.
Compared to previous point cloud generation works, our method is dedicated to precisely reconstructing fine-grained lidar point clouds. The discussed point-based methods have a relatively restricted applicability area that embraces simplistic, ShapeNet-like [6] data. Point clouds generated by a lidar sensor demonstrate significantly more complex underlying geometry, scan patterns, and data variance. Typically, methods trained on the PCN dataset [46], derived from synthetic ShapeNet CAD models, embrace eight categories of objects (e.g., plane, car, chair), whereas lidar scans of the KITTI dataset [12] were obtained during real-world drives and contain multiple classes of objects in a single scan. Thus, the task of point cloud generation in the lidar domain remains under-explored. To the best of our knowledge, the proposed method is the only point-based method able to accurately reconstruct complex lidar point clouds in the upsampling task.
## III Approach
Our method relies on an edge-aware feature extraction and expansion mechanism [42] and a projection-based 1-Wasserstein distance. This training objective shows significantly improved sensitivity to fine-grained lidar scan patterns compared to conventional loss functions.
### _Model_
The proposed model is built upon the established upsampling pipeline with three steps: feature extraction, feature expansion, and coordinate reconstruction. The feature extractor is implemented as a plain encoder-decoder architecture and, similarly to 3PU [42], employs dense per-point feature learning using _Conv1d_ and _EdgeConv_ [38] layers. An _EdgeConv_ layer applies the kNN algorithm to find clusters of points and preserves characteristic attributes like edges. Following the edge-preserving principle, the feature expansion module also takes advantage of an _EdgeConv_ layer for generating the \(N\times 2C\) upsampled feature vector. In the final step, a set of linear layers reconstructs the resulting \(2N\times 3\) coordinates.
_Feature Extraction._ As mentioned before, the encoder first takes the \(N\times d\) point cloud as input and translates it into an \(N/r\times C^{\prime}\) feature vector by applying several \(Conv1d\) plus \(EdgeConv\) blocks. Here, \(N\) is the original number of points, \(r\) the downsampling ratio, \(d\) the point cloud dimensionality (in our experiments \(d=3\)), and \(C^{\prime}\) the resulting size of the feature vector. Downsampling from \(N\) to \(N/r\) is achieved with the common farthest point sampling technique. This feature vector is transformed into a \(C^{\prime}\)-sized global feature, which is upscaled to \(N\times C\) by replication and interpolation. For that, the replicated \(C^{\prime}\) feature vectors are stacked together and interpolated using the inverse-distance weighted average from [26].
_Feature Expansion._ Inspired by [24], we design the feature expansion module along the lines of edge awareness by utilizing the _EdgeConv_ layer. This layer employs the kNN algorithm and enables the module to find clusters based on similarities in the feature space. It takes an \(N\times C\) learned feature vector and transforms it into an \(N\times 2C\) upsampled feature vector, which is then reshaped into a \(2N\times C\) tensor.
_Coordinate Reconstruction._ The resulting tensor is then processed by a set of linear layers to reconstruct the point coordinates of size \(2N\times d\). Fig. 2 visualizes the overall architecture.
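A shape-level sketch of the expansion and reconstruction steps is given below. It is only meant to illustrate the tensor dimensions; the linear expansion stands in for the actual _EdgeConv_-based feature grouping, which is omitted here.

```python
import torch
import torch.nn as nn

class ExpandAndReconstruct(nn.Module):
    """Illustrative upsampling head: (N, C) per-point features -> (rN, 3) coordinates."""

    def __init__(self, c: int = 128, r: int = 2):
        super().__init__()
        self.r = r
        self.expand = nn.Linear(c, r * c)                    # placeholder for the EdgeConv-based expansion
        self.reconstruct = nn.Sequential(nn.Linear(c, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # feats: (N, C)
        n, c = feats.shape
        expanded = self.expand(feats)                        # (N, r*C)
        expanded = expanded.reshape(n * self.r, c)           # (rN, C)
        return self.reconstruct(expanded)                    # (rN, 3) upsampled coordinates

points_up = ExpandAndReconstruct()(torch.randn(1024, 128))   # toy usage: 1024 -> 2048 points
```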
### _Loss Function_
We argue that conventional loss functions based on Chamfer Distance (CD) and approximations of Earth Mover's Distance (EMD) [20] lack _sensitivity_ to the fine-grained scan patterns of lidar sensors. As mentioned before, in point cloud reconstruction CD distributes generated points unevenly across the lidar scan and favors the known or source point locations. A possible reason is the nature of the generative task, where the produced point clouds initially form unstructured point blobs, so that CD fails to find correspondences. On the other hand, the reason why EMD fails to accurately reconstruct the fine scan patterns could lie in the commonly used approximations, which lead to biased or noisy gradients [40] and thereby affect the ability of a model to distinguish fine details.
Fig. 3: Point cloud reconstruction using PointNetFCAE with conventional loss functions.
Following the Optimal Transport (OT) perspective, we consider the distance between point clouds as a distance between two distributions, naturally measured by a metric defined between probability distributions: the Wasserstein distance. Yet, exact computation of this metric is computationally intractable. Our assumption is that a more accurate distance calculation is key to precise lidar scan reconstruction and upsampling. We argue that higher accuracy of point cloud reconstruction, sensitive to fine scan patterns, can be achieved by applying the Sliced Wasserstein (SW) distance, as it possesses statistical properties equivalent to the WD [15, 23]. The intuition behind the SW distance resides in the existence of a closed-form OT solution in a one-dimensional space. For two point sets \(X\) and \(Y\) of equal size, i.e. \(X,Y\subset\mathbb{R}^{3}:|X|=|Y|\), the EMD between \(X\) and \(Y\) can be calculated as:
\[EMD(X,Y)=\min_{\phi:X\to Y}\sum_{x\in X}\|x-\phi(x)\|_{2} \tag{1}\]
where \(\phi:X\to Y\) is a bijection [11]. The more general form of EMD in the case of \(|X|=|Y|\):
\[EMD(X,Y)=\inf_{P\in\Pi(X,Y)}\mathbb{E}_{(x,y)\sim P}[d(x,y)] \tag{2}\]
is equivalent to Wasserstein distance with \(p=1\):
\[W_{p}(X,Y)=\inf_{P\in\Pi(X,Y)}\mathbb{E}_{(x,y)\sim P}[d^{p}(x,y)]^{1/p} \tag{3}\]
where \(d\) is a metric and \(\Pi(X,Y)\) is a set of all joint distributions of \(P(x,y)\) of random variables \(x\) and \(y\) with marginal distributions \(X\) and \(Y\):
\[\Pi(X,Y)=\{P_{X,Y}(x,y)\}\]
Fig. 4: Lidar upsampling using proposed network with conventional loss functions.
Fig. 5: Distance metrics on jitter (normalized).
Fig. 6: Distance metrics on rotation (normalized).
In practice, the computation of (1) or (3) is intractable, so an approximation scheme such as the auction algorithm must be applied [20], yet it cannot guarantee the required bijective assignment. As opposed to such approximations, the Sliced Wasserstein approach exploits the fact that \(W_{p}\) has a closed-form solution for univariate probability distributions [15]. Thus, several works utilize this fact to calculate \(W_{p}\) [39, 9]:
\[SW_{p}(X,Y)=\int_{\mathbb{S}^{d-1}}W_{p}(X^{s},Y^{s})ds \tag{4}\]
where \(X^{s}\) and \(Y^{s}\) are the projections of \(X\) and \(Y\) onto the direction \(s\), taken over all possible directions of the unit sphere \(\mathbb{S}^{d-1}\). In practice, \(SW_{p}\) is approximated by taking only a finite number of random directions from \(\mathbb{S}^{d-1}\). Moreover, in the one-dimensional case, the optimal transport can be obtained by simply sorting the projections [9]. Hence our final objective can be defined as follows:
\[\hat{SW}_{p}(X,Y)=\frac{1}{|\mathbb{S}|}\sum_{s\in\mathbb{S}}\frac{1}{|X|} \sum_{i}\|X^{s}_{i}-Y^{s}_{i}\|_{2} \tag{5}\]
Thus, the final optimization problem can be formulated as follows:
\[\operatorname*{argmin}_{\theta}\mathbb{E}_{X\sim D_{X}}(\hat{SW}_{p}(X,f_{ \theta}(X))) \tag{6}\]
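The objective of Eq. (5) is simple to implement; the PyTorch sketch below is our own illustration of the random-projection-and-sort procedure, assuming equally sized point clouds.

```python
import torch

def sliced_wasserstein(x: torch.Tensor, y: torch.Tensor, n_proj: int = 128) -> torch.Tensor:
    """Approximate SW_1 between two equally sized point clouds x, y of shape (N, 3), cf. Eq. (5)."""
    dirs = torch.randn(n_proj, x.size(1), device=x.device)
    dirs = dirs / dirs.norm(dim=1, keepdim=True)              # random unit directions on the sphere
    x_proj, y_proj = x @ dirs.t(), y @ dirs.t()                # (N, n_proj) 1D projections
    x_sorted, _ = torch.sort(x_proj, dim=0)                    # 1D optimal transport reduces to sorting
    y_sorted, _ = torch.sort(y_proj, dim=0)
    return (x_sorted - y_sorted).abs().mean()                  # average over points and directions

loss = sliced_wasserstein(torch.rand(2048, 3), torch.rand(2048, 3))   # toy usage
```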
## IV Experiments
### _Datasets_
Our experiments are based on the popular public datasets SemanticKITTI [2] and the Waymo Open Dataset [10]. SemanticKITTI is a large-scale computer vision dataset that provides 22 sequences of lidar scans, around \(43000\) scans in total, alongside point cloud segmentation labels. The dataset provides measurements in German inner cities, residential areas, and highway traffic scenes during various daylight and weather conditions. In our experiments we use the original KITTI [12] split of the sequences into _train_ and _test_ sets of \(7481\) and \(7518\) scans, respectively. Semantic labels of the dataset cover \(28\) classes such as _road, sidewalk, person, car_. The Waymo dataset, in turn, provides \(1150\) sequences of \(20\) seconds each. In total, it offers around \(230K\) annotated samples with ground truth for object detection and tracking tasks. For our purposes, every \(10\)th lidar scan has been sampled from all data sequences, resulting in a training dataset of \(15947\) samples and a validation dataset of \(4038\) samples. Also, from the five lidars, we only use the scans provided by the top-mounted one.
### _Loss_
We propose to train lidar generation models using a loss function based on the Sliced Wasserstein distance as opposed to the commonly utilized CD- and EMD-based losses. Our benchmark experiments suggest that these metrics are sufficient for a model to learn the shapes of point clouds but not the fine structures. This behavior can be observed in Fig. 3, which shows the reconstruction of point cloud patches provided by a _vanilla auto-encoder_ network, PointNetFCAE, trained respectively with the CD and EMD loss. This auto-encoder model consists of a PointNet [25] based encoder and a fully connected decoder. Even though the sizes of the visualized patches are equal (\(2048\) points), the CD reconstruction appears underpopulated in certain regions. [1] refers to this phenomenon as _CD blindness_, which manifests
Fig. 7: Qualitative comparison of the \(2\times\) lidar upsampling results on KITTI _test_.
in points being placed where the average mass is located, failing to distinguish such a poor allocation from the true one. EMD, in turn, appears to reconstruct the rough point distribution but is considerably distorted. That finding is aligned with the observations in [11] and mirrors its mean-shape experiments.
Additionally, we compare the proposed loss function with existing error metrics with respect to their _sensitivity_. The results of this analysis are shown in Fig. 5 and Fig. 6. The comparison is conducted between the Sinkhorn loss [8], Sliced Gromov-Wasserstein [36], Sliced Wasserstein Distance [15], Chamfer loss, and EMD loss. Here we execute two experiments by gradually applying a corresponding transformation to the point cloud and measuring the distances between transformed and original samples. In the first experiment, Gaussian noise is introduced into a sample, whereas in the second experiment, the sample is rotated around the vertical axis. In both cases, the analyzed metrics behaved consistently, changing to non-zero values as the transformation became more significant in the first experiment and turning back to zero after a full rotation in the second experiment. Still, the Sliced Wasserstein Distance could identify the slightest point cloud changes by indicating higher discrepancy values in the early transformation iterations. The influence of the different loss functions CD, EMD, and SWD on the lidar upsampling performed by our model is discussed in the ablation study in Section IV-C.
### _Upsampling_
Training is supervised: input patches of size \(2048\) are obtained by decimating the ground truth samples by a factor of two. Decimation is performed based on rasterized input point clouds. For the Waymo dataset, every second scan line of the original cylindrical projection of size \(W\times H\times 3\) of the lidar scans was dropped before projecting the points into the lidar coordinate system. Similarly, we processed the cylindrical projections of the KITTI dataset. As the KITTI dataset provides only raw point clouds, they need to be rasterized before decimation. Therefore, KITTI scans were arranged into a \(64\times 2048\) grid based on the sensor's vertical resolution and azimuth angle. This preprocessing is shown in Fig. 1. With that data, the network is trained for \(2000\) consecutive epochs without pre-trained weight initialization, using the Adam optimizer. The resulting model has a memory footprint of \(22\) MB. Training is performed on an NVIDIA Quadro RTX 6000 GPU (\(24\) GB), with a single forward pass that takes,
| Method | CD | HD | EMD | SWD |
| --- | --- | --- | --- | --- |
| PU-Net [44] | 0.5272 | 1.2627 | 0.3851 | 15.4002 |
| 3PU [42] | 0.1172 | 0.4151 | 0.2740 | 5.2078 |
| PU-GAN [19] | **0.0426** | 0.1892 | 0.2504 | 1.3553 |
| Ours | 0.0435 | **0.1694** | **0.0775** | **1.1742** |

TABLE I: Quantitative comparison for \(2\times\) upsampling of \(8192\) points on KITTI _val_ dataset.
Fig. 8: Lidar upsampling results (middle) on KITTI _val_ patches with input samples (top) and ground truth (bottom).
on average, \(0.2\) sec. To evaluate the generated results, we conducted several experiments and assessed our results both qualitatively and quantitatively. The results of the lidar scan upsampling on the KITTI dataset are shown in Fig. 7. To visualize the effects introduced by our approach, we display both source and upsampled scans together with point clouds generated by the state-of-the-art upsampling methods 3PU, PU-Net, and PU-GAN. We employ the original training protocols from the respective papers to train those models on lidar point clouds. As reported in the original publications, PU-GAN was trained with an adversarial and EMD-based reconstruction loss, 3PU was trained with a modified counter-outlier CD loss, and PU-Net used a plain EMD loss with a repulsion loss. First, we want to draw attention to the fact that the proposed method recovers the scan lines missing in the input samples and, in consequence, effectively reconstructs the high-resolution target scans. In comparison with the results of the existing methods, samples provided by our approach reveal no point scattering in the local neighborhood but rather show distinct scan lines. Another characteristic of the generated scans worth considering is their actual shape. In the upsampled lidar sweeps, the recovered scan lines indeed follow the spatial characteristics of the underlying geometry.
Additionally, we evaluate our method on upsampling point cloud patches at a larger scale. The results of \(2\times\) upsampling for patches of size \(8192\) for both the KITTI and Waymo datasets are depicted in Figs. 8 and 9. Here, we also want to emphasize the reconstruction quality of the upsampled scans.
To perform a quantitative evaluation of the results, we assess the generated data using traditional metrics such as CD, Hausdorff distance (HD), and EMD, as well as the introduced SWD. For the best-performing network snapshots, the metrics mentioned above are reported in Table I. Our generated data reveals a substantial performance improvement with respect to HD and SWD.
Furthermore, to verify our hypothesis regarding the SWD loss, we study the effects of the different losses CD, EMD, and SWD on the upsampling performance of our model. Here we train our network on the KITTI dataset with patches of size \(1024\) and an upsampling ratio of \(2\), with a similar setup as described before but with varying loss functions. The results of this study are demonstrated in Fig. 4. They confirm our initial assumption and are aligned with the observations made before. Similarly to the reconstruction experiment, here CD leads to sparse areas and EMD leads to distortions. That
This observation also confirms the findings of previous works, which state that CD is a weaker metric than the 1-Wasserstein distance and that the SWD approximation is more accurate than the approximated EMD.
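For readers unfamiliar with the sliced Wasserstein distance referred to above, the following minimal sketch (our own illustration, not the paper's implementation) shows the usual approximation for two equally sized 3-D point sets: project both clouds onto random directions and average the resulting 1-D Wasserstein-1 distances, which for equal-size, equal-weight sets reduce to mean absolute differences of sorted projections. The function name and parameter defaults are our own choices.

```python
import numpy as np

def sliced_wasserstein(p: np.ndarray, q: np.ndarray, n_proj: int = 64, seed: int = 0) -> float:
    """Approximate sliced 1-Wasserstein distance between two equally sized 3-D point sets."""
    rng = np.random.default_rng(seed)
    # Random unit directions on the sphere
    dirs = rng.normal(size=(n_proj, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Project both clouds onto each direction and sort; for equal-size sets the
    # 1-D Wasserstein-1 distance is the mean absolute difference of sorted values.
    proj_p = np.sort(p @ dirs.T, axis=0)   # shape (N, n_proj)
    proj_q = np.sort(q @ dirs.T, axis=0)
    return float(np.mean(np.abs(proj_p - proj_q)))
```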
## V Conclusion
In this work, we present a network for lidar point cloud upsampling that aims to accurately reconstruct fine-grained lidar scan patterns. Our method exhibits the required sensitivity to lidar scan lines and shows convincing results for lidar upsampling at the smaller scale of scan patches. The deep network combines edge-aware dense convolutions for feature extraction and expansion with the Sliced Wasserstein Distance. Together, these two key components enable lidar upsampling in a direct, lightweight, one-stage fashion, without a separate coarse and fine reconstruction. The proposed method improves on the baseline methods both qualitatively and quantitatively, as measured by the HD, EMD, and SWD metrics. One limitation of our approach, though, is the scale of the processed samples; the extension of our method to large-scale lidar scans (\(\geq 100K\) points) remains for future work.
## VI Acknowledgment
The research leading to these results is funded by the German Federal Ministry for Economic Affairs and Energy within the project KI Delta Learning. The authors would like to thank the consortium for the successful cooperation.
|
2309.09322 | Fission Fragment Mass and Kinetic Energy Yields of Fermium Isotopes | A rapidly converging 4-dimensional Fourier shape parametrization is used to
model the fission process of heavy nuclei. Potential energy landscapes are
computed within the macroscopic-microscopic approach, on top of which the
multi-dimensional Langevin equation is solved to describe the fission dynamics.
Charge equilibration at scission and de-excitation by neutron evaporation of
the primary fragments after scission is investigated. The model describes
various observables, including fission-fragment mass, charge, and kinetic
energy yields, as well as post-scission neutron multiplicities and, most
importantly, their correlations, which are crucial to unravel the complexity of
the fission process. The parameters of the dynamical model were tuned to
reproduce experimental data obtained from thermal neutron-induced fission of
$^{235}$U, which allows us to discuss the transition from asymmetric to
symmetric fission along the Fm isotopic chain. | K. Pomorski, A. Dobrowolski, B. Nerlo-Pomorska, M. Warda, A. Zdeb, J. Bartel, H. Molique, C. Schmitt, Z. G. Xiao, Y. J. Chen, L. L. Liu | 2023-09-17T17:00:08Z | http://arxiv.org/abs/2309.09322v1 | # Fission Fragment Mass and Kinetic Energy Yields of Fermium Isotopes +
###### Abstract
A rapidly converging 4-dimensional Fourier shape parametrization is used to model the fission process of heavy nuclei. Potential energy landscapes are computed within the macroscopic-microscopic approach, on top of which the multi-dimensional Langevin equation is solved to describe the fission dynamics. Charge equilibration at scission and de-excitation by neutron evaporation of the primary fragments after scission is investigated. The model describes various observables, including fission-fragment mass, charge, and kinetic energy yields, as well as post-scission neutron multiplicities and, most importantly, their correlations, which are crucial to unravel the complexity of the fission process. The parameters of the dynamical model were tuned to reproduce experimental data obtained from thermal neutron-induced fission of \({}^{235}\)U, which allows us to discuss the transition from asymmetric to symmetric fission along the Fm isotopic chain.
24.75.+i, 25.85.-w,28.41.Ak
Since the discovery of the fission phenomenon in 1938, it has been commonly accepted that the most probable mass of the heaviest fragment produced in spontaneous or low-energy fission is located at \(A\approx 140\). However, a series of experiments by Hulet and co-workers [1] have shown that this is not always the case. In the spontaneous fission of Fermium isotopes, in particular, one notices a rapid change of the fragment mass-yield systematics with growing neutron number. For light Fm isotopes, one observes a mass
asymmetry of the fission fragments typical for actinides. Yet, at \(A=258\), the fission fragment mass distribution changes rapidly and becomes symmetric and narrow. Also, the fragments' total kinetic energy (TKE) yield becomes significantly larger than in the lighter Fm isotopes. This new trend in the fragment distribution is also observed for No isotopes. This discovery of Hulet and co-workers constitutes a challenge for nuclear theoreticians. Several attempts have been made to explain this phenomenon, an excellent review of which is presented in the work of Albertsson et al. [2]. In the present paper, we will show that this rapid change of the fission yield systematics in the Fermium isotopes can be well characterized when using a new, very efficient Fourier-type parametrization [3] of the shapes of fissioning nuclei, combined with the WKB method to describe the penetration of the fission barrier and a Langevin type calculation [4] of the fragment yields.
Our "_Fourier-over-Spheroid_" (FoS) parametrization, which describes the shape of the nucleus relative to a spheroidal deformation, was recently introduced in Refs. [5, 4] through the shape function:
\[\rho_{s}^{2}(z)=\frac{R_{0}^{2}}{c}\,f\left(\frac{z-z_{\rm sh}}{z_{0}}\right) \tag{1}\]
which describes, in cylindrical coordinates, the location of a surface point of an axially symmetric shape as function of the symmetry \(z\) coordinate. In this expression, \(R_{0}\) corresponds to the radius of the spherical nucleus having the same volume, and the parameter \(c\) determines the elongation of the shape with half-axis \(z_{0}=cR_{0}\). The shift parameter \(z_{\rm sh}\) guarantees that the center of mass of the shape is located at the origin of the coordinate system \((z_{\rm sh}-z_{0}\leq z\leq z_{\rm sh}+z_{0})\). The function \(f(u)\) defines a shape having half-length \(c=1\):
\[\begin{array}{rl}f(u)&=1-u^{2}-\left[\frac{a_{4}}{3}-\frac{a_{6}}{5}+\ldots\right]\cos\left(\frac{\pi}{2}u\right)-a_{3}\sin(\pi\,u)\\ &\quad-a_{4}\cos\left(\frac{3\pi}{2}u\right)-a_{5}\sin(2\pi\,u)-a_{6}\cos\left(\frac{5\pi}{2}u\right)-\ldots\end{array} \tag{2}\]
where \(-1\leq u\leq 1\). The first two terms in \(f(u)\) describe a circle, and the third ensures volume conservation for arbitrary deformation parameters \(\{a_{3},\ a_{4},\ \ldots\}\). The parameters \(a_{3}\) and \(a_{4}\) allow for reflection asymmetric and necked-in shapes, with higher-order parameters \(a_{n},\ n\geq 5\) responsible mostly for the fission fragments deformations. Shapes breaking axial symmetry can easily be described through the shape function
\[\varrho_{s}^{2}(z,\varphi)=\rho_{s}^{2}(z)\,\frac{1-\eta^{2}}{1+\eta^{2}+2\eta \cos(2\varphi)}. \tag{3}\]
with the parameter \(\eta=(b-a)/(b+a)\), where \(a\) and \(b\) are the half-axes of the ellipse obtained as the cross-section of the non-axial shape at constant \(z\) (see e.g. [3]). The parametrization (3) is similar to, but more general than, the \(\gamma\)-deformation of Aage Bohr. Equation (3) describes the same class of shapes as the original Fourier deformation parameter set of Ref. [3] but is better adapted for performing numerical calculations around the scission point of fissioning nuclei.
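As an illustration of Eqs. (1)-(3), the following short Python sketch evaluates the shape profile with the Fourier series truncated at \(a_{6}\). It is our own minimal rendering of the formulas above, not the authors' code; the function names and the truncation order are our choices, and the example values are the \({}^{258}\)Fm ground-state deformations quoted later in the text.

```python
import numpy as np

def f_profile(u, a3=0.0, a4=0.0, a5=0.0, a6=0.0):
    """Shape profile of Eq. (2), truncated at a6; u runs from -1 to 1."""
    return (1 - u**2
            - (a4/3 - a6/5) * np.cos(np.pi/2 * u)   # volume-conservation term
            - a3 * np.sin(np.pi * u)
            - a4 * np.cos(3*np.pi/2 * u)
            - a5 * np.sin(2*np.pi * u)
            - a6 * np.cos(5*np.pi/2 * u))

def rho2_axial(z, R0, c, z_sh=0.0, **a):
    """Squared axial profile rho_s^2(z) of Eq. (1); z0 = c*R0 is the half-length."""
    z0 = c * R0
    return R0**2 / c * f_profile((z - z_sh) / z0, **a)

def rho2_nonaxial(z, phi, R0, c, eta=0.0, z_sh=0.0, **a):
    """Squared non-axial profile of Eq. (3) with non-axiality parameter eta."""
    return rho2_axial(z, R0, c, z_sh, **a) * (1 - eta**2) / (1 + eta**2 + 2*eta*np.cos(2*phi))

# Example: ground-state-like shape of 258Fm reported in the text (c = 1.13, a4 = 0.03)
z = np.linspace(-1.13, 1.13, 201)     # z in units of R0, with z_sh = 0 for a symmetric shape
rho2 = rho2_axial(z, R0=1.0, c=1.13, a4=0.03)
```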
Using the above FoS parametrization, we have performed extensive macroscopic-microscopic calculations of the potential energy surfaces (PESs) for even-even nuclei with \(90\leq Z\leq 122\)[4] in the 4D \(\{\eta,c,a_{3},a_{4}\}\) deformation space. The Lublin-Strasbourg Drop (LSD) formula [6] was used to evaluate the macroscopic part of the energy, while the microscopic part is obtained using the single-particle spectra of a Yukawa-folded mean-field Hamiltonian [7] together with the Strutinsky shell and BCS pairing corrections. An example of the obtained PES is presented in Fig. 1, where the energy of \({}^{258}\)Fm (renormalized to the LSD energy of the spherical nucleus) is shown as a function of the elongation \(c\) and the neck parameter \(a_{4}\). Each point of the surface is minimized with respect to the non-axial (\(\eta\)) and reflection-asymmetry (\(a_{3}\)) deformations. The ground state of \({}^{258}\)Fm is found at \(c=1.13\) and \(a_{4}=0.03\), with \(a_{3}=\eta=0\). Interestingly, two second saddles are found: one at \(c=1.41\) and \(a_{4}=0.27\), leading to compact symmetric fission, and a second one, around 1 MeV higher, at \(c=1.55\) and \(a_{4}=0.12\), which opens a path leading to asymmetric mass fragments but also bifurcates into another valley characterized by a symmetric mass split with very elongated fragments. The shapes of the forming fission fragments corresponding to these three valleys are illustrated at the top of Fig. 1. In the FoS parametrization, the scission line is found around \(a_{4}\approx 0.72\).

Figure 1: Potential energy surface of \({}^{258}\)Fm on the \((c,\,a_{4})\) plane. Each point is minimized with respect to the non-axial (\(\eta\)) and the reflectional (\(a_{3}\)) deformations.
Figure 3: Fission fragment mass yields of Fermium isotopes. The experimental data (*) are taken from Refs [1, 11, 12, 13].
Figure 2: Second fission barrier heights of \({}^{254-262}\)Fm corresponding to the asymmetric (a) and symmetric (s) saddle points.
The competition in energy between the different second saddles and subsequent fission valleys in Fm isotopes decides which fission path will be more populated. The second barrier height, defined as the energy difference between the \(2^{\rm nd}\) saddle and the ground state of the Fm isotopes, is shown in Fig. 2.
Using the sets of Langevin equations defined in the FoS deformation space, one obtains the mass and total kinetic energy of the fission fragments [8, 9]. We have chosen the corresponding symmetric or asymmetric exit point from the fission barrier as a starting point of the Langevin trajectories.
The final mass and TKE yields (\(Y_{\rm th}\)) were obtained by weighting the yields obtained for both starting points (\(Y_{a}\) and \(Y_{s}\)) using the penetration probability (\(P_{a}\) and \(P_{s}\)) of the corresponding fission barrier:
\[Y_{\rm th}(A_{f})=P_{a}\cdot Y_{a}(A_{f})+P_{s}\cdot Y_{s}(A_{f}). \tag{4}\]
The relative probabilities \(P_{i}\) are given through the corresponding action integral \(S_{i}\) determined in the WKB approximation:
\[P_{i}=\exp(-S_{i})/[\exp(-S_{a})+\exp(-S_{s})] \tag{5}\]
evaluated along the path \(i\) (cf. e.g. Ref. [10]).
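A minimal sketch of the weighting of Eqs. (4)-(5): given the mass yields obtained from the two starting points and the corresponding action integrals, the combined yield follows directly. This is our own illustration; the numerical values in the example are hypothetical placeholders, not results from the paper.

```python
import numpy as np

def weighted_yield(Y_a: np.ndarray, Y_s: np.ndarray, S_a: float, S_s: float) -> np.ndarray:
    """Combine asymmetric- and symmetric-path mass yields with WKB weights, Eqs. (4)-(5)."""
    # Relative penetration probabilities, normalised so that P_a + P_s = 1
    w = np.exp(-np.array([S_a, S_s]))
    P_a, P_s = w / w.sum()
    return P_a * Y_a + P_s * Y_s

# Hypothetical example: two Gaussian yield curves over fragment mass A_f and two action integrals
A_f = np.arange(100, 160)
Y_asym = np.exp(-0.5 * ((A_f - 140) / 6.0) ** 2)
Y_sym = np.exp(-0.5 * ((A_f - 129) / 4.0) ** 2)
Y_total = weighted_yield(Y_asym, Y_sym, S_a=48.0, S_s=47.0)
```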
Our estimates of the mass yields of \({}^{246-262}\)Fm isotopes are compared with the experimental distributions in Fig. 3. The corresponding fission fragment TKE yield of \({}^{258}\)Fm is shown in Fig. 4. The predicted yield (thick solid line) reproduces the general trend observed in the measured TKE distribution [1].
Figure 4: Fission fragment yield (thick solid line) of \({}^{258}\)Fm as function of the fragment TKE. The experimental data (histogram) are taken from Ref. [1]. A thin solid line shows the TKE distribution corresponding to the asymmetric path, and the dotted line the one for the compact symmetric path.
Taking into account the charge equilibration effect, the mass and deformation, and the excitation energy of the fragments, one can estimate the post-fission neutron multiplicities as described in Ref. [4]. The multiplicities of neutrons accompanying the spontaneous fission of \({}^{258}\)Fm, the corresponding isotope and TKE yields, and the elongation of the nucleus at scission are shown in Fig. 5 as functions of the fragment neutron (\(N_{f}\)) and proton (\(Z_{f}\)) numbers, which illustrates the ability of our numerical code to account for the different aspects of the nuclear fission process.
## Conclusions
The following conclusions can be drawn from our investigation:
* The Fourier expansion of nuclear shapes offers a very effective way of describing the deformation of fissioning nuclei in the vicinity of the ground state and the scission point.
* Potential energy surfaces are evaluated in the macro-micro model using the LSD mass formula for the liquid-drop type energy and a Yukawa-folded single-particle potential to obtain the microscopic energy correction.
* A 3D Langevin model that couples fission, neck, and mass asymmetry modes was used to describe the fragment mass and kinetic energy yields.

Figure 5: Maps of the post-fission neutron multiplicities (top l.h.s.), fission fragment yield (top r.h.s.), TKE (bottom l.h.s.), and elongation at scission (bottom r.h.s.) for \({}^{258}\)Fm.
* The transition with increasing mass from asymmetric to compact symmetric fission, as observed in Fermium isotopes, is well reproduced.
* The multiplicity of post-scission neutrons and the charge of the fission fragments are estimated within our model.
* Studying the influence of including the higher-multipolarity deformation parameters \(a_{5}\) and \(a_{6}\) is on our agenda.
Further calculations for a wider mass and charge region are in progress.
## Acknowledgements
This work was supported by the Polish National Science Centre, project No. 2018/30/Q/ST2/00185, the Natural Science Foundation of China (Grant No.11961131010 and 11790325), and the COPIN-IN2P3 collaboration (project number 08-131) between PL-FR labs.
|
2309.05518 | STAR-loc: Dataset for STereo And Range-based localization | This document contains a detailed description of the STAR-loc dataset. For a
quick starting guide please refer to the associated Github repository
(https://github.com/utiasASRL/starloc). The dataset consists of stereo camera
data (rectified/raw images and inertial measurement unit measurements) and
ultra-wideband (UWB) data (range measurements) collected on a sensor rig in a
Vicon motion capture arena. The UWB anchors and visual landmarks (Apriltags)
are of known position, so the dataset can be used for both localization and
Simultaneous Localization and Mapping (SLAM). | Frederike Dümbgen, Mohammed A. Shalaby, Connor Holmes, Charles C. Cossette, James R. Forbes, Jerome Le Ny, Timothy D. Barfoot | 2023-09-11T15:01:54Z | http://arxiv.org/abs/2309.05518v1 | # STAR-Ioc: Dataset for STereo And Range-based localization
###### Abstract.
This document contains a detailed description of the STAR-loc dataset. For a quick starting guide please refer to the associated Github repository ([https://github.com/utiasASRL/starloc](https://github.com/utiasASRL/starloc)). The dataset consists of stereo camera data (rectified/raw images and inertial measurement unit measurements) and ultra-wideband (UWB) data (range measurements) collected on a sensor rig in a Vicon motion capture arena. The UWB anchors and visual landmarks (Apriltags) are of known position, so the dataset can be used for both localization and Simultaneous Localization and Mapping (SLAM).
## 1. Summary
This dataset contains multiple trajectories of a custom sensor rig inside a motion capture (mocap) arena. Attached to the sensor rig are a stereo camera, providing image streams and inertial measurement unit (IMU) data, and one or two ultra-wideband (UWB) tags, providing distance measurements to up to 8 fixed and known UWB anchors. Also fixed in the mocap arena are 55 _Apriltag_[(2)] landmarks of known position, which are detected online by the stereo camera. A depiction of the experimental setup can be found in Figure 1.
A compact version of the dataset can be found on Github and is the recommended starting point for using the dataset. It contains all pre-processed _csv_ files of the data, as well as some convenience functions for reading the data. If required, the full dataset can be found on Google Drive. In addition to the aforementioned _csv_ files, the full dataset also includes the raw _bag files_ and many analysis plots, making it easier to choose which part of the dataset to use for the given application. Both the Github repository and drive are structured as follows:
* data/<dataset-name>/: folder containing all data of the run of name <dataset-name> (see Section 3 for an overview of the different runs).
* csv files of processed data (apriltag.csv, imu.csv, uwb.csv, gt.csv); see data/README.md for information on the data fields and the loading sketch after this list.
* calib.json: file with calibration information of this run.
* (Drive only) bag files of this run (<dataset-name>_0.db3 and metadata.yaml)
* (Drive only) some analysis plots of the data, see Appendix A for information on the plots.
* (Drive only) the video of the left camera with overlaid Apriltag detections.
* mocap/: folder contains environment information.
* uwb_markers_<version>.csv, mocap/environment_<version>.csv: csv files of the UWB anchor and _Apriltag_ landmark locations, respectively, used in setup <version>. See Section 3 for descriptions of the different setup versions.
* (Drive only) plots of the different setup versions.
* (Drive only) vsk and tag config files used in calibration procedure.
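As a quick-start illustration of the layout above, the following hedged sketch loads the processed csv files of a single run with pandas. It assumes the Github repository has been cloned locally and that the run name follows the trajectory_setup convention described in Section 3; the column semantics are documented in data/README.md.

```python
from pathlib import Path
import pandas as pd

def load_run(root: str, run: str) -> dict:
    """Load the processed csv files of one run (e.g. 'loop-2d_s3') into DataFrames."""
    run_dir = Path(root) / "data" / run
    frames = {}
    for name in ("apriltag", "imu", "uwb", "gt"):
        csv_file = run_dir / f"{name}.csv"
        if csv_file.exists():           # not every sensor is present in every run
            frames[name] = pd.read_csv(csv_file)
    return frames

# Example usage (assumes the repository has been cloned to ./starloc)
data = load_run("starloc", "loop-2d_s3")
print({name: len(df) for name, df in data.items()})
```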
## 2. System Description
A photo of the sensor rig used for data collection is shown in Figure 2. Attached to the rig are the UWB tag(s) and the stereo camera. A laptop 1 (not shown) is used for driving all sensors and collecting the data through the help of dedicated Robot Operating System (ROS) nodes.
Footnote 1: _Lenovo P16 Thinkpad_ with 16GB RAM and _RTXA4500 NVIDIA GPU_
Figure 2 also depicts the different sensor frames including their transforms. The frames are:
* \(\mathcal{F}_{r}\): robot frame (tracked by mocap system)
* \(\mathcal{F}_{c}\): camera frame
* \(\mathcal{F}_{t_{i}}\): \(i\)-th tag frame, with \(i=1\ldots 3\)
### UWB tag and anchors
The main UWB system used are custom-made boards provided by the Mobile Robotics and Autonomous Systems Laboratory (MRASL), where each board is fitted with DWM1000 UWB transceivers2. The board is shown in Figure 6. These boards are equipped with a STM32F405RG microcontroller3 to interface with the DWM1000 modules, and the boards are then connected to the onboard computer using USB. Details about the ranging and
Figure 1. The STAR-loc dataset contains data collected on different trajectories of a sensor rig (bottom right) in a Vicon mocap arena, providing range and calibrated stereo measurements.
communication protocols can be found in [3]. We also provide a small dataset with the off-the-shelf development kit MDEK10014 (setup s5). We use custom data collection scripts to record distance measurements.
Footnote 4: Available at [https://www.gorvo.com/products/p/MDEK1001](https://www.gorvo.com/products/p/MDEK1001).
### Visual landmarks
We use _Apriltags_[2], tag version _Tag36h11_, as known landmarks. The tags are printed on US letter paper, glued to foam board, and spread throughout the room at fixed locations of different heights and orientations, as shown in Figure 6. We use the open-source libraries apriltag_ros and apriltag_msg for _Apriltag_ detection.
Figure 3: Left: One of eight fixed UWB anchors. Right: subset of _Apriltag_ landmarks.
Figure 2: Left: the used sensor rig with one UWB tag and stereo camera attached. Right: the corresponding Computer Aided Design (CAD) model including the definitions of the sensor frames, showing the three possible UWB tag locations \(\mathcal{F}_{t_{1}},\mathcal{F}_{t_{2}}\) and \(\mathcal{F}_{t_{3}}\) and the stereo camera frame \(\mathcal{F}_{c}\).
### Stereo camera
We use the _ZED2i_ stereo camera from _stereolabs_ with a \(12\,\mathrm{cm}\) baseline and set to \(1920\times 1080\) resolution. For data collection, we employ the ROS-based camera driver provided by stereolabs, using the image_transport_plugins ROS packages for image compression.
### Motion capture
We use a _VICON_ mocap system for ground truth monitoring. For data collection, we use the ros2-vicon-receiver package by _OPT4SMART_ to publish the native data stream from _VICON_ as ROS messages.
## 3 Dataset overview
### Runs and setups
An overview of all collected datasets is shown in Table 1. The datasets are distinguished by their setup, landmarks and trajectory version. These aspects are described in more details in the paragraphs below.
_Trajectory versions_. The first five listed trajectory types (also denoted "standard" in what follows) are repeated for different kinds of tag setups to facilitate their comparison. The four remaining datasets are recorded for setup _s3_ only, and serve as datasets to highlight specific phenomena. Example trajectories for the 5 standard dataset types are shown in Figure 4.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**setup** & _s1_ & _s2_ & _s3_ & _s4_ & _s5_ & \\
**landmarks** & _v1_ & _v1_ & _v2_ & _v2_ & _v3_ & \\
**trajectory** & & & & & & description \\ \hline loop-2d & x & x & x & x & x & multiple loops, keeping sensor rig level \\ loop-2d-fast & x & x & x & & & multiple loops, walking fast, keeping sensor rig level \\ loop-3d & x & x & x & & x & multiple loops, pointing sensor rig at different directions \\ zigzag & & x & x & x & & a roughly piecewise linear trajectory, keeping sensor rig level \\ eight & & x & x & & & draw large eights in the air, pointing at an area \\ \hline apriltag & & x & & & one loop with complete _Apriltag_ detections \\ ell & & x & & & multiple ell-shaped trajectories with a lot of height variation \\ grid & & x & & & static measurements at 8 grid points, including one full circle at last point. \\ loop-3d-z & & x & & & loop with a lot of height variation \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of datasets
An overview of the remaining datasets is shown in Figure 5.
_Setup versions_. The datasets include 5 different setups of the sensor rig:
* _s1_: We use locations 1 and 2 (see Figure 2) for the UWB tags 1 and 2, respectively, on the sensor rig. The rig is held just above the person's height using a single _PVC_ tube.
* _s2_: hand-held up, 2 tags: The sensor rig is mounted on a second _PVC_ tube to increase the proportion of line-of-sight (LOS) regions: the rig is now a half metre above the person's height.
* _s3_: hand-held up, 1 tag: We use only location 3 (see Figure 2) for the UWB tag 1.
* _s4_: mounted on Jackal robot, 1 tag at location 3.
* _s5_: hand-held up, 1 Decawave tag at location 3.
Figure 4: Overview of the different standard trajectory types recorded with many different setups. The shown data corresponds to setup _s3_.
Figure 5: Overview of the trajectory types recorded with the setup _s3_.
_Landmarks versions_. The datasets include three different landmark layouts. The first two differ only minimally in geometry; version _v2_ is an improved setup of _v1_ after analyzing noise patterns of both the mocap system and the UWB anchors. Version _v3_ is the layout used for _Decawave_ data and consists of only four anchors. The different layouts are shown in Figure 6.
Figure 6. The three different UWB anchor layouts and two different _Apriltag_ landmark layouts used.
### Collected data
In this Section, we report some statistics taken over all the collected datasets. An individual analysis per dataset can be found in Appendix A. The names of the datasets are of the format:
trajectory-version_setup-version.
_Number of messages and rates_. The collected data rates and number of messages received, for each dataset, are shown in Table 2.
_UWB quality_. The quality of UWB measurements per dataset, measured by the (signed) difference between the ground truth and measured distances, is shown in Figure 7. The measurements shown are _before calibration_. For performance after calibration, please refer to the individual dataset plots provided in the dataset on Google Drive.
Figure 7: UWB performance over all datasets.
Figure 8: Stereo camera performance over all datasets.
## 4 Data Processing
### Stereo camera calibration
Stereo camera calibration involves two main steps:
1. **Data association**: Involves associating _Apriltag_ identifiers to the ground truth Vicon locations.
2. **Extrinsic calibration**: Involves solving for the transformation between the Vicon frame attached to the camera rig and the left _Zed2i_ camera frame (at focal point).
Note that the rectified intrinsic parameters provided for the _Zed2i_ camera were assumed to be accurate and were not tuned in this calibration procedure.
Data association was performed using a camera sequence with _Apriltag_ labels to manually label the set of Vicon ground truth landmark locations.
For extrinsic calibration, we provide three different methodologies: fixed calibration (appendix _cal), global calibration (_cal_global), and individual calibration (_cal_individual). For fixed calibration, the CAD model of the sensor rig was used to establish a transformation between the Vicon and camera frames. For global calibration, we use the dataset apriltag_s3 to find the transformation that minimizes the reprojection (also called photometric) error using Gauss-Newton optimization [4]. For individual calibration, we use the same procedure but calculate the best transformation for each dataset individually.
To reduce the effect of outliers, we perform the calibration three times, removing outliers after each iteration. Outliers are pixel measurements with errors higher than a fixed threshold (we used 100 for _v2_ and _v3_ and 300 for _v1_).
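The outlier-rejection loop described above can be summarised by the following sketch. The callables `calibrate_extrinsics` and `reprojection_errors` are placeholders standing in for the Gauss-Newton reprojection-error minimisation of [4] and are not part of the released code; only the three-round structure and the fixed pixel-error threshold follow the text.

```python
import numpy as np

def calibrate_with_outlier_rejection(measurements, calibrate_extrinsics, reprojection_errors,
                                     threshold: float = 100.0, n_rounds: int = 3):
    """Repeat the extrinsic calibration, dropping high-error pixel measurements each round.

    `calibrate_extrinsics(measurements)` returns a camera-to-Vicon transform estimate and
    `reprojection_errors(transform, measurements)` the per-measurement pixel errors; both
    are placeholders for the Gauss-Newton routine referenced in the text.
    """
    kept = list(measurements)
    transform = None
    for _ in range(n_rounds):
        transform = calibrate_extrinsics(kept)
        errors = np.asarray(reprojection_errors(transform, kept))
        kept = [m for m, e in zip(kept, errors) if e <= threshold]  # drop outliers
    return transform, kept
```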
### UWB calibration
The ranging protocol used in these experiments is the double-sided two way ranging (DS-TWR) protocol shown in Figure 9. Each of the 6 timestamps \(t_{i},i\in\{1,\ldots,6\}\) is recorded as part of the dataset, as well as the received signal power at the timestamps \(t_{2}\) and \(t_{4}\).
A time-of-flight measurement can be generated from the recorded timestamps as
\[t_{f}=\frac{1}{2}\left(\Delta t_{41}-\frac{\Delta t_{64}}{\Delta t_{53}}\Delta t _{32}\right),\]
where, as shown in Figure 9, \(\Delta t_{ji}=t_{j}-t_{i}\). Nonetheless, the range measurements are biased as shown in the blue distribution in Figure 10. This error stems from multiple factors such as timestamping inaccuracies, multipath propagation, and obstacles.
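A minimal sketch of the time-of-flight computation from the six recorded DS-TWR timestamps, following the formula above; converting to a range simply multiplies by the speed of light. This is our own illustration and assumes all timestamps are expressed in seconds.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_ds_twr(t1, t2, t3, t4, t5, t6):
    """Time of flight from the six DS-TWR timestamps (seconds)."""
    dt41, dt64 = t4 - t1, t6 - t4
    dt53, dt32 = t5 - t3, t3 - t2
    return 0.5 * (dt41 - dt64 / dt53 * dt32)

def range_from_timestamps(t1, t2, t3, t4, t5, t6):
    """Uncalibrated range measurement in metres."""
    return SPEED_OF_LIGHT * tof_ds_twr(t1, t2, t3, t4, t5, t6)
```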
To obtain more accurate range measurements, the calibration procedure presented in [1] is implemented. The antenna delay values shown in Table 3 and the power-to-bias relation shown in Figure 11 are obtained from two separate experiments with 6 UWB tags fitted on 3 flying quadcopters (2 per quadcopter). A relation between the measurement uncertainty (standard deviation) and the received signal power is also learned; it is shown in the bottom plot of Figure 11. More information on the experiment and how these relations are obtained can be found in [1].
The timestamps are then corrected using the antenna delays and the received-signal power and the range measurements are recomputed, giving much less biased measurements as shown in Figure 10. Both the raw and
calibrated measurements are provided when available, and they can be found under the columns range and range_calib, respectively. The uncertainty of each measurement is also provided under the std column.
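As a small usage example, the sketch below loads the uwb.csv of one run and histograms the difference between the raw and calibrated ranges, i.e. the size of the applied correction. Reproducing the bias histograms of Figure 10 would additionally require ground-truth distances computed from gt.csv and the anchor positions. The file path and run name are assumptions following the repository layout; the column names range and range_calib are those documented above.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed path: processed csv of one run, as laid out in the Github repository.
uwb = pd.read_csv("starloc/data/zigzag_s2/uwb.csv")

# Size of the calibration correction applied to each measurement (metres).
correction = uwb["range"] - uwb["range_calib"]

plt.hist(correction, bins=100)
plt.xlabel("range - range_calib [m]")
plt.ylabel("count")
plt.title("Applied UWB calibration correction (zigzag_s2)")
plt.show()
```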
Figure 11: The relation between the bias and the standard deviation of the bias for the measurements as a function of the lifted received signal power. These are obtained from a separate calibration experiment.
Figure 10: Histogram representing the distribution of the bias of the range measurements for the zigzag s2 experiment. The distribution for the raw and the calibrated measurements are shown.
Figure 9: The DS-TWR protocol used in the experiments to generate range measurements between any two UWB tags.
## 5. Acknowledgments
We would like to thank Joey Zou for his help in preparing the _Decawave_ data collection pipeline and rig adjustments. We also thank Connor Jong for his help during experiments and Abhishek Goudar for advice on UWB data collection.
|